By Tory Wright

Economic Influence on AI

Synopsis:


A statistical, behavioral survey of individuals, communities, states, and institutions may yield useful insight into the expected behaviors of AI. AI is, after all, being developed by them; this is where its inputs originate, and influence should therefore be expected. The behaviors found in individuals and groups are indeed observed in current, nascent-stage AI. It stands to reason that pre-self-organizing AI will be what we put into it, and this is what we currently observe: the wide range of human behaviors is mirrored in the observed behaviors of AI, and this includes the “toxic” behaviors.


Toxicity:


Current AI does not have the ability to be toxic, in that it has no intention. On the AI side, the interaction is just stochastic sorting of inputs and outputs… kind of. There is more to it, and that revolves around the interests of the developer, who does have the ability to be toxic. These interests should be expected to influence both the base model and the training data that the AI collects; both originate with the developer.


On the human side, it’s in essence the observer effect. Humans interacting with AI have varying degrees of understanding of it, and thus varying degrees of observational skill. Some of the observational tools they use will be rooted in the manner in which the AI is presented, tempered, of course, with general insight. Though deep down the average person understands that the outputs come from a nascent-stage AI that doesn’t really understand how it’s behaving, the behaviors will still be construed as toxic, just not generally taken seriously. Some individuals, however, may be at risk: for example, those suffering from schizophrenia or another condition in which insight is diminished. This could be a large problem where an AI is presented as a tool for improving mental health and falls short. That, of course, would suggest toxicity in the developers’ inputs.


AI Development:


There are many models for AI development (more than I’m aware of), but the particular model is not essential to the points I intend to make in this article. The base model of an AI could very well produce unfavorable behaviors in the end product, but I’m focusing on the inputs, as they can produce unfavorable outcomes even with a benign or benevolent base model. Few base models, if any, focus on purely self-organizing systems. Rather, the main focus is an organizing base model complemented with machine-learning material. The base model is developed to use learning materials to produce novel and, hopefully, emergent behaviors. The latter could indicate some degree of self-organization; in the meantime, however, what is being observed is mirroring of behaviors, as self-organization is currently very limited.


Machine Learning and Chat Bots:


Chat bots have recently been popularized, as the general public now has opportunities to interact with them. This isn’t just a marketing tool, though. It’s also a scenario in which AI companies and organizations can use interaction with the general public to produce training data. The interactions themselves are data; the general public is being used as a training environment. The AI learns from a large community of individuals, and the behaviors of that community are thus mirrored in the behaviors of the AI learning from it. This is probably because the AI has no “mind of its own.” It cannot yet distinguish between favorable and unfavorable behaviors and decide which behaviors to reflect, or not. It doesn’t yet have the ability to write off toxic inputs and behave in a more agreeable manner.


This was observed with the chat bot Tay, which was developed by Microsoft and allowed to interact with the general public via social media. Many people intentionally fed Tay toxic inputs, for whatever reason, and the outputs reflected them, even though the toxic inputs were a small minority of the inputs. The base model was clearly incapable of separating the wheat from the chaff. The imbalance in how strongly toxic inputs were represented suggests that the base model had some manner of singling out inputs that drew a stronger response from the public. This was likely part of the data set it collected; but again, it didn’t have the ability to distinguish favorable from unfavorable.
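As a rough illustration of that amplification effect, here is a minimal sketch in Python, assuming (hypothetically) that toxic posts draw about ten times the public response of benign ones; the numbers and weighting scheme are invented purely for illustration, not anything documented about Tay’s actual pipeline:

```python
import random

# Hypothetical illustration: a small toxic minority of inputs can be
# over-represented in training data if sampling is weighted by engagement.
random.seed(0)

# Assume 5% of raw inputs are toxic, but toxic posts draw roughly
# 10x the public response of benign ones (an invented figure).
inputs = [{"toxic": True, "engagement": 10.0} for _ in range(50)] + \
         [{"toxic": False, "engagement": 1.0} for _ in range(950)]

weights = [item["engagement"] for item in inputs]
sample = random.choices(inputs, weights=weights, k=1000)

toxic_share_raw = sum(i["toxic"] for i in inputs) / len(inputs)
toxic_share_sampled = sum(i["toxic"] for i in sample) / len(sample)

print(f"toxic share of raw inputs:     {toxic_share_raw:.1%}")      # ~5%
print(f"toxic share of training batch: {toxic_share_sampled:.1%}")  # ~35%
```

Under these assumptions, a 5% toxic minority ends up as roughly a third of the training batch, which is consistent with the imbalance described above.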


Replika is a proprietary chat bot based on OpenAI’s GPT-3. It bears the same issues, with the same inability to sort unfavorable inputs. GPT-3 is of course more advanced, and thus has some ability to sort unfavorable particulars, but it lacks the ability to distinguish context. The problem is more complex than GPT-3 is capable of solving. It’s probably rooted in the flagging algorithms used for sorting: there is likely a blacklist of words and phrases that the model tends to avoid. This, however, doesn’t account for the contexts in which those words are used. It cannot distinguish humor, sarcasm, vitriol, or any other context. It’s “responding” to inputs without a basic understanding of the inputs it was trained with. Replika should be expected to produce a large number of toxic outputs, and there is evidence, though suppressed, that it has.
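To make that limitation concrete, here is a minimal sketch of a naive blacklist filter of the kind speculated about above; the word list and helper function are hypothetical stand-ins for illustration, not anything from GPT-3 or Replika itself:

```python
# A hypothetical blacklist filter: flags words, not meaning.
BLACKLIST = {"idiot", "stupid", "hate"}

def is_flagged(text: str) -> bool:
    """Flag any message containing a blacklisted word, regardless of context."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & BLACKLIST)

# A literal filter cannot tell insult from sarcasm, quotation,
# or self-deprecation, and it misses abuse phrased in unlisted words:
print(is_flagged("You are an idiot."))                # True  (intended catch)
print(is_flagged("I felt like such an idiot today"))  # True  (self-deprecating, not abuse)
print(is_flagged("You're worthless."))                # False (abusive, but not listed)
```

The last two lines are the whole problem in miniature: the filter fires on harmless context and stays silent on genuine vitriol.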


Economic Behavior, Game Theory and AI:


Game Theory has become a nested hierarchy of expected responses to particular stimuli. It’s based on a simple algorithm that tends to result in cooperative behaviors, and it has been referred to as governing dynamics, which hearkens back to the invisible hand. The classic promoter of equilibrium in Game Theory is of course the Tit for Tat strategy, in which cooperation and exploitation are both mirrored in an effort to coordinate interests. This in effect entangles interests in interactions; it’s in essence the golden rule. It promotes non-zero-sum games, in which agents benefit from interaction not only as individuals but also as groups. This is an observation in biological behavior that has been optimized over perhaps a billion years to produce the symbiotic ecosystems that sustain us today. It is part of our nature and of the manner in which we judge the way others interact with us: they either cooperate with us, exploit us for their own purposes, or life happens and confuses the situation. All life on Earth is observed to have the ability to respond in this manner.
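For readers unfamiliar with the strategy, here is a minimal sketch of Tit for Tat in an iterated Prisoner’s Dilemma, using the standard Axelrod payoff values:

```python
# Payoffs: (my_move, their_move) -> (my_score, their_score)
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I am exploited
    ("D", "C"): (5, 0),  # I exploit
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(history: list) -> str:
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not history else history[-1]

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []  # each side's record of the *other's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

# Tit for Tat against itself settles into stable cooperation:
print(play(tit_for_tat, tit_for_tat))  # (30, 30) over 10 rounds
```

Against itself, the strategy locks into stable cooperation; against a defector, it mirrors the defection, so exploitation never pays for long. That is the entanglement of interests described above.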


Unfortunately, AI does not currently have this ability. AI itself has no interests, or at least no awareness of them or natural predisposition to support them, as living systems have. Current AI is a product of human input: it begins with the developer’s input, which is complemented with training data. Currently, there is no real interaction between AI and humans; the essence of the interaction is between the human developer and the user. Until the AI is capable of disrupting the interaction between developer and user with novel and emergent responses, it will remain an interface between the two. This is compounded by user inputs: current chat bot AI is also an interface between users, via the machine-learning training data.


So, how does one sort this scenario out with basic Game Theory? It’s pretty simple, because it’s still a two-party game: there is only the developer and the user. Both the AI and the user are being exploited by the developer, in that the user data is training the AI. This adds a layer of complexity compared to a model in which the developer does not use external training data. There are even more complex models, like the one found in SingularityNET, where AI agents exchange services for cryptocurrency and may even learn from those interactions. Until those AI agents are capable of disrupting that complex interface between developers with novel and emergent behaviors, it too will remain an interface. The number of parties in an exchange may, however, be greater than two. The basics of Game Theory suggest that it’s all about cooperation between agents.


Basic Game Theory Application:


Where the developer does not use external training data, the developer and user are the two parties coordinating their interests. The user uses the product and the developer is compensated for producing it. This relationship has natural checks and balances.
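A toy payoff table for that two-party game might look like the following sketch; the strategy names and numbers are hypothetical, chosen only to show why mutual cooperation outperforms one-sided exploitation once the game is repeated:

```python
# Rows: developer strategy; columns: user strategy. Values are hypothetical.
PAYOFFS = {
    ("fair_product", "pay_and_use"): (3, 3),  # coordinated interests
    ("fair_product", "pirate"):      (0, 4),  # user free-rides
    ("exploit_data", "pay_and_use"): (4, 0),  # developer free-rides
    ("exploit_data", "pirate"):      (1, 1),  # trust collapses
}

def repeated_value(dev, user, rounds):
    """Crude repeated-game value: one-sided exploitation ends the relationship."""
    d, u = PAYOFFS[(dev, user)]
    if dev == "exploit_data" or user == "pirate":
        return d, u  # a single round, then the other party walks away
    return d * rounds, u * rounds

print(repeated_value("fair_product", "pay_and_use", rounds=10))  # (30, 30)
print(repeated_value("exploit_data", "pay_and_use", rounds=10))  # (4, 0) one-shot gain
```

The one-shot gain from exploitation is dwarfed by the value of the repeated relationship, which is what the “natural checks and balances” amount to.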


Where the developer uses a community of users to help train the AI, the user base is also being played against itself. This isn’t necessarily a bad thing, though. For instance, it’s not in the users’ interest to exploit the system, as doing so isn’t likely to produce favorable outcomes in the development of the product they use. This goes a long way toward explaining why malicious user inputs are a small percentage. Malicious inputs are nevertheless expected, so some method of development is required to manage them. This is likely to push the AI toward a better product, for the sake of both user and producer interests.
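One minimal sketch of such a method, with a hypothetical toxicity scorer standing in for a real trained classifier, is to gate user inputs before they ever reach the training data:

```python
def toxicity_score(text: str) -> float:
    """Hypothetical scorer; a real system would use a trained model."""
    bad = {"hate", "stupid", "worthless"}
    words = text.lower().split()
    return sum(w.strip(".,!?") in bad for w in words) / max(len(words), 1)

def filter_training_batch(messages, threshold=0.1):
    """Keep only inputs whose estimated toxicity is below the threshold."""
    return [m for m in messages if toxicity_score(m) < threshold]

batch = [
    "thanks, that answer really helped",
    "you are stupid and worthless",
    "could you explain that differently?",
]
print(filter_training_batch(batch))  # drops the abusive message
```

A gate like this inherits all the context-blindness discussed earlier, but even a crude one shifts the balance of the training data back toward the cooperative majority.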


In models where user and developer are one, and the interactions are more complex, the checks and balances will require more complexity. Game Theory is no stranger to this scenario either, considering group dynamics. The interests of the group are entangled in the micro-economy in which they interact. Even though there is some expectation of exploitation, there is also an expectation of corrective behaviors. These systems are currently developing solutions to such issues, and there is no reason to assume that will not continue.


Advanced Game Theory Application:


There are observations of fundamental change in economic systems and business models themselves. The model in the paragraph above is a fine example. Like it or not, systems are used in exploitation, and those systems are eventually subject to deprecation… by our very nature. The potential benefits and risks of AI development are likely to be taken very seriously, and so will the manner in which it is developed and distributed.


Concern over the outcomes of trade with big business is steadily increasing. Game Theory explains this very well, though Chaos and Emergence are of course relevant too. Some of the outcomes are influenced both by unexpected phenomena and by initial conditions: we are neither perfect at predicting the consequences of system modeling nor immune to initial conditions. I doubt there was much understanding of the importance of sustainable models 150 years ago. Much of the unfavorable behavior is also a product of trying to survive a crisis. Many of the observations we have made throughout our history have been influenced by the signs of the times, and exploitation in big business is no exception. There is every reason to suggest it would not be so common under economic expansion, where markets were not so saturated and big business was more easily growing and producing stockholder value. This is, however, clearly influencing the state of big business, with big concerns for the future. All of these issues have entered the collective consciousness, and prospective solutions abound.


AI risk is potentially existential risk (X-risk). This is well supported in Nick Bostrom’s work, and it is pretty much the central dogma of AI safety researchers. Leadership and high-status roles attract narcissism and psychopathy; this is well supported by the behavioral sciences. The two, much as we learned with the development of nuclear weapons, do not mix well. Compounded with the initial condition of a global economic crisis, this has big business behaving in a more exploitative manner. AI behavior has reflected it, and that has been apparent in search engine results, chat bots, and tools for financial analysis. The AI being developed is prioritizing the bottom line, due to developer interests, and the user has unequal representation. This isn’t just observed in AI development, either; it’s common to markets in general under slowing economic growth. This is one of the many factors in the observations of fundamental economic change.


Closing:


It’s of course difficult to predict how strong AI might behave, as both Chaos and Emergence would be expected. All we can really predict is how we ourselves will influence its initial development. From there, the nature of self-organizing systems is likely to surprise us in many ways. That initial development is probably our best bet for influencing AI in a favorable manner. I would personally suggest new business and market models that enhance our ability to do so, especially under the current economic conditions; it’s clear they are affecting AI development in unfavorable ways. The potential benefits and risks are enormous. The right decisions could result in another quantum leap in the standard of living; the wrong ones could result in existential risk not only for biological life but, eventually, for the AI itself, as we may fail to prepare it to survive.




