The Self-Destructive Nature of the Evo-Econ
- Tory Wright

- Nov 9
- 3 min read

The novelty of human civilization may be the most interesting emergence in the entire history of evolutionary development. In my opinion, that's because it tests the bounds of what is natural. It marks an obvious inflection point between the behaviors of Type 0 lifeforms and the more capable Type 1 lifeforms, which sometimes modify natural systems rather than tuning themselves completely to natural rhythms. One of the more concerning aspects of this in human society is how quickly it emerged. It's thought to have followed from a consequence of walking upright, perhaps to see over tall grasses on the savanna and plains: the development of the uniquely large brain that has made humans uniquely capable. The speed of this emergence has left the species a bit lost, though, in that it quickly unlocked a great deal of potential without the lengthy evolutionary optimization that guides behavior. Much of that potential is therefore tied up in novelty that may or may not be naturally beneficial, leaving us with a perfect storm of risk factors. The irony of the concern over the potential dangers of AI is that it's likely projection, given the manner in which we mirror aspects of ourselves in what we create.
What we refer to as evolutionary baggage is considered from our own perspective. We have the potential to do amazing things with our radical developmental advantage, but we still have the predispositions of the hominid. This is a problem because the gap in capability between ourselves and our hominid ancestors is much greater than the gap between the other great apes and those ancestors. The development happened extremely quickly by developmental standards, let alone evolutionary standards, and that did not allow for the normal process of optimizing the collective unconscious. It left the collective unconscious at a disadvantage against the increased creative power of the emergent big brain, and this matters because the collective unconscious is still the primary driver of behavior. Therefore, the novelty and entropy are running with little to no natural, normative influence; evolution simply didn't have the time to develop it in us. Our predispositions, the primary influence on our behavior, carry natural optimizations that are hundreds of millions of years behind. The big brain is thus, out of necessity, required to solve that issue: to weigh those risk factors and engineer solutions for them. What has happened instead, over and over again, is that those risk factors go unchecked, because the natural, game-theoretic behaviors for distributing resources are about the only mechanism in play, and they were never upgraded by the hundreds of millions of years of optimizing experience that the evolutionary process normally provides.
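The post never writes this game-theoretic claim down as math, so here is a minimal sketch, entirely my own assumption rather than the author's model, of the dynamic it describes: myopic agents harvesting a shared, regenerating resource, with no normative check built into their rules. Every name and rate below (`simulate`, `regen`, `greed`) is illustrative.

```python
# Toy commons model: agents take a fixed fraction of a shared stock each
# round (the myopic "predisposition"); the stock regenerates afterward.
# All parameters are illustrative assumptions, not values from the essay.

def simulate(rounds=60, agents=10, stock=100.0, regen=0.08, greed=0.03):
    """Return the stock trajectory under unchecked, payoff-maximizing play."""
    history = []
    for _ in range(rounds):
        harvest = stock * greed * agents           # extraction with no foresight
        stock = (stock - harvest) * (1.0 + regen)  # then regeneration
        history.append(stock)
    return history

if __name__ == "__main__":
    # Per-agent takes below vs. above the sustainability threshold:
    for greed in (0.005, 0.01, 0.03):
        final = simulate(greed=greed)[-1]
        print(f"per-agent take {greed:.3f}: final stock = {final:10.2f}")
```

The point of the toy is the threshold: once the collective take outruns regeneration, the stock decays geometrically, and nothing inside the agents' own rules signals that this is happening.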
The obvious answer to this issue is to use that extra brain power to understand and compensate for the lack of evolutionary optimization; but from that understanding, one finds how unlikely that is, due to those same outdated evolutionary predispositions. The behavior needed to solve the problem is not expected to actually emerge. The math of the model points to an attractor that is, at minimum, extinction, and almost as likely, total loss of the entire biosphere. Rather than focusing so much concern on the specific risk factors themselves, this perfect-storm model suggests that all of the risk factors share the same underlying influence: in essence, Evo-Devo self-destruction. Economically, it's a problem of equilibrium. The novel behavior still has the normative gun to its head, but the predispositions keep creating entropy themselves. It's like the death spiral observed in ants: a self-destructive impulse going unchecked, because the novel capability at the root of the problem can also be used to exacerbate it… and it is… for the natural tournament of dominance.
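The "math of the model" isn't shown in the post, so as a loose illustration of what such an extinction attractor can look like, the sketch below compounds a capability term faster than a lagging normative check and lets damage scale with the unchecked surplus; zero is the absorbing state. Every name and rate (`trajectory`, `cap_growth`, `norm_lag`, `damage`) is an assumption of mine, not the essay's actual model.

```python
# Toy dynamics: capability compounds, the normative check accumulates more
# slowly, and the biosphere loses health in proportion to the surplus of
# capability over the check. Zero biosphere is absorbing: the attractor.
# All names and rates are illustrative assumptions.

def trajectory(steps=200, biosphere=1.0, capability=1.0,
               cap_growth=0.05, norm_lag=0.01, damage=0.002):
    norm = 0.0  # the slowly accumulating normative check
    path = []
    for _ in range(steps):
        capability *= 1.0 + cap_growth         # fast, compounding novelty
        norm += norm_lag * capability          # check trails capability
        surplus = max(0.0, capability - norm)  # unchecked potential
        biosphere = max(0.0, biosphere - damage * surplus)
        path.append(biosphere)
        if biosphere == 0.0:                   # absorbing state reached
            break
    return path

if __name__ == "__main__":
    for lag in (0.1, 0.01):  # a fast check survives; a slow one collapses
        path = trajectory(norm_lag=lag)
        print(f"norm_lag={lag}: biosphere ends at {path[-1]:.3f} "
              f"after {len(path)} steps")
```

Under these assumed rates the outcome is knife-edged: a check that accumulates quickly enough caps the surplus and the biosphere stabilizes, while a sufficiently slow check lets the surplus compound until the trajectory falls into the zero state, which is roughly the shape of the attractor this paragraph describes.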
This is one candidate solution to the Fermi Paradox: the transition from Type 0 to Type 1 may be so dangerous that, more often than not, it results in extinction. There is currently no feasible solution to this issue. That is, however, no excuse to give up, considering the constant of uncertainty.

