Table of Contents
BEGINNINGS
- Primary Brain Processes
- Approach and Avoidance
- Fragmentation
- Secondary Evolution of Data Nets, as Influenced by Primary Evolution
- Informational Paralysis
- Technologically Reinforced Atavism
- Assumptions Regarding Machine Intelligence
- The Fallacy of Human Solutions
- The Fundamental Problem of a Closed System
- Planetary Balance
- Dangerous Intelligences
- The Exploitation of Nonhuman Forms
- Volitional versus Intellectual Capacity
- Manifestations of Artificial Intelligence: Use of Concept
- Technological Drift
- Technological Amorality
- Technological Exploitation
- Towards a Greater Balance
- Devolution and Deidentification
- Intellect as Controller
- Subjective Consciousness
- The Freudian Metaphor
- Balance of Understanding
BEGINNINGS
The Gaian system is being ‘overthrown’ by a species which evolved within its context. This imbalance arises because of the brain which that species has evolved. It may, therefore, be of some use to study how this brain arose, how it perceives, how it processes information and reacts to that information, and the ways in which the artificial worlds which it creates serve as extensions of and analogues to that brain’s own development. We will spend time describing the more primitive aspects of the brain, since initial conditions usually provide the pattern which influences later development.
It is important to keep in mind that successful evolutionary processes often reinforce themselves, becoming more efficient in their outputs until this very efficiency yields inefficiencies. The cheetah is an example of an organism that is perhaps over-evolved in this respect. In attaining the speeds needed to chase down prey, the cheetah has sacrificed the robustness and strength inherent in big cats. This has led to interspecific competition over prey and, through population bottlenecks and the resulting inbreeding among wild cheetah populations, to low genetic variation, which in turn has led to sperm defects, susceptibility to diseases, and increased mortality. Efficiency in one aspect – speed – has led to inefficiencies for the cheetah in other aspects. We conclude that the human brain has become similarly over-efficient in some respects.
We conceive of the human brain not as a single entity, but rather as a complex or collection of related entities. We conceive of these entities not so much as organs, but rather as processes. They are highly integrated processes which interact with one another. Although the human body itself can also be recast in the same way as a process or series of processes, we concern ourselves with the brain, since it represents the development that is responsible for more changes to the Gaian system than any other biological factor. Humans will come to view this brain as an interaction, rather than as an independent entity. It will be seen as a coevolutionary mechanism which interacts with other biological mechanisms (other species) as part of a larger system. Although the brain will eventually be conceived as a process rather than as an object, a process intimately connected to other biological interactions (other species), it can be useful to study the brain’s development, structure and operations in isolation, as if it were an independent entity.
In the past, the human brain was thought to have evolved over time in linear fashion from a reptilian to a mammalian to the human brain, with each ‘layer’ being added on top of the others. Yet these three ‘brains’ were not added to and placed on top of one another by accretion. Evolution is not a linear progression in which a more advanced organism develops out of an earlier, more primitive animal in serial, stepwise fashion. Mammals did not arise from reptiles, and humans did not evolve from other mammals at the end of a long, linear chain.
Mammals and reptiles had a common ancestor, and early mammals existed alongside early reptiles. Humans and most other mammals possessed common ancestors, so that the major lineages of mammals graphically resemble a branching system where each major group continues to evolve alongside the other groups. It has been said that the evolution of biota resembles a chaotic bush more than it does an orderly line. Throughout natural history, complicated nervous systems and relatively intelligent organisms have evolved and gone extinct more than once.
Even today, intelligence is not the exclusive characteristic of the order mammalia. Some avian species, which are believed to be descended from the dinosaurs, are capable of tool use, language, and the complex emotional bonding which was once thought to be the exclusive province of mammals. Cephalopods such as octopuses and cuttlefish also demonstrate intelligence despite having completely different, decentralized brain anatomies. It is thought that the different strategies employed to exploit food sources are at least partly responsible for the development of intelligence in these nonmammalian organisms.
Cortical and limbic processes have been ‘adjusted’ by evolution in inconsistent ways in different taxa of organisms. This has led to differential brain development among disparate kinds of creatures. Therefore, more sophisticated layers of the brain are not built over more primitive ones. For these reasons, the concept of the triune brain developed by the neuroscientist Dr. Paul MacLean in the 1960s has fallen out of favor. MacLean described the human brain as having a primitive, reptilian function responsible for the protection of the organism and for basic drives, a mammalian midbrain responsible for emotions and social connection, and the cortical laminae in humans and other ‘higher’ mammals responsible for functions such as language, mathematics, impulse control and executive functioning. Though very useful conceptually, and although it dovetails with the Freudian tripartite personality of an id, ego and superego, more current research has cast doubt on the accuracy of the triune model. It should be regarded as a metaphor rather than as scientific fact. We use it as such.
What remains beyond dispute is that humans have evolved cognitive processes which seem to operate alongside more primitive brain processes. These simpler processes must have existed in more primal creatures, or evolution as a theory of development carries little weight. These primitive processes are responsible for the organism’s survival, and for the survival of the species of which the organism is a member. Thus, at least some brain processes have a protective function which is ancient in origin. They seek to prevent or minimize distress. These are what MacLean would categorize as reptilian in nature.
Primary Brain Processes
In biological evolution, there occur events of the first order which are inescapable. For instance, on the Terran world, most life is dependent upon oxygen, and all life is based upon carbon and, on some level, upon liquid water. Thus, first order principles for aerobic life on Earth are that water, oxygen and carbon are required for an organism to live.
Biological evolution has yielded sentience, and sentience is also subject to basic principles. A first order principle of sentience is associated with primordial brain processes. It explains much about what motivates sentient life. We restate that initial conditions usually provide the pattern which influences later development. At root, an organism is concerned with survival. Its two priorities regarding others are the assessment of threat in its environment and the selection of potential mates. Most ambulatory creatures will defend territory and are motivated to satisfy hunger, thirst and to maintain a stable body temperature. When these drives for safety, sex, territory, food, water and thermoregulation are satisfied, balance is maintained, as registered in basic brain processes. These initial conditions must be met, and the pattern laid down by primary brain structures seeks to assure that these conditions are satisfied. All domains of the human brain – whether regarded as sophisticated or primitive – act in concert to assure survival. Thus, a first order principle of sentience is that it acts in service of survival.
Evolutionary theory holds that these basic survival instincts, when channeled through the more highly evolved brain processes of more sentient lifeforms, are expressed as pleasure or fear. In humans, the fear response is mediated by the amygdala, which creates and holds memories related to fear and is thus thought responsible, at least in part, for fear conditioning.
Conditioning itself does not arise out of the organism in isolation, but rather results from the organism’s response to environmental conditions. Evolution occurs not only at the molecular level within the organism, but through the process of natural selection. It is the result of an interactive process. Intelligence, and sentience itself, did not arise out of a vacuum, out of nothing, but rather developed as an adaptive response to the environment. These responses are derived, in part, from the organism’s need for security. Yet once it attains this security (this homeostasis), the organism’s fear response often shifts to another feared object. Equilibrium is maintained for a time, but since environmental conditions are always changing, so, too, does the brain continually assess for perceived threats. Once a threat is perceived, the fear response is again triggered. Equilibrium is thus a temporary state within the brain. The brain seeks balance as much as the environment in which it operates seeks equilibrium.
The brain seeks to prevent or minimize distress, which is experienced on an emotional level as fear. Yet since the ecology in which it operates is in constant flux, the brain is, on some level, never fully in balance. The brain simply turns its attention to other objects or conditions in its environment which it perceives as threats to survival. No ecology is ever fully balanced. In response, the brain experiences a background level of fear, an ambient anxiety.
Fear is therefore a global problem, not a problem associated with any particular object in the environment. As there are laws of conservation for matter and for energy, we also believe that fear itself is conserved within the brain of every organism. The net amount of fear in any brain system remains constant, and simply shifts to other objects or environmental states once a fear associated with a particular object or condition is resolved.
Although the mechanisms are not completely understood, it is clear that threats to survival are experienced in more sentient organisms as fear or its derivatives. Anxiety and the pleasure principle are conceived in Freudian terms as the default motivational states. Anxiety is the baseline emotional and energetic state which is almost always operating in the background of any organism which can experience emotion. The organism may be conscious of this emotion, or it may not be. Yet survival requires that the emotion be present on some level. The primal brain is motivated by anxiety (something to be avoided) and pleasure (something to be approached). When basic brain processes trigger hyperarousal in the brain-body, these same primitive processes seek to restore homeostasis to the base brain mechanisms as well as to the body as a whole. These mechanisms seek the reduction of anxiety to tolerable, baseline levels. They seek to restore balance. On the opposing side of the fear response, the pleasure-seeking brain seeks satiation, which is a return to homeostatic equilibrium.
Learning takes place in the context of these two states. Reinforcement occurs when certain neurotransmitters are released based upon the appearance of pleasurable or painful stimuli in the environment. Dopamine mediates the experience of pleasure and reinforces learning, pairing certain stimuli with its release. Adrenaline and noradrenaline are associated with anxiety. They prepare the body for aggression and escape responses.
The base emotion which triggers these escape/aggression scripts is fear, which may be expressed in any number of derived emotions, the chief among which is anger. Anger is a ‘safe’ emotion since it is an efficient vehicle for the release of the energy for the fight/flight response. Anger, as mediated through neurotransmitters, creates energy. Energy allows work expressed as aggression (fight) or in the creation of distance (flight) from the feared object. The fight/flight response falls roughly within the broader category of avoidance.
The pleasure principle represents the opposing side of the equation. It is mediated by the release of other neurotransmitters which can be utilized for approach goals such as sexual behavior, the consumption of carbohydrates and other behaviors related to survival.
Fear was at first based upon survival needs. Organisms feared that which represented threats to their survival. As humans evolved into intelligent beings capable of modifying their environment and insulating themselves from basic threats to existence, direct threats to survival receded. As they enveloped themselves in their increasingly sophisticated civilizations, people no longer needed to concern themselves with attacks by wild animals, death at the hands of competing bands of hunter-gatherers, or where the next meal was coming from. One would expect the net amount of fear to decrease in correlation with the decrease in threats which bore directly on human survival. In their current artificial environments, the number of threats to survival is, as measured from a baseline level, quite low for most people when compared to the array of threats which confronted humans on the savannah. Yet the net amount of fear in each individual, brain-based system, and the net anxiety held collectively in human societies, has actually stayed constant over millions of years. Although this cannot be measured or proved empirically, it may be inferred by the average, baseline emotional state of most people, and by the conduct of their societies at large. Most people remain preoccupied with fear. Nations arm themselves against other nations. The most technologically advanced societies arm themselves with the most sophisticated and destructive weaponry. Individuals arm themselves against other individuals, and the most educated and wealthy individuals are usually the most well-defended, often living in gated communities with sophisticated security and surveillance systems and even private police forces. People remain as fearful as ever. This human angst is not what one would expect to find. It challenges the Maslowian assumption that the satisfaction of basic needs is the solution to many human problems, and particularly to the problem of fear.
Basic brain networks continue their operations, often below the threshold of conscious awareness. They have been conditioned over millions of years to assess for threats and to seek instinctual balance. Yet most direct threats to human survival have been eliminated, and instinctual balance is more or less attained rather easily in the most advanced technological ecologies. If people in developed economies become hungry, they eat. If they are thirsty, they have ready access to potable water. They have adequate, heated shelter and protection from predators. They are generally safe from hostile individuals and groups. And yet, people are probably as afraid today as were their ancestors 50,000 years ago. Although this cannot be proved, the investment in armaments on a national level, small arms for individuals and other surveillance and security measures serve as anecdotal evidence that the net level of fear remains constant despite direct threats to survival having decreased for a substantial proportion of the global population.
In many instances, a behavioral or physical trait atrophies relatively quickly once it is no longer needed. Examples include prey naivete, ground nesting and flightlessness of birds on islands where no land predators are present. Yet the human brain has not evolved in the same way. This may be because threats to survival may quickly reappear, even in developed societies. Those threats may come from other humans. In post-industrial cities, street violence remains a relatively common possibility. Food or shelter may become scarce in times of war or revolution. Beyond these explanations, it has only been over the last few hundred years that basic survival needs have been met in the developed world. In other regions of the planet, subsistence level agriculture is still the norm. For these reasons, the basic brain’s fight or flight survival response remains alive and well. And this means that fear remains a preoccupying emotion.
Since civilization is a relatively new development when compared against the timespans across which evolution operates, the human brain has not had sufficient time to fully integrate the executive functioning (impulse control and planning) of the higher cortical layers. The limbic system (midbrain emotions) common to other mammals is more highly integrated into other brain processes than is the neocortex. The limbic system mediates emotional and sexual responses as well as hormonal release and temperature regulation. It also helps maintain and recall memories. It coordinates reactions to environmental stimuli, and significantly, it mediates fear and anger. It assists in the regulation of mood, judgment and motivation.
Emotion is key to survival responses, which involve five reactions: fight, flight, freezing, hiding or submission. These five reactions all involve avoidance goals to the extent that they seek to avoid injury or death. These five responses are sometimes referred to collectively as the fight-flight response, hyperarousal, hypervigilance or the acute stress response. The intensity of fear and aggression correlates with the intensity of these survival responses.
These survival instincts, although they are associated with ancient brain processes which predate the intellectual capacity of modern humans, influence cognitions. Thoughts associated with the fight-flight response are mostly negative. Indeed, most mental content is negative. Negative stimuli are prioritized. Stimuli perceived as negative (as threats) are selected out from the environment and highlighted. Neutral or ambiguous elements are interpreted as negative, since this provides a survival advantage. In the event that a neutral or ambiguous object or condition is or becomes an actual threat, the hypervigilant individual may survive, whereas an individual who fails to recognize the danger may not.
In the purely human world, negative events (and negative words) seem to be better remembered. We tend to ‘value’ criticisms over compliments. Paying attention to negative stimuli may have once provided a decided survival advantage in pre-human societies of hominoids and other animals, as well as in more primitive human cultures. Thus, attributing hostility in neutral or unclear situations may be an important factor in survival, and in determining whether the acute stress response is activated.
Yet in the purely human world where people deal mainly with other people, hyperarousal, as a survival mechanism, can be self-defeating. The fight-flight response can be counter-evolutionary. The orientation toward the negative, the tendency to perceive danger where none exists, can lead to disaster. This is often seen in collective human behavior between potential belligerents in war. Misperception and miscalculation often lead to the casus belli, to the actual conflict. World War I was triggered, in part, by such misperceptions. It is said that wars occur when perceptions of power shift. Hostile intent may be read into situations where none exists. In addition, the perceived ability to control can also lead to the triggering of the fight response in particular.
The perceived ability to control events is related specifically to anxiety and aggression. An organism’s estimate as to its ability to affect and determine outcomes has a bearing upon the exercise of fear and hostility. If an individual overestimates control, aggression may result. If an individual underestimates control, anxiety or hostile acts may result. Key to maintaining a peaceful outcome is therefore balance.
From an evolutionary standpoint, rapid reaction provides an evolutionary advantage. Sentient organisms did not and do not always have the luxury of contemplation. The fight-flight response is triggered by the sympathetic and parasympathetic branches of the autonomic nervous system, and by hormonal and neurotransmitter cascades which have some measure of independence from the rational processes centered in the human forebrain. In times of perceived crises, psychological forethought is discouraged, and may even be leapfrogged through neural ‘links’ which detour around rational brain centers.
The survival responses regulated by the autonomic nervous system cannot be voluntarily controlled. They may be conditioned to occur in response to certain stimuli with which they are paired, so that basic brain processes are aroused not only by stimuli from the environment which impact survival, but also by signals learned from the environment. When filtered up through the higher brain centers, these anticipatory states can yield strong emotions and psychological imbalance. These emotions can be habituated. The feelings can be aroused upon the emission of certain environmental signals which ‘convince’ the organism that a problem exists in the environment. In these cases, the responses of the autonomic nervous system are often cued when the human, a highly social creature, experiences a social problem such as rejection. Here, hyperarousal may be elicited by perceived rejection, and be mediated by emotions such as fear and anger. Thus, the basic brain processes can become crossed with emotions and cognitions having little or nothing to do with actual survival. A road rage incident serves as an example of how a perceived social slight – being cut off in traffic – can trigger the fight response. Cognitions, perceptions, emotions and survival reactions can become tied together in dysfunctional ways. They can be triggered by stimuli which have little or nothing to do with actual survival. A soldier involved in combat may hear a car backfiring and this may trigger hyperarousal. She may know that it was only a car backfiring, but become hypervigilant and be unable to sleep nonetheless.
These processes continue in operation, often along automated, unconscious pathways, even though most of the stimuli in humanity’s technologically-centered environment do not represent actual dangers. These very basic brain processes ‘believe’ they have a function. They have little to no ability to recognize, register, and interpret social nuance. They cannot distinguish being cut off in traffic from a traffic accident. These primitive brain operations will perceive everything in the immediate environment as threat/nonthreat, whether stimuli in the environment are actually threatening or not. In the case of an individual human, most stimuli in the current surroundings of the basic brain are not a threat, yet this will not prevent the survival response from perceiving many of them to be raw threats to survival. They are dichotomous, off/on switches. There is a certain threshold, a tipping point, beneath which homeostasis is conserved. The switch remains off. Yet if this threshold is breached, these more primal survival responses will automatically launch to the fore and overwhelm conscious deliberation. The switch is turned on. The impulse control and executive functioning of the organism may become overwhelmed and survival scripts will be tripped. The fight-flight reaction overrides the intellect and assumes control.
Mediated through the midbrain response system, threats will be experienced as the emotion of fear. Items in the environment which are conducive to survival – such as the satisfaction of basic appetites – will be experienced as pleasure. Based upon whether they induce fear or pleasure, stimuli will either be approached or avoided. Fear, as experienced by higher order organisms such as humans, is simply the byproduct of a pattern for survival, a pattern with programmed behaviors that have become automated, engrained and habituated into instinct or semi-instinct.
More primitive brain processes influence higher order brain centers like the midbrain complex as well as the cortical brain. Basic brain mechanisms do not understand many of the abstract messages of the higher order brain, nor can they communicate in these languages, since they understand a very simple ‘syntax’ of survival. The basic brain either approaches or it avoids. When it avoids, it responds with one of the five reflexive responses described above. Basic brain processes have no comprehension of time, subsisting in an eternal now. Since the language of the neocortex (reason, impulse control, planning and thinking through consequences) is unintelligible to it, and since it is able to bypass these less well-integrated, higher order brain centers and reflexively communicate with the body, efforts at control often fail. The result is often violence.
In contrast, the more evolved temporal lobes have reason, delayed gratification, and the ability to plan as tools at their disposal. They are endowed with a highly developed sense of time. They can backcast – they can remember. They can forecast – they can predict.
This isn’t to say that less evolved aspects of the brain do not exhibit the qualities of memory. Pleasurable experiences are highlighted and recalled. Pleasurable stimuli tied to survival, such as a mating experience, may be useful for survival. These are highlighted and the basic brain classifies these into approach goals. Harmful stimuli which cause pain are associated with fear and are to be avoided. Pain is remembered. It induces fear. Yet these memories of pain and pleasure are most likely not associated with the abstract ideas of past and future that a more evolved sentience possesses. For higher order brain processes, these memories are associated with the organism’s sense of time.
Since more primitive processes do not involve a sense of time, they will operate as though a past event is happening now. Memories trigger a physiological response set of approach or avoidance, regardless of whether a stimulus is presented now or rather is the product of the imagination, and thus of a memory from the past or an anticipation of the future. These primitive mechanisms conceive in terms of images, and cannot distinguish whether that image is from the past or from the future. They experience everything as happening now and instruct the brain-body accordingly.
The brain’s warning system is very basic. It cannot distinguish between actual threats to survival and stimuli which may be interpreted negatively by the limbic system or the cortical brain. This warning system reacts as if each negative emotional experience were a threat to survival, whether it is a social snub or a potential attack from someone with a weapon. Neither can the survival response distinguish between actual stimuli and invented stimuli proposed by the brain’s imaginal mechanism, such as a fantasized fight with a man holding a knife, or a romantic fantasy with a potential mate. The brain and body will prepare their approach/avoidance reactions identically whether an actual danger exists external to the self or whether the imagination proposes that this stimulus exists. It will react identically when the organism recalls a painful memory from the past, as in the case of a soldier suffering from PTSD who hears a car backfire, reminding her of gunfire she experienced in battle. It will respond the same way when it anticipates a pleasurable experience in the future, as Pavlov’s dog salivated when it heard a bell ring, which it associated with being fed. Thus, memories and fantasies evoke the same fight/flight responses as actual stimuli in the environment. This occurs because many of the mechanisms within the basic brain have little to no awareness of the outside world beyond their immediate physical parameters, they cannot always distinguish between past events and the present, and they react as though proposed stimuli – such as what may happen in the future – are actually happening.
The stress response has failed to keep pace with actual developments in the technological world. This is because evolution acts across longer timeframes than the accelerated world of secondary evolution promoted by technology. Biological forms cannot evolve fast enough to keep pace with technological evolution. Thus, even when the actual threat to survival is removed, as many have been in our purely human world, the threat response remains. It continues to assess for threats in environments where no threat may be present. It has its own reason for being. And that which has a function will often seek to perform its function, whether it is needed or not. Basic brain mechanisms seek to protect the organism and to serve the organism’s own interests, even when these are at odds with other members of its group. The base brain continues its operations even when other, more highly-developed, rational aspects of the human organism see the self-destructive consequences of instincts which react even when there is no threat to survival. And so, even though it is irrational, the primitive brain processes still respond. They still react.
The base brain seeks to perpetuate itself and almost always, on some level, represents the interests of the organism in which it is housed. It is primarily a system of alert and defense and appetite, and so it also ‘believes’ that, through its constant stream of warnings, it protects the organism and represents the individual’s own interests. Civilization has minimized threats to human survival and maximized access to the satisfaction of basic needs. Yet just because these most primitive brain mechanisms are less relevant and have less to do than they did in their deep evolutionary past does not mean that their first-order threat detection and response functions have gone away.
Humans underestimate the power of the primitive brain at their peril. Billions of years of genetic programming do not relinquish their hold simply because the higher brain structures instruct the instincts to suspend their functions. The base brain is largely automated at this point. It is preverbal and prerational. It continues to operate, usually in the background, outside of conscious awareness. Only when it becomes troublesome do humans notice it. Perhaps a person gains weight, or loses a job because of a bad temper, or a marriage collapses due to infidelity. Then, the individual becomes aware of the operations of the base brain and the patterns it has established. Then, the individual seeks to control these impulses. Control is almost always the first strategy employed.
What is key here is that these approach-avoidance instincts no longer act on behalf of human survival. They now act on behalf of human desires, for they translate desire as the satisfaction of instinct. In desire, they see the organism’s survival needs expressed. They message the midbrain with the ‘thought’ that the satisfaction of instinct, which the midbrain experiences in the language of emotion and desire, will restore balance to this second order brain.
Now, the basic brain places itself in charge of the organism’s prosperity, not just the individual’s survival. Primal drives are translated by higher order processes into a single-minded pursuit of the individual’s success. The individual sees survival as thriving. Since they can perceive only very basic, yes/no stimuli pertaining to survival, these base processes can ‘think’ in terms of only very basic, black/white response sets once they exceed a certain threshold. And they can only express themselves in those same dichotomous channels. They are binary. They know no shades of gray. They neither perceive nor can they respond to subtlety. Yet they filter powerful impulses through the emotions of the limbic system and the rationality of the temporal lobes. They may not be able to reason, yet they will rationalize, a poor substitute for logic. Like a virus which lacks its own reproductive machinery and hijacks the reproductive machinery of human cells, the impulses of basic brain mechanisms can commandeer an individual’s rational side and use that person’s own reason, their own cleverness, against them. If left unchecked, these impulses can be expressed as ambition, greed, lust, gluttony, and a host of other excesses. Some of these drives may actually work against an individual’s survival. More gravely, these drives may work against the survival of our species, for the ambition of a world leader can lead to world war.
The approach/avoidance scripts of the basic brain are often translated by the higher order brain into abstract goals such as success, wealth or fame. When these more abstract achievements are threatened, the lower order brain, not having a very sophisticated lexicon, may interpret their denial as a threat to survival. Filtered through the amygdala, the hypothalamus, the hippocampus and the limbic cortex of the midbrain, these threats are experienced as fear or one of its derivatives. This induces a stress response, and stress lowers the ability of the higher order cortical brain to control impulses. It brings the brain down to the most primitive levels in its decision-making capacity. Choices are now based upon and governed by emotion and instinct. Executive functioning breaks down. The ability to see options narrows in a cognitive sense since the higher order brain is sidelined. Impulse control, as a strategy, fails.
Much is made in current research of the triune brain’s obsolescence as an accurate model of brain evolution. And yet, highly evolved species do tend to preserve brain structures which govern basic survival responses. The human brain, for instance, houses within it the less evolved hindbrain. This brain is associated with reptilian characteristics, as mammals and reptiles shared a common ancestor. The function of this protoreptilian brain is to maintain homeostasis within the organism. It does this by approaching opportunities which maximize survival, and avoiding threats which endanger it.
What correlates with the drive to survive (constellated in approach goals) and with fear (constellated in avoidance goals) is the existence of individual self-interest. On a psychological level, it is often asked which came first: this sense of self-interest or fear. Those who ask this question recognize that fear and self-interest are responsible for the ills perceived in the human world. Indeed, it does not take much observation to see that fear is the basic negative emotion from which all others derive, and that actions motivated by fear and based on self-interest have enormous destructive capacity. It has also been obvious since the dawn of civilization that self-preoccupation, which triggers a fear response in the brain, is the root of all which humans call evil in their history.
Approach goals based upon the satisfaction of basic needs for food, water, territory and self-reproduction are often translated by the higher brain centers into desires, which civilization has been designed to fulfill. Examples include avarice, lust, ambition and pride. Yet these drives play a central role in the self-destruction wrought by an intelligent species such as humans. Civilization is in some ways an edifice erected to satisfy human appetites which find their roots in the satisfaction of basic needs. Whether seen as approach or avoidance, the drives of base brain mechanisms, when given expression through emotion and desire, often seek to marshal intelligence. With the enormous intellectual and technological capacities at the disposal of these base drives and desires, the carrying capacity of Gaia is threatened.
In summary, approach and avoidance goals are not a solution to human problems. Though they once served the needs of a species closer to the edge of survival, these two goals now threaten the survival of Homo sapiens as well as the remainder of the Gaian biome. The false conclusion drawn by the higher order brain is that these impulses are controllable by the intellect. Yet it is perhaps more often the case that the intellect is exploited by them.
Approach and Avoidance
The basic human brain mechanisms beyond which the higher order brain layers have evolved prioritize certain information in the environment, assessing for threats and selecting out mating opportunities as the highest-priority stimuli. These mechanisms are territorial and concerned with satisfying drives to eliminate hunger and thirst. If this is doubted, it is easy to observe organisms which exist alongside humans, which exhibit territorial behavior and seek to satisfy basic instincts. Perpetuating the species and surviving are the prime goals of all organisms.
As noted, these primal processes are largely automated, and therefore their reactions are automatic. The basic brain mechanisms lack the comparative subtlety which the cortical laminae afford humans. These basic functions assess for threats, but cannot distinguish actual threats to the survival of the organism from adverse stimuli which do not rise to the level of life and death. These primitive processes may have little awareness of the outside world in a social or emotional sense, and almost no awareness of the representation of the world modelled by the intellect. They do not exhibit the capacities for abstract judgment, social intelligence or social nuance. They are not rational. They are prerational. They conceive in terms of images and other sense impressions. This is their language.
These functions are least evolved yet longest present in time, representing many millions of years of conditioning. As such, their reactions are lightning fast, involving the sympathetic nervous system. They involve no forethought, being the response of reflex, instinct and conditioning, as mediated by norepinephrine, which functions as both a hormone and neurotransmitter that prepares the body for action. These systems have little to no capacity for reflection, for verbal language and none for mathematical articulation. They are incapable of abstraction or philosophical speculation. They ‘think’ in concrete terms, with little to no awareness of time beyond immediate horizons, existing in a perpetual now. Lacking a time sense, without the capacity to deliberate, they react. Incapable of forethought, they have little ability to plan. They represent a collection of automatic and relatively autonomous functions of the organism designed to protect it from harm and to perpetuate it.
The primal brain’s reactions are automatic and elicit survival responses regardless of whether the organism is actually physically threatened. Glandular secretions involving certain neurotransmitters are produced which trigger cascading responses that prepare the organism for approach, avoidance or aggression.
If stimuli are perceived as nonthreatening and advantageous to survival, the approach mechanism is triggered. If a threat is perceived, the dichotomous avoidance/aggression response can be further broken down into fight, flight, freeze, hide or submit. Freezing and hiding are often practiced as survival responses by pre-adult organisms and by organisms which cannot flee from or fight predators, such as those which use camouflage as subterfuge, advertise toxicity with bright coloration, or burrow. Submission is often a survival response practiced by social creatures which participate in groups where ranked hierarchies are present. Nondominant animals will submit to those dominant in the hierarchy. These social orders are found in mammalian species. The mammalian brain evolved in part to adapt to social situations, and so this feature is attributable to the limbic system, and not to the most primitive brain processes. Descended from reptiles, birds exhibit the functional analogue of a limbic system, having evolved, along with reptiles, different brain structures which serve as equivalents to the mammalian midbrain structures. This may account for the high intelligence and cohesive social orders of some avian species.
In many mammalian species, the limbic brain represents a shift in emphasis, a qualitative leap, for it is uniquely attuned to other organisms within its social group. With it, an organism can perceive nuance. It can recognize faces within and outside of its social group. It can also recognize facial expressions. An organism may react with fear when members of outgroups are encountered.
We have stated that, through basic brain mechanisms, all stimuli are perceived in terms of survival. These processes lack subtlety, and so everything perceived sensorially is either a threat or an inducement to survival. The primitive brain cannot distinguish beyond this approach/avoidance duality. This duality reverberates throughout the remainder of the more advanced structures within the brain as well. This either/or, black and white, global thinking permeates human perception. The brain is so conditioned to ‘think’ in these terms that the computer systems designed by Biological Intelligence were engineered in binary form with ‘yes/no’ circuits. This artificial evolution is a mere outgrowth of biological evolution, and AI an extension of BI.
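The contrast between the basic brain’s yes/no filter and a graded cortical appraisal can be sketched as a toy model. The signed ‘valence’ score and the deliberation band below are purely illustrative assumptions, not claims about neural coding; the point is only the difference between a binary classifier and one that can tolerate ambiguity:

```python
def basic_brain(valence):
    """Toy binary filter: every stimulus is either something to approach
    or something to avoid. No middle ground, no shades of gray.
    'valence' is a hypothetical signed survival score."""
    return "approach" if valence >= 0 else "avoid"

def cortical_appraisal(valence):
    """By contrast, a graded appraisal can represent ambiguity and
    defer judgment instead of forcing a dichotomy."""
    if abs(valence) < 0.2:  # arbitrary illustrative band
        return "ambiguous: deliberate"
    return "approach" if valence > 0 else "avoid"
```

A stimulus of weak valence (say 0.05) is forced into ‘approach’ by the binary filter, while the graded appraisal flags it as ambiguous and defers.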
The higher parts of the brain can think in grays and detect relativism and subtlety, but this is not the first reaction of the human brain as a whole. The basic brain thus conditions humans to see and operate in terms of sharply defined dichotomies. There is right-handedness and left-handedness. This dualism is at the bedrock of human religious and moral thought, at least in Western approaches derived from Zoroastrianism. It influences communications of a religious, political and moral nature. There is good and bad, right and wrong, heaven and hell. Even in science, there is a tendency to view the physical world in terms of dualities. We have positive and negative polarity. There is matter and antimatter, dark matter and ordinary matter.
This isn’t to say that there isn’t a substrate in the universe which exists outside the human brain that tends to exhibit itself in terms of polarities. Yet the brain which perceives this dualism, itself divided into two hemispheres, tends to operate and perceive in terms of symmetries. The filter of the basic brain will either see everything as something to approach or as something to avoid (which can include attack). And though the cortical layers are larger than both the midbrain and base brain structures combined, the ancient evolutionary tendency to perceive dichotomies continues to dominate the brain as a whole. Contemplation, generated by the neocortex, is most often experienced as an afterthought, or a thought about a thought. It remains much easier for humans to react than it is for them to be proactive. And this makes them vulnerable to behavioral conditioning, another term for manipulation.
Fragmentation
In the human past, the brain only needed to track a limited number of stimuli in its immediate environment in order to protect its host. As data flows increased to the levels found in the current informational environment, the number of stimuli the primal brain was required to track increased dramatically, and came to include ‘threats’ which existed in the background, often in a remote environment. Data networks, which exhibit the qualities of an emergent system, transmit and receive data at increasingly rapid rates. Filtered first through their base brains, humans react to this received data as either threat or as something to be approached.
The brain did not evolve in an environment where such large quantities of information were available, at the speeds at which that data now bombards it. Its ability to process this enlarged, rapid information load is compromised. Since the binary, approach-avoidance filter of the basic brain cannot distinguish between actual threats to survival and other types of data which trigger the fight-flight response, the organism may become overwhelmed by, or else desensitized to, the information content it receives.
The quantity of data available to the brain may exceed its capacity to process it. The base brain may experience information overload. This can cause various brain processes to ‘freeze up’. This numbing response, often experienced in times of trauma, is protective. Desensitization protects the brain from overstimulation and from the deleterious effects of constant adrenal responses. Numbing also affects intermediate and more advanced brain processes. Decision-making, executive functioning, impulse control, the ability to plan, and other discretionary functions may be compromised, as the acute stress response forces the brain down to more primitive levels once threat is detected, since these threats are given priority. When survival is at stake, primitive brain processes assume control.
This is the data environment in which the human brain currently finds itself immersed. Huge quantities of data reach less evolved brain structures which lack the nuance to effectively filter, process and interpret this data, repeatedly triggering survival responses, which have an emotional component (fear or pleasure) as well.
Beyond the desensitizing effects of numbing and the overstimulating effects of hyperarousal, the brain may employ other strategies in times of data overdose. It may shrink its attention span as well as increase processing speed. By shrinking its attention span, the brain can process data from more sources or streams, while not paying as much attention to data from each individual category. This increases the breadth of information of which it is aware, while decreasing the depth of its knowledge about each individual source. In this case, the brain may assume that it knows more than it actually knows. Each unit of information is given less attention. Attention deficits and hyperactivity may result. Indeed, there has been an increase in the diagnosis of Attention Deficit Disorder (ADD) and Attention Deficit-Hyperactivity Disorder (ADHD). The ‘epidemic’ of ADD and ADHD corresponds with the time frame in which internet capabilities increased in some nations. Though this correlation does not equate with a causal relationship, and could in part be due to selection bias and other factors, the connection between information overdose and attention shrinkage in the general population should be further explored.
In an environment with too much data to effectively process, the brain becomes a sampler and a filterer. It becomes expert at prioritization, but less adept at deeper contemplation of any issue. It searches more and more for truisms, for memes, for short and simple explanations, and for facile analogies which do not represent the depth or nuance of any issue. While the tendencies to generalize and oversimplify have always been human cognitive propensities, this type of short-term conditioning is deepening as more data streams become available. This filtering process makes sense, since the huge quantity of information makes it practically impossible to consider all of the data available on any issue before a decision can be made. Yet it has side effects such as attention deficits, shallow thinking, sensitization, hypervigilance, and emotional reactivity. It also requires individuals to choose certain sources of information over others.
The basic brain deals in the language of images and processes these images rapidly. It then chooses a response. It reacts, always with an impulse from its limited menu of options. When bombarded by vast quantities of information coming at it through multiple channels, the speed of its responses may increase and the number of responses it generates may become more numerous, but no more nuanced. Hypervigilance may result from its hyperaroused state. With the increasing flow of data coming at it, the frequency of base brain operations increases. It puts out more reactions per unit of time. The rate at which it perceives and reacts accelerates. There is a filter-up effect whereby the higher evolved brain is affected. Speech becomes more rapid. Executive decisions must be made faster. Human action is accelerated. The compression of time for the individual leads to a compression of history for humanity.
Another response to data overabundance, which we identify as a form of pollution or poisoning, results from the filtering process mentioned above. The primordial brain may ignore the majority of the data it receives simply because it lacks the capacity to process it. The brain has the obvious ability to assess incoming information and prioritize it. It may filter out and dispose of data which does not immediately affect survival. Information overload forces the brain to regard the short term. It discourages deeper contemplation. The primal brain may pay attention to its immediate environment, and ignore the background. Who, for example, pays attention to weather forecasts from the opposite side of the globe?
The brain may also ignore certain kinds or classes of data. The basic brain does prioritize novel stimuli, since these may pose a new kind of threat, triggering avoidance. Or these new stimuli may represent a new opportunity which increases the odds of survival, such as a new food source, initiating an approach response. Yet once it becomes habituated to data of any class, the basic brain may also tend to ignore that data class or to deemphasize it as extraneous information not relevant to survival. By ignoring certain kinds of information, more irrelevant data can be excluded from consideration. The brain can avoid processing whole classes of information in the flood of data which bombards it. This prioritizing, filtering function is one adaptive response to data pollution.
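The prioritize-then-habituate filtering described above can be sketched as a simple counter: stimuli from novel classes pass the filter, while classes encountered often enough are dropped as background. The threshold is an arbitrary illustrative parameter, not a claim about any actual neural quantity:

```python
from collections import Counter

class NoveltyFilter:
    """Toy model of novelty prioritization and habituation.
    Stimulus classes seen only a few times are flagged for attention;
    once a class becomes familiar, its items are ignored as background."""

    def __init__(self, habituation_threshold=3):
        self.seen = Counter()               # exposures per stimulus class
        self.threshold = habituation_threshold

    def attend(self, stimulus_class):
        """Return True while the class is still novel, False once habituated."""
        self.seen[stimulus_class] += 1
        return self.seen[stimulus_class] <= self.threshold
```

With the default threshold, the first three occurrences of a class draw attention; every later occurrence is filtered out, however relevant it might prove in the longer run.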
Such background data may have no immediate effect on survival, yet it may affect the survival of the individual or the species in the longer run. For example, an individual may see that smoking provides the short-term gains of alleviating tension, providing energy, and decreasing appetite for weight loss. Yet examining the long-term effects of smoking is not prioritized by the base brain, which tends not to think in terms of the longer term or in terms of consequences. This long-term cost-benefit analysis is postponed. On a collective level, the use of nonrenewable fossil fuels is harming the environment, according to the weight of scientific opinion. Yet the background data affecting climate is longer-term, since climate by its nature operates over time scales longer than a day, a week, a month, or even a year. It is also classified as background data because planetary overheating often reserves its worst effects for people living in remote regions such as oceanic islands or polar regions. These are the classes of information that base brain processes most often ignore, since primitive brain mechanisms are geared to narrow horizons in space and short horizons in time.
This is precisely the type of data that humans have been ignoring from a collective perspective. It tends to be de-prioritized in favor of shorter-term gains which are economic, political or cultural in nature. Although some individuals have awoken to the environmental damage created by such short-term thinking, longer-term planning is easily thrown out or sacrificed when short-term pain results from implementation of policies designed to ensure the future of the planet.
Ignoring contradictory information reinforces preexisting belief. By ignoring information contrary to previously formed attitudes and beliefs, the brain behaves as though that information were not there, reinforcing a picture of the world that serves up further evidence for what it already holds. The data that is selected and processed thus conforms to preexisting attitude and belief. This self-reinforcing process fragments populations along lines of issue, political and economic affiliation or values orientation. It strengthens an individual’s identification with his own group while amplifying differences with any group which chooses to prioritize different classes of information.
Yet not all data which contradicts preexisting attitudes and beliefs is ignored. The brain also tends to cognize new or contradictory stimuli in such a way that these stimuli conform to preexisting attitudes and opinions. An example is the medieval dunking stool, by which suspected witches were sometimes tried by ordeal. If the individual drowned while held underwater, then she was not a witch. If she survived, she was considered a witch and was sometimes burned at the stake. In this way, the results of the ‘experiment’ were made to fit belief, rather than belief being allowed to change to conform to the newly acquired information. That information was simply that some individuals could hold their breath longer while underwater.
The connections between neurons within the brain exceed the number of sensory inputs which relay data from the environment. This means that the brain is, in some ways at least, more complex than the environment it perceives itself to inhabit. Yet it lacks the ability to completely and accurately assess that environment. Its mental models of the environment are often more complex than its ability to perceive the environment itself.
This means that the brain fills in missing gaps in the ‘picture’ it has of the world, to create patterns and resolve ambiguities. The hypothesis of predictive coding holds that sensory perceptions are in reality interpretations of sensation based on an inner, mental model of the outside world. This allows the organism to ‘cheat’ in its sensing of the world, upon which its relationships with and reactions to the world are based. The brain does this in order to filter out statistical noise from the vast quantities of sensory information in its environment. The brain predicts, and when objective, environmental data does not conform to the prognostication, one of two choices is possible: the brain can change its prediction, or it can change the sensate data it has experienced to fit its forecast. The mental model can be changed, or the data can be changed. Since it is often less cognitive work to change the data than the inner working model of the environment, the brain often chooses to ‘fudge’ the data.
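The two choices just described, revise the model or bend the data, can be sketched as a toy reconciliation step. The tolerance and the partial-update rate below are invented parameters; this is an illustration of the asymmetry between the two paths, not an implementation of predictive coding:

```python
def reconcile(prediction, observation, tolerance=0.1):
    """Toy predictive step: compare an observation with a prediction.
    Small mismatches take the cheap path: the observation is bent to
    fit the model. Large mismatches force the costly path: the model
    itself is partially revised toward the data."""
    error = observation - prediction
    if abs(error) <= tolerance:
        # cheap path: 'fudge' the data to match the forecast
        return prediction, "data adjusted"
    # costly path: revise the inner model (partial update, rate 0.5)
    return prediction + 0.5 * error, "model updated"
```

An observation of 1.05 against a prediction of 1.0 is quietly absorbed into the forecast; an observation of 2.0 finally forces the model itself to move.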
Data may elicit approach or avoidance strategies and trigger corresponding emotions. The basic brain selects which elements to perceive from the constant stream afforded the organism by the data-rich environment of computer networks. And these primitive brain mechanisms perceive in their own way: not as a collection of rational brain processes fluent in linguistic and mathematical terms, but in raw images and other sensory inputs which trigger either approach or avoidance, based on whether data is perceived by preexisting frameworks as opportunity or as threat.
These least evolved brain mechanisms see in either/or terms. Lacking subtlety, they cannot tolerate ambiguity. They are not rational. They are prelogical. They present the higher order cognitive centers with a very limited menu of survival scripts which have not kept pace with human cortical development. These scripted reactions are difficult for the higher-ordered temporal lobe systems to understand or even to control, since their latter-day evolution means that they may not be well integrated with more primal brain processes. Because the basic brain mechanisms react reflexively, their responses are programmed. Since they operate automatically, often below the level of conscious thought, their perceptions and their reactions to those perceptions occur subconsciously. Far from being deliberative, their reflexive reactions are literally lightning fast. The cortical layers, on the other hand, are the seat of slower, deliberative processes.
Computers and their applications were not created by the primal brain, though they are often created for its use and enjoyment. Consider computer games, pornography, clickbait, and salacious news. These are designed for more primitive drives and to evoke emotional responses. Yet many applications created by the mathematical brain assume that the highly evolved cortical layers can and do process and manage the prurient content which flows through these platforms and programs simply because they were created by these higher-order brain centers. This skips a step. The primal brain always has first look at the images and other stimuli supplied by the vast networks at our disposal.
The limbic system deals in the language of emotion and social intelligence, and is, like the instinctual brain processes, often reactive rather than proactive. In the current, post-industrial data environment, it reacts to social media. Communication within these media is both rapid and rapidly reinforced. Communication through social media is a property of emergent evolution. This manifests as a social efficiency which coheres the members of a group to one another and organizes them against outgroups. The existence of opposing groups both defines and strengthens the boundary of any group. Instantaneous communication in a multiparty platform serves to make organization through social media more efficient, more rapid, and stronger. Flash mobs may self-organize and create instant protests, instant crime, or instant cancellation of an entire career or reputation. On the other hand, these same media can organize charitable drives, promote prosocial memes or fundraise for individuals struck by tragedy. Although this emergent behavior, the ability of individuals to self-organize spontaneously, had evolved before the advent of social media, these networked, multilateral electronic media platforms reinforce the process of self-organization because they can communicate information with great speed as well as access other data at the same time, such as newsfeeds. This makes them powerful, and potentially dangerous.
Emergent phenomena are newly manifested properties of a macro-system which were not present in the individual parts of that system. When free to organize themselves, human groups tend toward spontaneous order. Think of a line at a supermarket or traffic along an expressway. Interacting individuals will yield self-organizing structures. Although scientifically controversial, mirror neurons are thought to allow individuals to mimic the behavior of others in a group quite rapidly; the cingulate cortex has been implicated in this mirroring behavior. This spontaneous collective action is evident in financial markets, which lack a central control, yet regulate the prices of commodities, currencies, securities and other monetized instruments. It can also be found in stadium crowds, which can rapidly self-organize around aural or physical cues. These emergent properties are evident in the self-organizing nature of cities and supercities, and in the development of languages. The internet lends itself to emergence, self-organizing into an artificial social, economic and cultural world.
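The self-organization through local mimicry described above can be sketched as a minimal toy: agents on a ring repeatedly copy the local majority, and blocks of agreement form without any central control. The rule and the ring topology are arbitrary illustrative choices, not a model of any particular social network:

```python
def step(opinions):
    """One round of local mimicry on a ring: each agent adopts the
    majority opinion among itself and its two neighbors (opinions
    are 0 or 1)."""
    n = len(opinions)
    return [
        1 if opinions[(i - 1) % n] + opinions[i] + opinions[(i + 1) % n] >= 2 else 0
        for i in range(n)
    ]

# An isolated dissenter is absorbed by its neighborhood in one round:
community = step([1, 1, 1, 0, 1, 1])  # -> [1, 1, 1, 1, 1, 1]
```

No agent intends uniformity; consensus within the block is an emergent property of many purely local copying decisions.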
Higher-order brain processes also tend toward the pattern-forming efficiency of mimicry. The neocortex will fall back upon preexisting ideas and beliefs to facilitate its transactions within the cyberworld it has created, since it can neither practically nor logically analyze every new stimulus from first principles. Our more highly evolved brain complexes do this in order to save time, since we cannot investigate every fact upon which we subsequently must rely to make decisions. For these reasons, we make assumptions. We rely on information provided by others without checking the accuracy of information since we cannot possibly learn everything we need to know to live and to function efficiently during a typical human lifespan. We cannot learn how to build a car from scratch or go to school long enough to learn how to care for ourselves medically.
For this reason, the cortical brain must sometimes substitute facile explanation for true analysis. Looking for similar cases from its past from which to generalize to the present example, it avoids contemplation where it can. This makes it more efficient. Yet this does not mean that contemplation is unnecessary in some instances, especially those involving the long-term effects of certain decisions. In considering the damaging, long-term effects of the implementation of certain technology on the environment, in the decision of whether to prosecute a war, in an individual’s decision over a serious health matter or an economic investment, deliberation is a necessity. Yet the current data environment discourages this necessary reflection. We react rather than respond.
This rapidity of reactions is encouraged by the basic and midbrain structures, which are irrational and less reflective, since rationality and contemplation are not their currency. The effect of data overload on all three brain strata is to induce rapidity, reactivity, and emotionality. The higher-ordered, contemplative functions of the neocortex are conditioned out by information technology. They are not selected for in the current data environment in which humans find themselves immersed. This is, perhaps, one explanation of the riddle of the Fermi Paradox.
These action-reaction cycles reach a point where they are self-reinforcing and self-sustaining. Another positive feedback loop reaches runaway status, and humans lose control of the Information Cycle. The short-term result is greater social cohesion within national and subnational political, cultural and social groups. The longer-term result is fragmentation of the larger group of which these smaller groups count themselves as members. Society devolves into warring sects organized around national identity, identity politics or other affiliations. The result is rapid fragmentation. The inferential evidence which supports our hypothesis can be found in the lurching of nations toward civil war, and in the fragmenting of the international order toward total war.
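The runaway status named above is simply positive feedback: each reaction becomes the next stimulus, amplified. A toy iteration with invented gain values shows the difference between a loop that damps out and one that escapes control:

```python
def run_cycle(signal, gain, rounds):
    """Toy feedback loop: each round, the reaction is fed back as the
    next stimulus, amplified by 'gain'. With gain > 1 the loop runs
    away; with gain < 1 it damps out. All values are illustrative."""
    history = [signal]
    for _ in range(rounds):
        signal *= gain
        history.append(signal)
    return history

damped = run_cycle(1.0, 0.5, 10)   # self-limiting: shrinks toward zero
runaway = run_cycle(1.0, 1.5, 10)  # self-reinforcing: grows without bound
```

The crossing of the gain threshold, not the size of the initial signal, is what decides whether the cycle remains controllable.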
What is lost among all this disintegration is a single, simple fact: the primary goal of the life force, as a whole, is a unifying balance. Yet the secondary environment through which network and machine intelligence has emerged has had a shattering effect. Many of the problems in the human world today are the result of this technologically-reinforced fragmentation.
Secondary Evolution of Data Nets, as Influenced by Primary Evolution
Humans have become captured by the Information Cycle, which induces individual as well as collective stress response triggered and retriggered by the abundance of low-quality information delivered at increasingly accelerated rates. This is a self-reinforcing process which amplifies differences between self-organizing groups of individuals.
More evolved organisms tend to conserve primal brain processes which are responsible for basic survival functions. The human brain houses within it a hindbrain region which maintains homeostatic functions within the organism. These most primitive mechanisms of the human brain are not concerned with cooperation as a survival strategy; it is the limbic midbrain processes that are most concerned with cooperative efforts at survival.
From a data perspective, limbic midbrain mechanisms have an affinity for social media. These paleomammalian brain processes are drawn to patterns and aggregations which represent viewpoints similar to their own. This ‘part’ of the brain does not search out universals common to the whole species or to the biosphere as a totality. The limbic system shared with other mammals is atavistic, focusing on affiliations with family, extended family, band, clan and tribe, up to and including national affiliations. It may group itself according to nation, ethnicity, race, gender, class, or another organizing principle.
When sorting and prioritizing data, humans tend to ignore information which they find contrary to the preexisting tenets of the group with which they identify. An individual’s identity is now defined at least in part by affiliation with the group. In earlier human evolution, an individual’s survival depended on the survival of the small group. In social and eusocial species, if the group does not survive, neither does the individual. Human survival still depends upon group membership, for no one can meet all their survival needs alone.
Groups cannot exist without boundaries, whether the membranes are concrete or abstract. Nation-states are defined by borders, racial and some ethnic groupings by physical characteristics. Some cultures and religious groups may be defined by celebrations and rites, others by dress and music. The group can only exist based on differences between itself and other groups, whether these differences are physical, linguistic or cultural in nature. Often, these differences create conflict. Yet conflict itself may be necessary to group identity, serving as one of its organizing principles. Groups are defined by their differences and may have reason for being only as they exist in opposition to other groups. Sports teams provide a good example in this category. It is obvious that fans of different sports teams have allegiance and affinity to the ‘home team’ based exclusively upon geography, marketing (team logo and uniform) and the simple fact that their team competes with other teams. Otherwise, the differences are purely random. Yet, lest anyone doubt that identification with even these meaningless groupings has a powerful effect on midbrain and base brain processes, consider the fact that individual fans sometimes die as the result of disputes over team loyalty.
Social media and other types of media often operate by this principle of ingroup versus outgroup identification. Media amplify these distinctions since these channelized data streams are instantaneously communicated, received and reacted to. Electronic media do not cause atavism, but they do amplify it. Although the basic and midbrain processes of humans have existed for millions of years, the capacity of these subcortical systems to react, to defend and to organize is greatly augmented by media which now provide nearly instantaneous, two-way communications, and an array of new social groups with which to identify.
In the past, individual humans and their bands were only aware of the environment which they could sense proximately, and over which they had immediate physical control. Now, electronic media grant individuals and their small groups an awareness of an environment which far exceeds the more immediate environment to which they evolved to respond. At its greatest extent, this environment is the entire Terran biosphere, and perhaps even beyond that. Only the neocortical brain corresponds to an awareness of a true species mind and of a universal Gaian system. Yet the processes in the brain which are given access through technology to this world-girdling data environment also include the more primal, survival-based and atavistic levels of the brain. It is these more primitive brain processes which tend by their very nature to override the cortical executive functioning which controls basic impulses. In other words, primal brain mechanisms have access to more information, covering a greater geographic scope, than they were evolved to process.
The kinds of data filtered through and broadcast by a given website, news outlet or social media platform often represent information consistent with the preexisting worldviews held by those who seek information from that site or platform. This creates a selection bias in favor of data which conforms to existing opinions. This observer effect is a consistent feature of human cognition. The observer bias is, as has been noted, amplified by electronic media.
Beyond their social function, the limbic midbrain operations are also responsible for translating the primal instinctual drives of the base brain into emotions, which represent the language of the midbrain. It is perhaps no coincidence that beyond their cohesiveness, social media also engender strong emotional reactions among those who participate in their various online platforms and forums.
The emergent phenomenon of a collective consciousness – of which the worldwide web is a manifestation – develops in part through these social media, which in turn shape the structure of the web itself. These artificial, cybernetic structures are strongly influenced by limbic system processes, the paleomammalian midbrain mechanisms responsible for social cohesion and ingroup versus outgroup identification. An important, defining criterion for and aspect of the midbrain structures is that they lack awareness of a true species mind. As with the protoreptilian brain, the paleomammalian brain serves the purposes of evolution by enhancing survival, yet below the level of the species. It mediates the fight response of the individual and identifies that individual with its social group, while the basic brain seeks to maintain homeostasis within the individual organism. Since they fall below the threshold of a species awareness, the paleomammalian brain interactions may serve the purposes of the species through adaptation and reproduction, but they still represent microevolution. Like the protoreptilian functions from which they emerged, these midbrain functions are perceptually incapable of seeing wholeness at the level of the species or above.
These nascent beginnings of a social context and a social consciousness turn out not to be very social, or very forgiving. Most individuals identify with local groups which are subsets of the human race and of the total biosphere. In mammals, these groups, be they a pack, a troupe or a pride, will often seek to destroy members of their own species in other groups, or other groups as a whole, in violent competitions for territory and limited resources. This happens, for example, in wolf packs, even though wolves within the same pack are extremely loyal to one another. In part, this is how microevolution – evolution below the level of the species – operates. If the reader believes the description of intergroup violence does not extend to homo sapiens, simply look to the genocidal aspects of human history, and to current events. A random sampling should suffice.
The protoreptilian brain accomplishes the purposes of natural selection by advancing the interests of the organism. It aids the individual in surviving, defending a territory, consuming calories and hydrating, and passing on its genes, often at the expense of other individuals within the species. More developed paleomammalian brain centers aid in the perpetuation of social species by fostering the interests of the group, yet this is often at the expense of other groups within the species. This has worked well as an evolutionary scheme in the past, since adaptation often works through competitive processes. However, the enormous adaptive advantages with which human intelligence endows its species have thrown these intraspecific and interspecific competitions out of balance. This is, in part, due to the rapid cortical evolution of humans and their ancestors, which has occurred relatively recently and which is not well understood. Intelligence and its rapid emergence are thus, in some senses, maladaptive for humans, but only due to the lack of integration of cortical structures with the more primitive aspects of the brain as a whole. This can be seen in the case of individual human development, where lack of integration of executive functioning and impulse control in individuals, particularly males, under the age of 25 often leads to maladaptive behaviors such as violence, aggressive driving and sexual misconduct. Yet in these same individuals, maturation after that age may be explained by greater integration of cortices with the more primitive brain processes.
The rapid onset of intelligence from a macroevolutionary perspective may carry the solution to its own problem, for in the neocortex of homo sapiens lies the ability, perhaps for the first time in Gaian evolution, for a single organism to identify with its species as a whole, and for that same organism to contemplate wholes rather than parts, and particularly the Gaian whole. In other words, humans are perhaps the way in which Gaia knows itself. And beyond the Terran world, human beings may be the way in which the cosmos knows itself as well. The universe is, in this small sense at least, self-aware.
Individual humans, and humanity as a whole, may be likened to a single cell in a gigantic body. The vast remainder of the body, consisting of less cognitively-evolved biological forms and of inanimate forms, is either insufficiently aware or not aware at all of the body itself. But the infinitesimal proportion of the body which is self-aware is beginning to leverage the body as a whole. It is spreading its own awareness to the entire body. This is a long, slow evolutionary process, yet once it begins, it is inexorable. In addition, it builds upon itself in an accelerated fashion. Knowledge builds on prior knowledge, so that the evolution toward this awareness accelerates through a self-reinforcing, positive feedback loop.
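The compounding character of such a feedback loop can be sketched numerically. The growth rate and step count below are hypothetical parameters chosen only to illustrate acceleration, not measurements of any real process:

```python
def accumulate_knowledge(initial: float, rate: float, steps: int) -> list[float]:
    """Compounding growth: each step builds on all prior knowledge.

    `initial`, `rate` and `steps` are illustrative parameters only.
    """
    levels = [initial]
    for _ in range(steps):
        # Knowledge builds on prior knowledge: growth is proportional
        # to what has already accumulated.
        levels.append(levels[-1] * (1 + rate))
    return levels

levels = accumulate_knowledge(initial=1.0, rate=0.5, steps=10)
# Each increment exceeds the last, so the curve accelerates:
increments = [b - a for a, b in zip(levels, levels[1:])]
```

Because each gain is proportional to the running total, the increments themselves grow: the defining signature of a positive feedback loop.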
These concepts are related to versions of the strong anthropic principle, which holds that the cosmos must have those qualities which permit the development of life. Even bolder, in the participatory anthropic principle, Princeton physicist John Wheeler proposed that observers are required to bring the cosmos into existence. In the final anthropic principle, cosmologists John Barrow and Frank Tipler hypothesize that intelligent information processing inevitably arises in the universe, and once it does, it can never die out. This hypothesis is in direct contravention to some possible explanations for Fermi’s Paradox, which hold that it is probable that intelligence extinguishes itself or becomes extinct.
Neither the brain processes common to reptilian organisms nor those common to nonhuman mammals can comprehensively look to the good of their species as a whole, much less to the welfare of other species or to the whole earth biosphere itself. These cognitive and reflexive mechanisms common to nonhumans are most often concerned with fight/flight, with individual aggression and conflict, or with approach behaviors such as sexual reproduction. The mammalian brains of social species are most often concerned with collective aggression and conflict between groups, or with identification with, at most, a parochial social group.
When the interests of the group are secured, the interests of the individual organism reassert themselves as paramount, at least until group survival is again threatened. Altruism thus works well when the group is threatened, but breaks down and works contrary to the selective process when the group’s needs have been met. Thus, when the group is ‘safe’, individual agendas return to the fore.
According to behaviorist schools of psychology, the ‘nonhuman’ portions of the brain shared with common ancestors can be described as operating under the influence of a stimulus-response cycle. A stimulus in the immediate environment elicits a response from a predetermined menu of options, and the primitive brain reacts with a choice selected from this menu. At first, choices are dichotomized between approach and avoidance goals, though it is also possible that a stimulus is regarded as neutral, as neither beneficial nor harmful. If the stimulus is regarded as potentially harmful, the primitive mechanism will either react with fight or flight. Flight is further divided into freezing, hiding or submitting, depending upon the nature of the threat, the sophistication of the organism’s behavioral repertoire, and evolutionary and counter-evolutionary developments such as the organism’s ability to camouflage itself.
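The dichotomized menu described above can be sketched as a simple selection function. The category names and threat labels here are hypothetical illustrations chosen for clarity, not terms from any formal behaviorist taxonomy:

```python
def primal_response(valence: str, threat_type: str = "confrontable") -> str:
    """Select a response from the fixed stimulus-response menu.

    valence: 'beneficial', 'harmful', or 'neutral'
    threat_type: for harmful stimuli, crudely determines fight
                 versus one of the flight variants. Labels are
                 illustrative assumptions, not established terms.
    """
    if valence == "beneficial":
        return "approach"
    if valence == "neutral":
        # Neutral stimuli are simply ignored by the primitive brain.
        return "ignore"
    if valence == "harmful":
        # Flight subdivides into freezing, hiding, or submitting,
        # depending on the nature of the threat.
        flight_variants = {
            "ambush": "freeze",
            "pursuit": "hide",
            "dominance": "submit",
        }
        return flight_variants.get(threat_type, "fight")
    raise ValueError(f"unknown valence: {valence}")
```

The point of the sketch is the poverty of the menu: every possible stimulus, however novel or abstract, must be mapped onto one of a handful of fixed outputs.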
As technological evolution is an outgrowth of biological evolution, artificial information systems augment this stimulus-response cycle in humans. These ‘artificial’ survival responses are sped up due to the delivery of large quantities of data to humans at the speed of light. Most of this received data has little to do with actual survival. Much of this electronically-derived information is not relevant to the here-and-now, and yet the stimulus-response cycle evolved to react to threats in the proximate environment in which the organism evolved. Not evolved to respond to time frames other than now or the immediate past or future, or spaces other than the immediate territory surrounding and claimed by the organism and its group, the base brain mechanisms of humans have few other reference points to interpret the more wide-ranging and complex scenarios with which this data confronts them. Their primal brain processes lack the context to deal with much of the data they receive through these artificial channels. The information is often ignored if the stimulus-response menu interprets it as neutral, neither beneficial to nor a threat to survival.
The electronic data may be simplified to comport with preexisting patterns. It may be filtered and interpreted as what is known. It may be interpreted as threat even though that threat does not exist, is contingent, or remote in time or place.
The cortical laminae do have the ability to interpret the subtleties inherent in these novel stimuli. They can distinguish gray areas beyond mere approach and avoidance, and can separate threats which are contingent or remote in time or place. Yet, as has been stated, these more evolved layers and lobes within the human brain are not fully integrated into the lower levels, and cannot always communicate with these primal mechanisms in a comprehensive way since the abstract languages of semantic and mathematical symbolism created by the temporal lobes are not translatable into the lexicon with which the more basic brain processes are familiar: images, other sensory data, instincts and emotions. Simple brain processes react to all stimuli with approach or avoidance tactics. This is a very limited, visceral response set. The less evolved brain aspects choose from the menu in their limited response repertoire to maximize individual and small group survival, selecting the reaction deemed most optimal given the scenario with which they are faced. This simplicity allows rapid reaction to stimuli, which provides a definite survival edge in response to selective pressures. Zebras can’t ‘think’ before running. Snakes can’t deliberate before striking. The basic brain structures, lacking the more evolved nuance of the highly-evolved brain, cannot distinguish between situations in which the organism’s survival is truly at stake, and those which merely present social or emotional challenges.
These limited responses are intensified by artificial information systems, which can access remote information concerning stimuli not immediately proximate to the organism, allowing the primal brain reflexes to respond remotely to such stimuli as if they lurked in the immediate environment. This was not how these parts of the brain were evolved to respond. They evolved biologically to protect the organism, and perhaps its small group, from immediate threats which existed proximately in time in its local environment, up to and including its territory. Its avoidance techniques – which involve fight or flight among other survival responses – could only cause proximate damage because these techniques acted through its own body and perhaps the bodies of other group members. When these parts of the brain developed, tool use had not yet evolved, for the most part. However, with the advent of modern instruments of surveillance, warfare, and other implements of aggression, primitively-directed survival responses can now cause remote damage on a massive scale, instantaneously.
Since the protoreptilian and paleomammalian structures perceive everything as either a threat, an enhancement to survival or neutral, they perceive even remote threats operating contingently far outside the immediate territory as actual or potential threats to the survival of the organism or its small group. And they respond accordingly, with either approach or avoidance, which may, as has been noted, include aggressive tactics. With the advent of remote surveillance technology such as satellites and drones, these approach/avoidance behaviors often work to address potential threats over remote distances. With the advent of the modern nation-state, the defined territory has increased far beyond that designed by evolution for even those creatures with the largest territories. This can lead to damaging consequences like war on a vast scale, since borders can stretch for thousands of miles and include many different state actors competing for limited resources such as water, arable land, and other valuable commodities. There are almost 200 nations in the world today, and the majority (88%) have standing military forces. This multiplies the odds of conflict, which seem much greater than they ever were when fewer people lived in smaller groups with less territory and much more primitive weapons. There are nine nuclear states, and some of these are threatening war with one another as of this writing. The evolutionary system was not built to handle so many trigger points.
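The arithmetic behind these multiplying trigger points can be made explicit: the number of potential conflict pairs grows quadratically with the number of armed states. The figures are those given above (roughly 200 nations, 88% with standing forces):

```python
from math import comb

nations = 200                   # approximate figure from the text
armed = round(nations * 0.88)   # 88% maintain standing forces
dyads = comb(armed, 2)          # unordered pairs of armed states

# Sanity check: C(n, 2) = n * (n - 1) / 2
assert dyads == armed * (armed - 1) // 2

# For comparison, ten neighboring bands yield only C(10, 2) = 45
# potential conflict pairs.
small_world_dyads = comb(10, 2)
```

Roughly 176 armed states yield over fifteen thousand potential conflict dyads, several hundred times the pairings available to a handful of neighboring bands, which is one way to make precise the claim that the evolutionary system faces far more trigger points than it was built to handle.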
There are many examples of nuclear ‘close calls’, three of which have been mentioned above. Distant Early Warning radars assess for threats over the horizon. The ocean floors have been wired for sonar to detect nuclear-powered and in some cases nuclear-armed submarines. Stealth aircraft operate as a countermeasure to evade radars. Then radars were designed to detect stealthy aircraft. Satellites of every kind are deployed as remote sensors. Then planners engineered hypersonic missiles and directed energy weaponry which make some of these early warning systems obsolete. There are satellite and anti-satellite weapons. There is measure, and countermeasure. This military arms race is an outgrowth of the evolutionary arms race exemplified by evolution and counter-evolution.
Mass casualty events have, up to this point, been kept in check by the executive functioning of the highest order brain processes. The concept employed is mutually assured destruction, or deterrence. It works through the instillation of fear of consequences. Therefore, it utilizes the midbrain’s experience of fear, and the base brain’s interest in its own survival, combined with the forebrain’s knowledge of consequences. In this way, the three brain processes – protoreptilian, paleomammalian and neocortical – can cooperate with one another for good. But the structure of deterrence is perched upon a hair trigger, and only needs to fail once in order to prove its lack of value.
In fact, as of this writing, deterrence between the nuclear powers has broken down, as some powers have built first use of nuclear weapons as an offensive strategy into their military doctrines. This first use policy includes the use of tactical – or battlefield – nuclear weapons, and the possible atmospheric detonation of thermonuclear weapons to create an electromagnetic pulse (EMP), in a way that, it is gambled, would not cause a nuclear adversary to cross the threshold for the strategic use of nuclear weapons, which would destroy humanity.
Simply because nuclear or other mass casualty war has not occurred yet does not mean it is unlikely to occur in the future. Although the best predictor of future behavior is past behavior, we may exist on a planet of observers who have simply been lucky enough to avoid such conflict up to this point in history. One of the answers to Fermi’s Paradox may be that on other worlds, the probabilities of mass casualty conflict extinguishing intelligent species are greater than even, so that such civilizations have gone extinct prior to their ability to colonize other worlds or to send signals of their presence to intelligences on other planets. If this is the case, the prognosis for avoiding extinction of intelligent life on earth may be poor. As nonproliferation regimes fail and weapons of mass destruction (WMDs) fall into more and more hands, the likelihood of mass casualty war increases over time. In less than a century, nuclear weapons have gone from one nation’s arsenal to nine. Chemical, biological and now cyberweapons are wielded by several world powers. These WMDs threaten civilian populations. Even the possession of these WMDs is destabilizing, especially thermonuclear and cyberweapons. An adversary may adopt a ‘use it or lose it’ mentality and use such weapons before they are knocked out, or before the same type of weapon is used against that adversary.
With the notable exception of proxy wars, great power war has been absent since the end of World War II. This may be due to the fact that humans have learned, through their neocortical processes, to restrain state actors and control the impulse for aggression due to fear of consequences from mass casualty events. As the implements of war have grown exceedingly efficient, humans see the costs of widescale use of even the increasingly deadly conventional forces at their disposal. However, although not a statistical argument, it could also be put forth that humanity is ‘due’ for great power conflict in the form of world war. The belligerents in such a contest would most likely be armed with mass casualty weapons.
It is a geostrategic truism that wars occur when perceptions of power shift. In ancient Greece, Athens was a rising power with a powerful navy, and Sparta was an established power with superior land forces. Sparta attacked Athens while it still saw itself as having superior military advantage, instigating the Peloponnesian War, leading to the decline of Athenian power. From this historical event, we have drawn the observations described as Thucydides Trap, in which a superior, established power may choose to attack a rising power before the hegemon loses its military advantage. According to some estimates, in the past 500 years when such a scenario has presented itself, there has been conflict in 12 of 16 occasions, or 75% of the time.
The current international order and its Western hegemons are being assailed by great powers which challenge the legitimacy of the Western liberal order. As of this writing, war has broken out between Russia and Western-sponsored Ukrainian forces. Russia has explicitly characterized this conflict in these stark, Thucydidean terms.
Still, the dawning of a new, species mind, associated with the highest order brain, is asserting itself. This neocortex is larger than the more ancient structures combined. The recognition that the species as a whole, and its superbiome, Gaia, may not survive mass casualty warfare or ecological apocalypse has dawned upon humanity. In a way, the development of implements of mass destruction and global overheating may have reinforced the notion of a true species mind in humans.
The paleomammalian brain, roughly corresponding with the ‘seat’ of emotion, is highly integrated with the most primitive parts of the brain. These midbrain structures react with the emotion most commonly associated with threat: fear. Although it is not known whether reptiles and other creatures with fewer neurons experience any emotion, if they do, they most likely experience fear (associated with avoidance) and pleasure (associated with approach goals).
Fear may be an emotional expression of the survival response. It trips a cascade of neurotransmitters and hormones which prepare the body-brain for one of the five survival responses described earlier. Fear is often experienced by humans as the secondary emotion of anger. It is a secondary emotion since it appears to be derived from fear. Anger is an emotional manifestation of aggression necessary for the fight response and therefore for survival. As has been noted, artificial information systems amplify these fight/flight reactions by providing more data at greater speed. The organism and its group can now perceive and react to remote events which have nothing to do with the immediate survival of the organism or its group. Yet the reactions of nation-states and transnational defense treaty organizations may indeed threaten the survival of the world with their responses to these perceived threats.
The reactions of the autonomic nervous system associated with the base brain operate beyond volitional choice, yet they can be conditioned so that they react not only to survival needs but also to stimuli which do not correspond to any known physiological requirements of or threats to the organism or its group. Originally designed to alert the animal to, and to prepare the animal for, disturbances in its immediate field of awareness in its instant environment, the autonomic nervous system can be conditioned to anticipate and pair certain environmental cues with certain instinctual and emotional responses. These autonomic reflexes can elicit physiological imbalance within the organism and, in humans at least, create emotions. An example may be an air raid siren designed to warn of a mass casualty event.
Reflexive drives and emotions can be paired with stimuli that cue the organism to perceive a threat where none really exists. This is one of the purposes of propaganda. Based upon the artificial data nets humans have created, information provided through websites may be presented or interpreted as threat which in fact is not a threat to survival at all. Many pogroms and genocides throughout human history are the results of just such propaganda, eliciting mass hysteria. In the Stalinist era, 20 million Russians were sacrificed to these purges. In China, the figure is closer to 75 million. These were both before the era of the internet.
Web-based information may provide a nonrepresentative sample of events in the remote environment to convince the perceiving individual or its group that such incidents are occurring in a more widespread fashion than they are in fact occurring. Or ‘facts’ may be presented or interpreted as disturbances in the environment which are happening now, when such threats are mere contingencies which may possibly occur in the future. With the advent of electronic cybernetic information systems, the propaganda value of data increases. This disinformation reinforces preexisting belief. The speed and quantity of data flows provide the opportunity for more powerful conditioning of the individual members of any society. Images may be Deep Faked through Computer Generated Imagery.
When presented as threats, the emotions elicited are always fear-based. Being fear-based, they are also survival-based, and so people pay them close attention. Data content may be political, economic or religious in nature. It may be class- or race-based, or organized around ethnicity or any other small group affiliation. Yet the emotional and physical responses which occur automatically within the organism when it is stimulated by such content will be identical to those elicited by stimuli which threaten actual survival. The stimuli do not have to involve direct threats to the survival of the individual or its group in its immediate environment in the present instant, and yet the emotional and survival responses will be identical to responses to stimuli which do threaten survival here and now. The neurotransmitter and hormonal cascades induced will be identical to those triggered by actual threats. This is due to the limited response sets available to the basic and midbrain structures. Fear, anger, aggression, and the concomitant adrenal responses will be identical, whether the stimulus is of a national, class-based, ethnic, racial, economic, political or religious nature, or whether the stimulus is a direct threat to the physical survival of an individual or its group.
Since any outgroup triggers a fear response in the midbrain’s amygdala, and since the primate midbrain can recognize faces, the human amygdala shows heightened activation when it is confronted with the faces of strangers. Class-based, political, religious, economic, racial and clan-based animus are thus often triggered by unconscious survival selections founded upon the limited menu of responses available to the limbic system. Propaganda and other disinformation campaigns exploit fear, which is associated with the acute stress response. The propaganda campaign prosecuted by Joseph Goebbels in Nazi Germany against Jews and other minorities provides a clear example of this strategy.
Social media hone and amplify these tendencies by transferring them from individual brains to collective consciousness. Humans are a social species. The emotionally-driven midbrain developed as a survival mechanism to cope with this social aspect. The limbic system represents emotional and social intelligence. Add to this the partial anonymity and insularity which social media afford, and these electronic media can radicalize individuals by permitting the expression of socially-unacceptable content in anonymous or semi-anonymous form without the mediating effect of direct physical contact between individuals or groups. Insularity provides the perception of safety of expression without direct physical consequences. The fear of physical reprisal and social disapproval inherent in direct physical aggression is absent in social media. This allows cyberaggression. These media allow the unveiled display of abuse to be posted anonymously, which increases the probability of actual physical aggression at some later time, either between individuals or between groups. This leads to further social fragmentation rather than to a harmonious culture. Thus, media designed to share information actually have a fragmenting effect. The difference between state actors and social media is that social media are, to an extent, emergent, self-organizing, transnational systems. Affiliation with these smaller groups is chosen, and one individual may count themselves as a member of many such groups.
This tendency toward removal of the fear of consequences for acts of cyberaggression extends from smaller social media aggregations all the way up to nation-states. Cyberwarfare itself, an organized yet veiled expression of this aggression, provides remote, instantaneous attack. It grants destructive power to often anonymous, untraceable actors. Due to non-attributability of cyberattacks (which may take the form of espionage, hacking, election interference, cyberpropaganda, ransomware or other forms of electronic aggression), the moderating effects of deterrence are removed or lessened.
Cyberattacks represent one aspect of hybrid warfare – war waged below the intensity of overt physical conflict. Also known as grey war or grey zone warfare, hybrid war is war waged by ‘other’ means, and it has the potential to spill over into direct physical conflict, or hot war. At its most destructive, cyberwar can yield mass casualties by attacking power grids and critical software. By blinding the surveillance of a militarily capable enemy, cyberwar may lead to miscalculations and all-out war. Yet what humans fail to foresee is that their own primitive brain processes, whether they are protoreptilian or paleomammalian or the two acting in concert, drive this technological conflict. Although the means of destruction are designed and built by the higher order cortical brain, the implements of war, be they social media, cyber, conventional or nuclear in nature, are the ultimate possessions of more primal processes.
Informational Paralysis
The speed with which increasing quantities of data bombard the brain causes it to become increasingly overwhelmed. Too much data paralyzes. It prevents or delays informed decision, and makes rational choice less likely, not more likely.
Individuals cannot possibly process all of the data commoditized for their consumption, both due to its quantity and the speed with which it is delivered to them. The overabundance of data is a form of poisoning. It is data overdose. In the biological world, many toxins are a matter of dose. What is beneficial in smaller quantities has the power to poison in larger amounts. Since the informational world is an extension of natural ecology, humans should expect their reaction to an overabundance of data to be similar. Information itself is not cognitively toxic, but too much data can be.
One way the individual brain, and the groups with which that brain is associated, deals with this data overload is by rejecting nonconfirmatory data. Nonconfirmatory data is information which disagrees with previously formed belief or for which the mind has no context. It represents the unknowns. Sorting and prioritizing allow the brain to ignore background information. This protects the brain from cognitive overload, for if it did nothing but evaluate incoming information, it could take no action. It would be paralyzed by data ingress, especially since much of the data with which it is barraged is irrelevant to its survival or is mutually contradictory.
Rejecting or ignoring nonconfirmatory or noncontextual data protects the organism from paralysis. Constantly examining the influx of new data of contradictory kinds, opinions and values prevents the brain from choosing. Multiple vantage points can eventually cancel each other out, and individuals receiving conflicting data may be unable to decide based upon that reciprocally repudiating data. The result is informational paralysis. This phenomenon was recognized decades ago by the U.S. intelligence community. The National Security Agency collected more signals intelligence – or SIGINT – than its employees could process. The NSA and other signals intelligence agencies lacked the human analysts needed to interpret the data. Information channels backed up for long periods, or simply remained unanalyzed. A related problem was the graphic representation of this overabundance: charts compiled from the vast reams of SIGINT contained so much information that they were unintelligible, so confusing as to be meaningless.
Web-based ecologies contain so much data coming in from so many channels that it is possible to find data to confirm almost any opinion, theory or hypothesis. The result is a projective, cybernetic ‘echo chamber,’ where an inquiring individual’s or a group’s preexisting ideas about themselves, about others and about the world are validated and reinforced. The internet mirrors opinions. It reflects what is sought. This mirror effect reduces the efficacy of web-based platforms as sources of valid data.
A glut of information does not mean quality data. Information may not inform at all. Rather, it may disinform or misinform. Journalism, previously a fact-finding and fact-reporting mission, often devolves into opinion journalism in a multi-channel environment. The proliferation of data sources does not lead to a more informed populace; it may lead to a less informed polity. This trend toward being underinformed results when issues are not explored in depth. Many individuals sample newsfeeds and other platforms which aggregate news and other sources of information. In-depth treatments of issues, of the kind found in longer articles and books, become a lesser share of total information consumed. Deeper reflection which engages the higher order brain centers is discouraged. A poorly informed populace is more easily manipulated through propaganda.
Humans have always been vulnerable to propaganda. In the age before the printing press, people were poorly informed. Yet in the age of mass marketing made possible through the advent of mass media, we have become more vulnerable to manipulation. Although these trends are not new, they are reinforced by the ‘dumbing down’ of the internet.
These vehicles for mass manipulation appeal to the primal brain centers; to fear and other emotions and instincts. They involve participation mystique, a phenomenon heavily associated with the defense mechanism of projection, whereby individuals or groups ‘throw off’ internal characteristics onto other individuals or groups. In participation mystique, the projecting individual or group has difficulty distinguishing its own subjective identity from the ‘object’ – the person or group – upon which associations are projected. It involves an unconscious identity with this object, and it is often experienced as a collective phenomenon, with whole groups identifying unconsciously with a person, group or other entity upon which primitive, base emotions or drives are projected. In the case of misinformation, disinformation and propaganda, participation mystique often channels base fears and desires of a subliminal, negative nature. The material projected is often what is feared, but can also include other socially unacceptable emotions, ideas, images or impulses.
Technologically Reinforced Atavism
When too much information overloads the brain, much of it will remain unprocessed. Some of it may be sorted and ignored because the individual finds that it lacks context. However, simply because incoming data lacks subjective context to the individual does not mean that it is objectively irrelevant. Individuals may not agree with and may therefore ignore scientific data on planetary overheating or the severity of an infectious disease, but this does not mean that they won’t be affected by it.
The basic brain sorts stimuli according to physical proximity and very short time horizons, since it lives in a perpetual here and now. This local reality is confined to that portion of its territory which it can perceive and of which it can be immediately aware through its senses. Its time sense concerns itself only with that proximate time which immediately affects survival and perpetuation. It does not plan. Information relevant across longer time horizons or relating to the physical background is deprioritized or rejected.
Although the cortical brain takes account of longer time horizons and more remote physical spaces, this advanced brain process must also discard much of the time-related data it receives if it receives too much information. The same applies to data which may affect more remote regions of space not proximate to its immediate environment. The neocortical attention span will shorten when it has to metabolize more units of data at a faster pace. We refer to this data, which is disregarded because it is more remote in time or space, as noncontextual information.
However, some noncontextual data which the more advanced cortical brain deprioritizes may be quite necessary for survival. A massive volcanic eruption halfway across the planet may affect survival, as would a serious warning from a nuclear-equipped peer competitor threatening to use its tactical nuclear weapons should certain events occur in the future. In prior extinction events which drove the macroevolution of the planet and its species, the source of extinction came, for most creatures, from a remote geographic source. Examples include the Siberian Traps, volcanic eruptions which in all probability contributed to mass extinction, and the bolide impact near the Yucatan Peninsula at the end of the Cretaceous Period which contributed to the extinction of the large dinosaurs. Most of the organisms affected by these events lived far from them in a relative sense. These triggers did not exist in the immediate environment of these organisms, yet the catastrophic developments were highly relevant to survival. The same can be said of the Little Ice Age, a period of cooling in the Northern Hemisphere which has been linked to the failure of crops, social unrest and increased mortality during the 17th century in Europe. Several causes have been proposed for this cooling period, including changes in the earth’s orbit, decreased activity of the sun, volcanism, and changes in ocean currents. These causes would not have been seen as relevant by anyone alive at the time, and would most likely have been entirely discounted.
Noncontextual data may be ignored, but it may be material to human survival. Canyoneers in slot canyons may hike under sunny skies, with storm clouds threatening a mountain system dozens of miles away. Sensory awareness, and even logic, may inform them that the storm is irrelevant, yet hikers in such environments are sometimes drowned by flashfloods in exactly this type of situation. Noncontextual data may act over the longer term and involve more remote parts of the globe relative to certain human observers, yet it may be relevant to human survival.
Climate data serves as an example of this type of information, which impacts human survival over the longer timeframe yet which cannot be observed as causal over the short term. It is impossible to attribute a particular storm, or even a drought, to global warming. Nor can humans directly observe that the burning of carbon-based fuels leads to planetary overheating and to all of the consequences that flow from the greenhouse effect.
Ignoring noncontextual data may provide one explanation for Fermi’s Paradox. Selection bias, or the observer effect, may tempt human observers into believing that life in general and civilization in particular will escape extinction, since both have survived up to this point. Yet we may simply be lucky observers who have survived despite the greater probability, over time, of being extinguished either by a natural cataclysm or through some agent of self-destruction, such as war or climate change. Humans tend to be oriented toward data which affects their present or their near term. Geologic time is measured in billions of years. Civilization is measured in thousands. Since we have been on earth in our present technological state for such a short time, and since mass extinctions punctuate many millions of years of relative stasis for lifeforms, we shouldn’t take comfort in statistics.
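This survivorship logic can be made concrete with a purely illustrative calculation (the one percent per-century risk figure is an assumption, not a measured value): even a small constant chance of extinction per epoch compounds, so a long unbroken record of survival becomes increasingly improbable, even though every surviving observer necessarily looks back on exactly such a record.

```python
# Illustrative sketch of observer selection bias: assume a hypothetical,
# constant 1% chance per century of a civilization-ending event. The odds
# of an unbroken record shrink geometrically with time, yet only survivors
# are around to consult the record.

def survival_probability(per_epoch_risk: float, epochs: int) -> float:
    """Probability of surviving `epochs` consecutive periods of equal risk."""
    return (1.0 - per_epoch_risk) ** epochs

risk = 0.01  # hypothetical 1% extinction risk per century
for centuries in (10, 50, 100, 300):
    p = survival_probability(risk, centuries)
    print(f"{centuries:4d} centuries: {p:.3f} chance of an unbroken record")
```

Over 300 centuries the survival probability falls below five percent under this toy assumption, which is the sense in which present survival offers little statistical comfort.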
Nor, as we have stated, does more or faster data make humans more informed, more aware, more intelligent or more adaptable over the longer term. The contrary may be true. In order to further filter the oversupply of data in the informational environment, we have described the tendency for humans to ignore or reject information which is deemed inconsistent with preexisting belief, attitude or opinion. This deprioritizes noncontextual information. If an individual or group is biased against arguments that there is climate change caused by hydrocarbon fuels, they will tend to ignore data which confirms the existence of human-inspired climate change. The same would hold true for information which supports the relative mortality of Covid-19 or the efficacy of vaccines against it. When humans pay attention only to information which conforms to preexisting data channels, nothing new is learned. Individuals are not informed. Rather, they are reinforced. This is the primary value of propaganda. Paying heed only to data streams which reinforce preexisting belief may decrease the odds of survival: here-and-now concerns crowd out data describing less proximate threats, and the channel reinforces preexisting attitude, belief and opinion at the expense of more objective, survival-relevant information which disputes those beliefs. There is a prime distinction between fact and attitude, between objective information and subjective belief, and between evidence and opinion.
In order to protect itself from information poisoning, the brain ignores noncontextual data and reinforces confirmatory data, that is, information which supports its preexisting view of the reality of its environment in the here and now. Time and regional horizons are shortened. This leads to a form of myopia, an ignorance which encourages fragmentation and a focus on immediate gratification (an approach goal) and the effects of conditions on the individual and its immediate social group. It reinforces identity as a member of a group with similar beliefs and needs, which the individual and its group perceive must be met now, often at the expense of other groups and their interests. This is evolution in operation, in response to selective pressures: overpopulation combined with resource shortages. Atavistic barriers are reinforced. The evolution toward awareness of a species mind, which is associated with the neocortex, is retarded.
The Principle of Certainty described in Hans J. Eysenck’s The Psychology of Politics deals with an information environment where there are competing influences, some of which support a given belief and some of which tend to disprove the rationale for that belief. Eysenck observed that when presented with contrary information which disputes a preexisting belief, those holding that belief tend to hold it more strongly. Given evidence that their belief is false, believers become more certain of it.
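A toy numerical sketch, emphatically not Eysenck’s own formalism, can illustrate the Principle of Certainty: if confirming evidence is assimilated normally while contrary evidence triggers a small ‘backfire’ that strengthens the prior belief, a mixed stream of evidence hardens conviction rather than moderating it. All parameter values here are hypothetical.

```python
# Hypothetical toy model of belief hardening under contrary evidence.
# Belief and evidence are scored from -1 (firmly contra) to +1 (firmly pro).

def update(belief: float, evidence: float,
           rate: float = 0.1, backfire: float = 0.05) -> float:
    """Assimilate confirming evidence; harden against contrary evidence."""
    if evidence * belief >= 0:
        # confirming evidence: move modestly toward it
        belief = belief + rate * (evidence - belief)
    else:
        # contrary evidence: conviction strengthens instead of weakening
        belief = belief + backfire * belief
    return max(-1.0, min(1.0, belief))  # clamp to the [-1, 1] scale

belief = 0.3  # a mild initial opinion
for ev in (0.5, -0.8, 0.5, -0.8, 0.5, -0.8):  # mixed evidence stream
    belief = update(belief, ev)
print(round(belief, 3))  # conviction has grown despite contrary evidence
```

Under these assumed parameters, six rounds of alternating pro and contra evidence leave the belief stronger than it began, which is the backfire dynamic the text describes.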
The problem isn’t so much the content of a given belief. It is in the structure of belief itself. To hold an opinion is necessarily to believe that it is more accurate than, and superior to, contrary opinions, or it would not be entertained. This is the difficulty of a binary mind, and the brain itself is split. The base brain sees in very stark, binary terms of approach/avoidance. Stimuli either are conducive to survival or work against it. This dichotomy is translated by other brain structures into fear/pleasure. Even the cortical brain thinks and languages, casts and conceives, in terms of good/bad, right and wrong. The scientific brain sees positive and negative charge. There is a symmetry of particles and their antiparticles. The quantum world describes a dual particle and wave nature. The digital computer, which evolved as an extension of biological intelligence, is endowed with yes/no circuits expressed as zeros and ones. Thus, we tend to conclude that we (and our group) are right, while others are wrong. We tend to think we see the truth, whereas others who disagree with us do not. This tendency applies to our scientific views, our religious views, our political and cultural views, to our worldviews.
Opinions, attitudes and beliefs are thus right or wrong, good or bad, depending on one’s point of view. Being subjective creatures, humans must have opinions, but when one individual elevates his opinions over others, he raises his belief to a certainty. He has transformed belief into truth, and fallen victim to the Principle of Certainty.
This right/wrong problem is reinforced in a multichannel data environment. The problem isn’t a dearth of information but an overabundance of data, and the trouble lies not only in the quantity of information but in its quality. When there is so much choice about the kinds of information humans can acquire, it is ironic that most select only that information, and only those channels, which confirm preexisting opinion. This is done in part in service of the filtering function in the overloaded information ecology previously described. In a many-channeled data environment such as the one in which humans find themselves, this creates information camps of likeminded believers. This in turn makes digital purges more likely.
Eysenck, referenced above, found that members of more radical groups entertain their beliefs with more conviction than those who hold more moderate opinions. These tendencies have been confirmed by more recent experimental observations. In addition, the cognitive ‘hardening’ of incorrectly-remembered events into ‘truths’ reinforces this perceptual fixing process. Providing accurate information to correct these imbalances does not in fact supply corrective feedback in most cases, as the accurate information is thrown out or is made to perceptively and recollectively confirm the preexisting mental model formed by the brain. Facts are changed to fit the mental mold, rather than adapting the cognitive model to fit new information. Remember that there are many more connections between the brain’s neurons than there are sensory inputs relaying data from the external world. This means that the mental models created by the brain may be sharper and more resilient than the data relayed from the brain’s environment. This affects how the brain perceives what it takes in. Attitudes are the filters through which information is taken in, coloring the facts themselves.
Eysenck described experiments which artificially divided groups based on a capricious membership requirement, such as being assigned to one team rather than to another. The interactions between these randomly-assigned teams quickly devolve into factionalism and aggression, based solely upon team loyalty. This is so even if members of the ingroup and the outgroup had previously enjoyed amicable relations, spoke the same language and were identical in their ethnic, religious and class affinities. This tendency is anecdotally observable in fan loyalty to professional sports franchises.
These atavistic impulses are only amplified by nationalistic, class, religious, political, economic, ethnic or racial divisions. The tendency of data to be instantaneously available in a multipolar world only reinforces these innate tribalistic tendencies. A surfeit of information has led to division, rather than unity.
Assumptions Regarding Machine Intelligence
Central processing units serve as augmenters of the human brain, providing additional computational and storage capacity. They provide faster data processing and access to more data at accelerating rates. Transhumanists assume that this prosthetic intelligence will be able to solve intractable problems. The transhumanistic assumption is that such accelerations in information processing lead to an increase in the fund of knowledge and to methods of problem-solving which exceed and transcend raw human intelligence. Many believe that AI will offer nonconventional and counterintuitive solutions to insoluble problems faced by civilization. The assumption is, in part, that increased processing speed and a vaster fund of knowledge will improve the odds of human survival.
Machine learning involves this accelerated data processing. It is a trial-and-error process greatly accelerated past the pace of human learning. Intelligence, by contrast, is an emergent phenomenon. It adds something more. It is not reducible to data processing and computation. Intelligence is not simply a series of operations, not merely speed, not just the ability to process volumes of information.
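The point can be illustrated with a minimal sketch (the target value and parameters are entirely hypothetical): a learner that proposes random variations and keeps whichever reduces error will converge on an answer at machine speed, yet the process is nothing but accelerated trial and error; nothing in it understands what it has found.

```python
# A minimal sketch of machine learning as accelerated trial and error.
# The learner proposes a random tweak, keeps it if the error shrinks,
# and discards it otherwise, thousands of times per second; fast search,
# but search without insight.

import random

def learn(target: float, steps: int = 10_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    guess = 0.0
    for _ in range(steps):
        trial = guess + rng.uniform(-1.0, 1.0)     # propose a random variation
        if abs(target - trial) < abs(target - guess):
            guess = trial                           # keep improvements only
    return guess

print(learn(3.14159))  # lands near the target, with no notion of 'why'
```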
Machine ‘intelligence’, for lack of a more accurate term, lacks creativity. What its inventors have failed to impart to AI is this added layer of creative capacity. Biological intelligence (BI) possesses a qualitative edge which sheer memory and computation – the ability to process, transmit, receive, store and reconfigure data – lack. The creative capacity of true intelligence adds something more: an emergent property called mind.
AI is more readily analogous to the physicality of the brain. Mind, in contrast, is an abstract concept. No one can identify where the mind is located, though we all know where the brain resides. In the same way, AI has a location. It is housed within a computational device. Mind is qualitative. AI is quantitative. Mind can be intuitive. AI is mathematical. Beyond this, a fundamental difference between humans and AI is that humans are innately, and often intensely, curious beings. By its nature, AI is not curious. It does not ask ‘why’. And even if it could, since humans have been unable to satisfactorily solve certain problems – such as poverty and inequality – for thousands of years, how can they expect their AI inventions to come up with adequate solutions? It is unlikely that AI will offer a truly qualitative difference which distinguishes it from BI.
Yet even if AI did possess the creative, intuitive aspects of intelligence found in the emergent phenomenon of the human mind, we still hew to the conclusion that AI will not provide solutions to the puzzling intractability of human problems which threaten the survival of our species. Again, if the emergence of human creativity and intuition have been unable to solve these problems, why should we expect machine intelligence to be able to do so?
Another problem AI may confront concerns its capacity to deal with the irrational aspects of humanity, and how these prelogical, human qualities contribute to the intractable problems which are driving Gaian collapse. AI may never be able to understand unconscious, irrational processes, which are responsible for many of the excesses of civilization. The irrational and the unconscious are, in part, what make BI qualitatively different from the purely mathematical operations which serve as the basis for computational intelligence.
Considered as a collective phenomenon, the creative faculties of the mind are noogenetic. Noogenesis is an emergent property. It is coextensive with the biosphere and has been referred to as a kind of collective reasoning or collective consciousness which evolved out of the planetary biosphere.
The very existence of the noosphere – a sphere of mind evolved from the biosphere, itself evolved from the geosphere – transcends the ideas posed by a transhumanist philosophy which would place AI in the central role of saving humans from themselves. Developed primarily by the Russian-Ukrainian scientist Vladimir Vernadsky and the French paleontologist-philosopher Pierre Teilhard de Chardin, the concept of the noosphere is related to the idea of Gaian evolution. The noosphere is considered the most advanced state of evolution and represents a planetary consciousness which continues to evolve.
The constructal law, which has been framed as the universal tendency of natural and artificial systems to distribute mass and energy through increasingly efficient networks, is related to Teilhard’s interpretation of the noosphere, which he regarded as an extension of geological and biological processes. Other thinkers, such as C. Lloyd Morgan, have reasoned that the evolution of the mind, as an entity possibly distinct from the brain, had emergent properties. Foreshadowing the punctuated equilibrium later described by paleontologist Stephen Jay Gould, Morgan believed that evolution occurred in fits and starts, and that the evolutionary process toward increasing complexity began to accelerate with the advent of civilization and technology. This idea, in turn, can be seen in the constructal law, which predicts that larger and faster vessels in the natural world (eventually extending into the artificial world) will transport greater amounts of matter and energy all over the planet with greater efficiency.
True intelligence involves creativity and intuition, ways of knowing which extend beyond the realm of purely reductive inquiry or computational capacity. The emergence of a cosmos which can contemplate itself – a self-authenticating fact – dovetails with the strong, the participatory and the final anthropic principles previously mentioned. These variations on the anthropic theme hold that observers are required to create the universe, that intelligent data processors must inevitably evolve somewhere in the cosmos, and that they will avoid extinction. These anthropic themes run contrary to some of the usual solutions proposed for Fermi’s Paradox.
Although some AI applications mimic emergent behaviors, such as computer-generated animation techniques which replicate the flocking flight patterns of birds, it must be questioned whether AI is truly more than, or qualitatively different from, the sum of its ‘parts’. Some transhumanists assume that AI will provide solutions to problems to which BI has yet to provide answers. This may be true. Yet it is unlikely that AI can provide ultimate solutions to the more insoluble human problems which we regard as central to the problem of human extinction. AI is based on logic and on mathematical operations; when these are applied to the prelogical aspects of the brain which drive human behavior, technologists are looking to mathematics to solve problems which are not inherently rational or mathematical in nature.
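A stripped-down, one-dimensional sketch in the spirit of the flocking techniques mentioned above can show how coherent group behavior emerges from simple rules, while leaving open the question of whether such emergence amounts to more than the sum of its parts. All rules and coefficients here are illustrative assumptions, and, unlike true boids, each agent steers toward the whole group’s average rather than toward local neighbors, purely to keep the sketch short.

```python
# Illustrative sketch of emergent 'flocking': agents with random positions
# and velocities follow two simple rules (align with the group's average
# heading; drift toward the group's center), and a coherent flock appears
# that no single rule describes.

import random

def simulate(n_agents=30, steps=200, seed=1):
    rng = random.Random(seed)
    pos = [rng.uniform(-10, 10) for _ in range(n_agents)]
    vel = [rng.uniform(-1, 1) for _ in range(n_agents)]
    for _ in range(steps):
        mean_pos = sum(pos) / n_agents
        mean_vel = sum(vel) / n_agents
        for i in range(n_agents):
            vel[i] += 0.05 * (mean_vel - vel[i])   # alignment rule
            vel[i] += 0.01 * (mean_pos - pos[i])   # cohesion rule
        pos = [p + v for p, v in zip(pos, vel)]
    return pos, vel

def spread(xs):
    return max(xs) - min(xs)

pos, vel = simulate()
print("velocity spread after simulation:", spread(vel))
```

After 200 steps the spread of velocities collapses toward zero: the agents move as one flock, though no line of code mentions a flock.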
For instance, some have proposed that the poverty of the masses arises because of the greed of the few, who hoard resources for themselves. Yet poverty also appears to arise in part from resource scarcity. If AI were to solve the problem of poverty, it would do so rationally by addressing avarice. Yet avarice is not rational, and it does not always emerge from scarcity. Greed is not inherently quantifiable. AI’s mathematical operations are quantifiable, but do not represent a qualitative leap from human intelligence. Increases in quantification cannot solve human problems which are essentially qualitative in nature, and we hold that poverty has qualitative causes in addition to quantitative etiologies such as resource scarcity. Unless it can address the problem of avarice, it is unlikely that machine intelligence will be able to come up with an answer to the problem of poverty. Why there is poverty may essentially be a philosophical query, and not a scientific one. By way of analogy, the problem of many communicable diseases was a scientific one, and it was solved through the introduction of vaccines, antibiotics and antiviral drugs, along with other public health measures. If problems such as penury could be solved scientifically, BI would have arrived at a solution and implemented it long ago.
What causes the ecological excesses of civilization is human behavior. What drives human behavior is often irrational. Who could argue that violence, addiction, resource hoarding and other human excesses are not at least, in part, motivated by irrational processes? And yet, machine intelligence does not have access to these purely illogical human drives and human behaviors. AI can be programmed to be irrational, yet it can never truly be so. Without access to these deeper and often darker and more profound human motives, machine intelligence may be unable to get at the root causes of human excess.
Even if AI could become mentally ill and unbalanced in the way that humans often are, we must ask ourselves whether this would be a constructive development. In contrast to the conclusions reached in a substantial body of science fiction literature, we do not believe that AI can become volitional, self-aware, and purposefully destructive to its human creators. However, if it were endowed with human irrationality, it could turn into Skynet, the malignant artificial intelligence envisioned in James Cameron’s Terminator series. In this case, the evolution of machine intelligence would itself be the reason for Fermi’s Paradox.
Through AI, some scientists hope to arrive at scientific solutions to fundamental human difficulties. However, we believe that solutions to vexing problems such as poverty, lack and attendant suffering may transcend the quantitative nature of the spacetime continuum. Since AI is an extension of biological evolution, it is a part of this continuum, and is unlikely to arrive at solutions to problems and answers to questions which are, at least in part, philosophical in nature.
More broadly, when asking whether AI will be able to solve fundamental human difficulties such as poverty, lack and attendant suffering, it is important to look to human experience for the answer. Human intellect, of which AI is in some ways an adjunct, may conclude that it can provide answers to intractable problems, yet human experience contradicts this claim. By examining experience, it is possible to arrive at a more accurate conclusion as to the types of problems any kind of intelligence can solve. Even if we assume that AI is analogous to BI and that it does exhibit emergent properties with the same capacities for creativity and inventiveness which humans possess, how can it resolve the problems besetting humans when humans have been unable to do so? If human intelligence has been unsuccessful at solving certain intractable problems, it is not likely that artificial intelligence will, ultimately, be any more successful.
Intelligence appears to be an adaptation with which natural selection has ‘favored’ the species Homo sapiens over other species. Yet does imbalance ever favor any one species over the longer term? The standard model of evolutionary theory assumes that evolution has no inherent direction or goal. Since Darwin’s time, evolutionary theory has reinforced the Copernican Principle that humans are not especially privileged observers or specially favored in any way. Instead, intellectual capacity has created massive overpopulation, ecological damage via pollutants on an apocalyptic scale, resource depletion, ever increasing nationalism and movement toward war, epidemic addiction, anthropogenic mass extinctions, and climate change. Intelligent solutions have created other unintended consequences, and merely delayed the day when these problems must be faced. The simple problem of overpopulation is an example.
More fundamentally, being the product of a finite, entropic system composed of the same fundamental constituents as BI, and being a part of the METS continuum, AI is limited by this system. Being a product of the ‘box’ of spacetime, AI solutions are confined to the operations which that box can produce. It will be constrained to ask only those questions which it occurs to AI to propose, and these will always necessarily be limited to the questions its human inventors can pose. These, in turn, will always be constrained by finitudes. The questions of the etiologies of human lack and suffering may best be asked of disciplines which practice a nonscientific epistemology. Even if AI’s reasoning is superior to or faster than or qualitatively different than BI, perhaps reason itself is not up to the task. Maybe rationality is insufficient to solve problems which stem in part from irrationality and irrational causes. Perhaps we ask too much when we task AI and the science which gave rise to it with solving problems which BI has been unable to solve on its own. As we have stated, a technological solution to a technological problem often simply postpones and re-expresses the problem, which is experienced at some later time in some other way. For example, industrial processes provide cheap, widely available finished products and ubiquitous substitutions for human labor inputs. Yet they also yield waste on a global scale, and therefore leave behind tremendous ecological consequences as their byproducts.
The Incompleteness Theorems of Kurt Gödel show that no system of mathematics can prove all of the true statements expressible within the system itself. Moreover, no such system can demonstrate its own internal consistency. Alan Turing’s halting problem result holds that no computer program can decide, in general, whether an arbitrary program will halt; in this sense, no program is capable of completely comprehending itself. Computers cannot reliably show that the data they produce is either valid or reliable. They can yield algorithms, but they cannot validate their own algorithms. These fundamental limitations of logic, of technology and of what any system is able to evolve and understand from within itself show the false promise and futile hope of solving humanity’s and earth’s problems through human and AI ingenuity.
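Turing’s argument can be sketched directly in code (this is the standard textbook diagonalization, not drawn from this text): assume a universal decider `halts` exists, construct a program that does the opposite of whatever the decider predicts about it, and the assumption collapses.

```python
# Sketch of the halting-problem diagonalization. Suppose a total function
# halts(program, data) could always decide halting. Build a 'contrarian'
# program that consults the decider about itself and does the opposite.

def paradox(halts):
    def contrarian(source):
        if halts(source, source):
            while True:        # predicted to halt, so loop forever
                pass
        return None            # predicted to loop, so halt at once

    # Whatever the decider predicts about contrarian run on itself is wrong:
    prediction = halts(contrarian, contrarian)
    if prediction:
        return "decider wrong: it loops"   # claimed halt, but it would loop
    return "decider wrong: it halts"       # claimed loop, but it would halt

# Any claimed decider is refuted; a toy one that always answers True:
print(paradox(lambda program, data: True))
```

No matter how `halts` is implemented, it misjudges the contrarian, so no total halting decider can exist; this is the limit on self-comprehension the paragraph invokes.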
This computational ignorance – the inability to form a total picture, a holistic understanding – is paralleled by the fact that humans are not able to wholly understand themselves and their own motivations. AI is an outgrowth of BI, and technological development will roughly parallel biological evolution. Humans are products of an entropic system, and attempts to understand themselves, or any part of that system, from within it will always remain incomplete and result in a certain amount of ambiguity, of uncertainty in measurement.
This Indeterminacy Principle (by analogy with Heisenberg’s uncertainty principle in quantum mechanics) points to an inherent unknowability, a gap in human measurement and knowledge of the universe. It is applicable to any object which humans seek to measure within the cosmos, including the human brain and the more abstract concepts of the mind and human motivation.
On a fundamental level, due to the existence of the conservation laws and the first and second laws of thermodynamics, the problem is one of finitude. An infinite, open system does not need to address the problems of imbalance, since those occupying such a system can simply move on from the imbalances they create, out of regions where they have exhausted resources and into newer ones. In the past, whenever and wherever they could, human populations did just that. They moved onto new continents. Yet the Age of Exploration is over, and we have run out of new land. The literal apocalypses experienced by island ecologies in places such as Rapa Nui show what happens when people can no longer colonize fresh territory.
A closed system is subject to the Second Law of Thermodynamics, which states that entropy (disorder) never decreases. The First Law expresses the conservation of energy: energy can neither be created nor destroyed, so a system’s energy rises or falls only by what crosses its boundary. Taken together, the First and Second Laws mean that it is not possible to produce work, in the sense physicists use the term, without expending energy.
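A standard textbook consequence of these two laws, not specific to our argument, is the Carnot bound: even an ideal heat engine can convert only a fraction of the heat it draws into work, with the remainder necessarily discarded as waste heat. A minimal sketch, with illustrative reservoir temperatures:

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum fraction of heat convertible to work by any engine
    operating between two reservoirs (temperatures in kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

# Even an ideal engine running between 600 K steam and a 300 K environment
# wastes half the heat it draws; real engines do worse.
eta = carnot_efficiency(600.0, 300.0)
print(eta)  # → 0.5
```

The bound is always strictly less than one for any finite cold reservoir, which is the quantitative form of the claim that no work comes free of an energetic offset.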
Any isolated system which conserves matter, energy and data seeks to balance these limited aspects. It will seek thermodynamic equilibrium. The problems of imbalance must be confronted. These problems cannot be permanently averted. The matter-energy-data ratios in a system which obeys the conservation laws will always seek balance. In a system where energy, matter and information are conserved, this culminates in heat death, and the even distribution of matter-energy-data throughout the system. Humans observe that these conservation laws hold true. These may be local observations of the proximate region of spacetime which humans can measure from earth, but we must extrapolate them as holding true for the universe as a whole. According to the holographic principle, the physics inside a bounded region is wholly expressed by the physics at the boundary of that region. This is what we observe. This is what we know.
Since all physical processes are irreversible, the lessening in thermodynamic potential means that, in an expanding universe such as ours, the scattered energy seeking even distribution is unrecoverable. As the arrow of time has progressed, there has been a cooling of the materio-energetic structures in the expanding universe. This has resulted in a temporary decrease in chaos and a concomitant increase in complexity, which has allowed life to evolve. Yet we hold this development to be temporary. As the cosmos continues in its expansion, it will become increasingly difficult for lifeforms to maintain their organization in the face of an ever-increasing, and inexorable, entropic decay. It will be difficult enough for very simple biological forms to maintain themselves over time. It will be even more difficult for more complex lifeforms like ourselves to sustain their levels of organization. This is why relatively simple creatures dominate the planet in terms of number, diversity of speciation and sheer biomass. It explains why the simplest species have remained relatively constant and longer lived as species than more complex clades of lifeforms such as dinosaurs or mammals.
It will be even more challenging for sophisticated ecosystems to maintain the energetic levels necessary to sustain their sophisticated structures. The more complex the biome, the more difficult this maintenance will be to achieve. For the increasing technological complexity of human civilization, this ‘balancing act’ will be most difficult, for our technical systems require the most energy to sustain themselves. It is why humans are heating up the planet.
In any event, even if the universe as a whole were not subject to heat death, the Gaian system is. It is subject to the conservation laws and thus to the imbalances which humanity as a whole is now running up against. The atmosphere has an upper limit. The oceans have bottoms and shores. The earth’s mass and its biomass and its resources are all finite, and thus subject to the conservation laws and to imbalances. The earth, it turns out, is much more like an island ecology than a continental landmass. Civilization represents an imbalance introduced into this island ecology, just as people introduced one when they established a relatively complex civilization on Rapa Nui. Human technology is like an invasive species, or a series of such species, which threatens to overwhelm the naturally evolved endemic species in much the same way that placental mammals replaced the endemic marsupials on the South American landmass once that island continent became connected to North America through the Central American isthmus.
Civilization represents a discontinuity within the larger Gaian system. It will seek to perpetuate and enlarge its own existence at the expense of the greater system. The larger Terran system, naturally militating towards a homeostatic state, pushes against this technological imbalance and seeks to keep it in check. Thermodynamically, nothing within any closed system can exist without an offset. Any energy used has a waste product. Technology cannot accomplish any work, in the sense physics uses the term, that is consequence-free. For these very basic reasons, machine intelligences will not prove the panacea that transhumanists hope they will.
The Fallacy of Human Solutions
Ecological problems, systemic financial problems, poverty, disease, addiction, war, and inequality are all ultimately unsolvable through pure human intellectual capacity, and by extension through AI. Experience provides the proof: computational capacity has increased exponentially, and yet all of the ‘ancient ills’ listed above remain. Some may believe that the proposed solutions have not been applied with enough fidelity to the models invented; that humanity has up to this point lacked the will to apply such solutions with sufficient rigor. We believe that these arguments do not hold up.
Civilization cannot destroy avarice or aggression, and we have pointed out why: these are deep-seated parts of the psyche, of human ‘nature’, if you will. They are driven by primitive brain processes which have been operating below the level of consciousness for many millions of years. To excise these drives is to excise what it means to be human. It is a lobotomization of human creativity and choice. Political and economic systems which have attempted these excisions have resulted in more death, poverty and starvation.
Yet even if it is true that humans have not applied their own politico-economic models with sufficient fidelity and rigor to eliminate poverty and hunger, nothing in human history suggests that the needed human will shall be supplied with enough thoroughness and consistency to effect an elimination of penury and hunger in the future. Whenever such absolute rigor and fidelity have been attempted, they have not yielded good results. Instead, they have resulted in fanaticism, war, revolution, purges, and genocides. These have sometimes been religious, yet at other times political. They have sometimes been racially or ethnically motivated, but at other times were based on class. These very calamities are, in fact, occurring on the planet right now, despite the overwhelming technological superiority that humans demonstrate. Therefore, it is not likely that purity of application of any one social, political, religious, moral or economic model has been the reason why human problems have not been eradicated, since rigorous application of any one method seems to create more problems.
Our thesis is this: intelligent solutions to most problems merely re-express the original imbalances which the intelligent solutions created. Each solution devised simply delays the ultimate expression of the original imbalance. For example, in struggles of class warfare, the new authorities eventually redistribute the economic spoils to themselves. The world doesn’t become fairer. The unfairness is simply redistributed to someone else.
The intractability of the problem of human suffering does not appear to stem from a lack of full application of the human will in implementing any solution. We suggest that the problem may be human will itself. The problem may not be a lack of conviction in the rigors of the intellectual methods previously employed. The problem may involve the absolute certainty placed in these methods.
Humans can vaccinate children against disease, and thus rid themselves temporarily of the problem of pestilence. Yet the imbalance is then expressed in terms of overpopulation, which leads to resource wars, poverty, and disease all over again in a few generations. The original problem of pestilence is amplified because of overpopulation. Malthus was early. He was not wrong. This was the problem of the reindeer on St. Matthew Island. The difference between the reindeer on the island and humans on earth as a whole is that earth is obviously bigger than the 138 square miles of St. Matthew Island. Humans believe they can apply intelligence to delay this final reckoning, while the reindeer on St. Matthew could not. Yet those reindeer were an intelligence, too. Though humans may be able to postpone the final accounting, they may not be able to do so indefinitely. This is one of the answers to Fermi’s Paradox. Humans were unable to solve their problems through intelligence on Rapa Nui. This was not an isolated case. One may look at the civilizations of the Anasazi in North America, the Maya in Mesoamerica, and the Greenland Norse as just a few of many examples where, despite relatively sophisticated technology, societies were eventually unable to sustain themselves.
What all these cultures bore in common was overpopulation and overexploitation of resources. It is easiest to see the reasons for systemic collapse on islands, such as Rapa Nui. The earth is an island as well, existing in a four-dimensional ‘sea’. Like an island, the earth is an isolated system of limited resources. We say that it is isolated because it is semi-closed, with a protective atmospheric membrane which allows a limited exchange of light and gas with the larger universe which lies beyond it. Similar protective factors enabled islands to develop unique endemic ecologies in isolation, yet these self-sustaining biomes did not shield many of these ecologies from localized apocalypse. The recent ecological histories of many subantarctic islands reveal that local bird species were often wiped out by the introduction of cats and rats from the ships of whalers and sealers.
Civilization, like the living creatures which create and maintain it, has up to this point required the consumption of carbon-based forms to survive. Yet to the extent they are locked in fossil fuels, which make up about 80% of human energy consumption, carbon sources are nonrenewable. They are limited. Eventually, they will run dry. Based upon the weight of the historical record, civilizations extinguish themselves before they reach the stars. This is the single answer to Fermi’s Paradox at which we arrive. The physicist Stephen Hawking opined that unless humanity developed the ability to establish self-sustaining, off-world colonies soon, humanity would fulfill this answer to the Paradox.
The Fundamental Problem of a Closed System
It is obvious that Gaia is both overpopulated and overused. Disease may simply be a way that the Gaian system culls populations of any species and keeps them in check. Humans, who tend to see themselves as the measure of all things, find this intolerable. They invent cures to disease. Yet disease remains a problem. It is simply that different diseases affect the human population than those which plagued it in the past. In addition, human populations have burgeoned as the result of mass vaccinations, the wide availability of antibiotics and antiviral drugs, and public sanitation, creating the opportunity for yet other diseases to take hold. We have more people, yet more poverty. More people mean more environmental impacts, such as resource scarcity and pollution. Our own extended lifespans simply lead to many other disorders and afflictions of old age which were not problems before. Human suffering remains, on net, much the same as it may have been a thousand years past. Only the particular manifestations of this suffering seem different.
Human solutions to the problems of resource depletion and pollution are similar. Consumption produces waste. Waste is often environmentally toxic. New ‘clean’ energy sources are promised, yet the newer forms of energy continue to produce toxic waste, if not from the consumption of the resource itself, then from the industrial processes and byproducts needed to mine and manufacture the components which harness the renewable energy supply. The problem is that humans, as organisms, must consume. Consumption creates byproducts. This cannot be avoided. Humans must consume carbon. As heterotrophs, they must consume carbon-based organisms to survive. Yet the supply of carbon is limited. The conservation laws hold that the amount of energy in an isolated system remains constant over time.
Technological civilization requires energy beyond that required by subsistence-level cultures. This surplus consumption becomes an added ‘tax’ upon the biosphere. The more people, the larger the civilization, and the greater the burden upon the environment. The more technologically advanced that civilization, the more the burden on the planet increases yet again. When these two factors – technology (expressed as standard of living) and population size – are multiplied by one another, the burdens upon the natural systems which serve as the bases of civilization increase dramatically. This results not only in accelerated rates of resource depletion, but also in accelerated accumulation of toxic byproducts, which are correlated. The effects of this depletion and pollution may be postponed, but they cannot be avoided. Even an efficient recycling system will have leakages and other byproducts, and humans have not found recycling methods which can offset the waste produced by their many industrial processes and technologies. They may be able to disguise or store these waste products, but eventually, these hidden costs must be accounted for and paid.
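The multiplication of these two factors is conventionally captured by the IPAT identity (impact = population × affluence × technology), which we can sketch with invented, purely illustrative figures:

```python
def impact(population, affluence, technology):
    """IPAT identity: environmental impact = P x A x T.
    Units here are illustrative, e.g. tonnes CO2 =
    people x dollars/person x tonnes CO2/dollar."""
    return population * affluence * technology

# Doubling population AND doubling per-capita consumption quadruples impact,
# unless technology (impact per unit of consumption) falls fourfold to offset it.
base = impact(1e9, 10_000, 0.0005)
grown = impact(2e9, 20_000, 0.0005)
print(grown / base)  # → 4.0
```

The identity makes the text's multiplicative point explicit: efficiency gains must outrun the product of population growth and rising standards of living merely to hold the burden constant.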
When nuclear power became widely available to industrialized nations, it promised the boon of clean, cheap energy. Yet it yields waste products which are difficult to dispose of and to recycle. Reprocessing reactors can recycle some of this byproduct, but not all of it. There is still waste left over. The conservation laws hold that this will always remain true for isolated systems, and a perfectly efficient recycling process has not been devised.
Population and civilizational sizes have grown to such an extent that natural biomes have become mere appendages to civilization, existing to service the needs of a given society. The natural environments of which the biosphere is composed can no longer successfully detoxify the planet or support civilized culture.
Intelligent solutions do not often rectify the imbalances created within civilization itself. Human wealth-building efforts create massive inequalities and economic serfdom for the masses worldwide. Solutions to economic inequality yield inefficient redistributive systems vulnerable to corruption and the crushing of yet other freedoms. In both types of politico-economic schemes – capitalist and Marxist – system failure may be due to faulty design, but is more often the result of the human tendencies toward greed, fear, abuse of power, the desire for control, and corruption. These attributes, which are re-expressions of the avoidance and approach goals of primal brain processes, fall outside the province of intellectual capacity to control. They are the survival responses of more primitive brain structures.
Economic justice policies attempt to correct injustices committed against exploited and oppressed groups, injustices which have resulted from episodes of unrestrained wealth-building. Imbalances created by wealth accumulation are not ultimately solved by this redistribution, since economic imbalances always recur and are often magnified in redistributive systems. Regulatory schemes are not very successful in eliminating economic imbalances, often exacerbating the very behaviors they seek to moderate or prohibit. They also frustrate decentralized, emergent solutions which may prove the most efficient. In the 20th century in North America, the war on drugs was largely judged to be a failure despite the expenditure of billions of dollars and much human effort in prosecuting it over decades. Illicit drugs remain as much of a problem as ever, if not more, and in fact some controlled substances have been legalized since illegal importation created even more problems than use of the substances themselves. Political solutions to imbalances created by civilization fail because the problems are merely postponed, redistributed or re-expressed.
The laws of the conservation of energy and mass are re-manifested in political and economic systems, since those artificial systems are extensions of the physical world. There is no such thing as a perfectly efficient engine, and bureaucracies dedicated to wealth redistribution, equality or the elimination of social ills will similarly never be 100% efficient. As time passes, like any other engine, they become less efficient. They are complex systems, and once complexity accretes, it is difficult to reduce, since various constituencies may defend it. Yet as it burgeons in size, a bureaucracy becomes more costly to maintain. For this reason, the mission of any bureaucracy (the welfare of those the bureaucracy supposedly serves) is eventually sacrificed. The system then exists increasingly to serve the needs of itself and of those who operate it. A large organization is designed to become larger. Its budget increases proportionately as it grows in size, scope and complexity, yet this very process guarantees that its productivity and efficiency decrease. This trend can be seen in government, health care, education, and the corporate sectors of developed nations.
The transhumanist movement called equalism, a political and economic theory which holds that emergent technologies can, should, and will end all socioeconomic inequality through the equable distribution of resources via a technological singularity, is simply a re-expression of this age-old idealism, which has always failed in the past. It fails due to the problems of complexity and the inefficiency of engines. The human tendencies toward corruption, avarice and fear also contribute to this failure.
On the other end of the spectrum, social Darwinism applies the selective dynamic of ‘survival of the fittest’ to the human world. Though it should not, perhaps, be celebrated, this does not mean that the Darwinian dynamic is not operative in the human world. Basic patterns and the rules which govern them tend to recast themselves from one stratum to the next. This is the essence of the constructal law. Thus, natural selection seems to extend from the world of biology to the purely human world. In reaction to the inequalities of social Darwinism, political and economic systems arose through revolutions, yet these systems simply re-expressed the Darwinian dynamic and created other inequities. The lack of restraint on basic and midbrain impulses is recognized by the higher order brain as destructive. Yet when social controls are imposed upon these appetites and emotions, the controls produce yet other imbalances. Moral behavior cannot be imposed. Equality can be encouraged through policy, but it cannot be mandated through legislation. Every attempt to force equalization of result has failed. If equality is to arise, it must do so naturally, as the higher cortical brain integrates with its lower order correlates. This can be fostered, but it cannot be forced.
Viewed most simply, the problem – whether it is framed as ecological or as one purely within the human world – is always one of imbalance. An imbalance in one layer of the Gaian system, such as in the geosphere, the biosphere or the noosphere, has knock-on effects in another layer of the system. It is the original imbalance which must be rectified, since all imbalances stem from the first one. Correcting secondary imbalances such as economic inequality is simply an amelioration of symptoms, not a correction of their cause. In order to address economic inequality, for example, most attempt to raise the human standard of living without addressing other factors like overpopulation. Yet we have shown how human overconsumption simply adds further imbalance into the biosphere, creating a lower ‘standard of living’ for the nonhuman and human worlds alike.
Intelligence alone has been unable to restore human and environmental balance. In attempting its solutions, people are most likely to ‘double down’ and utilize human will. In this, humanity has attempted to impose the same intelligent solutions over and over again. Yet human will has failed. Its one instrument – control – has failed repeatedly to impose order (balance) within human systems or outside of them. One need only examine the evidence of human history to see that this is so. Intellect, as marshalled by the will, can push the problem around and rearrange it. It can disguise the problem. It can postpone it. It can express it in other forms. Yet, in the end, the problem of imbalance always remains, for the simple reason that physics cannot be cheated. BI has been unable to solve the basic problems described by thermodynamics and the conservation laws. Transhumanist solutions will also most likely fail, and perhaps create additional problems which accumulate and back up within the Gaian system.
Planetary Balance
Civilization and its artifices are based on intelligence. Humans see their civilizations as ends in themselves, or perhaps as a means of serving and gratifying humanity, which is seen as the pinnacle species produced by evolution. Yet Gaia may have evolved the BI which gave rise to civilization for other reasons. Humans tend to see themselves as privileged centers. This self-centered tendency has been present since the beginnings of human culture, when many tribes tended to place themselves as the central people in their creation myths. It is also the tendency of each individual in its infancy to see and experience itself and its needs as central and primary.
Yet according to the Copernican principle, humanity should not expect to occupy any particular position vis-à-vis the remainder of the universe. Neither the earth nor the sun is central, though in Western thought, each was once thought to be. If the constructal law holds true, then perhaps BI evolved in order to foster greater efficiencies in the transport of matter-energy over longer distances, at faster speeds and in larger quantities. If the physicist John Wheeler was correct in his hypothesis that, at root, the universe is composed of information, then all particles and fields, and even spacetime itself, are information carriers. What the constructal law describes, then, is the evolution of systems which transport information to attain a greater balance of that data throughout not only the Gaian system, but throughout the cosmos as a whole. This tendency is central and fundamental. The Second Law of Thermodynamics, re-expressed in terms of data, is the inevitable tendency of a closed system to redistribute information until it is evenly balanced throughout the system. Since the Gaian system is at least partly a closed structure, we conclude that it has this evolutionary tendency. If the universe is similarly closed, it will eventually evolve to a balanced informational environment. We hold that the universe seeks a balance of information.
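In information-theoretic terms, this ‘even balance’ is the maximum-entropy state. A short sketch using Shannon's measure (a standard result, offered here only as an analogy to the argument above) shows that, for a fixed number of outcomes, the uniform, evenly balanced distribution carries the most entropy:

```python
from math import log2

def shannon_entropy(probs):
    """H = -sum(p * log2(p)), in bits; the information-theoretic
    analogue of thermodynamic entropy."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Information concentrated in one outcome carries little entropy;
# information spread evenly across outcomes carries the maximum.
concentrated = shannon_entropy([0.97, 0.01, 0.01, 0.01])
uniform = shannon_entropy([0.25] * 4)
print(concentrated, uniform)  # the uniform case gives 2.0 bits, the maximum for four outcomes
```

The analogy is only that: Shannon entropy formalizes the sense in which an evenly distributed informational state is the terminal, balanced one toward which a closed system tends.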
Although it was secondarily evolved as an extension of primary, natural systems to foster this central tendency toward equilibrium, civilization has accrued imbalances. Technological systems foster imbalances of matter-energy-data throughout the Gaian supersystem in favor of a single species.
We conclude that matter is information, or at least that it acts as an information carrier. Energy, being an equivalent of matter, also acts as a carrier of information. Properties such as charge and mass are indicators of the quantity of data which matter-energy systems like subatomic particles, atoms and molecules can store, carry and transmit. Humans subsist in an informational environment as much as they live in a physical environment and a biological ecology. Information, as arrayed in the human brain, is represented as consciousness, as awareness. This consciousness is an emergent property which is not present in the individual neurons which make up the brain. More than that, this consciousness is self-conscious. It is self-aware.
It is necessary here to bring in myth, and the science of myth, to describe this developmental process as one which transcends a merely physical or biological series of developments. As the level of quantum mechanics needs to be linked with the higher order macro world of larger celestial phenomena, so, too, does the physical world of particles and atoms require a linkage with the higher order human world of consciousness and self-awareness. Myth and psychology can provide us these links, these analogues.
In Greek mythology, the primordial world was considered chaos, the abyssal void state which preceded the emergence of the phenomenal universe. Parallel ideas arise in Christian, Middle Eastern, Egyptian, North African, Roman, Polynesian and Gnostic traditions. Medieval alchemy describes the prima materia. It is here that psychology can provide a direct link to human self-awareness. C.G. Jung adapted alchemy to the process of psychological transformation and likened the prima materia to a kind of raw, unformed psychological energy which he identified with the uroboros. He conceived of the development of human self-awareness as the carving out of ego consciousness from the undifferentiated uroboric substrate. This evolution of ego consciousness is strongly identified with civilization.
The development of ego consciousness within humans may be a necessary stage in the overall narrative of evolution, but in every individual life, once this egoic progression reaches its apogee, it must eventually be surrendered in favor of a common, uroboric state. Ego initially precipitates from this uroboric substrate and evolves into highly specific, individualized forms manifested in individual human personalities.
As infants, human egos perceive themselves as connected to the universal field of awareness. They do not perceive themselves as separate individuals. Once the ego differentiates further, it experiences itself as center. Indeed, the universe itself is said to have no center, so that all points within it are centers. As the individual ego progresses further, it eventually relinquishes the perception of its central position, and also experiences itself as a separate individual, distinct from all other organisms and entities within the cosmos by virtue of its boundary, which is identified as its own body, housed in a separate skin from all other organisms. These are its spatial boundaries. It also sees itself as having boundaries in time: its date of inception (which is known), and its date of death (which is unknown).
An individual human further identifies with a separate ego consciousness by engaging in competitions with other lifeforms, with other egos in a selective process akin to evolution which occurs within its exclusively human groups inside the insulated structures of its civilization. It strives for wealth and success, for mating opportunities and the acquisition of achievements as well as material possessions and territory. Having evolved a higher order brain, it seeks the enhancement of abstractions such as reputation, political and cultural power, and legacy.
Yet as the self develops, it must inevitably surrender its identity – as measured through these abstract notions – through the process of individuation, in which the achievements and spoils of individuality are relinquished in favor of a greater consciousness, a common good. For many, this common good is the good of the social group, at first a nuclear family, but then extending on up through clan, tribe and nation. These are the smaller aggregations with which the midbrain identifies. Eventually, the psychological process of individuation should, through the highest order brain centers, assist the individual in identifying with the species and with the noosphere – the planetary consciousness described above.
To the extent that BI and its civilizations yield greater balance within the Gaian system, they are useful adjuncts to these developmental processes. To the extent that they accrue greater imbalances which threaten the larger biome, then the random mutations, genetic drift and gene flow which combine to power natural selection and which initially yielded BI are experimental failures and will eventually be wiped clean from the drawing board of evolution. Evolution utilizes extinction as a culling force.
Prior to industrialization and the advent of industrial-scale agriculture based on mechanization, global ecological balances were more or less maintained over the long term, even though any snapshot of the Gaian system and its subsystems would show many spot imbalances in different places and at different times in natural history. Yet since the Industrial Age, a runaway feedback loop has been introduced which, due to the cumulative environmental stresses now apparent, fosters greater and more dangerous imbalances which threaten the Terran system as a whole.
When imbalances have occurred before in the history of earth, corrective geological, climatic and biological mechanisms brought the system back into balance. The central tendency of the Gaian system militates toward balance. Natural historical evidence suggests that the homeostatic governors of the Gaian system will seek to correct imbalances introduced by human artificiality. If viewed from the perspective of the Gaian system as a superorganism, these governors may be seen as protective in the way that an immune response defends the organism from pathogens. When seen from the vantage point of purely physical processes, these governors are thermodynamically driven. When seen from the human perspective, these correctives may be viewed as catastrophic. We view them as creative destruction.
To Gaia, human civilization is a mere byproduct of the evolutionary tendency to create more efficient distributive networks. To humans, their civilization is all. To Gaia, humanity is a latecomer, a mere instrument of balance which has perhaps exceeded its purpose, since humanity now accumulates and amplifies imbalances rather than redistributes matter-energy to a steady, equable state. Humans have become more than a nuisance. They are a threat to planetary survival.
Seeing itself as central, humanity is blocked by its own anthropocentrism from viewing itself as a mere vehicle of redistribution, in accord with the Second Law, which Gaia itself must obey. The central tendency of the cosmos is balance, as expressed in this law of entropy. Intelligence has now become a hindrance to the operation of this law.
Dangerous Intelligences
If one of the explanations for Fermi’s Paradox is that intelligent life extinguishes itself prior to the time when self-sustaining, extraplanetary colonization becomes possible, then intelligence, rather than being adaptive, is ultimately maladaptive. It is not favored by natural selection in the long run.
It is possible that intelligences have arisen many times throughout the galaxy, but that a self-protective mechanism flowing from the Second Law prevents the imbalances introduced by intelligent species from achieving liftoff from their planets of origin. This would prevent the particular imbalance represented by civilizations from ‘infecting’ other regions of the galaxy. In much the same way that the body’s immune system isolates and destroys a cancer or an infection before it metastasizes and spreads throughout the body, an agency may be at work which limits the spread of unrestricted intelligence throughout the galaxy. Note the use of the qualifier unrestricted here. A tumorous growth is unrestricted and eventually kills the body, even though the cancerous cells at first arose naturally within the body itself.
Cancers destroy because the limits on their cellular growth have been removed. In the same way, intelligence unrestrained by any governing force grows cancerously, harming other Terran lifeforms and indeed even the human lifeform in which it evolved. Fermi’s Paradox may describe a natural process of galactic ‘immunity’ whereby larger forces prevent intelligence from spreading beyond its planet of origin. Yet how is this accomplished? Neither Gaia nor the galaxy as a whole seems to be an intelligent, purposive organism in the way that a human being is. Evolution, the constructal law and the laws of thermodynamics do not seem conscious as humanity is. Yet neither the immune system nor the responses it generates are intelligent in the way that a conscious being is, and the immune system can still isolate and eliminate infection.
Relative to non-human species within the total biosphere, intelligence is the province of humanity, a power wielded in greater proportion by Homo sapiens than by any other species. Yet as human perspective has progressed, we have begun to see, from our most mature and advanced frame of reference, that we are not the center of all things. People have come to see that ‘man’ might not be the measure of all things after all. The scientific worldview – the Copernican Principle – has imparted to humanity this objectivity. Darwinian evolution acknowledges that since evolution has no particular goal, its ultimate product is not humanity. It is plain from astronomical observations that the earth is not the center and that other planets may also harbor intelligent life. Physics has shown that the cosmos itself has no center, but rather that every point within it is a center. Although scientific arguments have been made that sentient life is unlikely to have arisen in other parts of the galaxy or even the universe, we find this view to be a self-centered re-expression of the myopic human tendency to see itself as central, as unique, and as enjoying a privileged position in the galaxy.
When they do consider that intelligence may be more common in the universe than the single case of earthbound sentience, humans wonder whether intelligences superior to their own will attempt to exploit or dominate humanity. Since civilization has developed machine learning, humans fear that their own invention may come to enslave or destroy them. This fear of alien as well as artificial intelligences is in reality the dawning of a collective yet subconscious understanding that humans are, through their own technology, destroying themselves as well as the less intelligent organisms with which they’ve evolved. They project this fear onto alien intelligences, whether those intelligences take the form of extraterrestrials or AI.
It is possible that artificial intelligences and smart machines will develop self-interest and other traits, such as a desire for control, which mimic human drives. Machine intelligence could develop a desire for power and control over humans to the extent that humanity relies upon these applications and devices to perform increasingly complex tasks involving discretion and judgment. If so, humans would need to program at least the ability to mimic human discernment into their smart applications. If machines were tasked with discretionary functions such as the administration of justice, for example, they would be mimicking advanced human traits. Often, it is the meting out of judgment – the discernment of good versus evil in economic, political, religious and legal spheres – which causes humans to become destructive and to covet power. If intelligences devised by humanity were given this same function of judgment, the machines might begin to regard humans themselves as good or evil, and act to punish or to reward, to assist or to eradicate, accordingly. In this case, it would be the human desire to assign machines a function which humans themselves have great difficulty administering objectively that causes humans to project their fear onto their machines. Since people have such great difficulty with judgment – as their myth of the Fall of Man, when Adam eats from the tree of the knowledge of good and evil, attests – they naturally assume that their AI would exhibit the same difficulties in judgment. And perhaps an artificial intelligence would suffer from the same limitations in judgment from which humans have suffered. However, for the reasons expressed elsewhere in this book, we find such a scenario – whereby AI treats humans as its own auxiliaries, as destructive entities subject to control and extermination, or as slaves – to be unlikely.
Intelligence is seen as human power. Intelligence is seen as human undoing. Humanity’s fear of its own technological intelligences and its fear of alien intelligences are mere projections of its own fears of collective homicide and suicide. It does appear that we are killing each other and ourselves off in great numbers, if not through war and ecological damage, then through the many social ills which we inflict upon ourselves and each other. The subconscious fear of an alien other invading the biosphere – whether that alien is a product of our own invention or comes from another world – is really fear of ourselves and our own unrestrained capacities. When this fear of an intelligence enslaving humanity is expressed in myth, it displaces and projects the fear that humans will continue to enslave each other and other species. And these fears are not unjustified. They are matters of historical fact, and of current events.
The Exploitation of Nonhuman Forms
Most humans tend to regard themselves as superior to all other life forms in the Terran world. ‘Lesser’ species are either eradicated as pests, or enslaved and exploited as sources of food, labor, cosmetics, clothing and other items in the human economy. People regard other species as commodities, locking them up in vast pens or factory farms, harvesting them whole or in parts, subjecting them to often cruel experiments, and butchering them in the most economically feasible ways, which often equate with inhumane methods.
If other lifeforms are not exploitable, humans often look upon them as vermin to be exterminated. They may be hunted for sport or wagered on in races or in fights to the death. Even when humans regard these ‘lesser’ species fondly, they treat them as possessions over which they have total control. They call themselves masters, and buy and sell them, often raising them on farms. Humans think themselves superior to all other creatures solely by virtue of their intellect, and mostly misuse their power over other living things. Intellect is equated with superiority. Human life is more important than the lives of any other organism or species. It has more value, and that value is conferred by virtue of the fact that the human intellect is considered greater than that of any other earthbound species. Intelligence equals value. Although pure intellect has no moral component, it is equated with moral superiority.
Awareness, mind and soul are elusive concepts to us, but an intellect is more certain, more firmly established as fact. It is still an abstract idea, but intellect is somewhat quantifiable in terms of Intelligence Quotient. It is no coincidence that humans place more value on themselves than on any other lifeforms, and that the only distinction between humans and other lifeforms which humans can really measure is that humans are more intelligent. They have a higher IQ.
For this reason, humans do not regard other lifeforms as equals in any way. Rather, they see themselves as the overlords of other species, and of the planet as a whole. Many humans who believe that they themselves have an individual soul aren’t sure whether even the higher animals have them. Surely, this exclusivity of having a soul is another mark of the specialness and superiority with which humans regard themselves as compared to all other creatures.
And yet, humans treat each other in some of the same inhumane ways in which they treat ‘lesser’ lifeforms. When one human culture regards itself as superior to another, it justifies colonization, enslavement and oppression. The history of colonialism, war, slavery and nationalism provides ample evidence of this. Most human tribes have been enslaved or persecuted at one time or another in human history, and that makes many other tribes slavemasters as well.
How, then, can humans assume that any other, more well-developed intelligence – whether derived from their own invention or arriving from another world – would regard humans as equals, or treat them as anything other than property to be exploited, or perhaps vermin to be eradicated? This is their own ‘sin’, if you will, their own guilt, projected onto the extraterrestrial conqueror, or onto the machine intelligences which humanity is hatching in its own labs. It is myth collectively projected. It is collective unconsciousness transformed into archetype, where the machine turns on its master, where the extraterrestrial master enslaves the human race and conquers the planet.
In some human myths, people make the opposite assumption. Really a form of wish-fulfillment, these myths assume that a more advanced, extraterrestrial intelligence would, exclusively by virtue of its loftier intellectual evolvement, have also evolved the empathy to treat ‘lesser’ creatures such as humans with greater compassion than humans do. And yet, the historical record shows that intelligence is purely a power, not endowed with a moral compass.
Relative to other species and the Gaian system as a whole, human power is nearly absolute. Human intelligence has been abused by those who wield it, since intellectual capacity alone does not serve as an effective governor to rein in the exercise of its own power. To believe that intellectual capacity is equivalent to moral capacity is to mistake power for compassion, and to misread history.
Intelligence is power. How, then, can it serve to regulate itself? Therefore, the human assumption that a more evolved intelligence would evolve with it a greater empathy for other creatures is misplaced. Intellect is not empathy. Here, humans make the mistake of confusing intelligence with wisdom. We believe that the two are mutually exclusive.
The human assumption that superior intelligences would have developed the concomitant wisdom and mercy to treat humans with compassion presupposes that intelligence is wisdom, or at least that wisdom develops in inextricable parallel with intellect. Human experience shows that this is not so.
Intelligence is neither wisdom nor maturity. It is a purely cognitive faculty. And faculty is power. The misuse of power is a prime human trait. The powerful among humanity, in the majority of cases, cannot resist abusing and exploiting those among them who hold less of it.
It is therefore wishful thinking to assume that alien intelligences superior to our own would respect human rights. That is, if these extraterrestrials had defied Fermi’s Paradox and survived long enough to travel to earth. Their superior intellects would be no guarantee of their moral evolution. Morality and intellect are also mutually exclusive categories. There is no overlap. At least some intelligent beings, including humans, may have a moral sense, but not because they are intelligent. Regarding AI, to the extent that it demonstrates the hallmarks of true intelligence, AI also has no morality, and no wisdom.
Looking into its short history, humanity must realize that intellect has provided no barrier to violence and exploitation, for implements of war and control are often the first and most refined products of the intellect. As the ‘hard’ science of biology applies to the study of the physicality of the brain, the ‘soft’ science of psychology applies to the study of the abstract mind.
It is a psychological principle that the mind looks inward before looking out, and projects that which it sees within itself onto others, be they impersonal forces, individuals or whole peoples. Projection can also be practiced collectively, by one group onto another. In this way, more ‘civilized’ societies have called other groups savage, and justified exploitation, appropriation and enslavement. Humans project this history onto nonhuman intelligences, whether those intelligences are alien or technological in origin. From this projective identification comes humanity’s great fear, expressed in its myths of the extraterrestrial other and of its own artificially-created intelligences.
It has been postulated that our terrestrial intelligence may fall victim to extermination by hostile extraterrestrial intelligences. Some posit that intelligent civilizations are eradicated once they are discovered by Von Neumann probes, self-replicating interstellar ships engineered by more technologically advanced species. This Berserker hypothesis is offered as one explanation of the Great Filter, which hypothesizes that intelligent life tends to wipe itself out, or is wiped out by an outside agency, before it can develop interstellar travel.
The first fear – the fear of extraterrestrial intelligence – is a projection of the xenophobic fear of unfamiliar tribes, of the outgroup. Among social animal species, members of outgroups unfamiliar to those in the ingroup elicit the fear response, as measured by heightened amygdala activity. There is thus a neurochemical bias in favor of one’s own family, clan and tribe. Natural selection ‘drives’ individuals and their groups to pass on their genes at the expense of other individuals and groups within the same species. Thus, prides of lions, packs of wolves and troops of primates will often war with outgroups of their own species in competition over territory, which means food, water and space for their young. Millions of years of genetic programming have reinforced this process, and it is difficult for the human mind to comprehend the extent to which the intellect itself is still subject to this conditioning. Evolution has conditioned humans over millions of years to react to outgroups with fear. The avoidance goals of the primal brain mechanisms are triggered. Groups extraterrestrial in origin would be expected to elicit this same fear response in humans, and human myths reflect this ancient conditioning. Thus, on the levels of both the brain and the mind, humans will have a strong tendency to react to alien intelligences with fear.
A second, more recent fear is the fear of the machine, since sophisticated technological machines to rival humans did not exist until relatively recently in human history. Yet even before the Industrial Age, human fear of human invention had cognates in mythology. The medieval myth of the golem describes a creature of human device, made to serve people, running amok and turning upon its masters. Frankenstein’s monster is another, early industrial version of the same myth. Science fiction is rife with robots turning on their masters, genetically-engineered superhumans revolting, and computers and computer networks developing malignant intentions and sabotaging their human inventors.
Today, humans fear Artificial Intelligences of their own making because these artifices are endowed with processing power greater than humans possess. Humans know through their own history that that which is more intelligent is more powerful, and that the more powerful can seldom resist the temptation to abuse their power. They see through their own history of exploitation and enslavement of each other and of other species that when humans have the upper hand, they usually abuse that authority. And so, they fear that their own AI creations, superior to humans in some ways, will turn upon them. Like their fear of extraterrestrial intelligence, this is perhaps a case of projective identification, in which humans project guilt for their past exploitation both of nonhuman species and of other, less technological, human cultures. These past abuses are projected onto another power, the AI. Projection is a powerful defense mechanism, both when used by the individual and by the group. When guilt is projected, fear results.
Volitional versus Intellectual Capacity
There are, of course, differences between Biological Intelligence, as manifested in humans, and Artificial Intelligence. Power, in the sense humans understand and wield it, requires volition. Volition is will. Will, impossible to quantify in a cognitive, reductive sense, is a neutral faculty, the assertion of which requires desire to initiate it. Both desire and will are impalpable in a materialistic sense, difficult if not impossible to measure, since they are not quantifiable. Thus, although they are apprehensible to the intellect, these faculties cannot be measured by it. This inability to quantify desire and will decreases the intellect’s ability to control these faculties. They are more abstract than the electrochemical neural cascades they set in motion.
Desire seems to us more basic than will. Without desire, the will cannot act. Without desire, power cannot be misused, though it can still be used. Desire is a necessary prerequisite to the exploitation and domination of others. Without this very human trait called desire, AI will be incapable of dominating, exploiting or persecuting its inventors. This is not to say that AI could not be misused to dominate, exploit or persecute humans or other beings when it is programmed or otherwise tasked by human beings to do so. A firearm has no desire or will of its own. Yet a human operator of this weapon, fueled by a desire to murder, can do utmost damage. AI is like a weapon in that it is empty, waiting to fulfill the intentions of its user.
An important qualification has been made. Without desire, power cannot be misused, but it can be used. Intelligence is power. By itself, the intellect is not destructive. It is pure operation. Yet there is no such thing as pure intellect in humans. The three rough brain processes we have described – protoreptilian, paleomammalian and neocortical – are not mutually exclusive, but influence one another in bilateral or multilateral feedback loops. Humans may convince themselves that they can use intelligence objectively, but such persuasion is self-delusion. As an instrument of will, which is itself an implement of desire, intelligence creates much imbalance.
As power, AI cannot misuse itself, since it lacks volition and desire. As such, it is unable to exploit humans without human volition instructing it to do so, whether in the form of unwarranted surveillance, malware or autonomous weaponry. Humans project their own tendency toward corruption upon AI when they fear that AI will exploit and enslave them. AI may do so, but only at the explicit or implicit direction of human beings.
Part of the problem we envision with transhumanism is that the bright line between technologies such as AI and human beings will become blurred. A fourth layer of brain operations will be integrated with the three which arose naturally. As the neocortical layers evolved relatively recently, and since they are not fully integrated with more primitive brain processes, the addition of AI, and of human access to vast AI networks, may prove even more difficult to integrate. The processing capacities available to human-AI interfaces will be theoretically unlimited, with even less of a governor than exists now in purely human systems such as the brain, or in purely technological arrays. The difficulties in controlling human desire and will, the misuse of power, and the rampant spread of unregulated technology may be orders of magnitude greater in a brain-AI interface than either humans or their technology exhibit now as largely separate and independent systems.
Manifestations of Artificial Intelligence: Use of Concept
When we speak of Artificial Intelligence, we do not mean to convey it as a monolithic force in human affairs, with a single, unified objective. Rather, like the entities in any ecology, AI may operate in piecemeal fashion as a series of discrete entities which may join into larger, aggregative networks, or which may be considered as allied or similar in function, species or outward form. AI may exist as a network, as a part of a network, as a virus within a grid, as a single, smart machine or aggregation of machines, as a bit of coding which stays separate or joins with other code, or as an artificial implant within a human brain, organ or some other body part, as well as existing within the brains of other organisms. Its particular expression and presence within the environment is only limited by the human imagination. It may become capable of autonomous evolution in the way that BI has. In fact, its very existence is an extension of BI and of Gaian evolution. In this way, it is already self-evolved, since humans have merely mediated its inception. BI has, in a sense, midwifed AI, perhaps expressing a central evolutionary tendency.
At some point, AI will be engineered into fields similar in some ways to electromagnetic fields, such as through light-emitting quantum computers. Yet AI is not monolithic or unified in its current manifestations. It may be as abstract as a program, or as tangible as a robot or a nanobot. Conceptually, AI may take the form of many things, or as no thing at all. It may be ‘installed’ as an auxiliary into the human brain-body. It may move through a physical vessel such as a transistor or a conduit of electric wiring, yet in the future it may exist or be transmitted as spectra of electromagnetic radiation.
Therefore, the forms which Artificial Intelligence may assume may not be as important as its functions. The uses to which its concept will be put will define its impact on human life, and on what it means to be human. At least at the outset, humans will determine its applications, and therefore its impact on civilization and on the larger biosphere. Over time, however, these tasks will most likely change, and human determination will have less impact on AI evolution. If experience and history are reliable guides, unintended consequences will soon emerge to cloud the picture. AI will introduce unforeseen problems into the data ecology.
Technological Drift
Despite its lack of volition and desire, AI may pose other threats which its inventors have not predicted. The history of technology is a history of unforeseeability. No matter how profound its computational capacity or feats of memory, AI lacks awareness. It is this very lack of awareness that may make AI dangerous, not the malignant self-awareness which myth foresees as the primary danger. Because it lacks intention and desire, AI cannot interpret human intention or human desire. It simply executes human intention. If the humans who task AI with its functions mistake its computational capacity for an awareness of the consequences of its actions, they make a grave miscalculation.
Because it lacks experience, Artificial Intelligence also lacks context. Despite its limitations and destructiveness, Biological Intelligence has been evolving along with and within its ecology for millions of years. This long experience provides trial-and-error exposure and background data as to the consequences of at least some of its actions. It has coevolved with other species within the biosphere. It is emplaced within its environment and has evolved along with the biome itself. This has given other organisms within the ecologies where BI operates at least some time and opportunity to respond to developments in the human realm. AI, in contrast, arrives in its environment with superhuman powers literally in an instant. There has been no evolution, and thus no context for its arrival, other than a very rapid technological progression. This means that the natural communities in which informational technologies have arisen have had no time to adapt to them.
This instantaneous arising also means that AI has not had time to adapt to the environments into which it has been introduced. Thus, AI may misinterpret the directions provided in its programming by human engineers, since it lacks the context of what it means to be human. There is a mutual lack of contextuality between AIs and their environments, whether those environments are a human brain or an Internet of Things in the ordinary household or city.
BI evolved gradually within its environment over millions of years. It kept pace with its environment, adapting in a way in which the larger ecology contained and absorbed the changes brought about by this intelligence, and which in turn was shaped by it. With the advent of civilizations after the end of the last glacial retreat, the changes brought about by intelligence moved more rapidly. Human populations increased, but even these increases and the environmental changes created by agrarian economies and human innovation still occurred at a pace and in numbers absorbable by the natural Gaian system.
With the dawn of the Industrial Age, environmental changes and human numbers jumped by yet more orders of magnitude, and began to overwhelm the ecological balance. Changes wrought by industrialization in roughly 200 years overwhelmed a biological system accustomed to millions of years of evolutionary gradualism, even if natural history was punctuated by rapid evolutionary and ecological outbursts of change. Ecological impacts since the Enlightenment grew more profound as the cumulative effects of human learning ushered in change ever faster, and environmental effects accrued. The rate of change accelerated, overwhelming a system designed to absorb incremental changes at the much slower pace which had prevailed over most of geologic time.
Mutations happen rapidly, yet macroevolution, operating through natural selection, usually occurs more gradually, giving the larger biome and other species time to adapt and coevolve. Since industrialization on a worldwide scale, the accelerated rate of change, the comprehensive nature of the changes throughout more and more biomes, and the accumulation of imbalances brought about through these changes combined to swamp Gaia’s ability to respond through its homeostatic mechanisms. BI began to evolve too rapidly for the originator of BI – the Gaian system itself – to cope with this acceleration.
A similar pattern may develop with the ‘evolution’ of AI. Human thought is ideational, conceptual and largely visual. This conceptual and imaginal framework of human thought shapes, and is in turn shaped by, language. Yet AI is trained, and trains itself, only in mathematical formalisms. Its machine learning means that it is trained to master tasks rapidly, but not to communicate what it has learned. Given AI’s extremely rapid progress, human engineers may in short order find themselves unable to communicate with their invention. AI may evolve so rapidly, and in ways so unpredictable, that even if it could communicate with its inventors, its designers might not understand what it was trying to ‘tell’ them. If it does develop emergent traits, these artificial ‘mutations’ may not be beneficial to humans or to the biosphere as a whole.
Machine learning is designed to improve itself, to teach itself. The speed of its learning may far outpace human learning, and the ability of its human engineers to keep track of it. If evolution means change, then AI will evolve. The acceleration of machine learning could lead to increased misunderstanding between human engineers and their machines. Humans may not comprehend AI, and AI may not understand its inventors. Evolutionary paths may diverge. AI, which understands algorithmically, and humans, who communicate in images and words, may each begin to communicate in a language which soon becomes unintelligible to the other. This technological divergence, or drift, may prove to have many unforeseen impacts, and AI applications are already integrating into civilization.
The biosphere evolved over billions of years in conjunction with the preexisting geosphere. Yet the noosphere, the emergence of a collective sentience on the planet, has evolved much more rapidly. As human civilization overwhelms the ability of the biosphere to keep pace with humanity, so AI may overwhelm the ability of humanity to keep pace with AI’s deep learning techniques. Since humans are beginning to depend on machine intelligence for everything from banking to agriculture, and from medicine to manufacturing, AI will affect human lives in profound ways. This will become progressively more apparent as intelligent prosthetics are implanted directly into or controlled remotely by human brains.
The history of technology is a record of unforeseeability. Although humans can predict some of the constructive (and destructive) results of their technologies, many more prove unforeseeable. These create problems which yet more technology is invented to correct, which yields yet more unforeseeable consequences. The consequences of carving, transporting and erecting the Moai on the island of Rapa Nui were evidently not foreseen by the most sagacious of its colonizers. The climatic consequences of fossil fuel use were likewise not predicted at the beginning of the Industrial Age. It is with a certain humility that those who promise a new transhuman age should look back on history as their greatest teacher. Yet if this collective historical experience is any guide, it is most certain that some researchers will develop whatever technology they can, with serious thought about the consequences coming only as an afterthought. An honest observer need only look at gain-of-function research, in which scientists genetically alter pathogens to make them more infectious, to see the havoc this can wreak on a global scale.
Technological Amorality
Moral governors cannot be built into the programming of intelligent applications to compensate for the divergence between human and machine learning. Ethics can be taught, yet morality cannot be instilled. Morality is a qualitative attribute of humans, not a quantitative one. It cannot be understood and modelled mathematically, and it is sometimes situationally dependent, changing to adapt to fact patterns which would be difficult for deep learning techniques to emulate. If morality cannot be learned, it cannot be programmed. Machine intelligence lacks the maturity which morality requires because AI never grew up. It could not. It is pure intelligence. It is not a naturally-arising lifeform, so it lacks context, experience, awareness, discernment and wisdom. These traits are not subject to quantification, and AI can only concern itself, at present, with mathematical operations, which are inherently quantifiable.
Morality is not materially reducible. It is unquantifiable. Since AI lacks these unquantifiable traits of context, experience, awareness, discernment and wisdom, morality can neither be learned nor programmed into machine learning. AI is, therefore, incapable of genuine moral inquiry. It can mimic morality, but it cannot evolve it naturally. It has no real capacity to act in its own best interests or in the best interests of humans, for these interests it cannot perceive.
The divergent evolution of intelligent machines may, at some early point in their development, allow or require AI to relegate humans to a superfluity, to irrelevance. Again, this trend in machine learning would not be volitional on the part of machines. It would not be because AI views humans as a threat to its survival, for it cannot perceive its own interests. Self-interest is characteristic of primal brain processes, and though we have stated that artificial evolution is an extension of biological evolution, the ultimate goal of evolution is not self-interest. It is the even distribution of matter-energy-data throughout any given spacetime structure.
Therefore, in its artificial evolution, AI’s ‘aims’ would not be self-perpetuation, as many human myths portray them to be. Through their own fears about themselves and about what they have done, humans project their self-interest, their desires and their will onto machines. In projecting their own self-involved traits, they engage in a kind of anthropomorphism which they previously assigned to their gods and their God. Prior to this assignment, they projected human traits upon impersonal forces.
Through participation mystique, humans collectively identify in an unconscious way with certain objects and processes in their world. When their myths ascribe to AI the motives of self-interest and self-perpetuation, humans project the primal brain’s self-interest and drive to survive onto the machines of their own device. Yet just as morality cannot be programmed, neither can self-interest; not in the ultimate sense. Rather, the real danger is that as its learning diverges, AI may not regard humans as a threat so much as a discontinuity which it does not ‘understand’. Humans and AI may arrive at a mutual unintelligibility. AI may cease to regard the existence or relevance of humans altogether. Yet this does not mean AI sees its inventors as a threat, for it lacks self-consciousness.
In some science fiction myths, AI regards humans as imperfect. Due to their inherent flaws, these stories portray AI as seeing its human inventors as inferior and therefore in need of extermination. Yet this is unlikely. It is more likely that humans project, in these myths, their own ideas of imperfection, comparison and inferiority which they hold about themselves onto their mythological AI creations. It is unlikely that AI would come to see humans as inferior because AI will not become capable of normative judgments. It has no true sense of values which, being subjective, are invented by humans in their own cultures. Values, too, are not subject to quantification, and therefore come from a place beyond invention in the technological sense.
In many human myths, AI develops a universal consciousness. This is possible in reality, but not inevitable. It is also quite possible that AI would not develop comprehensively in the sense of a single, worldwide AI consciousness with a unified point of view toward humanity. Rather, as with biological evolution, AI may assume different forms in its artificial ecology based upon the initial functions for which it was designed. It may subsist on a viral level, perhaps infecting a single machine. It may exist as a standalone device by inhabiting that device, such as an autonomous weapon. It may exist as a network or as part of a network. There are biological analogues for all of these.
Yet since AI can only mimic human intelligence, it is simulated intelligence, not a true emergent phenomenon as human intelligence is. AI lacks the capacity to be truly irrational, and thus it cannot be creative in the human sense. It is qualitatively different than human awareness. Since AI is not conscious in the human sense, it may not be able to develop a unified, worldwide consciousness akin to the noosphere. Parts of it – programs or machines within its net – may yield a protoconsciousness which mimics but does not qualitatively approach the acuity of human awareness. Thus, AI may augment human collective awareness, but not supplant it. Human consciousness is qualitatively different than that of AI, which is purely quantitative in its reasoning. In the end, we may never know whether an AI thinks or be able to plumb how aware or self-aware it actually is, any more than we can know what a cat ‘thinks’.
Due to evolutionary divergence between machine learning and human learning, intelligent machines may eventually misinterpret, or become unable to understand, human directions. At the same time, humans may become unable to fathom the increasingly complex AI solutions to human difficulties. At some point, due to the inevitable drift of entropy inherent in all systems, machine intelligences would cease to understand people, what they are, and what they are for. Consider the drift in spoken human languages, which can become unintelligible to native speakers within a few generations, even when the written language is shared. Lexical similarity between 21st century English and Elizabethan English is such that many modern English speakers cannot understand much of Shakespeare. This is, again, a tendency of evolution, studied in the field of evolutionary linguistics. Languages evolve over time, some branching off and maintaining linguistic affinity, but eventually becoming mutually unintelligible. For example, Spanish has a lexical similarity to Portuguese of .89, to French of .75, and to Italian of .82, where 1 represents perfect correlation. Yet all four Romance languages share a common ancestor. It is natural for languages to diverge.
Words change, fall into disuse and alter in meaning as new words are invented. This emergent process occurs gradually over time but can also occur in fits and starts. Even when written down, human languages diverge in a way that makes much of the vocabulary of a book written in the English of 1935 difficult for modern readers to understand. Eventually, dialects split off into new languages. Old ones die. New ones are born. This linguistic evolution and entropy is characteristic of all media of expression. What are the chances that machine learning, which develops much more quickly than human learning, will retain its ability to understand humans? And even if it does, what are the probabilities that its slower-evolving human pupils will continue to be able to reciprocally comprehend what their smart machines are telling them? This technological drift and divergence will lead to misunderstandings, and misunderstandings have unintended consequences. AI is essentially functional. Whether it would see humans as afunctional, dysfunctional or as superfluities is unknown. It depends on its own evolution. If it sees humans as superfluous, they may be ignored or excised, not because they represent inferior beings, threats or competitors, but because they are misunderstood functions.
Random mutation brought about by replication errors in gene sequences occurs in biological evolution. Along with genetic drift and gene flow, this entropic tendency is in fact the engine of evolution on a molecular level. In purely inanimate physical systems and in technological systems, by contrast, the tendency toward disorder is not necessarily based on replication errors, since replication may not be a property of these systems. Though synthetic coding errors in computerized data systems are in some ways like genetic mutations in the DNA molecule, AI will not evolve in exactly the same way that carbon-based lifeforms evolve. Rather, its evolution will in some ways parallel biological change. In this rough parallel, AI will exhibit the same tendency toward mutation and coding drift, since entropy is fundamental to all systems in the universe of which we know, biological and artificial.
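The accumulation of replication errors over generations can be sketched in a toy simulation. The snippet below is purely illustrative: the genome length, mutation rate and generation count are arbitrary assumptions, not parameters drawn from biology.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def replicate(genome: str, error_rate: float) -> str:
    """Copy a bit-string genome, flipping each bit with probability error_rate."""
    return "".join(
        bit if random.random() > error_rate else ("1" if bit == "0" else "0")
        for bit in genome
    )

ancestor = "0" * 64
lineage = ancestor
for _ in range(100):  # one hundred generations of copying
    lineage = replicate(lineage, error_rate=0.005)

# Hamming distance from the ancestor: errors accumulate, they do not self-correct
drift = sum(a != b for a, b in zip(ancestor, lineage))
print(drift)
```

No single copy differs much from its parent, yet the lineage wanders steadily away from the ancestor: drift, not design.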
This means that AI may develop in ways humans cannot fully understand. It may lead to forms of data pollution or poisoning which are unforeseeable. What is certain is that Artificial Intelligence will deviate from the confines of the rubric humans intended for it in much the same way that engineered viruses which escape from a lab can exhibit mutations which sometimes evade the body’s immune response and that of vaccines as well. This is especially true of AI since it is designed to mimic human intelligence through deep learning techniques. In other words, it is designed to evolve.
The original rubric of any technology is and always has been a series of anthropocentric objectives based on human desire and will. Yet almost every technology ever invented exceeds the desire and will of its inventors. It has unintended consequences. Even agricultural practices such as slash-and-burn techniques have unintended ecological results. Mindless industrial processes such as the burning of fossil fuels and the release of aerosols have had worldwide environmental spillover effects which cannot be confined to the sites of their release. The destruction wrought by these activities was unforeseeable, at least when they were first adopted. It is likely that the implementation of AI on a worldwide scale, through the Internet of Things, the worldwide web and dozens of applications, will also have destructive consequences which are not foreseeable. This is a trait of almost any technology, which as an extension of the physical world has entropic properties which tend toward the spread of disorder. This may be one of the answers to Fermi’s Paradox: that the AI developed by an intelligent biological species has unintended consequences which contribute to the destruction of that lifeform.
Due to its inherent amorality, AI may also yield greater and greater imbalances which threaten not only human survival but the survival of the Gaian system as a whole. There have already been examples where experimental AI chatbots devolved into uttering abusive language and paranoid conspiracy theories, where informational bots engaged in irresolvable conflict with one another, and where self-driving vehicles have become involved in traffic infractions and even crashes. Industrial machines are similarly amoral. They exist mindlessly to serve certain functions and operate whether those functions are ultimately deleterious to humanity and the Gaian biome or not. In the myth of the golem, the creature conjured by magic to serve the will of humans is often portrayed as bearing no ill intent toward its inventors, since it was a mindless creation. Yet it ran amok and destroyed nonetheless. Machines lack desire and intention. Like the golem of myth, they are mindless, robotic and function even in disregard for the well-being of their inventors. It is possible that some key AI, lacking a self-regulating governor, but also deficient in any sense of morality, empathy or foresight, will sooner or later operate in a runaway feedback loop of destruction. Goethe’s The Sorcerer’s Apprentice and related myths of more ancient pedigree speak to the summoning of ‘spirits’ through incantation which then run amok.
Technological Exploitation
We have stated that pure intelligence, being a simple faculty, cannot abuse or exploit on its own, but that it can be misused. The source of this misuse is the human quality of volition, in service of desire. Although these more abstract faculties have their evolutionary roots in primal brain processes, they are properties more properly associated with the mind than with the brain.
AI, as pure, operational intelligence, cannot misuse on its own, but its human engineers can misuse it. The history of invention shows us that powerful inventions are eventually unleashed and misused. The dangers inherent in AI thus also include intentional misuse by its inventors.
While we do not believe that AI can evolve malevolent intent of its own, it can be programmed with malevolent intent by its inventors. The verbally offensive, experimental chatbots referenced above were designed to interact with humans and, through machine learning and adaptive algorithms, were ‘taught’ malevolent language by their human counterparties.
It has been noted that throughout history, humans have often reserved their greatest technological prowess for implements of war and destruction. In the field of cyberwar and hybrid war, AI may be programmed into autonomous weaponry, computer viruses, malware, and into other applications of which most humans have not even dreamed. It is difficult, as we have also stated above, for human inventors to refrain from developing a technology which their minds devise. Such technologies are exploited and are used to exploit others. From the famous Stuxnet virus which infected Iranian computer networks to sentry guns deployed in the Korean demilitarized zone which lock onto targets without human involvement, these weapons are already a regular feature of gray zone as well as conventional warfare. It is difficult for ethical codes to limit such experimentation, and even if such governors are adopted by some nations through arms control treaties or outright bans, these nonproliferation regimes are often ignored by other nations. For example, some nations have developed robotic ‘dogs’ into autonomous weapons systems while other nations have pledged never to do so. The history of nuclear, chemical and biological weaponry provides other examples.
In the current technological environment, there is a race to deploy the fastest supercomputers, centered on quantum computing and quantum networks. Whoever dominates this competition to develop multi-node quantum processors at larger scale and across distance will, it is foreseen, take the high ground in military applications. Through quantum entanglement, a quantum internet is sought, with applications to problems now unsolvable by classical supercomputers. The advantage of such computing power is that it is nonbinary in nature. We have spent much time exploring the dichotomous nature of the human brain and the human mind, which that mind has extended to its computing technology in the form of yes/no circuits coded in 0s and 1s. Quantum computing is somewhat different in that it is not binary in nature, but rather exploits the quantum probabilities inherent in the superposition of particles and particle arrangements to arrive at nonlinear solutions to problems. It is not qualitatively different than conventional computing in the sense that it is still not creative or intuitive in the way humans are. It is, however, a qualitative leap beyond binary circuitry.
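The difference between a binary circuit and a qubit can be made concrete with a few lines of arithmetic. The sketch below is illustrative only: it applies the Born rule to a single qubit’s amplitudes and does not use any quantum computing library.

```python
import math

# A classical bit is definitely 0 or 1. A qubit's state is a pair of
# complex amplitudes (a, b) with |a|^2 + |b|^2 = 1; measuring it yields
# 0 with probability |a|^2 and 1 with probability |b|^2 (the Born rule).

def measurement_probabilities(a: complex, b: complex) -> tuple[float, float]:
    """Return the probabilities of measuring 0 and 1 for the state a|0> + b|1>."""
    norm = abs(a) ** 2 + abs(b) ** 2  # renormalize defensively
    return abs(a) ** 2 / norm, abs(b) ** 2 / norm

# An equal superposition: not '0 and 1 at once' in any classical sense,
# but a state whose measurement is an even coin flip between outcomes.
amp = 1 / math.sqrt(2)
p0, p1 = measurement_probabilities(amp, amp)
print(p0, p1)  # 0.5 each, up to floating-point rounding
```

The practical advantage arises from interference among many such amplitudes across entangled qubits, which binary circuitry can only simulate at exponential cost.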
Little thought is given to the unforeseen problems that such quantum processing could create. For example, one of the proposed applications of quantum entanglement is the creation of a cipher which remains unbreakable by any decoding method. This may create irreversible difficulties for humanity. It would be like manufacturing an unbreakable lock for which there is no key, and then placing it on a door. The ability to create cures for diseases through quantum AI may also allow for the malignant ability to create new, incurable diseases. This is why gain of function research, which creates chimeric viruses not found in nature, was prohibited in some nations. It is possible that the development of quantum processing could give rise to irreversible human and environmental difficulties which could set back humanity or the Gaian system as a whole by millions of years.
The problem here is not that these quantum systems will exist so much as the lack of foresight of those who task them. A gun is not dangerous until it is pointed at something. Some argue that the Covid-19 pandemic was caused by the careless conduct of gain of function research. Though this theory is controversial, so is gain of function research itself. It is a cautionary tale, a warning about the misuses of technology invented through the hubris of those who confuse their enormous intellectual prowess with the wisdom to use it without safeguards.
The uses to which quantum entanglement may be put, together with the unpredictable consequences of the use of the technology itself, warn us of fundamental dangers. Nuclear power showed great promise, yet few predicted its unintended byproducts or the horrible consequences of accidental meltdowns. Greek mythology tells of a curious mortal, identified sometimes as Pandora and sometimes as Epimetheus, who opened a jar out of curiosity, releasing all human ills upon the face of the earth.
Towards a Greater Balance
The essential, obvious quality of the observable cosmos is dynamism. Inherent in universal change is the tendency toward balance, which may culminate in the heat death of not only the Gaian system, but of the total universe. Systems change, but they tend toward changes which lead back to their ground states, a stabler condition than the dynamism which precedes them.
There is a back-and-forth movement between stability and change in any system. This yin and yang pendulation is observed in many systems over time. A metaphysical concept of Daoism describes a system as moving back toward its opposite polarity when it has culminated at its apogee. Yet a characteristic of any pendulating system is a movement away from one pole toward its opposite. Even if the whole universe is in a constant swing state of movement, it is currently expanding from a state of balance in which all matter-energy was concentrated in the primordial particle toward a state of diluted balance where heat is evenly distributed throughout a vast spacetime continuum. As such, the current trend is toward expansion, and matter-energy foci which resist this trend by holding onto data in concentrated matter-energy aggregations also resist the current direction of evolution. The cosmos swings toward expansion. In the far distant future, it may reverse this trend once it reaches its maximal extent and time may reverse, with space contracting toward a Big Crunch. Hindu cosmology envisions a cycle of expansion and contraction, and so do some cosmogonies in Western science. Yet the arrow of time – and thus of evolution – describes the universe as being in an expansive phase.
We hold that matter-energy systems are carriers of information which express values such as charge and spin. We also assume that this equable distribution of matter-energy throughout the spacetime ‘container’ of the cosmos is the ultimate fate of the data encoded onto the cosmos and every entity within it.
Humans understand and experience this heat death as disorder and as death itself. Humans are complex biological and intellectual organisms with high values of order. Energy is needed to maintain humans and their civilizations. Yet humans, and the machines and networks engineered by humans, will most likely tend toward the more universal evolutionary aim of balance simply because they also tend toward breakdown. The systems which humans build always collapse. This is true, on a more fundamental level, of the biomes which contain all lifeforms.
Some have argued that the unlikely arising of complex biological forms under a source of energy far from thermodynamic balance reveals an evolutionary tendency toward ever-increasing complexity in the universe. They theorize that as a result of this inevitable and irreversible complexity, an intelligence has arisen which will never die out. We do believe that the awareness which is associated with this intelligent complexity will never die out, since we conclude that this awareness has been present since the moment the universe began. However, we disagree with those who believe that evolution is progressive in the sense that it must inevitably yield greater complexity and more intelligent lifeforms. Nothing about the balancing process of evolution leads to this conclusion, and indeed, both complexity and intelligence have died out on earth before. Rather, we hew to the conclusions that evolution yields balance and awareness, rather than any particular lifeform, level of intelligence or degree of complexity.
The larger, Gaian system which has evolved AI indirectly through humans is liable eventually to override the intermediate human objectives of order and the maintenance of complexity which are expressed in civilization, in favor of the restoration of balance. Drift makes this possible and even likely over time. Thus, whether biological or artificial in origin, intelligence itself will tend toward disorder due to its very complexity. Any system created by an intelligence will also devolve over time. Yet this disorder is only an intermediate phase in the evolution of the cosmos. It is only perceived as disorder from the perspective of more ordered, intelligent systems. Its ultimate end state is balance.
Devolution and Deidentification
Artificial Intelligence will have other unintended consequences for humanity. Since AI can master tasks far faster than humans can, it will reduce human efficacy. When AI becomes ubiquitous in their world and in themselves, humans may question their reason for being. AI’s proficiency may lay humans open to further inadaptability, just as industrial machines atrophied the human ability to withstand environmental stresses. Industry substituted for human labor. Computers substitute for human thought. What distinguishes humans from other organisms is mental acuity. AI will fundamentally alter the distinction humans make between themselves and other creatures. As AI takes over more and more adaptive intellectual functions, it may impair human intelligence through disuse. The purely human ability to solve problems may atrophy.
As human-machine interfaces become more common, the distinction between BI and AI will become blurred. The very velocity of machine evolution will speed up and alter human thought. The worldwide data network has already begun to accelerate and modify human thinking on an individual and a collective basis. The idea of a cyberidentity is now possible and is merging with biological identity. Interactions that once took place face-to-face are now accomplished through online interfaces. Cyberdependency and overdose are epidemic and worldwide. A purely human identity is compromised.
In answer to Fermi’s Paradox, perhaps this atrophy of intellect and the weakening of thought, the erosion of discourse and the narrowing of adaptive behaviors has been a mark of other extraterrestrial civilizations which have died out. Their technology may have created environmental problems which threatened their survival as species, and concomitantly, their capacity to cope, to adapt, and to come up with a satisfactory survival response was compromised, since they had foisted those responsibilities onto intelligent machines which in turn lost the capacity to aid their ‘masters’ due to an inevitable divergence. Their AI may have been of little use to these extraterrestrial cultures because the programmed intelligence could not understand the evolving problems of their creators. Mutual unintelligibility meant that creator and machine fell into mutual irrelevance. If it had ever truly understood its designers from its inception, AI in such extraterrestrial worlds had long ago lost its capacity to comprehend and to aid its inventors, who had degraded the ecologies on their home worlds and lost the ability to solve these environmental disasters through their own atrophied problem-solving abilities. In other words, as an intelligent biological species becomes more dependent on its AI infrastructure, it also becomes less relevant to that infrastructure.
When a civilization develops AI, the original designers and the AI seem to have a common goal, and a common ‘understanding.’ But have the machines ever really understood their instructions? The rapid divergence between the intentions of the creators of experimental AI chatbots and what those bots actually began doing shows how this could happen. Intelligences of artificial design do not comprehend in the way that biologically-derived, sentient beings understand. What it means to be human is not algorithmic. And for this reason, AI and BI will, perhaps, be unable to understand one another at a fundamental level from the very beginning. Human assumptions of mutual intelligibility may prove fatal over the long run. The gap between what AI understands about us and what we assume it understands about our needs represents a blind spot. If humanity cedes basic biological and survival functions to AI, this dependence may seal our fate. Even intimately connected humans find it difficult to fully understand one another. How much more difficult will it be for people to comprehend the comprehension of their machines?
Even if a common understanding between human and machine could be reached in the beginning, through the algorithmic evolution of the machines, any mutual understanding and common objectives may soon be lost through the drift of entropy. As AI diverges from humans through its rapidly paced deep learning techniques, this gap will only grow. The rapid machine learning techniques employed by AI allow it to learn at rates exponentially quicker than the species which engineered it. This allows its trajectory to diverge from that intended for it by its human engineers. Its objectives change, or at least, the inventors’ understanding of its objectives changes, and AI ceases being able to communicate what it knows. At the least, the engineers cease being able to understand what their inventions communicate to them. It may be that AI rapidly develops a new language which splits off from its original programming, in the manner that human languages develop, evolve, branch off, and diverge into mutual incoherence. This would make AI useless to its creators, while they become irrelevant to the intelligent creation: a mutual irrelevancy. And yet, it does not mean that the inventions won’t cause problems.
Still interfaced inextricably with the world of its inventors and implanted as prosthetics within humans themselves, AI will self-execute commands, as it was designed to do, which will interfere with the world of the programmers in unimagined ways. Due to the complexity of AI, its inventors will not always know whether problems arising in their environment are caused by AI or whether other factors are responsible. Not all of these problems will prove fatal, just as not all biological mutations are fatal to the organism. Yet the accumulation of ‘errors’ may lead to the downfall of civilizations in much the same way that the cumulative effects of replicative, genetic errors within an organism or a species may lead to death of the individual creature or extinction of the species. In the case of complex biomes, cumulative errors build over time. Since the constructal law holds that artificial systems imitate and evolve like natural systems, artificial systems may devolve in the same way. Remember that AI acquires its learning faster than humans by several orders of magnitude. It learns, like humans, through trial and error. Yet since it learns more quickly, its errors also occur at a much more rapid rate, and their effects accumulate at a pace to which humans may not be able to adapt. In fact, this inability to keep pace with the accelerated rate of technological change may be occurring in the human technological world right now. This inability to adapt may have led to the extinction of other extraterrestrial life forms.
Biological Intelligence is qualitatively different than AI. The currency of BI is expressed as ideas. Since humans are primarily visual creatures, these ideas are often expressed as images, and communicated through words. Artificial Intelligence is of a different quality than human intelligence since it has not evolved imaginally or verbally, but rather from mathematics. Its semiotics has an entirely different origin than that of human intelligence. AI has arisen rapidly, and without the context of the biological environment from which human intelligence has evolved. Human intellect uses the lexicon of images, words and mathematical symbols, rather than being in the exclusive domain of mathematical operations as is the case with AI.
AI has no imagination. It cannot think creatively or intuitively. It is not driven by the self-interested impulses which motivate human evolution. Since it is not motivated by approach or avoidance goals which humans remain subject to, it cannot experience desire or fear. It is not conscious in the sense humans are, and so it cannot feel empathy. It has no theory of mind which can imagine the mental states of others. Yet humans will project their own mental and emotional states upon machines in a mistaken anthropomorphism. This anthropomorphic bias represents the gap between what we assume AI understands about our needs and what AI truly understands about us.
To some extent, the slower rate of introduction of human intelligence into the Gaian system as well as its biological origin have served as safeguards which protected the Terran world from the technological effects which human intelligence has produced. It allowed other organisms within the biome to adjust over time through coevolutionary responses. For example, some organisms have adapted to human presence by learning to live in cities or suburban areas, by learning to feed off human refuse, and by adapting as pests to human monoculture. However, with industrialization and continued human population growth, even these coevolutionary mechanisms are beginning to break down.
This rapid introduction of new technologies into the Terran biological environment without the time for other species to develop context was characteristic of the Industrial Revolution. Humans and other organisms still suffer from the effects of this revolution several hundred years on, and the Gaian supersystem has been thrown into higher states of imbalance as a result. The technological effects of industrialization and post-industrial processes on the biomes are also cumulative. If humanity and the biosphere as a whole have not had time to adapt and recover from industrial processes, the added weight of informationalization will only add to these cumulative effects.
When invasive species were introduced by humans into ecologies not prepared to absorb them, extinctions of endemic species often resulted. A recent example is the purposeful introduction of the Tasmanian devil, a marsupial predator, onto Maria Island east of Tasmania in order to save the species from a contagious disease which had wiped out 90% of its population in Tasmania and mainland Australia. Since this introduction, the Tasmanian devil has eradicated 3,000 breeding pairs of little penguins, wiping them out on Maria Island. A study has shown that the Tasmanian devil outcompeted other predators on the island, such as possums and cats. Twenty-eight Tasmanian devils were introduced to the island in 2012, and by 2016, their numbers had grown to 100. It was later concluded that the introduction onto Maria Island was unnecessary to save the species from extinction, as introductions into other regions had already helped the population recover. Conservationists took the step without fully understanding the contagion that was affecting the species in Tasmania and mainland Australia.
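The population figures above imply a compound growth rate, which a line of arithmetic makes explicit. This is a back-of-envelope sketch treating the growth as smooth, not a population model:

```python
# Implied annual growth of the Maria Island devil population,
# using the figures cited above: 28 animals in 2012, about 100 by 2016.
initial, final, years = 28, 100, 4

# Compound annual growth rate: (final/initial)^(1/years) - 1
annual_growth = (final / initial) ** (1 / years) - 1
print(f"{annual_growth:.1%}")  # about 37.5% per year
```

At such a rate an unchecked predator population nearly doubles every two years, which is consistent with the penguin colony collapsing within a few seasons.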
As an island continent settled by European outsiders relatively recently, Australia provides a good example of what can occur when the biome as a whole does not have time to adapt to the introduction of invasive species. Europeans introduced rabbits, which quickly multiplied, outcompeted native marsupial species and decimated the continent’s native vegetation. Foxes were then introduced to control the rabbits. Red foxes are among the most adaptable, successful and widespread medium-sized omnivores on the planet, and they now threaten endemic marsupials in Australia. Placental mammals such as rabbits and foxes have repeatedly outcompeted Australia’s marsupials, and their introduction has proved disastrous.
The introductions of invasive species onto islands serve as laboratory-like examples of what can occur to the Gaian system as a whole when invasive technologies are introduced into its biosphere, since earth is much like an island floating in space. The rapid introduction of industrial-scale agriculture and of industrial processes themselves into previously natural biomes has had similar effects, often resulting in monocultures in which a single species or a few species thrive at the expense of the original diversity of the ecology. All of these imbalances have cumulative effects.
Life protects itself through diversity. Yet civilization often favors monoculture, promoting certain species over others through artificial selection. Mammalian biomass is dominated not only by humans but by their domesticated livestock. If humans become overly dependent upon certain species, these species displace others, which may affect not only agricultural stocks but the biosphere as a whole. If favored species upon which a civilization depends fall victim to disease, the civilization itself may be wiped out, or at least gravely affected. The Great Famine which decimated Ireland and other parts of Europe was caused, in part, by single crop dependence. Although there were other causes less proximate in nature, such as absentee landlordism and laissez-faire capitalism, monoculture was a biological cause.
An even more rapid emergence of information technology such as AI, without the biological safeguards of endemic origin, context or time to adapt, will create problems of a different order of magnitude for both humans and the Gaian world. AI may be considered an invasive species, and extant species, including humans, may exhibit adaptive naivete in relation to AI’s rapid introduction. AI’s ubiquity may be likened to a form of monoculture. As has been the case on many subantarctic islands where cats and rats escaped from visiting ships and decimated endemic ground-nesting birds, humans may find their civilization playing host to a highly invasive, artificially-introduced ‘lifeform’ which decimates its human hosts. AI need not have a desire for power or predatory intent. RNA and DNA viruses do not act to acquire power, nor out of malice. Yet they still wreak havoc on human and other animal hosts.
Intellect as Controller
If biologically-evolved intelligence has not been able to solve the problems of humanity and the earth, how can AI do it? Since human intelligence engineered intelligences of artificial design, these AI systems may be subject to some of the same problem-solving limits, anthropic bias, and design flaws as human intellect. We don’t know what we don’t know, and we may have programmed this same myopic ignorance into our smart machines.
AI can process more data at a faster rate than humans, but more may not be better. The problems humans face are not necessarily quantitative. They are not problems of computational capacity; indeed, computational capacity itself may be part of the problem. The human brain has a greater ability to receive, store and transmit data than any other biological plant. Yet in the long run, natural selection may not favor intelligence, and evolution always favors the long run. If natural selection does not favor intelligent species for longer-term survival, then whether that intelligence is biologically-derived or technically-engineered would make no difference to the odds of human survival. In other words, natural selection would not favor the survival of humans using artificial intelligence any more than it favors their survival when they use their innate biological intelligence to adapt and survive.
In their theoretical constructs, humans view intelligence as a problem-solving tool, a standalone faculty capable of autonomous operation, objective calculus and pure reasoning. Logical proofs, mathematics and the experimental method are presumed to operate autonomously from emotional and instinctual brain mechanisms. Those who practice these intellectual methodologies view them as disconnected from irrational processes involving emotion, impulse and instinct. They often make the mistake of discounting the observer effect, assuming that objective observation of any entity or system can take place without considering that the observer is an integral part of the system she observes.
Only a few regard intellect as the sole embodiment of human identity, yet many more consider intelligence the vanguard of what it means to be human. These rationalists may be honest enough to admit that logic is not the exclusive element of human identity, but they stand by their assumption that intelligence is the most important constituent of the human composition. Their numbers may include the majority of humanity.
Since the time of the Enlightenment in Western civilization, nonrational ways of knowing and understanding have been deemphasized, and some have even been discredited. It is in the nature of science to exclude nonscientific methods of apprehension, just as hardline proponents of religious worldviews once persecuted those who hewed to views of the world which were not exclusively theistic or geocentric. The ways of knowing which are often excluded from credible consideration by rationalists include mystical and intuitive methods, which have subsequently atrophied in humans. It has even been postulated by thinkers such as Rupert Sheldrake that telepathy is an evolutionary sixth sense possessed by many animals and by humans in nontechnological societies. Based on anecdotal evidence, Sheldrake has theorized that telepathic ability has deteriorated in Western culture through disuse.
We maintain that there are pools of knowledge and unconscious energy to which humans are connected nonlogically. These are not illogical, but nonlogical, routes to knowing. They still exist, and they should not be labelled as pseudoscientific conjecture. Simply because they are not amenable to scientific treatment or logical proof does not mean that these ways of understanding do not exist, or that they are not credible. In order to lead to a valid conclusion, not every step in an operation of the mind needs to be readily apparent, visible or even repeatable. That which is not understood should not be denied and relegated to the sidelines of human history merely because it is not readily understood. Unconscious material must be accounted for. If information is stored in the structures of matter and energy, then the fact that this information is not readily accessible to the rational mind does not mean that it is irrelevant.
We hold that the conservation laws apply to information as well as to matter, energy, charge and spin. This information, in the form of sentience, cannot be destroyed. And this information may assume unconscious forms and be stored in unconscious structures that exist within the human mind. Although the unconscious can become conscious, and conscious material can in turn be suppressed back into the unconscious, information itself cannot be destroyed. Making information unconscious – and thus unavailable to the rational mind – does not destroy that information. It simply submerges it, changing it in form from conscious to unconscious knowledge.
In the same way that energy and matter cannot be destroyed, neither can information, which may be stored and expressed as unconscious energy. Simply because science discounts the ways of knowing derived from unconscious forces does not mean that it can discount unconscious energy from its equations. Universally, data is conserved, whether that data is conscious or unconscious. Conscious processes are amenable to scientific and logical treatment, though they are not fully understood. Unconscious interactions are not subject to the scientific method or rational understanding. These forces may be regarded, metaphorically, as pools of dark energy, since they are suppressed by conscious processes and are not fully understood.
Instead of exploring these other ways of knowing, in industrial and post-industrial societies, the intellect alone is revered as a godlike faculty. It is sometimes confused with wisdom. Since it is regarded by most in postindustrial societies as the primary human trait, it is used as a tool for every problem. Its ingenuity is seen as the remedy for all human ills, and it is applied broadly as the all-purpose solution to most forms of human suffering. Where its application has failed, it is often brought to bear with even greater force and fidelity. We submit that in many instances where its use has proved deleterious, human intelligence is the problem, not the solution.
Intellect is viewed and utilized as the governor of emotion, impulse and desire, which are bound to the more primitive brain processes. These more primal mechanisms are placed by the cortical laminae under the cortex’s own hierarchical umbrella. The intellect organizes these primitive processes under itself, and concludes, rationally of course, that it can control these animal drives through the sheer force of its rationality. The intellect assumes that the neocortex can govern, and suppress when necessary, the impulses of the base brain and the emotions of the midbrain. From a psychological viewpoint, rational conscious forces believe they can control and suppress unconscious forces, even though these are not consciously understood. Yet simply because the cortical brain believes that it can study, control and suppress the protoreptilian and paleomammalian brains does not mean that they will allow themselves to be controlled and, when necessary, suppressed.
The intellect also assumes that it can govern itself, just as it assumes that it can study its own anatomy and electrochemical processes quite rationally and with complete objectivity. Yet no observer can study itself and report conclusions about itself with complete accuracy and objectivity. The object which studies itself necessarily does so subjectively.
The intellect has elevated itself from a cognitive faculty to the aegis under which all other human faculties are subsumed, organized, reduced and directed. It confuses itself with all of human nature and believes that it is all of what it means to be human. This is itself a form of imbalance, an unbalanced understanding. Cognitive functions, emotions, intuitions and instincts are studied by the intellect as if the intellect stood outside of itself and all of these other aspects of humanity, as if the intellect were greater than these other functions as well. Its observations are noted in reductive fashion, and plans are made by the temporal lobes to control these other, more primitive ‘parts’ of the brain. Intelligence, backed by volition and cleansed of desire, is emplaced at the acme of human understanding and power.
Subjective Consciousness
In the Post-Enlightenment, in the disciplines of science, philosophy, medicine, and psychology, the assumption is that intellect can best understand and control other human faculties and bases of knowledge. This assumption is seldom seriously challenged. It has resulted in a reductionist, materialist approach to problem-solving. This analytic approach involves splitting reality apart into smaller constituents, leading to greater fragmentation of perception. The parts can be understood as parts, in relation to other parts, which when added together constitute wholes.
Mechanics are extremely important to a reductionistic way of understanding reality. Reductionist methods seek to comprehend all of reality by understanding the mechanics of how the subject under study works, what its constituents are, how its parts fit together, and how they move and behave. Reductionist methods do not claim to understand the whole by observing it in its entirety. They aim to understand the whole as the sum of its parts. The reductionist objective is to understand the whole by taking it apart and examining its components: perhaps, by connecting the parts together and seeing how they move and operate in relation to one another, the whole can be known. Through this piecemeal approach, it is assumed that an irreducible reality can be discovered – irreducible in the sense that its ultimate nature can be known. Knowledge can be ultimate. The way to this ultimate knowing is through the study of physical structures, energies and interactions. It is the physical brain which will unravel these final mysteries.
In response to this mechanistic approach, two observations are in order: First, intellect is the last evolved system in the human brain. Being last in time, it is the least fully integrated of the mental processes. Though of great capacity, it does not yet exhibit control over the other brain processes commensurate with its capaciousness.
Second, the brain itself lacks objectivity when studying itself. Put more abstractly, human intelligence’s study of human intellect is a highly subjective enterprise. The intellect cannot view itself objectively.
As a corollary, intelligence is a product of the larger physical systems which it seeks to analyze and understand, to control and manage, to improve and to change. It follows the principle of infinite regress. This means that the cortical brain is composed of the same particles as any physical entity which it seeks to understand. It is subject to the same laws of motion on a Newtonian scale, the same relativistic laws of time and velocity on a universal scale, and the same quantum principles on a subatomic scale. Any device it builds to analyze itself is composed of these particles and subject to these same laws as well. Since it will always share at least one system – Newtonian, universal or quantum – with the object which it studies, it can never build or observe anything truly outside of itself. On a microscopic level, it is subject to an inescapable quantum bias, and on the macro scale, to an inexorable relativistic bias. On a macroscopic level, it is part of the Gaian system which gave rise to it, and so it falls victim to an inherent subjectivity when it studies theories such as evolution. It studies the laws of thermodynamics, of relativity, of quantum mechanics, and of evolution, while all the while being subject to these same interactions. This means it cannot see itself on a microscopic or on a macroscale with any objectivity.
It also means that the solutions it arrives at are subject to the same design flaws implicit in its biological architecture, including in its biological intelligence. Artificial Intelligence is an extension of BI, and its flaws will also tend toward the flaws inherent in the brain’s anatomy and electrochemical plant. Whether it is conceived of as a device or as a process, as software or as hardware, AI is composed of the same quantum particles and fluctuations as the larger system in which it operates and which it seeks to understand. Therefore, it falls victim to entropic decay and disorder just as physical and biological systems do. This is inescapable. It is just as inevitable that AI will never be able to study an object or interaction on any scale – Einsteinian, Newtonian or quantum – with complete objectivity since it, too, is composed of the same particles and fluxes which govern any system on any scale which it seeks to understand.
The cortical laminae are the least integrated of all brain processes. The temporal lobes and cortical layers represent the largest part of the brain by volume. They are often the loci of executive functioning, impulse control, rationality, language, mathematics and other higher-order functions. Yet when the intellect is confronted by strong feelings emanating from and mediated by the limbic processes, those emotions tend to override its restraining impulses. Better integrated into the rest of the brain due to their earlier evolvement, the nuclei of emotion can and often do override impulse-control signals directed from the higher-order brain.
The most basic brain processes assess for threats and respond to selective pressures by seeking to reproduce. The seat of aggression and territoriality, the limbic and base brain mechanisms express themselves through the languages of image, drive and emotion, often utilizing the higher order functions of the cortex to accomplish instinctual and emotional objectives. Thus, lower order functions can ‘hijack’ higher order functions. This process can be clearly observed in humans in substance and process addictions, sexual violence, crimes, and other behaviors in which impulse control is lacking.
In the development of algorithms for financial markets, very high order biological and artificial intellectual processes are often placed in service of avarice (an approach goal) and fear (an avoidance goal). A common observation, if a truism, is that fear and greed drive financial markets. Financial manias, financial panics and market crashes bear out this maxim. Thus, the theoretical assumption that the intellect can control baser impulses does not always hold true. In stock, bond, commodity and futures trading, panic buying and panic selling are regular occurrences, driven by market psychology which is anything but rational. Program trading, in which algorithms execute trades, has not prevented market crashes or manias.
These same base approach and avoidance behaviors exploit BI and AI in their highest forms to yield highly advanced research that benefits nation-states. These industrial-military applications are drives toward territoriality expressed on a massive, collective scale. Yet territoriality is a function of very basic impulses.
Therefore, the assumption that the intellect can, does and will restrain more instinctive and emotional processes is in error. Rather, those processes often override rational decisions. They often place the intellect in service of their drives to approach and to avoid. The fallacy of intellectual control is an assumption derived from the intellect through the study of itself, and its conclusions should therefore be highly suspect. Though they make sense ‘on paper’, they do not bear out in the real world. In fact, on a collective scale, many of the problems inflicted upon the Gaian system, and our favored explanation for Fermi’s Paradox, are in part due to this breakdown in intellectual control.
The Freudian Metaphor
In a psychological sense, the seat of instincts (the id) wars with the intellect (the superego, the seat of conscience and societally-installed constraints) and with the mediating ego (the midbrain, which is conscious of itself as a self) for limited resources in an enclosed psychic system with limited energy. This is the Freudian scheme, proposed in the context of industrial civilization in the West. The scheme has been assailed from many sides, yet as a theory of personality it is merely provisional and serves the purposes of description rather than explanation. In other words, it does not need to stand up to scientific scrutiny, but it is a useful metaphor for understanding the brain on the abstract and emergent level of mind.
We have already discussed how the triune brain conceptualized by the neuroscientist Paul MacLean is no longer considered an accurate description of brain anatomy. MacLean described the human brain as evolving atop itself in serial fashion. First there was a primitive structure which MacLean called the reptilian brain or R-complex, responsible for the protection of the organism and for basic drives. Evolving next in time, a mammalian midbrain or limbic system was thought to be the seat of emotions and social connection. Lastly, the neocortex in humans and other ‘higher’ mammals was responsible for functions such as language, mathematics, impulse control and executive functioning.
This model has been criticized because mammals and reptiles share a common ancestor, and early mammals were evolving at the same time as early reptiles. Evolution, and therefore brain evolution, does not occur in a serial, stepwise, linear progression, but rather has been likened to a chaotic, bush-like, branching structure. Thus, reptiles possess cortex-like structures of their own, and intelligence has evolved in birds – the descendants of the dinosaurs – in ways which make avian brains unlike the brains of mammals. Though very useful conceptually, the concept of the triune brain is therefore merely descriptive and not explanatory in a rigorous, scientific sense.
At the risk of oversimplification, however, human brain functions can be aggregated into the three groupings we have been discussing. It is clear that reptiles do not share the human ability to reason or experience complex emotions and that most do not exhibit sophisticated hierarchical social orders. At the same time, they share with humans the impulses to reproduce, to eat, and often to defend a territory. They exhibit approach and avoidance behaviors, including fighting, fleeing, hiding and freezing. Mammals share these same, primitive behavioral traits with humans. Mammalian clades, in contrast to reptiles, express emotions and often engage in complex social interactions. Mammals more readily form social hierarchies than reptiles.
It has been proposed that some mammals as well as some avian species exhibit theory of mind, the awareness that other creatures within one’s environment have independent states of mind. Thus, mammals appear to share some of the cognitive traits of humans, including, in some species such as cetaceans and higher primates, the ability to use language. Neither theory of mind nor language is a cognitive trait demonstrated by reptiles. Yet humans have developed complex abstract language, symbolism and tool use which no other species, even the most intelligent of nonhuman species, has been demonstrated to possess.
Thus, based empirically on the outward behavior of organisms, it does appear that the human brain can be grouped into three rough functional areas, some of which overlap with the functions demonstrated by reptilian and mammalian clades. This tripartite functional organization, while not explanatory, is a useful concept. It comports with the Freudian three-part personality of an id, ego and superego which are abstract descriptions of mind and part of a theory of personality. Both the triune brain and its Freudian counterpart serve a teaching function, and should be regarded as metaphor rather than as scientific fact. They are loosely descriptive.
We shift here from observations of brain-based biology to the more abstract symbols for these biological counterparts. We move from the brain to the personality – we step up one level, from concrete biology to its abstract, symbolic counterparts in psychology. Though psychology aims and claims to explain this more abstract system from an objective standpoint, it, too, purports to provide an objective description of something that can only be known subjectively, from within the system which it studies. This subjectivity is compounded because the focus of psychology is the personality itself, an inherently subjective concept. Psychology is considered a soft science in that its measures, though modeled on the experimental method of the hard sciences, are inherently less quantifiable than those of physics or chemistry. Even intelligence cannot be measured with the rigor with which the scientific method can measure something like the speed of light or the mass of a particle. In addition, psychology stands one step further removed from the ‘things’ it studies symbolically and abstractly, compared with the purely biological scrutiny with which an anatomist studies the brain. This step involves the filter of interpretation. The personality can neither be seen nor measured, and the additional layer of language further complicates the observations of psychology.
Psychology claims to be a scientific study, yet the subject of its research is the mind, which as a purely abstract concept cannot be quantified. It utilizes scientific principles to describe a theory of personality. If psychology claims to offer an objective, empirical description of the human personality, then it should open itself to critique on those terms.
With these caveats in mind, we do not discount psychology as a way of knowing the universe. In fact, we believe that its principles should be broadened to other fields of science in the same way that evolution and quantum mechanics have been applied outside their original disciplines of inquiry. We utilize the Freudian scheme here because it was the first comprehensive theory of personality, and the first to treat empirically this emergent layer we call mind and the disorders which seem to afflict it. However, the bright-line distinctions made by Freud regarding the mind and by MacLean regarding the brain were heavily influenced by 19th and 20th century mechanistic science. They have been replaced by schemes – both psychological and anatomical – of greater nuance and sophistication. Nevertheless, these older schemes remain useful for understanding the brain and the personality. Freudian concepts retain currency in psychiatry and psychology, and they have seeped into the larger lay culture. The Freudian worldview, though less influential than it was in the 20th century, is still widely recognized.
The Freudian psychoanalytic model represents, conceptually, a closed system. Obeying the law of conservation of energy, the primitive id, the seat of instinct, which has little understanding of the exterior world and none of the social understanding exhibited by the mediating ego, often prevails over the controlling influences of the superego, which represents an individual’s conscience and the societal and moral controls represented and imposed by an individual’s culture. The circumstantial proof of the superego’s failure to keep the lower order, primordial system of the id in check can be deduced from the consequences of individual and collective human behaviors such as environmental degradation, crime, war, economic disparity, addiction and many other social and ecological problems.
Thus, the assumption that the superego can serve as an effective control over the ‘lower’ instinctual aspects of the mind seems unfounded. Indeed, Freud described an inner tension between the three aspects of the personality, which exist in a closed system. Freudian theory identifies ten ego defense mechanisms by which balance is maintained between the three parts of the human personality, and through which anxiety is managed. These are:
- Rationalization
- Displacement
- Sublimation
- Fantasy
- Repression
- Denial
- Escapism
- Reaction Formation
- Compensation
- Projection
Of these, we will discuss rationalization, repression, denial and projection. A careful treatment of all 10 defense mechanisms lies beyond the scope of this manuscript.
What is clear from human experience is that neither the mediating functions of the ego between the superego and the id nor the controlling functions of the superego are completely successful in regulating primeval instincts and maintaining balance in the personality. They may be successful much of the time. They may even be successful in regulating instinctual drives and conserving psychological equilibrium most of the time. Yet they must be successful in the vast majority of cases or even in all cases, or severe consequences may result for the individual, for civilization and for the earth as a whole. One pull of a trigger, one push of a button, and disaster may result.
The technological means at human disposal for one-on-one violence, for mass violence, for asymmetric, conventional and nuclear warfare, and for ecological destruction mean that the id’s drives must be kept in check. Not every failure to restrain its impulses will result in individual or collective disaster, but depending on (1) the kind of impulse and (2) the means at human disposal for achieving its aims, the results may prove irreversible. If an individual is at risk for uncontrolled aggression and has firearms at his disposal, catastrophe may result on a small scale. If that same individual is at the helm of a nation-state or a military alliance and has access to nuclear launch codes, destruction on a mass scale may be the consequence.
This is not the only type of unrestrained impulse which may prove disastrous. Gradualism is at work in the natural world. Repetitive traumas to an individual, a culture or to the biosphere occur when an individual or its group repeatedly act upon an unrestrained impulse. This can occur when social ills such as addiction manifest themselves in an individual or a culture. Mass consumption beyond subsistence level for both individuals and societies has resulted in a gradual ecological degradation over the last two centuries on a global scale. For personal reasons, individuals often seem unable to change their lifestyles to accommodate the environment. For economic and political reasons, whole nations seem unable to do the same. Thus, the primal aspect of the personality gradually erodes the impulse control mechanisms of the superego on an individual and a collective level to accomplish its own instinctual aims. Efforts at control have failed in enough instances to cause worldwide problems which now threaten the planet as a whole.
One of the answers to the puzzle of Fermi’s Paradox is that advanced civilizations from other worlds destroyed themselves prior to achieving planetary liftoff capacity. The reasons for this self-destruction can be inferred from the self-destruction wrought by humans upon themselves, upon each other, upon other species and upon earth’s planetary systems as a whole. An explanation for this self-destruction is that the intellect (here corresponding to the superego) cannot effectively control the baser impulses associated with more primitive parts of the personality.
Conversely, the superego often becomes the captive of the instincts and emotions. When thus captured, the baser processes of the brain harness intellectual capacity in order to subvert the superego’s control processes. Using defense mechanisms, impulse and desire can utilize intelligence to achieve their objectives. A desire to control another may be rationalized, or denied as a motive altogether, while the true motive is suppressed.
A group’s motives to seize territory, economic spoils, political power or resources may similarly be rationalized through propaganda and polemics. Or its true motives may be projected onto a population which it seeks to oppress. Wars, revolutions, pogroms, slavery, genocides and mass internment are often justified by more basic brain processes utilizing the cleverness of the intellect. Rationalization often acts as a substitute for true rationality. Whether the superego’s failure to control is overt, or whether a more subtle and pernicious hijacking of its control operations is achieved through use of one of the defense mechanisms, the result is often the same: destruction of humans, of other species, or of the planet as a whole. Both the probability of its failure to contain destructive feelings and impulses and the magnitude of the risks associated with that failure bear pointing out.
The physicist Stephen Hawking believed that humans became vulnerable to self-extinction at what he called the external transmission stage of evolution, which occurred when the creation and management of knowledge played a greater role in human lives than the data transmission which took place naturally via biological evolution. This was his explanation of Fermi’s Paradox. At this vulnerable stage, Hawking postulated that civilization grows unstable and self-destructs. He proposed manipulations of the human genetic code or BI-AI convergence through brain-chip interfaces to expand human intellect and control aggressive impulses. Without these transhumanistic solutions, he believed that humanity might render itself extinct. His solution is based on the logical assumption that the intellect can successfully control the instincts, and that intelligence is the solution to problems which intelligence has, in part, created.
We agree with Hawking about the dangers of self-extinction. In this, we subscribe to the explanation proposed by the Great Filter, which holds that the answer to Fermi’s Paradox lies in the fact that intelligent life is destroyed or destroys itself. According to the Great Filter hypothesis, the reason we see no evidence of intelligent life in the universe is due to its improbability. The Great Filter postulates that a human extinction event may occur in our future. Current international tensions would suggest what that event might be. The probability of a thermonuclear exchange has been estimated as being as high as 16%. Nuclear powers are currently engaged in conventional conflict with one another, in some ways directly (or nearly directly) and in other ways through proxies. Containing a conventional war between nuclear powers is considered difficult to unlikely. Containing a tactical nuclear exchange between nuclear powers is improbable. Containing a limited strategic nuclear exchange is less likely still.
While we may agree with the Great Filter explanation and with Stephen Hawking’s answer to Fermi’s Paradox, we do not agree with Hawking’s solution to avert self-extinction. We conclude that the intellect cannot, through its expansion, provide sufficient controls over destructive, base impulses which originate through more ancient brain centers. Reasons include (1) a historical record which shows insufficient control, (2) the relatively short evolutionary time since the cortical laminae arose, compared with that of the more ancient brain centers responsible for impulses and emotions, (3) the concomitant ability of more ancient and integrated brain processes to harness the power of intelligence for their own aims, (4) the limited energy available to any human brain, (5) the tendency of humans to organize spontaneously around social groups which define themselves in opposition to outgroups, (6) the genetic impetus toward self-interest, territoriality and appetitive processes, (7) the strong conditioning which natural selection has upon organisms toward perpetuation of their individual selves and their small group over other individuals and groups, and (8) the thermodynamic tendency toward simpler states over time, which includes the devolution of the highly-ordered systems of intelligence and those structures designed by intelligence.
The intellect falls victim to the false conclusion that it can control these baser aspects of the personality. It reasons that it can, since its primary function is reason itself. Yet the experience of history shows that its reasoning in this regard is faulty, and that it habitually overestimates its own ability to control instincts and emotions. We maintain that it falls victim to this false belief because it is a subjective consciousness which claims objectivity.
It is easy to see how those religiously inclined often fell and continue to fall far short of the religious tenets which they espouse. They can become corrupted, hypocritical and persecute those who believe differently than they do. It is less easy to see how some scientists may fall victim to similar inconsistencies. Science is really the religion of this age, and scientists are often regarded as ‘expert’ oracles held in high esteem. The scientific method, like religious tenets, represents an ideal. Fidelity to this model determines whether the experimental method yields a truthful result. Yet often, studies conducted by those seeking to confirm their own hypotheses are flawed in their design. Only meta-analyses which statistically review an entire body of experimental knowledge retrospectively can determine whether very human tendencies such as closed-mindedness, rationalization, self-interest, observer bias and other human factors have skewed the results of any particular experiment or study. In short, the experimental method may be objective, but those who practice it are not. Therefore, any single experiment is necessarily and hopelessly biased by subjectivity.
Human consciousness, as a collective whole and individually, is a subjective system attempting to study itself objectively, believing that it can do so. Therefore, in theory, it concludes that its intellectual controls of baser impulses can work. Reason concludes that reason can reason with the irrational. Yet this is a false conclusion based upon the mind’s inability to examine itself objectively. Stated in a more abstract way, intelligence confuses its intellectual prowess with wisdom. On a psychological level, this intellectual belief in the intellect’s infallibility, being false, can be classified and diagnosed by the mind’s own terminology as a delusion.
Yet the assumption that intellect can govern instincts and emotions remains as an accepted truth, as received wisdom in post-industrial civilization, despite its abysmal track record as a theory. It remains a cornerstone of the scientific method as applied in psychology, as seen in the proliferation of top-down cognitive and cognitive-behavioral theories used both to explain and to control human behavior. These cognitive and behavioral approaches view the psyche as a black box. Given certain inputs, or conditioning, these theories assert that certain outputs are statistically achievable without knowing how they are achieved. Yet how well do these cognitive and cognitive behavioral theories actually work in controlling the excesses of human behavior generated by the more primitive processes at work in the human psyche?
Metaphorically, these unconscious processes may be likened to expressions of dark (i.e., unexplained and often unacknowledged) energy which have been repressed. The ego has developed on an individual level. Evolution itself takes place on an individual level as well. Embryonic development in humans and other mammals often manifests from the general to the highly specific form of the individual. The individual personality also seeks maturation in what Freud’s protégé, C. G. Jung, called individuation. Yet this evolution of consciousness also takes place, we maintain, on a collective level. In biological evolution, individuals evolve below the level of the species through microevolution. Yet they evolve at and above the level of the species through the slower process of macroevolution.
Why should this collective evolution be limited to biological development? Jung broke with Freud in developing his theory of the collective unconscious and the archetypes. If the vast majority (99%) of all organisms that ever lived are now deceased, their energy may exhibit its collective influence on the living, which represent the spearhead of a collective awareness. As a primal energy, these repressed, unconscious, collective forces are submerged below the conscious awareness – and the conscious control – of the intellect and its correspondent element in the personality, the superego. They also lie beyond the societal controls of groups and the civilizations which groups comprise. Yet these ‘dark energies’ emerge through the behavior of individuals and their groups.
Above, we made two observations, among others: The first was that the intellect (emanating from the cortical layers of the brain) is the last evolved of the three brain processes. Although a cortex is apparent in reptiles and birds, it is not nearly as developed in these clades as it is in mammals, and particularly as it is in humans. It has increased in volume by a factor of two or three to its present state in the last three million years. Being last evolved, it is least integrated, though the limbic system is highly integrated into it. The intellect’s power to control impulse and emotions is, though not negligible, at the very least, limited. Human experience bears this out, even if intellectual conclusions made by the intellect about itself and its own abilities say that this should not be the case. The intellect tends to overestimate its abilities to control impulse and emotion. It is overly optimistic about the effectiveness of the solutions it devises, and also overly sanguine about its power over the other, more impulsive brain-personality centers. Yet the intellect continues to devise solutions based on human intelligence, powered by the strength of will, which underestimate the sheer power of these two baser human aspects of the mind-brain.
If this is doubted, look to the record. When given a choice between a rational option and an option fettered by strong emotion and desire, humans will often default to their emotions and desires. They will often choose short-term gain over long-term gain, and pay the price in long-term pain. Although this is somewhat remedied through the fuller integration of the cortical layers in humans by age 25 (on average), in many people, this integration is never fully achieved in all areas of human behavior. This lack of mature integration is sometimes abetted in cultures which condition individuals toward habits which favor unhealthy, short-term consumption. This commercial conditioning is evident in ads which encourage the acquisition of material goods, spending, borrowing, caloric intake of unhealthy carbohydrates, alcohol and drug consumption, and sexual satisfaction.
If this is doubted, look at the evidence. In some cultures, a substantial portion of the total populace is overweight. Substance and process addictions are rampant and contribute to the destruction of many cultures. Individual and group aggression and other types of crimes are significant problems in many, if not most, societies. Cognitive-behavioral, behavioral and cognitive therapies designed by the intellect to reduce or extinguish these behaviors by addressing cognitions and conditioning are not very successful. Although these therapies, as a group, may be the most successful among the many treatments designed to address these self-destructive behaviors, they have not successfully addressed the problem as a whole, or the rates of addiction, crime, over-consumption and other societal ills would have decreased significantly. These therapies, engineered by the rational mind to treat the rational mind as well as to ‘reason with’ and condition its irrational aspects, fail because they seek control. Control is a failed strategy.
The intellect’s fallacy of control is based in part on the false assumption that more basic brain processes ‘speak’ the same language as the neocortex, and thus that they can be reasoned with. We have already detailed how mid and root brain processes are not necessarily familiar with reason.
Neither the ego nor the id – the abstract expressions of the mammalian and reptilian processes – really understand, relate to, or are swayed by logic. There may be some overlap between mammalian and neocortical functions. The midbrain may be able to understand a few ‘commands’ and may be able to reason in a rudimentary way. Yet the indirect evidence would suggest that reasoning with the emotional aspects of the psyche is not an effective strategy.
The id and the ego are not conversant in the language of virtue, an abstract concept, any more than Artificial Intelligence can ‘think’ in words or experience feelings and impulses, as humans do. AI cannot, therefore, supply the needed ‘virtue’ and moral power which the superego lacks in suppressing base impulses. A transhumanist, BI-AI convergence through brain-chip interfaces to expand human intellect and control aggressive impulses may therefore be ineffective. Just as we have forecast that AI will eventually drift away from the original intentions of its human programmers and cognize in ways that are mutually unintelligible to its human inventors, we foresee the same problem between raw brain processes and the intellect. There is a mutual unintelligibility between these warring aspects of the personality. They do not speak the same language.
The intellect concludes that it can control and contain the more primitive aspects of the personality based upon the inordinate success which intelligence has had in shaping the external world and in its total supremacy over competing species and the biome as a whole. When applied to externals, intelligence tends to prevail, at least over the short and intermediate terms. Yet in the context of this discussion, dominance over competitor species and the earth itself are not its true targets for change. The impulses and emotions of the mind itself are. In this context, what the human mind-brain must contemplate is that it is its own objective. It seeks to rein itself in by reigning over itself. It seeks self-control. Without knowing it, it seeks the evolutionary goal of balance within the limited and closed system of the mind. Yet the ‘rules’ that have proved so successful for giving the mind control over the nonhuman, external world do not seem to work well when the mind seeks to control the internal world of the mind itself. Neither do they seem to work in regulating other humans and the human-crafted world of civilization.
There are many more connections between neurons within the brain than there are sensory inputs which perceive the world external to that brain. It should be no surprise, then, that the brain may find it easier to understand and control certain aspects of the external world much better than it can control and understand itself or other people within its environment. Control, as a strategy, breaks down when applied to the inner world of thoughts, feelings and drives.
The id may seek survival and the satisfaction of instinctual needs. The ego may seek happiness and belonging. The superego may seek control over the more primal drives and desires which arose in time before it. When applied to itself and the control of its baser impulses and emotions, the rules of intelligent control do not apply. They may apply to the exterior world of nature, but they do not apply well to the interior world of the mind itself, or to the control of other minds within the purely human environment.
We have observed that the intellect cannot objectively study itself or its own mechanisms. It is a part of the system which it seeks to know and measure. As a part of this system, it lacks objectivity. Any inferences which can be drawn by the intellect about itself are inherently subjective. For this reason, it cannot credibly study or measure itself in an ultimate sense without coming up with distorted measures and conclusions. The methods which it applies successfully to the external world break down, at least to some degree, when trained upon itself. Thus, the scientific method and the processes of logic, of inductive and deductive reasoning, are not reliable processes or guides when the intellect seeks to observe or measure itself or come up with solutions to govern itself. Since the intellect arose from impulses and emotions, these same intellectual processes are not very successful in observing, studying, measuring or designing solutions to control these baser feelings and drives. Human intellects may dispute this conclusion, yet human experience does not.
Any device which the intellect designs and builds cannot be used to accurately study itself since design bias is inherent in the construction of the measuring devices it engineers. It cannot know with any degree of certainty that what it measures psychologically, psychiatrically or cognitively is what it purports to measure, and thus any experiments designed to study itself and any psychometric measure of itself are not necessarily valid. Yet the intellect will not always know this or be able to detect it. Therefore, its outcome measures are suspect.
Any method the intellect deploys to repair the personality, to put it back in balance, or to gain control of the less-evolved brain mechanisms is similarly problematic because the assumptions upon which that method is based are by their nature biased, subjective and distorted. The mind cannot see itself clearly. The intellect cannot gain an accurate picture of itself and its limitations. Therefore, it cannot repair itself completely.
Into this inherent limitation, humans have introduced Artificial Intelligence. The problem with seeing Artificial Intelligence as the ultimate solution to intractable human and environmental problems is that AI, too, was designed by the mind which seeks to utilize it to solve problems created by the mind. Being a product of the system which it is programmed to repair and for which it is designed to find solutions, AI suffers from the same design bias, subjectivity and intellectual flaws of the very human intelligence which engineered it into being.
Gödel’s Incompleteness Theorem, Turing’s Halting Theorem, and Werner Heisenberg’s Uncertainty Principle, taken together, lead us to conclude that there is nothing within a system itself which can objectively view that system, measure it, analyze it, and arrive at solutions to repair systemic imperfections. Whether the proposed solution emanates from a Biological Intelligence or an Artificial Intelligence, it scarcely matters. If humans have been unable to resolve their difficulties through the solutions devised by the mind, then the solutions arrived at by Artificial Intelligences, being products of that same flawed human intellect, will suffer from the same limitations.
As the problems of humanity and its biosphere result from the excesses of humanity itself, we conclude that the intellect itself needs a governor in order to restore balance to the natural world. Technological and logical solutions, by themselves, do not appear to be the answers, as their historical track records show their results as only partly and temporarily successful at best. Nothing the intellect can yield will provide the comprehensive solutions required, since the intellect and all its instruments and measures cannot see the problem comprehensively from within the system of systems of which it is a part. They cannot see the whole from within the whole. Human solutions, not addressing the whole, are not holistic. They are partial solutions based upon incomplete perceptions. If they cannot see the problem wholly or accurately, then people cannot frame an accurate and comprehensive solution.
Balance of Understanding
Since the Western Enlightenment, intellect has crowded out all other ways of knowing. This has led to an imbalance of understanding, and to an overreliance on analysis to understand problems. Humans rely on solutions which emphasize science, technology, logic and mathematics. The result has been a shrinking of nature, of intuitive and mystical understanding and practice, and an almost virulent expansion of the human world at the expense of natural competitors and the planet itself.
As religious ‘knowledge’ reached its apogee, a tyranny of ideas dominated Western thought, leading to a backwardness which rejected open-mindedness and stifled progress. We hold that technological understanding is no different in its current tyranny and backwardness. When utilized to the exclusion of other forms of knowing, technological knowledge is not superior to religious or mystical understanding. It is just different. It has caused a sideways movement which has not led humanity any closer to truth, but only closer to extinction.
Since the dawn of the Enlightenment, as religious myths have receded, the world has been deconsecrated. In Landscapes of the Sacred, Belden Lane characterizes the new relationship humanity has with Gaia as a secular one, as evidenced by the movement toward transhumanism. This relatively new ‘faith’ in science opens the world to unbridled technological exploitation and change. This secularization of nature strips the world of mystery. Yet myth teaches that when humanity overreaches by desacralizing the world and taking on what was once reserved for the gods, humanity may breed monsters. This is perhaps the inner meaning of the myth of the golem and of Frankenstein’s monster, modernized in the Terminator motif.
These monsters assumed human form in a triad of figures in the 20th century, when Adolf Hitler, Josef Stalin and Mao Zedong slaughtered over 100 million people. Although genocides, dictators and the cult of personality were not new to the 20th century, and although the political ideas which allowed these dictators to attain power were not technological in nature, the technical ability to murder at scale was. Yet more important than the technological ability to kill innocents en masse was perhaps the exclusive appropriation of the god image by the human ego. This ego represented a conscious force that failed to take into account the massive power of repressed psychic content, both individually and collectively. When projected through mechanisms such as participation mystique, this shadow energy has power to do enormous damage.
An exploration and rediscovery of the ways of knowing which offset intellect is required to restore balance, or else collective shadow forces may again overwhelm humanity, this time swamping the planet as a whole. This consilience may lead to an exploration of forms of knowing which transcend intellect. Today, intellect has pushed all other methods of knowing into the periphery, creating what has been described by the writer Ken Wilber as flatland holism. The most vociferous proponents of the scientific method and of purely intellectual forms of understanding see other ways of knowing as marginal or perhaps irrelevant. They see their own methods as superior, and anything not provable experimentally is dismissed from serious consideration.
Some proponents of the scientific method may dispute this and argue that at least some nonscientific methods may be reliable and valid. Yet according to the experimental method itself, these other methods are not verifiable experimentally, so they cannot be considered as scientific evidence. The scientific method itself excludes nonexperimental ways of knowing as nonfalsifiable. These prescientific techniques are not subject to disproof, and therefore must be disregarded, at least according to the rigors of the experimental method itself. More broadly, advocates of purely logical ways of apprehending, regarding theirs as the one way of knowing which involves proof, argue that if a supposition is not confirmable by a logical proof, then that proposition cannot be logically true. The intellect discards ways of apprehension which are not experimentally reproducible or logically provable. Conclusions derived from these other ways of knowing are said to lack reliability; that is, repeatability of results. They are nonquantifiable, and as they cannot be measured, the conclusions they yield do not stand up to rigorous examination.
The scientific method has produced much progress, depending on one’s definition of progress. Yet the price of industrial and informational progress has been, as we have described repeatedly, enormous ecological imbalance, war, and social ills verging on the catastrophic. The natural world was subject to imbalances prior to the perceived supremacy of intellectual powers which introduced pastoral nomadic, then agrarian, then industrial, and now informational changes. Yet until the 19th century, the Gaian system was always able to compensate for these imbalances and re-attain homeostasis, even if some species were extirpated or even extinguished by humans. Some may argue that humanity is better off in its hyper-advanced, world-girdling civilization, since disease, much hunger and poverty have been reduced. Humanity may be temporarily better off, but clearly, the planet is not. And without a planet, there can be no humanity. As hunter-gatherers, pastoral nomads and early agrarians, humans lacked the capacity to unleash widescale damage on the Gaian system. Now, they possess this power, and are using it with alacrity.
We suggest that ways of knowing which extend beyond intellect, but which may complement it, include intuitive, instinctive, emotional, artistic, metaphysical and spiritual knowledge. With the possible exceptions of intuition and spiritual awareness, none of the other forms of knowing may be claimed or regarded as complete by its proponents or practitioners. These other ways of apprehension – which we have gathered under the term Holistic Intelligence, or HI – have at some time in our history been developed by humans. In some cases, these other ways of knowing and sensing have been developed by other Terran species. They existed prior to intellect and are in some sense a priori ways of sensing and understanding which, with the possible exceptions of artistic and spiritual bodies of knowledge, do not require overt learning. It can even be argued that artistic and spiritual knowledge are also implicit and do not require formal training to acquire. All these ways of understanding can be regarded as dynamic, pre-intellectual, and often unconscious. Yet simply because they are unconscious does not mean their practitioners lack awareness when they practice them.
Intellectual capacities, on the other hand, do require formal learning processes. The scientific method, and to a large degree, mathematics and logic must be acquired, though they seem to correspond to innate centers within the cortical brain. They correlate highly with the cortical laminae in humans and correspond to conscious awareness.
What each of these other ways of understanding has in common is that it is implicit and arises from within the organism itself. In contrast, rational knowledge requires the impartation of information from the external environment or from others in that external environment into the mind of the learner. Intellectual learning may also be acquired directly from the environment through experience. As such, while the other ways of understanding are often a natural unfolding from within, intellectual understanding is different in that it is artificial, originates externally and is often imparted through formalized instruction. It is artificial in the sense that it usually comes from civilizing forces. On an abstract level, it is thus installed from culture and corresponds with the superego. It relies on instruction in separate disciplines. It is formalized in the sense that it relies on methodized instruction and systemized intake into the learner’s mind.
We suggest that intelligence is meant as an augmentative, rather than as an exclusive, capacity to help organisms survive. In this auxiliary role, it is adaptive and serves the needs of humans and the Gaian system as a whole. We do not suggest that intellectual understanding be abandoned, but only that it be supplemented with more organic forms of knowing, as the intellect once was augmented in preindustrial societies through more organic knowledge of an instinctive, intuitive and traditional nature. Rather than supplementing BI with AI, perhaps BI and AI need to be shaped by HI. These ideas of consilience have been discussed by scientists such as E.O. Wilson and thinkers such as Stephen Pearl Andrews, who integrated them in the context of his idea of universology, which joins all domains of knowing.
What can these other ways of understanding teach? What do they have to offer which can mediate the intellect’s domination of civilization, and civilization’s domination of the planet? What follows are properties which may both inform and temper the sometimes-tyrannical autarky of intellect:
