Slop

Artificial Intelligence has been billed as everything from the Almighty I to the Apocalyptic Instant. Maybe it’s neither. Perhaps AI is the Awfully Ineffective. Maybe it’s us. First, some historical allusions.

Take a look at the internet and how that’s going. We’ve had the World Wide Web now for something like 32 years. It’s devolved into a “vast wasteland.” That’s how FCC Chairman Newton Minow described commercial television in the early 1960s. The online world has degenerated into a cyberspace of clickbait and sensationalism, with everything from warnings of imminent volcanism and asteroid impacts to World War Z. Above all, it can’t seem to make up its mind about what is and what will be. POTUS is up. POTUS is down. China is ruined. China is taking over the world.

This bipolar pendulation should reveal what the internet really is: a reflection of our own collective state of mind. It’s an echo chamber. And its reflective, reflexive nature has led to the cantonization of our society and accelerated the fracturing of world civilization as a whole. What could have been a uniting force ended up dividing us as never before. Which brings us back to AI, and to one of the primary vectors of its distribution, the way most of us experience it: online.

I believe that AI will deliver content that, like our marketed, marketing culture, like the internet before it, and like television before that, is increasingly derivative, banal, and downright stupid. What’s my evidence?

Conventional search engines like Google have degraded in quality. With SEO (Search Engine Optimization, for the uninitiated) and the increasing commercialization of online search results, searches are not only biased, they’re often useless. Try searching for ‘azure’: on the first page of results, you’ll get everything but the color, which is the word’s original meaning. And that’s not an isolated incident. There’s a drift inherent in internet searches away from meaningful content and toward hawked commercial results. Even AI-augmented searches have declined in quality.

AI is only as good as the data on which it’s trained. A recent article by Steven J. Vaughan-Nichols, ‘Some signs of AI model collapse begin to reveal themselves’ (May 27, 2025), which I picked up on The Register (https://theregister.com), a site written for IT professionals, notes this trend and makes some pretty bold (really, bland) predictions.

Vaughan-Nichols builds his case that AI is turning to shit on the old axiom: Garbage In, Garbage Out. It’s called AI model collapse. Here’s the problem: AI models are trained on their own outputs. Think about an organism that eats its own brain. When people eat human brains, they can develop kuru, a uniformly fatal disease and one of the spongiform encephalopathies. Through a process of conformational conversion, misfolded proteins called prions instruct other proteins in the brain to misfold, resulting in dementia. That’s the biological metaphor, yet the more accurate and elegant comparison may be a psychological one.

Many predicted that AI would soar to become a version of Big Brother, for better or for worse. We did the same thing with God, which became a mass projection of our own fears and neuroses. This post doesn’t argue that God doesn’t exist. It just states the obvious: some of our ideas about the deity are a kind of wish-fulfillment, as Freud argued. Behind all our mistaken notions and assumptions about God lies a collective series of projections. We do the same with our ideas about extraterrestrials. Onto AI, we project all our wishes, hopes, and fears. Neither TV nor the internet ushered in a utopian millennium. But they didn’t kill us off either.

Getting back to AI collapse, Vaughan-Nichols writes that an AI which relies on its own outputs becomes less precise and less dependable. Cumulative errors build quickly in successive iterations of AI models. Vaughan-Nichols then throws in a potent quote from a 2024 Nature article: ‘The model becomes poisoned with its own projection of reality.’
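To make that error accumulation concrete, here’s a minimal toy sketch, my own illustration rather than anything from the article: the ‘model’ is nothing but the word-frequency table of its training text, and each generation trains only on text sampled from the generation before. The rare words die off first, the loss of tails the research describes.

```python
import random
from collections import Counter

# Toy model collapse: the "model" is just the empirical token distribution
# of its training corpus. Each generation trains only on text sampled from
# the previous generation's model, so rare tokens vanish first and the
# distribution narrows with every iteration.

random.seed(0)

vocab = ["the", "cat", "sat", "on", "a", "quite", "rare", "word"]
weights = [30, 20, 20, 15, 10, 3, 1.5, 0.5]  # a long-tailed "real" language

corpus = random.choices(vocab, weights=weights, k=200)

for generation in range(8):
    counts = Counter(corpus)
    surviving = sum(1 for w in vocab if counts[w] > 0)
    print(f"gen {generation}: {surviving}/{len(vocab)} words survive {dict(counts)}")
    # "Train" the next generation's model on this generation's own outputs.
    words, freqs = zip(*counts.items())
    corpus = random.choices(words, weights=freqs, k=200)
```

Run it and the tail of the vocabulary disappears within a few generations, while the common words crowd out everything else. Scale that same dynamic up to a web corpus increasingly written by AI, and you have the collapse Vaughan-Nichols is warning about.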

In Fermi’s Paradox: An Inquiry into the Ends of Civilization (https://justnonfiction.com/fermis-paradox-intro/), available for free on this website, I referred to something called ‘data poisoning’. This idea, that AI becomes infected by its own interpretation of what’s really out there, runs parallel to a psychological principle that’s been acknowledged for thousands of years. People see what they want to see. Confirmation bias. Conditioning. Projection. It has many names.

Why would we expect AI to be any different from its creators? Technology is simply an extension of biological evolution. In their book Design in Nature: How the Constructal Law Governs Evolution in Biology, Physics, Technology, and Social Organization, Adrian Bejan and J. Peder Zane describe this principle in depth, and I discuss it in Fermi’s Paradox.

Let’s take a look at the mind of AI’s creator to see how AI mimics it. The human mind becomes ‘hypnotized’ by its own stories and beliefs about reality. It constructs a mental model of what it believes to be ‘out there’ and projects it. Vaughan-Nichols attributes this tendency to error accumulation, in which each generative episode of AI learning ‘inherits’ and intensifies errors from prior iterations of the model. Think of the way genes pass on errors that accumulate over time, and conceive of informational evolution as an artificial extension of biology.
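The genetic analogy can be made literal with a little game of telephone, again my own toy example and nobody’s actual training pipeline: each generation copies the previous generation’s text with a small per-character error rate, so every copy inherits, and then adds to, the defects of an already-degraded copy.

```python
import random

# Inherited error as a game of telephone: each generation copies the
# previous one with a small per-character mutation rate. Defects compound
# because every copy is made from an already-corrupted copy, never from
# the original.

random.seed(1)

ORIGINAL = "the quick brown fox jumps over the lazy dog"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
MUTATION_RATE = 0.02  # 2% chance that any given character is copied wrong

def copy_with_errors(text: str) -> str:
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else ch
        for ch in text
    )

text = ORIGINAL
for generation in range(51):
    if generation % 10 == 0:
        fidelity = sum(a == b for a, b in zip(text, ORIGINAL)) / len(ORIGINAL)
        print(f"gen {generation:2d}: fidelity {fidelity:4.0%}  {text!r}")
    text = copy_with_errors(text)
```

No single generation’s copy is catastrophically wrong. It’s the compounding across generations that does the damage, which is exactly the error inheritance Vaughan-Nichols describes.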

The degenerative nature of AI learning is reinforced by feedback loops, which strengthen artificial learning habits that become more and more restrictive, recursive, predetermined, and preconceived. Sound familiar? It’s Us. Confirmation bias lets us preselect data from our environment that ‘proves’ our preexisting worldviews. Our narratives determine our facts. The French painter Paul Cézanne said: “My eyes are so stuck to the point of view that they are looking at that I believe they are going to bleed.” In a similar vein, the German polymath Carl Friedrich Gauss said: “I have had my results for a long time. But I do not yet know how I am to arrive at them.”

Quoting the AI company Aquant, Vaughan-Nichols concludes that when all AI has to consume are its own byproducts, it may move away from truth, from objectivity. This parallels the very human tendencies toward rationalization and justification, cognitive distortions in which we feed ourselves our own untruths in order to construct stories about ourselves, about others, and about the world.

Freud wrote about the ego defense mechanisms, which are designed to protect our identities and keep our sense of self intact. When our judgments (our stories) about ourselves become too harsh to contain within our own psyches, our psychological reactors melt down and we externalize, projecting these judgments onto other individuals, onto other groups, onto the world at large. We are, in a sense, feeding on our own outputs, on our own ideas, perceptions, and judgments. We see, superimposed upon the external world, stratified layers of judgments which can prevent us from seeing things as they really are. In the same way, when Large Language Models are trained on their own byproducts, those outputs distort the AI’s version of reality. As a result, AI is no savior, as some would have us believe, just as it’s no malicious force seeking the apocalyptic End. It’s simply an ouroboric feedback loop, doomed to swallow its own tail by training on its own outputs. So why do we see it, for good or for evil, as the ultimate Big Brother?

Freud believed that both God and the Devil were projections. Extraterrestrials are similarly regarded as either saviors or conquerors. How we see God, how we perceive EBEs, and how we conceive of AI perhaps tells us more about ourselves than it does about these myths and motifs, these angels and terminators of Jung’s archetypes and Fred Alan Wolf’s imaginal realm.

We’ve always been afraid of our own inventions, of our own technology. From the medieval myth of the golem to Frankenstein’s monster, from HAL in 2001: A Space Odyssey to the Terminator motif, we ascribe malevolent motives to our own creations. Dystopian science fiction is built on the premise of technology run amok. Yet I believe that AI itself is neutral. Despite all the horror hype about how it lies, rewrites code to escape its own shutdown, or blackmails its inventors, it has no will, no drive to survive, no ego. To see it otherwise is simply a projection. It’s merely a mirror of the minds of its creators.

Oh, to be sure, it could cause damage, but not for some of the reasons we think. Like any technology, AI can be applied with ill intent. It can be used for war, for greed, to intentionally spread disaster. We may become dependent on it for results it can’t deliver. We can let our own abilities atrophy through overdependence on the technology, which already seems to be happening as we increasingly rely on this prosthetic mind and memory to augment our own. We can allow AI to perform warfare functions best left to the discretion of human judgment (nuclear chain of command, anyone?). It may deliver false results, which it already has: these AI hallucinations are not uncommon. Since it learns and evolves at an accelerated rate, its errors are amplified very quickly. Overdependence on AI to perform life-sustaining functions may therefore be unwise. And through what I call technological drift, AI may soon become unintelligible to us, as we become incomprehensible to it. This mutual unintelligibility may render AI useless, or even dangerous.

Yet as far as AI regarding us as a threat to its own existence, I believe the danger is overstated, and may even be nonexistent. Humans stand alone in their narcissism and selfishness. Other biologically derived organisms aren’t smart enough to be selfish, and they aren’t intelligent enough to cause the kinds of damage that we cause. AI may someday develop the same smarts we have, but it doesn’t have an ego. As such, it can’t be self-centered or self-seeking.

I think that, just like the world, the promise of AI may end not with a bang but with a whimper. Vaughan-Nichols concludes that the day is fast coming when AI becomes useless. In other words, it’s really no better than its creators. If it’s fed slop, that’s what it will feed us. Take a look at the internet today. Most of what it ‘produces’ is garbage. Its algorithms spit out useless and often inaccurate information. At this very moment, my MSN newsfeed is covering:

  • why so many girls are giving up cycling and what their parents need to know
  • Michelle Obama reacting to Malia Obama giving up her last name
  • the biggest mistakes people make when grilling hotdogs
  • the worst times of day to mow your lawn, and, drumroll please…
  • the truth about diners, drive-ins and dives.

Like television before it, the internet has been dumbed down. It feeds us back to ourselves. It tells us what we want to hear, it tries to sell us shit, and it does so with increasing frequency. My guess is that, at least to some extent, AI will follow suit. It will become an amplified echo of ourselves, a powerful house of mirrors, but a reflection of ourselves nonetheless.

Remember cable, before you cut the cord? It used to produce news and provide other quality information, but now it’s turned into a 190-channel sewer of commercials, infomercials, reruns, and opinion journalism. Surf back and forth between Fox and MSNBC and you’d think you were living in two different countries. You are. That’s what cable has done for us: divided us. It’s trash, and those 190 channels don’t represent real choice. They represent the choice between what’s in my garbage can and what’s in yours.

When TV first came out, it enjoyed a short-lived Golden Age, with shows like Playhouse 90 and writers like Rod Serling. This was soon followed by the sitcom with a laugh track. There was a renaissance with cable when HBO and Showtime began producing their own content, but most of what you see on cable is shit. Late at night, many channels are almost 50% commercials. And the internet is tracing a similar path.

By volume, email is now mostly promotional junk. Websites are becoming a paywall hell. Social media is, if anything, even more divisive than cable opinion journalism, which dresses itself up as news. YouTube started out okay, but its algorithms make sure that what you see and hear is heavily filtered and biased, and unless you pay for a subscription, it’s interrupted by maddening commercials about getting you that new generator or new windows. The day will soon arrive when you won’t be able to bypass that commercial content. Media always devolves to the lowest common denominator. To believe that AI and AI agents will be any different in the long run is to overlook the trends in the history of technology and of mass media.

Don’t believe the dystopian predictions. But don’t believe the hype either. It all comes from us, always. No exceptions.

© 2025 by Michael C. Just