
Imagine a snake devouring its own tail. You have probably seen it before, especially if you are a tattoo artist, antiquities scholar, or just a fan of the Y2K-era X-Files spinoff, Millennium. This is an ancient symbol called the Ouroboros. It’s a striking image of something trapped in an endless loop, feeding on itself without ever moving forward.
A friend of mine recently tossed out a question on Facebook that got me thinking about this very idea: “If AI learns only from what we humans feed it, and we start leaning on AI to learn, how will we ever discover anything new? Isn’t it just a snake eating its tail?” I’m calling this the “Ouroboros Hypothesis,” and it’s a puzzle worth unraveling.
Artificial intelligence is everywhere—writing contracts, diagnosing diseases, even suggesting your next Netflix binge. But can it actually push human knowledge forward, or is it doomed to recycle what we already know? To find out, let’s dive into how AI works, what philosophers have to say about knowledge, and whether we’re setting ourselves up for a cosmic rerun—or something more exciting.
How AI Becomes a Brainiac
Picture AI as a super-smart librarian who’s read every book in existence (or at least a massive chunk of the internet). It doesn’t “think” like us—it sifts through mountains of data, spotting patterns we might miss. For example, in medicine, AI can analyze patient records and flag connections between symptoms and rare diseases, hinting at new treatments. Pretty cool, right?
But here’s the catch: AI’s brilliance is built entirely on what we’ve already written, said, or discovered. It’s not dreaming up alien languages or inventing physics from scratch. As researchers note in Integrating Machine Learning with Human Knowledge, AI uses our data to “reduce the amount needed for reliable outputs,” remixing it into something useful. So, while it can’t conjure totally new info, it can shuffle the deck in ways that surprise us—like a DJ dropping a fresh beat from old tracks.
The Ouroboros Hypothesis: A Loop of Doom?
My friend’s question cuts deep: if AI is just parroting our past, and we lean on it to learn, are we stuck in a loop? It’s a legit worry. Imagine AI churning out research papers based on old data, then training new AI on those papers, ad infinitum. It’s like a game of telephone where the message never changes—just gets fuzzier.
This is where the Ouroboros Hypothesis feels real. Without something to break the cycle, we might end up with a self-referential mess, reinforcing biases or stale ideas. A study pessimistically titled We Have No Satisfactory Social Epistemology of AI-Based Science warns that unchecked AI could spiral into this trap, amplifying errors instead of sparking breakthroughs. It’s a chilling thought: a future where our machines, and we along with them, keep chewing the same cud.
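Researchers call this degradation "model collapse," and you can watch a cartoon version of it happen in a few lines of code. The sketch below is a toy illustration only, not a real training pipeline: the "model" is just a Gaussian fitted by maximum likelihood, and each generation is trained solely on samples drawn from the previous generation's fit. The sample size and generation count are arbitrary choices for the demo.

```python
# Toy "Ouroboros" simulation: a model repeatedly trained on its own
# synthetic output. The model here is a Gaussian fitted by maximum
# likelihood; each generation sees only data sampled from the previous
# generation's fit. Over many generations the fitted spread collapses,
# i.e. the synthetic data loses the diversity of the original.
import math
import random

random.seed(42)  # fixed seed so the demo is reproducible

def fit_gaussian(data):
    """Return the MLE mean and standard deviation of a sample."""
    n = len(data)
    mu = sum(data) / n
    var = sum((x - mu) ** 2 for x in data) / n  # MLE (biased) variance
    return mu, math.sqrt(var)

n_samples, n_generations = 50, 500

# Generation 0: "human" data drawn from the true distribution N(0, 1).
data = [random.gauss(0.0, 1.0) for _ in range(n_samples)]
mu, sigma = fit_gaussian(data)
sigma_start = sigma

for _ in range(n_generations):
    # Each new model is trained only on the previous model's output.
    data = [random.gauss(mu, sigma) for _ in range(n_samples)]
    mu, sigma = fit_gaussian(data)

print(f"spread of generation 0:   {sigma_start:.4f}")
print(f"spread of generation {n_generations}: {sigma:.4f}")
```

The fitted spread shrinks a little, on average, every generation, and with no fresh human data entering the loop those small losses compound until almost nothing of the original variety remains. That compounding loss of diversity is the telephone-game fuzziness made quantitative.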
Humans to the Rescue: Breaking the Loop
But pump the brakes—there’s hope. AI isn’t flying solo; humans are still in the cockpit. Think of AI as a brilliant assistant who hands us rough drafts; we’re the editors who polish them into gold. In drug discovery, AI might suggest a molecule worth testing, but it’s scientists in lab coats who run the experiments to prove it works (Reflections on Epistemological Aspects of AI during COVID-19).
That doesn't necessarily mean we are in the clear. Never forget how an Indian research team submitted a January 31, 2020 preprint noting four insertions in the genetic sequence of SARS-CoV-2 (Uncanny similarity of unique inserts in the 2019-nCoV spike protein to HIV-1 gp120 and Gag). Their research suggested that the virus was engineered in a laboratory, likely for gain-of-function research. The authors were lambasted, their conclusion was ridiculed, and aspersions of racism were cast (Scientists slam Indian study that fueled coronavirus rumors).
By February 3rd, the article was pulled from preprint servers. Unless researchers wanted their scientific careers to end abruptly, social pressure forced them into a paradigm where the Lab Leak Hypothesis was simply untenable. And the alternative didn't exactly help us find answers, either: “Maybe a bat flew into the cloaca of a turkey and then it sneezed into my chili and now we all have coronavirus,” as Jon Stewart put it so aptly when he appeared on The Late Show with Stephen Colbert. In short, the Ouroboros Hypothesis may not be about AI’s capability to produce novel conclusions so much as human beings’ ability to embrace them should they prove unpopular.
Philosopher Karl Popper would love this setup. In his view, knowledge grows when we test ideas to see if they hold up (Karl Popper: Philosophy of Science). AI can toss out wild guesses, but we humans have to at least suspend disbelief long enough to test them rigorously, lest we eat our own epistemological tails. Only by avoiding groupthink and committing to human-machine teamwork do we keep the snake from swallowing its tail: AI suggests, we verify, and together we inch forward.
The Power (and Peril) of a Good Question
Here’s a twist: AI’s output depends heavily on what we ask it. Feed it a lazy prompt or biased data, and you’ll get garbage back—think of it like a chef working with spoiled ingredients. I’d argue this mirrors science itself: bad questions lead to bad answers, whether you’re a researcher or a chatbot. What Can AI Learn from Human Intelligence? points out that AI thrives on quality inputs, and sloppy ones can derail it fast.
This ties back to my friend’s worry. If we’re careless with how we use AI—dumping in junk or asking dumb questions—we might indeed trap ourselves in that loop. But if we’re smart about it, crafting sharp prompts and feeding it diverse data, AI becomes a springboard, not a treadmill.

What the Big Thinkers Say
To dig deeper, let’s call in some heavy hitters from epistemology—the study of how we know stuff. In The Structure of Scientific Revolutions, Thomas Kuhn says knowledge leaps forward through “paradigm shifts”—game-changing moments often sparked by new tools. AI could be that tool, shaking up fields like medicine by spotting patterns we’d never see otherwise (Integrating Human Knowledge into AI). It’s not just recycling—it’s rewriting the playbook.
Meanwhile, Karl Popper (we’ve met him already) takes a sterner line. Popper’s all about testing ideas to death. AI fits right in, churning out hypotheses for us to wrestle with (Induction, Popper, and Machine Learning). It’s less a loop, more a relay race: AI passes the baton, we run with it.
Donna Haraway is a touch wilder. In Simians, Cyborgs, and Women, Haraway sees us as “cyborgs,” blending with tech to create knowledge together. AI isn’t just a tool—it’s part of us, co-writing the human story. Sadly, Haraway does not mention one famous researcher who very much predicted this outcome.
Marshall McLuhan will forever be one of my favorite authors. The Medium is the Massage and The Global Village will forever linger in my mind. But another quote, from a 1969 Playboy interview, illustrates his startling prescience about the future of humanity’s relationship with computers:
Now man is beginning to wear his brain outside his skull and his nerves outside his skin; new technology breeds new man. A recent cartoon portrayed a little boy telling his nonplused mother: “I’m going to be a computer when I grow up.” Humor is often prophecy.
These thinkers suggest that AI, and the species responsible for it, isn’t doomed to eat its tail. They do suggest we incorporate AI into ourselves while continuing to evolve, much as mitochondria are theorized to have entered a symbiotic relationship with larger eukaryotic cells (Lynn Margulis and the endosymbiont hypothesis: 50 years later). Effective mothers don't just leave mtDNA in their children—they also nurture them and teach them how to dance. AI is already as much a part of us as it is a partner in a bigger dance, so long as we show our LLMs how to do the scientific Charleston rather than leaving them to their own devices to learn the Stanky Leg instead.

The Unexpected Twist: AI as a Game-Changer
Here’s something wild: AI might not just add to knowledge—it could transform how we make it. Like the telescope rewrote astronomy, AI’s knack for crunching data could flip entire fields upside down. In AI as an Epistemic Technology, researchers call it a “new way of knowing,” not just a fancy calculator. Picture it: AI spots a trend in climate science or epidemiology we’d otherwise have ignored, sparking a whole new science and changing the priorities for “evidence-based” research funding.
That’s not a loop—that’s a launchpad. But how do we keep from assuming that the metaphorical O-rings in the engines of the craft on that launchpad are nothing to worry about? And yes, that was a reference to the Challenger’s preventable tragedy. Repeating that history with our current technology could cost us far more than seven lives, this time to our in-group bias.

So, Where Does This Leave Us?
The Ouroboros Hypothesis isn’t wrong to raise an eyebrow. If we let AI run wild without oversight, we might indeed get stuck, recycling the same old stuff. But the evidence says we’re not there yet—and we don’t have to be. AI can push boundaries by remixing human knowledge into fresh insights, as long as we keep asking smart questions and double-checking its work.
It’s not a snake eating its tail—it’s a tag team cage match. AI digs up the raw material; we refine it into something new. The trick is staying sharp, feeding it quality data, and never outsourcing our curiosity. As Haraway might say, we’re cyborgs in this together—half human, half machine, and wholly capable of breaking the loop.
So, to my friend on Facebook: yes, AI starts with what we give it. But with a little human grit, it’s not just chewing the past—it’s cooking up the future.
What do you think—ready to join the dance? Got thoughts? Hilarious anecdotes from trying ChatGPT, Deepthink, or Grok? Drop a comment—I’d love to hear your take!
[Additional recognition and endless love to Ternion Sound and the amazing staff at Tribal Roots Collective for putting on a truly paradigm shifting event. Many of the ideas for this post occurred to me in their unique space designed to foster imagination and progress. I strongly encourage readers to visit this hidden gem of a warehouse venue in Wichita, KS or attend a festival where TRC is doing light and sound.]