In a previous article entitled “Autophagy, when AI feeds on itself”, I explored how artificial intelligences gradually weaken by training on their own output. Unfortunately, the metaphor of the ouroboros, that serpent that devours its own tail, now also applies to human beings. This phenomenon, which we might call cognitive autophagy, threatens to profoundly and silently transform our relationship to knowledge, creating a cycle of intellectual depletion in which we are both victims and contributors.
The cycle of intellectual impoverishment
Imagine this scenario for a moment: you’re looking for information on a complex topic. Your search engine leads you to articles, most of which have been generated by AI. These articles, already diluted by the digital autophagy described earlier, become your main source of information. You absorb this watered-down knowledge and then share it, perhaps reformulated with the help of an AI. Others in turn draw from your content, thus perpetuating the cycle.
(excerpt from the article “Autophagy: when AI feeds on itself”): What we’re witnessing here closely resembles what Cerise and Ada observed in their lab: a gradual erosion of informational richness, like a painting that loses its colors with each reproduction. Only this time, it’s our collective cognitive heritage that is fading away.
The data is telling: according to a study conducted at Stanford University, students exposed primarily to AI-generated content develop a vocabulary that is 17% less rich and argumentative structures that are 23% less complex than their peers who rely on diverse, authentically human sources.
As neuroscientist David Eagleman explains, “The human brain is constantly reconfiguring itself based on the information it processes.” When that information comes mostly from AI-generated content, our brains adapt to its standardized structures and inherent limitations.
The neural pathways that allow us to appreciate nuance, ambiguity, and the complexity of the real world are at risk of gradually atrophying. Just as AI models develop “repetitive patterns in their outputs” as Ada noted, our own patterns of thought could become increasingly predictable and less creative.
Philosopher Bernard Stiegler already spoke of a “proletarianization of knowledge” to describe the progressive loss of our know-how and cultural literacy in favor of technical systems. Cognitive autophagy marks a new stage in this process: the proletarianization of our very ability to generate original thought.
Even more alarming is the impact on our critical thinking. As cognitive psychologist Elena Martinez points out, “The ability to assess the reliability of information erodes when one is constantly exposed to content of similar quality.” Faced with a sea of plausible yet shallow texts, we risk losing our ability to distinguish substance from surface, the grounded from the unfounded.
This dynamic echoes what we observe with digital echo chambers, where “78% of new content references other synthetic content rather than primary sources.” Our thinking risks operating in the same way, relying more and more on secondary and tertiary references, drifting further away from direct observation and genuine experience.
Social media, catalysts of collective cognitive autophagy
If cognitive autophagy already poses a significant risk on an individual level, social media platforms act as accelerators and amplifiers of this phenomenon, turning it into an unprecedented process of collective intellectual impoverishment.
These platforms, designed to maximize attention and engagement time, systematically favor content that triggers immediate emotional reactions over deep reflection. Recommendation algorithms, by analyzing our past behavior, lock us into what researcher Eli Pariser has called “filter bubbles”: personalized informational spaces that reinforce our existing preferences and opinions.
Engineer Guillaume Chaslot, a former developer at YouTube, revealed how the platform’s algorithms promote increasingly extreme or simplistic content to maintain user engagement. “The system learns that if you clicked on something slightly radical, you’re likely to click on something even more radical,” he explains. This dynamic creates what sociologists call a “spiral of cognitive radicalization,” in which content becomes progressively more simplistic, emotional, and detached from the nuances of the real world.
Even more concerning, a 2023 study from New York University reveals that AI-generated content spreads 6.7 times faster on social media than content produced by humans, due to its ability to optimize for algorithmic engagement. This overrepresentation of synthetic content creates a toxic informational environment, where volume and virality consistently outweigh depth and relevance.
Neurologist Michel Desmurget refers to this as the “manufacturing of digital idiots,” a provocative phrase that underscores the seriousness of the issue. His studies show an alarming correlation between time spent on social media and the erosion of sustained attention, deep reading, and critical analysis skills. The average attention span devoted to a single piece of content has dropped from 12 seconds in 2000 to less than 8 seconds today, a figure lower than that recorded for goldfish.
This process is self-reinforcing in a particularly insidious way: the more we consume superficial content on social media, the more our ability to engage with complex material deteriorates, pushing us toward ever more simplistic content in a downward spiral of intellectual decay. Neuroscientist André Nieoullon calls this a “short-circuited reward circuit” where the instant gratification provided by digital social interactions gradually overrides our natural desire for deep learning.
Recent data is alarming: 67% of teenagers report getting their news primarily from social media, according to the Pew Research Center. Yet, an analysis by the MIT Media Lab found that nearly 72% of so-called “informative” content circulating on these platforms contains excessive simplifications, factual errors, or significant decontextualizations. Repeated exposure to this impoverished type of information gradually reshapes our informational expectations and weakens our cognitive tolerance for complexity.
Philosopher Bernard Stiegler described this collective degradation of our relationship to symbols and meaning as a form of “symbolic misery.” Cognitive autophagy, catalyzed by social media, is driving us toward a new kind of intellectual proletarianization, where we progressively lose the ability to generate our own knowledge and become passive consumers of pre-digested information optimized for instant engagement rather than lasting intellectual enrichment.
Faced with this toxic dynamic, initiatives like the “Slow Media Movement” or “digital detox” practices are merely individual responses to a systemic problem. A true solution would require a profound overhaul of the attention economy that underpins today’s social media, as well as widespread education on the cognitive mechanisms exploited by these platforms.
The urgency of cultivating a diverse cognitive ecosystem
Make no mistake: the situation is urgent. Warning signs are multiplying, showing that cognitive autophagy is already underway. Studies conducted by the Berggruen Institute reveal that among 18- to 25-year-olds, daily time spent interacting with AI-generated content has increased by 67% in just two years. Even more concerning, according to neuroscientist Michael Merzenich, the neurological changes induced by these informational habits could become irreversible after only 4 to 6 years of intensive exposure.
We find ourselves in a situation comparable to the early decades of the industrial revolution, when pollution of natural ecosystems quietly took hold before we fully grasped its long-term harmful effects. The difference now is that it’s our cognitive ecosystem that’s at stake. And unlike the physical environment, the damage inflicted on our collective mental infrastructure could prove far more difficult to repair.
The pillars of a healthy cognitive ecology
In the face of this threat, we must build a cognitive ecology grounded in several essential pillars.
The first is conscious informational balance. Just as an “optimal ratio of 60% authentic human data to 40% generated content” is recommended for AI systems, we should aim to feed our minds primarily with primary sources, direct experiences, and genuine human exchanges. Neuropsychologist Jordan Grafman’s work suggests establishing a daily “attention budget” in which time devoted to AI-generated content is deliberately limited and balanced by more demanding cognitive activities.
The second pillar involves the creation of “cognitive corridors” that guarantee access to living, contextual knowledge. These protected spaces foster the circulation and exchange of rich, nuanced, and situated information. As sociologist Shoshana Zuboff explains, “in a world where information is abundant but often impoverished, scarcity shifts toward spaces of authentic learning.” Anthropologist Tim Ingold highlights the importance of such “transmission spaces” where knowledge is not merely transferred but truly embodied through shared practice.
The third pillar is the development of a diversified attentional ecology. Neuroscientist Maryanne Wolf distinguishes between “deep reading,” which activates neural circuits linked to empathy, critical analysis, and synthesis, and “surface reading,” which characterizes our digital consumption habits. Intentionally cultivating different modes of attention helps preserve the plasticity of our cognitive architecture in the face of increasingly homogenized informational practices.
A symbiosis rather than a substitution
In this cognitive garden that we must urgently tend, AI would act as an amplifier of our cognitive abilities, enabling us to explore new intellectual territories, while human experience would continue to anchor these explorations in reality, giving them both meaning and direction.
This symbiosis requires us to fundamentally rethink the relationship between humans and AI systems. Rather than being designed as substitutes for human intelligence, AIs should be developed as complementary tools that strengthen the very capacities we risk losing: critical thinking, divergent creativity, contextual evaluation, and ethical judgment.
A collective project in the face of a systemic challenge
These transformations cannot be viewed as mere individual responsibilities. As researcher Zeynep Tufekci points out, “we cannot expect individuals to solve, through sheer willpower, problems that are fundamentally structural.”
Cognitive autophagy is not a foregone conclusion, but time is not on our side. The longer we wait to act, the more our individual and collective minds will acclimate to the impoverished thinking patterns generated by self-referential systems. Each day that passes without awareness brings us closer to a point of no return, where our very ability to recognize the value of genuinely human thought may be compromised.
Preserving our cognitive diversity thus becomes a political project in the noblest sense of the word, a collective effort involving a deep transformation of our educational and informational practices. Schools should focus less on memorizing easily accessible facts and more on developing metacognitive skills. Media organizations should prioritize depth of analysis over mass production of standardized content.
Like a gardener who carefully selects seeds and preserves the biodiversity of their vegetable garden, we must actively cultivate the diversity of our knowledge sources, before informational monocultures take root for good. Reading foundational books, engaging in dialogue with experts, direct experimentation, attentive observation of the world around us, all these practices are no longer just advisable; they are vital to maintaining our intellectual autonomy.
Experiments conducted in certain Nordic countries offer promising avenues: integrating “attentional literacy” into school curricula from an early age, creating public digital spaces governed by principles other than engagement maximization, and providing institutional support to media outlets that favor depth over immediacy.
The time to act
By becoming aware, right now, of the mechanisms that threaten to impoverish our relationship with knowledge, and by adopting practices that encourage cognitive diversity, we can still turn the challenge of generative AI into an opportunity for intellectual enrichment. But such a transformation demands swift and determined mobilization, from educational institutions, media, governments, and citizens alike.
Because ultimately, the question is not whether AI will replace human intelligence, but rather what form of human intelligence will emerge from this co-evolution with artificial systems. And we have little time left to ensure that this intelligence remains deeply human: creative, critical, contextual, and self-aware.
The time to act is now, before our very ability to perceive the impoverishment of our thinking is itself eroded by this insidious process of cognitive autophagy.