Not long ago, I raised an early warning: the danger of cognitive autophagy, the moment when AI begins feeding on its own output, endlessly recycling the same ideas until their diversity drains away.
👉 Cognitive autophagy, when humans feed on impoverished content!: https://www.linkedin.com/pulse/cognitive-autophagy-when-humans-feed-impoverished-content-buschini-d1oje
and
👉 AI autophagy, when AI feeds on itself: https://www.linkedin.com/pulse/ai-autophagy-when-feeds-itself-philippe-buschini-sydee
But there’s another threat, perhaps even deeper…
Assisted intelligence, forgotten thought
It’s become a reflex. Open a tab, ask a question, get an answer. Instantly. Accurately. Often better worded than we could have managed ourselves. ChatGPT, Copilot, Gemini, and others have become our new intellectual go-to tools. We consult, copy, paste. And the effort to understand? It’s optional now.
But by constantly outsourcing what we once called “thinking,” are we not forgetting both its flavor and its necessity? Do we still think because we need to, or only when the machine is offline?
A study from Microsoft Research and Carnegie Mellon issues a warning: across more than 900 tasks completed by 319 professionals, the more users trust AI, the less they question, verify, or exercise critical thinking. This isn’t science fiction. It’s a trend.
And it’s not an isolated one. It aligns with a broader societal drift: intelligence is being externalized. What we once considered personal effort (close reading, rigorous analysis, long-term memory, cultivated intuition) is now being outsourced to technical systems. We are shifting from internalizing knowledge to optimizing tasks. Intelligence is no longer a human journey, but a service. A plugin.
This shift extends beyond work and academia. We now ask AI what we once asked a friend, a teacher, a parent. Technology becomes advisor, coach, partner. It replies quickly, without judgment, with well-formed phrases. And gradually, we stop turning to ourselves or others for answers.
This drift is all the more insidious because it wears the mask of efficiency. But in reality, it is deeply reshaping our relationship with knowledge, learning, and memory itself. If everything is accessible, why remember? If everything can be generated, why create? If everything is already written, why bother writing?
These aren’t rhetorical questions. They’re the seeds of a new cognitive culture, where human intelligence, instead of expanding, merely reacts. An intelligence of consumption, validation, dependence. And in this emerging landscape, autonomous thought could become an endangered species. Our ability to process information is being outpaced by systems that summarize, translate, rephrase, and predict. Slowly, the habit sets in: why strain to reason, when a machine can do it all?
Thinking for yourself: a skill that must be earned
But what does it really mean to “think for oneself”? That elusive autonomous thinking so often praised by educators, psychologists, philosophers?
According to the Hop’Toys blog, an intellectually autonomous person doesn’t just absorb ideas, they filter them, question them, digest them. They “form their own judgments” and “initiate personal reflection.” They don’t just have opinions, they know where those opinions come from, how they were formed, and why they matter.
The site Écoute-Psy highlights how this autonomy fuels well-being: it brings a sense of freedom, alignment, and responsibility. It makes us feel truly alive. It’s a silent foundation for self-esteem.
Le Robert dictionary defines autonomous thinking as “critical, engaged, participative.” Three key words. Because autonomous thought is not received, it’s constructed, often in discomfort, frequently against what seems self-evident. It takes time, memory, doubt. In short, it is not economical.
This kind of thinking involves subterranean work: distinguishing what seems true from what withstands scrutiny. Not mistaking smooth discourse for validity. Resisting the authority of form. It takes effort, attention, and sometimes even courage.
And this is precisely why it clashes with the promise of speed and ease that artificial assistants offer. By outsourcing our mental processes, we risk not just forgetting how to think, but losing the will to do it. Ease makes effort look suspect. Reflection becomes a burden.
The temptation of cognitive delegation
The term may sound cold, but it’s accurate: “cognitive delegation.” It’s the act of handing over not just searches or calculations, but reasoning itself, judgment calls, synthesis.
Psychologist Michael Gerlich, quoted in Bilan, puts it plainly: “I used to transfer information elsewhere. Now, technology tells me: ‘I can think for you.’” And this promise, as seductive as it is perilous, meets us right where we’re tired. Because thinking is hard. And AI serves it up pre-cooked.
The study by Lee et al. confirms it: the more professionals trust AI, the less they engage their critical thinking. And those who still hesitate? They keep checking, rephrasing, comparing. They resist. But for how long?
And more importantly, in what context? When professional environments value speed, productivity, and compliance with standards, autonomous thought becomes an invisible luxury. It slows things down, questions, disrupts. In short, it’s inconvenient.
Symptoms of weakened thought
Signs are already visible. A developer halts work because their AI assistant is down. A manager turns to ChatGPT for an ethical decision. A student writes an entire paper without forming a single idea themselves. These aren’t exceptions. They’re the new normal: thought becomes secondary, the tool takes center stage.
And this trend is affecting the young. One in four American teenagers already uses ChatGPT for homework, not to double-check, but to avoid thinking altogether. To bypass the effort of understanding, not to support it.
It’s a silent reversal: where we once learned to ask questions, build reasoning, defend a viewpoint, we now learn how to prompt an AI. Form replaces process. Content doesn’t matter as long as the output looks good.
This isn’t just outsourcing. It’s unlearning.
A sterile convergence
A subtler side effect is creeping in: answer standardization. Lee et al.’s study speaks of “mechanized convergence.” Different users, querying the same AIs, get similar answers. The style may vary, but the substance repeats. Preformatted thought structures are emerging, making intellectual diversity a fading memory. The danger? The flattening of debate, the loss of detours, the erosion of original ideas.
Critical thinking thrives on friction. On divergence. On the possibility of an alternative. It feeds on tension between perspectives, on the discomfort of disagreement. If all paths lead to the same answers, why think at all? If responses come pre-written, what’s left to question?
In a democratic society, this matters. Uniformity in what can be thought invites gentle manipulation. Ready-made truths. Pre-chewed reasoning. Diversity of opinion becomes a statistical glitch, an anomaly in a system designed for fluidity and repeatability.
This cognitive homogenization also affects learning. Students, using the same AIs, receive similar content, standardized examples, convergent analysis. Originality becomes rare, and with it, critical thinking fades.
This convergence doesn’t just endanger public debate. It reaches creativity, innovation, education. If all texts start to look the same, if all presentations follow the same logic, what remains of the unexpected? Of surprise, of the offbeat, of the unplanned? Even literature, art, cinema may end up conforming to the same narrative molds, shaped by statistical models.
Eventually, our collective ability to invent, disrupt, transform could weaken. Innovation isn’t born of repetition, it springs from friction between ideas. And if AI smooths over those frictions, it could dull human intelligence into blandness.
Slowing down to resist
So what can we do? Should we unplug everything? Burn the algorithms? No. But we must learn to slow down. To check. To doubt. We need to see reflection time as productive, not as a delay. In a world that prizes fast answers, slowness becomes resistance. A vital breath.
AI isn’t the enemy. The real danger lies in our surrender. In how readily we prefer convenience to understanding. In the belief that agreeing is the same as thinking. In the quiet conviction that uncertainty is a flaw, and every question deserves an instant answer.
Some solutions already exist: interfaces that encourage reflection, delayed suggestions, built-in “cognitive friction.” These are technical gestures, but they are ethical ones too. We need digital environments that invite doubt, questioning, contradiction. This is tomorrow’s fundamental design challenge.
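To make the idea of “delayed suggestions” concrete, here is a minimal sketch of what such friction could look like, assuming a hypothetical fetchAiSuggestion() helper in place of a real model call. It illustrates the design gesture only, not any existing tool: the answer is computed right away but withheld long enough for the user to think first.

```typescript
// Sketch of a "delayed suggestion": the model call starts immediately,
// but the interface withholds the answer to leave room for reflection.
// fetchAiSuggestion() is a hypothetical stand-in for a real model call.

const REFLECTION_DELAY_MS = 30_000; // time reserved for the user's own thinking

async function fetchAiSuggestion(prompt: string): Promise<string> {
  // Placeholder: a real implementation would call a model API here.
  return `Suggestion for: ${prompt}`;
}

function pause(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function delayedSuggestion(prompt: string): Promise<string> {
  // Run the model call and the mandatory pause concurrently, and only
  // resolve once both are done: the answer can never arrive "too fast".
  const [suggestion] = await Promise.all([
    fetchAiSuggestion(prompt),
    pause(REFLECTION_DELAY_MS),
  ]);
  return suggestion;
}

// Usage: the suggestion surfaces no sooner than REFLECTION_DELAY_MS.
delayedSuggestion("Summarize this report").then(console.log);
```

The friction itself is cheap to build; the hard part, as the next paragraphs argue, is cultural.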
But that’s not enough. We also need demanding learning spaces where intellectual effort is valued. Reading difficult texts, debating with people who disagree, learning to argue without seeking validation, without shortcuts. That’s where resilient thought is forged. Thought that seeks not to shine, but to understand. Thought that distrusts easy consensus and dares to ask the uncomfortable questions.
We need to relearn handwriting, to wander in books, to not understand on the first try. Human intelligence isn’t linear, nor always efficient. It’s often messy, slow, and that’s its richness. It’s made of detours, false starts, surprises. It doesn’t flourish in speed, but in depth.
And that depth, we must reclaim. Not as a privilege for the few, but as a shared right: the right to think freely, imperfectly, intensely. The right to not know, to search, to try again. Maybe that’s the heart of it: AI gives answers, but it’s in the search that we become intelligent.
A matter of intellectual survival
In the end, the question isn’t whether AI is thinking for us. It’s whether we still want to think. Think actively, freely, slowly. Not because a task demands it, but because something inside us insists.
And if we do, we’ll have to tend that flame, feed it, shield it, practice it. Autonomous thinking isn’t declared. It’s trained. Forged in challenge, doubt, confrontation. Rooted in lived experience, deep reading, heated debate, sleepless nights searching for meaning where there may be none.
Because thinking is an act. A construction. A sometimes harsh joy, but never a useless one. It’s what makes us free subjects, responsible, present to ourselves. It’s what lets us not just react, but choose, nuance, build.
And if we wait too long, we may lose not only the ability to think, but even the desire to. And that would be a far greater loss than any system crash. Because it is through thought that we remain human. That we remain singular. That we become, sometimes, poetically defiant.
And if we add to this the phenomenon of autophagy, that closed loop where AI recycles its own output until meaning is bled dry, then the cocktail turns lethal. At least for our cognitive faculties. Because an AI spinning in circles, combined with humans who no longer wish to think, spells the death of living, unpredictable, embodied thought.
So let us ask the question plainly: in this world of instant answers, what are we still willing to devote time, attention, reflection to? Can we still claim the effort to understand as a form of freedom?
The answer to that may well shape the future of our inner liberty.
And perhaps, the future of our humanity itself.
References
- Lee, H. P. (Hank), Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). “The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers.”
- Le Dauphiné Libéré (March 3, 2025). “L’intelligence artificielle est-elle un risque pour l’esprit critique ?”
- Turrettini, E. (February 26, 2025). Bilan. “La dépendance à l’IA menace-t-elle notre pensée critique ?”
- Hop’Toys (February 15, 2019). “Autonomie intellectuelle : la soutenir en famille”
- Écoute-Psy (March 18, 2025). “Comment penser et agir seul : stratégies pour l’autonomie”
- Dictionnaire Le Robert. Definition of “pensée autonome”