On the delegation of reasoning and the disappearance of intellectual effort
Picture the scene. You’re behind the wheel in an unfamiliar city. Twenty years ago, a crumpled map lay on the passenger seat, your eyes shuttling between paper and road, your mind charting streets, landmarks, intersections. Today, a soft, synthetic, almost reassuring voice takes care of everything. Attention drifts, memory dissolves, and thought becomes a simple act of obedience. The comfort is total, almost anesthetic.
In the same way, when a question arises, there’s no need to search, to doubt, to build an answer step by step. You simply pose it to a machine. Within seconds, an artificial intelligence delivers a clear, well-argued, polished synthesis. The slowness of reasoning has vanished, replaced by immediate efficiency.
We now live in a world of universal assistance, a world where everything seems fluid, rational, effortless. These tools promise to free us from the burden of tedious tasks, to augment our capabilities, to make life simpler. Yet behind this technological comfort lurks a discreet paradox: as we delegate our cognitive faculties, we risk losing the ability to use them. By being constantly assisted, we cease to exercise what made us unique: autonomous thought.
It’s this slow drift—what I call cognitive laziness—that I’d like to discuss today. A laziness born not from disinterest but from delegation: that of a mind that gradually hands over to the machine the work of sorting, choosing, deciding. The temptation of the shortcut becomes permanent, the fatigue of thinking disappears, and with it the effort that forged lucidity.
What if… at the end of this comfortable path, we discovered that the price of progress is nothing other than the loss of what made us truly intelligent: the very faculty of thinking for ourselves?
The cognitive miser, or why our brain loves ease
To understand our eagerness to delegate thought, we must first observe the very nature of our mind. Contrary to the romantic notion of a brain hungry for reflection, our thinking organ is above all an energy manager, concerned with economy. In the 1980s, psychologists Susan Fiske and Shelley Taylor coined the term “cognitive miser” to describe this fundamental tendency: our mind seeks to do as little as possible [1]. Thinking at length, analyzing deeply, weighing pros and cons are costly activities. Faced with a problem, we don’t spontaneously choose the most rigorous path, but the simplest one, the one that consumes the least mental resources.
This principle of economy was beautifully illuminated by Daniel Kahneman, psychologist and Nobel laureate in economics, in his seminal work Thinking, Fast and Slow [2]. He describes the coexistence of two modes of thought that alternate and sometimes clash within us.
- System 1 acts like autopilot: fast, intuitive, instinctive, emotional. It’s what makes us recognize a familiar face, understand a sentence on the fly, or react without thinking when a child suddenly crosses the road. This system operates using heuristics, those mental shortcuts that allow us to decide quickly and often well, but sometimes poorly.
- System 2, on the other hand, resembles manual control. It’s slow, deliberate, logical. It activates when we need to solve a difficult problem, complete a tax return, or learn a new skill. Its activation requires attention and conscious energy expenditure: it’s the domain of deliberate reasoning, critical thinking, intellectual vigilance. Kahneman even notes a fascinating physical sign of this shift to System 2: pupil dilation, as if the mind, in concentrating, opens its eyes wider to the world.
To illustrate the tension between these two modes of thinking, Kahneman proposes a little puzzle that has become famous: the bat and ball problem [3]. “A bat and a ball together cost $1.10. The bat costs $1 more than the ball. How much does the ball cost?” Most people answer immediately: “10 cents.” This response springs forth effortlessly, with deceptive confidence: it’s System 1 at work. Yet it’s wrong. If the ball cost 10 cents, the bat would cost $1.10, and the total would be $1.20. The correct answer is 5 cents for the ball, $1.05 for the bat. To arrive at this, you must activate System 2, slow down, set up the equation, and reason it through. This small exercise reveals how much our automatic thinking dominates: we often validate the first plausible idea that crosses our mind, without subjecting it to the critical examination that System 2 would require.
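Spelled out, the step System 2 has to take is a single line of algebra. Writing x for the price of the ball in dollars, the two conditions of the puzzle give:

$$x + (x + 1.00) = 1.10 \quad\Longrightarrow\quad 2x = 0.10 \quad\Longrightarrow\quad x = 0.05$$

Five cents for the ball, $1.05 for the bat, $1.10 in total: nothing difficult, but the result only appears once the intuitive answer has been slowed down and checked.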
Cognitive laziness is nothing other than System 1 taking over from System 2. It’s our natural inclination to rely on quick judgments, to accept plausible answers rather than true ones, to follow the first path that comes along without verifying its solidity. In ancient societies, where dangers demanded immediate reactions, this mental economy was a survival asset. But in a world saturated with information, where decisions must be thoughtful rather than instinctive, this old reflex becomes a handicap. Our digital tools, by offering us instant answers, amplify this tendency: they constantly short-circuit System 2. Why mobilize our memory, our reasoning, or our critical mind when a machine can do it for us, faster and effortlessly?
The era of cognitive offloading and its historical precedents
Entrusting part of our mental efforts to external supports is nothing new. Humanity has always sought to lighten the burden of memory and reflection. Writing itself was a first form of offloading: it transferred working memory from the mind onto the surface of tablets, parchments, then screens. Socrates, in the Phaedrus, already feared that this invention would weaken human memory, making people dependent on signs rather than their own recollection. But what we’re experiencing today operates on a different scale. The digital age has introduced cognitive offloading of a new kind: immediate, massive, permanent [4]. We no longer merely inscribe our thoughts in matter; we delegate them to systems that think in our place, continuously.
History offers several revealing precedents. The introduction of the calculator in the 1970s, for instance, triggered heated debates similar to those provoked by artificial intelligence today. Some saw it as a threat to mental calculation abilities, others as an instrument of intellectual emancipation allowing focus on understanding rather than mechanics [5]. The debate was so intense that several American states considered banning it from schools. In retrospect, we know the calculator didn’t make students less competent, but it transformed what we teach and what we assess. Exams shifted toward logic, abstraction, modeling. The tool modified the very nature of effort, not its existence.
But artificial intelligence introduces a shift of another order. It’s no longer a calculation we’re entrusting to the machine, it’s an entire part of our reasoning. It’s no longer operations we delegate, but questions we stop asking ourselves. By constantly asking it what we could search for ourselves, we lose the taste for doubt, the joy of digging deeper, the desire to understand.
The emergence of GPS has also offered a fascinating testing ground for observing the effects of cognitive offloading, this time applied to navigation. A 2020 study published in Scientific Reports showed that regular GPS use degrades autonomous spatial memory [6]. Researchers Louisa Dahmani and Véronique Bohbot followed fifty drivers accustomed to being guided by their device and assessed their ability to memorize a route without assistance. The result: the greater the lifetime GPS use, the weaker the spatial memory. Three years later, a follow-up confirmed the trend: those who had continued to rely on GPS showed a marked decline in hippocampus-dependent spatial memory; the hippocampus is a key brain region for navigation and recollection.
This phenomenon isn’t explained by a natural inability to orient oneself: it’s the prolonged use of assistance that causes regression. When we externalize our mental map onto a screen, the associated skill atrophies. The hippocampus transforms according to its use: in London taxi drivers, who must memorize the maze of streets, it’s more developed than average; in those who systematically rely on GPS, it shrinks.
These examples remind us of an essential lesson: comfort has a cognitive price. Each time we externalize a mental function, we stop exercising the neural circuits that support it. The brain, far from being fixed, constantly reconfigures itself according to how we use it. This is the principle of neuroplasticity: solicited connections strengthen, neglected ones fade. The adage “Use it or lose it” has never been more accurate [7]. Recent neuroscience confirms that intellectual effort and active learning are essential for maintaining brain vitality, especially with age. To stop thinking for ourselves isn’t merely to remain at the same level: it’s to regress.
The quiet abdication of critical thinking
Today’s artificial intelligence goes much further than GPS. It doesn’t just spare us the effort of remembering a route; it can write, analyze, synthesize, argue. It masters language, that uniquely human domain par excellence. When we ask it a question, it doesn’t simply point to a source, it produces a response—polished, structured, convincing. And this is where a more insidious danger begins: that of intellectual passivity.
A study by the consulting firm Gartner, published in 2024, revealed a troubling finding: 77% of professionals regularly using AI admit to delegating reasoning tasks without verifying the results [8]. This figure might seem surprising, but it actually illustrates a very human mechanism: automation bias, our tendency to blindly trust machine responses. Once the machine has spoken, the critical mind disarms itself. We accept the answer without questioning it, without retracing the reasoning, without verifying the sources. The AI becomes an oracle that is never doubted.
This trust is reinforced by a crucial characteristic: the fluency of generated responses. Modern AI produces impeccably constructed, grammatically flawless, elegantly articulated texts. This formal perfection creates an illusion of reliability. Our mind confuses verbal ease with intellectual rigor. Yet a response can be perfectly formulated and entirely false. This phenomenon, known as fluency bias, has been studied by psychologist Adam Alter [9]: we judge content as more credible when it’s presented clearly and smoothly, even when its substance is questionable.
Add to this another effect: the illusion of understanding. By reading a detailed explanation produced by AI, we have the impression of having grasped the subject. But true understanding doesn’t come from passive reading; it’s built through active engagement with concepts, through the effort to formulate questions, to test hypotheses, to stumble and get back up. When we outsource this effort, we retain only a superficial, fragile understanding that evaporates at the first challenge.
What we’re witnessing, then, is the quiet abdication of critical thinking. We no longer ask: “Is this true?” but “Does this seem plausible?” We no longer verify, we accept. We no longer think, we consume. And this consumption is all the more dangerous because it appears efficient. The answer arrived quickly, it’s well-written, it seems solid. Why waste time doubting it?
The danger is twofold. First, we expose ourselves to misinformation. AI can produce convincing errors, invent citations, distort facts while maintaining an appearance of authority. But beyond factual errors lies a more fundamental threat: the atrophy of critical vigilance itself. By ceasing to exercise doubt, to verify, to question, we lose not only the ability to detect errors but the very reflex to look for them. Critical thinking, like a muscle, weakens when not used. And once lost, it’s terribly difficult to rebuild.
The metamorphosis of thinking: when the mind delegates its own authority
But the problem doesn’t stop at passivity. AI also transforms the very nature of our relationship with thought. It no longer simply helps us think better; it increasingly thinks in our place. And this substitution, though comfortable, carries a risk whose depth we’re only beginning to glimpse: the loss of intellectual autonomy.
The philosopher Martin Heidegger, in his lecture course What Is Called Thinking?, published in 1954, warned us of a danger he sensed looming [10]. For him, modern technology threatened to distance us from what he called “meditative thought”—that slow, patient, personal form of reflection that doesn’t seek efficiency but understanding, that doesn’t rush to conclude but takes time to inhabit a question. Heidegger distinguished this authentic thinking from “calculative thought,” which dominates technique and seeks above all to optimize, measure, control. In his view, we were already, in the industrial age, tending to confuse thought with calculation.
Today, this confusion has reached a new threshold. With AI, it’s no longer just a matter of calculating faster: it’s about delegating to the machine the entire cognitive chain. From question to answer, from problem to solution, from vague idea to structured argument, the machine takes charge of everything. And in this delegation lies a subtle danger: we stop inhabiting our own questions. We no longer take the time to let an idea mature within us, to feel its contours, to turn it over in all directions before resolving it. We ask the question and immediately receive the answer. The space of gestation, that uncertain interval where understanding forms, collapses.
This collapse has profound consequences. For it’s in that space—in the time we spend grappling with a problem, in the effort to formulate an answer ourselves—that something essential happens: we become the subject of our thought. We don’t merely receive knowledge; we construct it, we appropriate it, we make it ours. This active engagement is what distinguishes genuine understanding from simple information storage. When we delegate this work, we remain spectators of our own intelligence. We know the answer, but we haven’t thought it through. We hold the solution without having traveled the path that leads to it.
Ethan Mollick, a researcher at the Wharton School, uses a striking metaphor to describe this transformation [11]. He speaks of AI as a “cognitive prosthesis.” Like a prosthesis that replaces a lost limb, AI substitutes for a mental function we’re no longer exercising. The comparison is illuminating, but also troubling. Because unlike a prosthesis that compensates for an absence, AI risks creating the absence it’s supposed to fill. By delegating our thinking to it, we don’t merely spare ourselves effort; we lose the ability to make that effort. The prosthesis doesn’t compensate for a disability; it produces it.
And this production occurs silently, almost imperceptibly. We don’t realize we’re losing a skill until the moment we need it and discover it’s no longer there. It’s like someone who always takes the elevator and, one day faced with a staircase, finds they can no longer climb it. The cognitive muscle has atrophied through lack of use.
What’s at stake, then, is nothing less than our intellectual autonomy. By letting the machine think in our place, we risk becoming mere validators of its outputs. We no longer initiate the thought process; we ratify it. We no longer decide what to think, but whether to accept what we’re told. This shift, though subtle, is decisive. For autonomy isn’t just the ability to think correctly; it’s the ability to think by oneself, to be the source of one’s own judgments, to take responsibility for one’s reasoning.
Heidegger saw in technological thinking the risk of a humanity alienated from its own essence. But what he couldn’t foresee was the extent to which this alienation could become voluntary, comfortable, almost imperceptible. We don’t experience delegation as dispossession, but as relief. We don’t feel impoverished, we feel augmented. Yet the question remains: augmented in what? In speed, certainly. In efficiency, undoubtedly. But in depth, in lucidity, in understanding? The answer is less certain.
The courage to think, still and always
And yet, despite all this, another path remains open. We’re not condemned to cognitive decline, nor to intellectual passivity. We can choose to resist, not by rejecting technology, but by using it differently. By making it a tool rather than a crutch. By preserving the space where thought unfolds, slowly, patiently, in that encounter with complexity that machines cannot replace.
What if thinking were our last act of freedom? In a world saturated with immediate answers, choosing the slowness of questioning becomes an act of resistance. To think is to pause, to doubt, to dig deeper. It’s sometimes to get lost, but always to return. It’s to refuse overly convenient certainties and rediscover the pleasure of intellectual wandering, even in the fog.
To preserve this inner freedom, we must relearn to cultivate active thinking, both rigorous and alive. Here are some concrete paths to achieve this.
1. Practice metacognition: observe your own thought. Before opening a search engine or querying an AI, take a moment to ask yourself: how would I approach this problem on my own? What skill am I about to delegate? This small pause restores the mind’s role as conductor. Metacognition (the ability to think about one’s thinking) acts as a safeguard against automatic drift. It brings us back from reflex mode to conscious mode, where true learning takes place.
2. Reintroduce friction: write first, ask later. Digital tools are designed to eliminate effort. Sometimes we must recreate it. Philosopher Pia Lauritzen advises in Forbes: “Write first, ask later” [10]. Write a first version, even if clumsy, before soliciting AI. Then use it to critique, correct, enrich. This reversal transforms the machine into an interlocutor rather than a crutch.
3. Distinguish the urgent from the profound. Following Martin Heidegger, we must learn to distinguish what calls for thinking from what merely distracts [10]. The digital world bombards us with urgencies. But fertile thought emerges from slow questions, those that can’t be solved with a click. Reserve your strength for those problems, the ones that form you more than they reassure you.
4. Cultivate slow reading and active memory. Reading a dense text without being interrupted by a notification has become an act of resistance. Resuming this ritual means rebuilding the continuity of attention. Take notes, reformulate, try to remember a passage or a poem. Memory, often scorned in the Google age, isn’t mere storage: it’s an invisible architecture that connects our knowledge, nourishes intuition, and lays the groundwork for creativity.
5. Use AI as a tutor, not as an oracle. Instead of asking it for the answer, ask it to make you think. Have it play Socrates’ role: let it question, interrogate, contradict. “Explain to me,” “Give me a quiz,” “Show me where my reasoning goes wrong.” This dialogue maintains intellectual effort and transforms the machine into a critical mirror rather than a substitute for thought (a minimal sketch of such a setup appears just after this list).
6. Practice cognitive fasts. Reserve moments without assistance: a day without a search engine, an evening without AI, a walk without GPS. Working by hand, solving things yourself—this reactivates dormant neural pathways. Just as the body regains its tone through exercise, the mind regains its sharpness through the absence of prosthetics.
7. Choose human partners for thinking. As Pia Lauritzen reminds us [10], AI cannot be a true companion in reflection because it has neither desire, nor stakes, nor vulnerability. It never pays the price of an error. Only other humans share with us the risk of reality: the fear of being wrong, the joy of understanding, the strength of disagreement. Seek partners with whom to debate, contradict, doubt. These exchanges, sometimes demanding, sustain living thought—the kind that breathes, hesitates, and ultimately rises.
8. Rehabilitate doubt and discomfort. To think is to accept feeling unstable, sometimes exposed. It’s daring to say “I don’t know.” It’s in this imbalance that understanding is born. AI offers us the tranquility of the answer; thought, however, demands the trouble of the question. Recovering this trouble means restoring dignity to the mind.
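To make point 5 concrete, here is a minimal sketch of what a “tutor, not oracle” setup can look like in practice. It assumes the openai Python package and an API key available in the environment; the model name, the tutor_turn helper, and the exact wording of the system prompt are illustrative choices, not a prescription.

```python
# Minimal sketch of an AI used as a Socratic tutor rather than an oracle.
# Assumes: `pip install openai` and an OPENAI_API_KEY set in the environment.
# The model name and prompt wording are illustrative, not recommendations.
from openai import OpenAI

client = OpenAI()

SOCRATIC_PROMPT = (
    "You are a Socratic tutor. Never give the final answer. "
    "Respond only with questions, counterexamples, and requests that the "
    "student justify each step of their reasoning."
)

def tutor_turn(student_message: str) -> str:
    """Send one exchange to the model, constrained to question rather than answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": SOCRATIC_PROMPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Example: bring the bat-and-ball problem to the tutor with a wrong first guess.
    print(tutor_turn("I think the ball in the bat-and-ball problem costs 10 cents."))
```

The constraint lives entirely in the system prompt: the model is asked to return questions and objections rather than conclusions, so the effort of reasoning, and therefore the learning, stays on our side.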
For it’s in this silent struggle against ease that the very future of our intelligence is at stake. Effort isn’t a relic of the past, but the condition of all lucidity to come.
Ultimately, cognitive laziness isn’t a failure but a natural slope that our tools make more slippery than ever. Technology doesn’t dull us by itself; it’s our surrender that authorizes it. Climbing back up this slope requires conscious vigilance, a voluntary effort to rediscover the taste for intellectual slowness. The challenge isn’t to triumph over the machine, but to preserve what in us still does the work of thought: the capacity to experience, to doubt, to understand for ourselves.
Intellectual effort isn’t a useless constraint; it’s the very process by which we become lucid. Ease brings speed; only slowness forges depth. It’s in the resistance of reality, in the struggle with a problem, that our discernment is refined. Richard Feynman expressed it with disarming clarity: “What I cannot create, I do not understand.” In the age of universal assistance, this phrase rings like a call to order. If we let machines create in our place, we will gradually cease to understand. And when understanding disappears, it’s our humanity itself that fades away.
The true danger, then, isn’t that AI thinks for us, but that it spares us the courage to think. To put humanity back at the center means restoring to intelligence what it has that’s most noble: the slowness of discernment, the joy of understanding, and the vertigo of doubt.
For this, courage will be needed. The courage to think, still and always.
References
For meticulous minds, lovers of numbers and sleepless nights verifying sources, here are the links that nourished this article. They remind us of one simple thing: information still exists, provided one takes the time to read it, compare it, and understand it. But in the near future, this simple gesture may become a luxury, because as texts generated entirely by AI multiply, the real risk is no longer disinformation, but the dilution of reality in an ocean of merely plausible content.
[1] Fiske, S. T., & Taylor, S. E. (1984). Social Cognition. Addison-Wesley.
[2] Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
[3] Hoover, J. D., & Healy, A. F. (2019). The Bat-and-Ball Problem: Stronger evidence in support of a faulty heuristic account. Cognitive Psychology, 113, 101224.
[4] Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies, 15(1), 6. https://www.mdpi.com/2075-4698/15/1/6
[5] Banks, S. A. (2011). A historical analysis of attitudes toward the use of calculators in American junior high and high school math classrooms since 1975. Cedarville University. https://files.eric.ed.gov/fulltext/ED525547.pdf
[6] Dahmani, L., & Bohbot, V. D. (2020). Habitual use of GPS negatively impacts spatial memory during self-guided navigation. Scientific Reports, 10(1), 6310. https://www.nature.com/articles/s41598-020-62877-0
[7] Shors, T. J. (2011). Use it or lose it: How neurogenesis keeps the brain fit for the challenge. Behavioral and Brain Sciences, 34(3), 148-149.
[8] Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X. H., Beresnitzky, A. V., … & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. arXiv preprint arXiv:2506.08872. https://www.media.mit.edu/publications/your-brain-on-chatgpt/
[9] Kovanovic, V., & Marrone, R. (2025, June 23). MIT researchers say using ChatGPT can rot your brain. The truth is a little more complicated. The Conversation. https://theconversation.com/mit-researchers-say-using-chatgpt-can-rot-your-brain-the-truth-is-a-little-more-complicated-259450
[10] Lauritzen, P. (2025, February 22). 3 Tips To Improve Your Critical Thinking Skills In The Age Of AI. Forbes. https://www.forbes.com/sites/pialauritzen/2025/02/22/3-tips-to-improve-your-critical-thinking-skills-in-the-age-of-ai/
