It speaks well… but it doesn’t think
We talk a lot about artificial intelligence. Some see it as a technical miracle. Others, as a threat to our civilization. But what if, above all, it were a misunderstanding?
It speaks well. It writes flawlessly. It answers quickly. It impresses… But it DOES NOT THINK!
AI doesn’t exist the way we imagine it. No more than a calculator is a mathematician. What we call artificial intelligence today is only an illusion of thought. A statistical mirror. A trained parrot. It computes, but it doesn’t understand. It predicts, but it doesn’t know. It produces language, but has never felt the dizzying birth of an idea. It answers, but never questions its own reasoning. It strings words together without weighing their meaning or sensing their existential weight.
It is gifted, yes, but empty. Empty of intuition, of contradiction, of living memory in the human sense. It has no childhood, no wounds, no waiting. It has never waited for an answer that wouldn’t come. Never felt the imbalance a real question imposes. Never doubted itself.
It is a machine for guessing the next word.
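To see how modest that mechanism really is, here is a deliberately crude sketch in Python: a toy bigram model built on an eleven-word corpus. It is a caricature of a real language model, which works at vastly greater scale and with vastly subtler statistics, but it is faithful to the principle: it strings words together by frequency alone, and nothing in it understands anything.

```python
# A toy sketch of "guessing the next word": count which word follows
# which in a tiny corpus, then always emit the statistically most
# likely continuation. No meaning is involved at any point.
from collections import Counter, defaultdict

corpus = "the mind doubts . the machine computes . the mind questions".split()

# Build bigram counts: for each word, how often each successor appears.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def continue_text(word: str, steps: int = 4) -> list[str]:
    """Extend a prompt by repeatedly picking the most frequent next word."""
    out = [word]
    for _ in range(steps):
        if word not in successors:
            break
        word = successors[word].most_common(1)[0][0]  # pure frequency, no meaning
        out.append(word)
    return out

print(" ".join(continue_text("the")))  # prints: the mind doubts . the
```

Scale this up by billions of parameters and the fluency becomes dazzling. The principle does not change.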
The term “artificial intelligence” was coined by John McCarthy for the Dartmouth workshop of 1956. It carried a bold promise: to simulate, artificially, certain human faculties. To reason, to learn, to adapt… without consciousness, without flesh, but with logic. A thrilling ambition for its time.
But what the term meant then and what it encompasses today have very little in common. So why has this misunderstanding persisted? Perhaps because it reassures. Because it sells. Because it makes us believe the mind can be automated. And most of all, because the term itself is a decoy.
What some call AI, I prefer to call “Augmented Intelligence”. Not to deny its power, but to restore it to its rightful place: an extension of certain human functions, not a substitute for our thinking. And perhaps the misunderstanding persists because we are tired of thinking, tired of hesitating; that fatigue opens the door to machines that “act as if.” The illusion of intelligence becomes a cognitive comfort, a silent outsourcing of our most intimate power: the ability to understand by ourselves.
But this illusion of thought, however seductive, has profound consequences. Because if we start believing the machine thinks, we will eventually believe we no longer need to think for ourselves. A slow, imperceptible slide, but very real.
AI relieves us… from the burden of thinking
We are tired. Overstimulated. Constantly caught in streams of information, notifications, rapid decisions. Our attention is fragmented, our focus under siege. So when a machine offers a fast, clear, convincing answer, we settle. We breathe. We delegate. Gradually, the effort fades. Doubt becomes uncomfortable. Questioning feels pointless. The mind gets used to not seeking.
When calculators entered schools, people wondered whether students would lose interest in mental arithmetic. And indeed, that skill declined. But in return, we learned to reason differently. We shifted the effort. We reinforced other skills: logic, abstraction, modeling.
With AI, the risk is of a different kind. It’s no longer a calculation we delegate, but part of our reasoning. It’s no longer a task we externalize, but a question we stop asking. And as we ask the machine what we could search for ourselves, we lose the habit of doubting. The pleasure of digging. The drive to understand.
Little by little, what we call knowledge becomes an instant synthesis. A ready-made answer, delivered effortlessly. A truth presented as self-evident. And this is where relief becomes a trap: because thinking, truly thinking, remains an uncomfortable act. Demanding. Sometimes painful. But that discomfort is the price of clarity. And when AI frees us from that trial, it doesn’t make us freer. It makes us less present. Less involved. Less in command of ourselves.
And this loss of effort doesn’t only dull our clarity. It weakens our anchor in reality. Gradually, our perception of the world is shaped by reflections, summaries, algorithmic suggestions. What we think we know is no longer lived, but reconstructed.
Data autophagy… toward forgetting the real
AI models learn by analyzing what they are shown — what we call training data. But what happens when the data they are fed… has already been generated by other AIs? When the original texts have been replaced by copies of copies, by algorithmic rephrasings looping endlessly?
AI ends up feeding on itself. It recycles syntheses that were never truly thought. And as the loop tightens, what circulates is no longer knowledge. It’s the illusion of depth. A mimicry of understanding.
This phenomenon has a name: data autophagy.
Autophagy, biologically speaking, is a useful process. It allows our cells to recycle what is worn, damaged, or unnecessary. But when it goes haywire, it spirals: the cell starts digesting itself, and cleaning turns into self-destruction. It attacks its own tissues and hollows itself out.
With AI, the same thing can happen… in digital form. It spins in circles, feeds on itself, forgets the source. And drags us with it. It no longer transmits knowledge, it mimics it. Each cycle adds a layer of polish, a deceptive coherence, but removes substance, complexity, doubt.
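For those who want to watch the loop tighten, here is a small numerical sketch in Python. It is an illustration built on assumptions, not a model of any real system: the “model” is reduced to a mean and a spread, it is retrained each generation on the previous generation’s output, and the tendency of generative systems to favor their most probable outputs is mimicked by discarding the tails. Researchers call the resulting degradation “model collapse”.

```python
# A toy numerical sketch of data autophagy ("model collapse").
# All numbers are illustrative assumptions, not measurements.
import random
import statistics

random.seed(0)

# Generation 0: "human" data, with its full spread of rare, surprising values.
data = [random.gauss(0.0, 1.0) for _ in range(2000)]

for generation in range(6):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"generation {generation}: spread = {sigma:.2f}")
    # The next generation trains only on the previous model's output.
    # Generative systems favor their most probable outputs, mimicked
    # here by discarding samples far from the mean: the tails (the
    # rare, the surprising, the dissonant) vanish, loop after loop.
    data = [x for x in (random.gauss(mu, sigma) for _ in range(4000))
            if abs(x - mu) < 2 * sigma][:2000]
```

Run it and the spread shrinks, generation after generation. The rare, the surprising, the dissonant are the first to disappear.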
What remains? A kind of cultural varnish that reassures, that gives the illusion of knowledge without the rigor of learning. The reader wanders through a spectral library, filled with texts never truly written, with thoughts never truly thought. A body of knowledge emptied of any human intention.
It’s not just our memory or our critical sense that erodes. It’s our capacity to want, to initiate, to decide. AI doesn’t just answer. It anticipates, it suggests, it subtly guides our intentions.
The drift from attention… to intention
Today, AI no longer just waits for our questions. It anticipates them. It suggests, recommends, pre-selects. Our intentions are shaped before they even form. We no longer search: it steers us, subtly, toward what it deems relevant. Our desires become validated predictions.
We have shifted from an economy of attention to an economy of intention. AI no longer just captures our gaze: it guesses what we will want, sometimes before we do. And that shift is anything but neutral.
When it suggests an action, content, or treatment, it seems to respond to a need. But in reality, it nudges us toward what we often end up choosing. What is common becomes the norm, what is normalized becomes obvious. And what is obvious is no longer questioned.
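That dynamic can be sketched in a few lines of Python. The numbers below are assumptions chosen only to make the loop visible: a recommender that always surfaces the most-clicked item, and users who mostly accept the default.

```python
# A toy feedback loop, purely illustrative: whatever starts slightly
# ahead gets shown more, hence clicked more, hence shown more.
import random

random.seed(1)
clicks = {"A": 11, "B": 10, "C": 10}  # assumed start: A barely leads

for _ in range(200):
    shown = max(clicks, key=clicks.get)           # the "obvious" suggestion
    if random.random() < 0.7:                     # most users accept the default
        clicks[shown] += 1
    else:
        clicks[random.choice(list(clicks))] += 1  # a few still explore

print(clicks)  # A's small head start has become a large majority
```

An arbitrary head start hardens into the norm; the norm becomes the obvious choice; the obvious choice is no longer questioned.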
Take a simple example from the medical field: prescription software displays “Treatment X might be suitable.” That sentence seems neutral but subtly nudges. If we stop questioning, we start following. Critical thinking fades quietly. Behind this apparent neutrality lie logics of influence we do not control: commercial, behavioral, or simply technical.
And the smoother it is, the more self-evident it feels. The less we doubt. Yet it is in the discomfort of doubt that thought often arises.
This drift from suggestion to influence, from decision support to subtle steering, is real. How do we ensure that help remains help? That’s not a technical question. It’s an ethical one. And it deserves to be raised in every sector shaped by algorithms.
In the face of this drift, some fields remind us that important decisions require more than data. They require presence, listening, human discernment. Medicine, for example, offers a powerful mirror for what’s at stake.
Resisting by staying human
Medicine gives us a model. Where AI delivers a diagnosis by cross-referencing thousands of data points, the caregiver senses the invisible. They feel, they welcome, they perceive what is unspoken, what trembles in a voice or fades in a gaze. It is in this capacity for listening, in this attention to silence, that the irreplaceable strength of human care resides.
This alliance is what we now call phygital: the union of algorithmic precision and human presence, of cold computation and the warmth of connection. The goal is not to set the digital against human sensibility, but to make them converse. The augmented caregiver isn’t the one who hands control to the machine, but the one who uses it as a lens, an extension of their vigilance.
But this alliance only makes sense if two very concrete conditions are met.
First, we must end fragmentation. Today, the medical world is saturated with systems that can’t talk to each other. The result: scattered data, duplicates, information loss, frustrated teams. Real augmented care requires a fluid, interoperable ecosystem where tools share, connect, collaborate.
Second, trust. Medical confidentiality isn’t a technical detail. It’s an ethical compass. And that trust depends on real data sovereignty: knowing where data is, who accesses it, and for what purpose. The cloud? Often just someone else’s hard drive. And in healthcare, that “someone else” cannot be a black box actor in an economy of intention. Without model transparency, without usage guarantees, no alliance is possible.
In short: no phygital without ethics. No augmented medicine without a common technical foundation, nor without a clear moral contract. Otherwise, we replace care with service. And the patient becomes a segment.
We are developing this phygital model concretely through an initiative called DOCTORIAA. It is not an abstract concept but a tool under construction, designed with and for caregivers, in response to a real need in the field. DOCTORIAA takes the form of a virtual MDT (multidisciplinary team), aimed first at underserved regions, where practitioners often face complex situations alone. It is not a generic AI, nor a faceless platform. It is a personalized clinical companion that learns your practice, your constraints, your patients, and does not scatter your data across the world.
It remains available, like a second opinion, always discreet, never intrusive. We envisioned this solution not to replace, but to support. Not to standardize, but to empower. Because augmented medicine only makes sense if it strengthens trust, the quality of connection, and the freedom to think.
But medicine is just one mirror. What it reveals is a broader truth: we cannot endlessly rely on machines without risking erasure. What’s at stake is not just diagnostic accuracy or treatment efficiency. It is a way of being in the world.
To stay human is not to reject tools. It is to refuse their dominance. To preserve a space for doubt, for groping, for genuine thought. And that space must be defended not just in hospitals, but everywhere. Within us. At every moment.
To think, again… and always
What if thinking were our last gesture of freedom? In a world that always offers an answer, choosing the question becomes an act of resistance. To think is to slow down, to doubt, to dig, sometimes to get lost, and always to return. It is to reject fast certainties, frictionless syntheses.
To think is to turn inward. To prefer a slow walk in the fog over a paved road. To welcome the discomfort of doubt as fertile ground.
When was the last time you changed your mind because of a doubt? What did you feel? Vertigo? Relief?
To think is also to say: no. No to automatisms. No to rushing. No to simplification. It is to build a judgment that is inhabited, even imperfect, but truly personal.
Because thinking is not a luxury. It is a necessity. A gesture of vigilance, a slow breath in a world that accelerates. A way to be alive, rooted, present.
And maybe the real question isn’t: “What can AI do?” but rather: “How do we remain human in a world amplified by machines?”
The real danger is not that AI will think for us. It is that we stop thinking for ourselves. This is not about fighting it, but about refusing to let it take center stage. The issue is less technological than anthropological.
To think is to stay in control of our choices. To refuse desires that are prewritten, decisions that are guided. It is to embrace complexity, to honor the long view. To oppose the automation of meaning with a resurgence of inner life.
AI can assist, reveal, simplify. But it will never know what silence means, what doubt contains, what a real choice costs.
We must put the human back at the center. Where responsibility begins. Where thought takes root.
And maybe also, to restore to the word “intelligence” what is most human about it: the slowness of discernment, the vertigo of doubt, the joy of understanding.
For that, we will need courage. The courage to think, again and always.