The familiar illusion
Imagine entering an immense library that seems to have neither beginning nor end. The air carries both the scent of yellowed paper and fresh ink. The shelves stretch like corridors of memory, and each book, resting there, seems to wait to be opened, like a door ready to reveal another world.
You pull out a volume at random, open it to the first page and, playfully, pose a question aloud. Immediately, the lines come alive as if touched by an invisible breath. The sentences flow with confidence, each word naturally finding its place, as if an omniscient author, hidden behind the binding, had always held the answer.
Everything appears flawless, and our mind, accustomed to associating ease of expression with solid content, lowers its guard. After all, how could one suspect a text so sure of itself of containing an error?
Yet, if one could peek behind the curtain, an entirely different picture would emerge. None of these books has verified a fact or compared multiple versions of the same event. Their “knowledge” is not nourished by direct experience, but by a skillful reproduction of the forms our answers take. They are arranged echoes, which may by chance coincide with reality… or deviate from it completely.
Generative artificial intelligences function this way. They know how to imitate the music of words, but truth is neither a value nor an objective for them, simply a human concept that eludes them. Until we have fully integrated this idea, we will remain vulnerable to a subtle confusion: mistaking the illusion of knowledge for knowledge itself.
What a generative AI actually does
To understand what’s at stake, let’s forget the reassuring image of a universal library. Let’s replace it with a seemingly childish game: you’re given the beginning of a sentence and your mission is to guess which word comes next. It doesn’t matter whether it’s true or useful, it just needs to harmonize with what precedes it.
This is how large language models function. They don’t consult reality, they predict the word most likely to follow, based on billions of examples. There is no “inner map” of the world in their architecture. As Savcisens and Eliassi-Rad remind us in “The Trilemma of Truth in Large Language Models” (2025), an LLM can distinguish between “true,” “false”… and a third state, “neither true nor false,” which underscores that its relationship to truthfulness is never binary and rarely certain.
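The guessing game described above can be made concrete with a minimal sketch. The following toy bigram model is a deliberate, drastic simplification of a real LLM: it counts which word most often follows another in its tiny corpus and always emits the statistical favorite, with no notion of whether that word is true. The corpus and function names are illustrative inventions, not anything from an actual system.

```python
from collections import Counter, defaultdict

# Toy corpus: the model's entire "world" is these word sequences.
# Note that one sentence is factually wrong -- the model cannot tell.
corpus = (
    "the capital of france is paris . "
    "the capital of france is lyon . "
    "the capital of france is paris . "
    "the cat sat on the mat ."
).split()

# Count bigrams: how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word -- a matter of statistics, not truth."""
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("is"))  # "paris" -- the majority pattern, not a verified fact
```

Here "paris" wins simply because it appears more often than "lyon" after "is"; had the corpus contained more wrong sentences than right ones, the model would confidently output the error with exactly the same fluency.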
This mechanism favors formal coherence rather than truth. When a response is correct, it’s the effect of a fortunate alignment of probabilities, not the result of a will to verify. And when it’s false, it’s not a lie, but the ordinary functioning of the system.
One could say it acts like an improvising actor: everything lies in the apparent ease and coherence. But nothing guarantees that what is performed corresponds to reality. And faced with such a convincing interpretation, we forget that we are witnessing a performance.
The trap of verisimilitude
Our mind often confuses the coherence of a discourse with the truthfulness of what it affirms. Since childhood, we learn to recognize certain signs as guarantees of reliability: an assured tone, fluid sentences, a clear structure. This deeply rooted reflex doesn’t date from the advent of AI.
Plato, through Socrates, in the “Gorgias,” already denounced oratory when it serves persuasion rather than truth. Well-turned speech, he reminded us, can seduce the ear while misleading the mind.
Socrates: “So it is an ignorant person speaking before ignorant people who defeats the expert, when the orator triumphs over the physician? Is this indeed what happens, or is it something else?”
Gorgias: “That is it, in this case at least.”
Socrates: “With regard to the other arts too, the orator and rhetoric doubtless have the same advantage: rhetoric has no need to know the reality of things; a certain device of persuasion, of its own invention, suffices for it to appear, before ignorant people, more learned than the learned.”
Later, Descartes would emphasize, in the “Discourse on Method,” the necessity of not letting oneself be convinced by the logical appearance of reasoning, but to subject it to methodical examination.
The first was never to accept anything as true that I did not know evidently to be such […] and for this to carefully avoid precipitation and prejudice; and to include nothing more in my judgments than what would present itself so clearly and so distinctly to my mind, that I would have no occasion to doubt it.
History abounds with examples where rhetorical ease has supplanted factual rigor: political speeches, commercial arguments, ideological narratives. Fluidity then becomes a veneer that masks the fragility of the foundation, and our natural tendency to associate formal clarity with accuracy opens the door to all manipulations.
Generative artificial intelligences merely amplify this old flaw. They don’t reason, they statistically extend forms of discourse that “sound right.” But this trap, which predates them, reminds us that intellectual vigilance is not just a technical skill, it’s a human discipline, forged over centuries to distinguish what is true from what appears true.
Accepting limits, mastering usage
One might think the solution is simple: improve the data, perfect the algorithms, add technical safeguards. But Xu et al., in “Hallucination is Inevitable” (2025), remind us that this is not a teething problem of an immature technology: even in a stripped-down theoretical framework, errors remain inevitable. Their proof, grounded in learning theory, shows that there will always be situations in which a model is wrong, regardless of the richness of its training or the power of its architecture.
Wanting to eliminate these failures entirely would amount to demanding that these systems be something other than what they are. One can reduce the frequency of errors, never abolish them. The question, then, is not only how to correct them, but how to work with these limits, as a musician accommodates the imperfections of an instrument while learning to draw the best music from it.
Saying that truth doesn’t exist for an AI doesn’t mean it’s useless. An imperfect instrument can be precious, provided one knows how to tune it and in what context to use it. Like a tool in an artisan’s hands, its value doesn’t lie in its intrinsic perfection, but in the accuracy of the gesture that wields it.
It’s not the machine that confers value on what it produces, but the way we orient it, frame it, and confront its results with other sources. In this, AI is not an autonomous subject, but an extension of our intention and our discernment. It doesn’t deliberate, doesn’t weigh true and false: it arranges forms, and it’s up to us to decide what they mean and what they’re worth.
This requires:
- Providing clear and verified context. An AI, deprived of its own intention, cannot fill the gaps we leave. Like an interpreter who translates without knowing the entire scene, it will produce formally correct sentences, but potentially empty of meaning or misaligned.
- Limiting automatic chains that propagate error. In a process where one model’s output becomes another’s input, each imprecision can distort and amplify, like in a game of telephone where the initial statement ends up unrecognizable.
- Systematically verifying critical points. AIs produce texts with stylistic assurance that can mask their errors. A superficial reading is enough to be convinced; an attentive reading allows one to see if the content rests on solid facts or well-turned approximations.
- Preserving our critical thinking. Not a sterile mistrust that rejects on principle, but an active vigilance that questions, cross-references, and puts into perspective. Critical thinking is not a passive filter, it’s a permanent effort to distinguish the clear from the true, verisimilitude from veracity.
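The point about automatic chains can be quantified with a toy model. Assume, as a simplification, that each step in a pipeline is independently correct with probability p and that any wrong step corrupts everything downstream; both assumptions are mine, not a property of any real system. The chance that the final output is still correct then decays geometrically with the length of the chain.

```python
def chain_accuracy(p, steps):
    """Probability the final output is still correct after `steps` hops,
    assuming independent errors and no recovery from a wrong step."""
    return p ** steps

# Even a 95%-reliable step erodes quickly when chained.
for steps in (1, 5, 10, 20):
    print(steps, round(chain_accuracy(0.95, steps), 2))
```

With p = 0.95, ten chained steps already drop the overall reliability to roughly 60%, which is the "game of telephone" effect in numbers: each individually reasonable link compounds into an unreliable whole.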
Generative AI is not an oracle. It's a linguistic tool, sometimes inspiring, capable of reformulating concepts, exploring ideas, opening unexpected paths. But stating what is, and the responsibility that accompanies it, remains, and will remain, a human affair.
Truth must not be delegated
Generative AIs are not living libraries, but assemblages of fragments, sometimes correct, sometimes false, sometimes indeterminate. They don’t intend to deceive, any more than they deliberately seek to tell the truth. They extend forms, without necessary connection to the reality they describe.
If we want to avoid confusing verisimilitude with truth, we must cultivate active vigilance. Question what we read. Cross-reference sources. Listen, within ourselves, to that voice that authorizes itself to doubt even when words sound right. It’s an intellectual exercise, but also an ethical commitment.
So, should we suspect all speech, or learn to recognize the subtle music of uncertainty? Perhaps this is the essential step in our collective learning: understanding that truth is not a fixed object that we receive, but a horizon toward which we walk, patiently, each day.
This horizon is not stable, it recedes as we advance. As Kant wrote, truth is not given by raw experience, but constructed by our reason’s effort to organize and examine what presents itself to us. It demands continuous movement, made of verifications, questionings, and confrontations, which never leads to definitive certainty.
Socrates, in his time, reminded us that recognizing one’s ignorance was already a step toward knowledge. In the same way, cultivating attention and discernment means accepting that the clear is not always the true, that evidence can deceive, and that the search for truth is less a state to attain than a practice to maintain, each day, with the patience of an artisan.
And you, in this uninterrupted flow of discourse and images that surrounds you, will you know how to make truth a living quest rather than a simple reassuring reflection?