AI Myths and Their Power of Dispossession

Foreword

As a mathematician and AI researcher, I use this technology daily. I understand its real capabilities, its limitations, and which of its promises are legitimate. I’m not among those who advocate rejection on principle, nor do I harbor a sterile nostalgia for the world “before.”

What worries me is seeing how easily we embrace the narratives propagated by the new evangelists of AI: authoritative voices, often without solid technical training, who multiply grandiose promises on social networks and at conferences. As if their communicative enthusiasm served as proof. As if technical performance alone justified unquestioning adoption, as if algorithmic sophistication exempted us from thinking about the implications of our usage. The problem isn’t artificial intelligence per se, but this collective abdication of critical thinking in the face of technological promises, this tendency to confuse innovation with progress, efficiency with relevance.

Between Collective Fascination and Narrative Construction

In a meeting room with clean lines, three gazes converge on a screen. The marketing director mentions spectacular productivity gains, the innovation manager predicts the end of creative professions, the consultant nods in agreement. Three perspectives, three beliefs, three projections onto the same technological reality. And already, without realizing it, three ways of abandoning part of their critical judgment in favor of a seductive narrative.

For artificial intelligence never progresses alone. It advances accompanied by a procession of images, promises, and narratives that shape our relationship to the world long before we touch any keyboard. Like any major innovation, it surrounds itself with what sociologists call a mythological ecosystem: a set of representations that orient perceptions, shape usage, and above all, progressively redefine what we consider to fall within our own competence.

These myths don’t emerge spontaneously. They are cultivated, maintained, sometimes instrumentalized by those who have an interest in our adopting these technologies without too many questions. They help build buy-in around a product, reassure investors, convince decision-makers, and capture media attention. But they also perform a subtler transformation: they redraw the boundary between what we think we must do ourselves and what we believe we can delegate without risk.

Unraveling these narratives isn’t a purely intellectual exercise. It’s taking back power over our representations, refusing to let others define the contours of our autonomy for us. Because behind each myth lies a particular conception of what intelligence, creativity, and learning are – and therefore of what we are.

Here are three of these great contemporary myths, analyzed not only in their technical dimension, but in what they reveal about our successive renunciations. Because these narratives are never neutral: they form the cultural grammar of an era learning to divest itself of its own capacities in the name of efficiency.

The Myth of Authentic Intelligence or the Dispossession of Our Judgment

It regularly appears in headlines: “This AI surpasses human intelligence.” It slips into conversational interfaces through phrases like “I think that…” or “In my opinion…” And it’s precisely there that the first abandonment occurs: the confusion between statistical performance and cognitive process leads us to treat these systems as legitimate interlocutors, worthy of our intellectual trust.

Just yesterday, a friend told me: “ChatGPT explained a legal concept to me better than my lawyer.” Same reaction from this student: “The AI understands my questions, it responds like a personal tutor.” In both cases, the trap has closed: we confuse a well-formulated response with genuine understanding.

Let’s observe what’s actually happening. When a student asks ChatGPT to solve a complex problem, they’re not just using a tool. They tacitly accept that understanding and producing are two separable activities, that the accuracy of an answer doesn’t depend on the process that generated it. They unknowingly renounce the effort of constructing reasoning that constitutes the very essence of learning.

A generative artificial intelligence doesn’t understand in the sense we mean it; it reproduces patterns. It doesn’t think; it calculates probabilities. Its “intelligence” is an operational but misleading metaphor: it rests on the ability to identify regularities in immense corpora of data and to generate coherent responses from them. Nothing in these systems indicates the emergence of consciousness, or even of intentionality.
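To make the mechanism concrete, here is a deliberately toy sketch: a bigram word model in Python, not a real large language model and not any system named in this essay. It “writes” purely by sampling the next word in proportion to how often that word followed the previous one in its corpus. There is no meaning anywhere in the loop, only observed regularities.

```python
import random
from collections import Counter, defaultdict

# Toy illustration only: a bigram model that "writes" by sampling
# the next word in proportion to how often it followed the previous
# one in the corpus. It tracks regularities, not meaning.
corpus = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat ate the fish the dog ate the bone"
).split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8, seed=0):
    """Generate text by repeatedly sampling a statistically likely next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        counts = follows[words[-1]]
        if not counts:  # dead end: this word never appears mid-corpus
            break
        choices = list(counts)
        weights = [counts[w] for w in choices]
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Scaled up to billions of parameters and web-scale corpora, the same principle yields fluent, plausible text; but the mechanism remains statistical continuation, not comprehension.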

Consider a revealing example: an AI can produce a remarkable essay on Platonic philosophy, mobilizing the right references, articulating concepts accurately. But it experiences no personal relationship to the ideas it manipulates. It reproduces discursive structures without inhabiting their meaning, without ever having struggled with the resistance of thought, without ever having experienced the sudden illumination that accompanies true understanding.

The myth of intelligence rests on a fallacious analogy: because the result resembles that of a human process, the underlying mechanism must be similar. Yet a photograph, however faithful, does not see. And by treating AI as a genuine thinker, we lose sight of what makes our own intelligence specific: its capacity to doubt, to contradict itself, to take detours, to come up against the resistance of reality.

This representation has implications that go beyond simple technical misunderstanding. It encourages perceiving AI as an autonomous entity, more reliable than our own judgment because it knows neither fatigue nor emotion. It obscures the complex chain of dependencies: original data, algorithmic choices, human interventions that regulate, filter, and orient. It maintains a fascination that can inhibit the critical analysis necessary for thoughtful appropriation of these technologies.

But above all, it gradually accustoms us to considering human thought as a flawed process, too slow, too approximate. And this devaluation of our own cognitive capacity prepares the ground for other abandonments.

The Myth of Artificial Creativity or the Dispossession of Our Imagination

“This AI composes symphonies,” “This system generates original poems,” “It paints in the style of the masters.” The effect produced is striking, almost magical. But let’s observe what’s happening in the mind of one who contemplates these generated works: fascination mixed with vague unease. If a machine can create, what remains specifically human in the creative act?

Just watch an art director browsing through hundreds of images generated by Midjourney: “It’s amazing, I got in 5 minutes what would have taken me hours.” Or this musician discovering AIVA’s compositions: “It’s troubling, it sounds like real Mozart.” In both cases, we measure the work by its result, never by the process that gave birth to it.

This question reveals the formidable effectiveness of the myth of artificial creativity. By dazzling us with its productions, it makes us forget what constitutes the very essence of human creation: the subjective experience of the world that nourishes it.

Human creativity is rooted in an embodied experience, made of sensations, emotions, memories. It proceeds through trial and error, breaks, intuitions. It’s anchored in singular lived experience, cultural references, a specific social context. When Basquiat paints, he’s not recomposing pre-existing elements according to statistical probabilities. He’s transposing a worldview forged by his experience as a young Black man in 1980s America, by his readings, encounters, wounds.

Artificial intelligences generate probabilistic combinations of pre-existing elements. They excel at imitation, variation, recombination. But they don’t live any of what they produce. They’ve never contemplated a sunset with that particular melancholy born from awareness of our finitude. They’ve never experienced that creative urgency that drives the artist to bear witness to their era.

Where humans transgress, explore, or subvert codes out of inner necessity, AI optimizes assemblages according to efficiency criteria. It constitutes a powerful variation generator, but it doesn’t produce genuine ruptures. The novelty it generates remains calculated, never truly emergent.

The myth of artificial creativity rests on confusion between the work produced and the intimate dynamic of its creation. This confusion operates in both directions: it overvalues technical performance while devaluing the human creative act, reduced to a series of reproducible operations.

When a graphic designer uses Midjourney to generate visuals “in the style of,” they’re not just adopting a more efficient tool. They implicitly accept that style can be detached from the approach that gave birth to it, that aesthetics can be separated from ethics, that form can exist independently of meaning. They renounce that inner questioning that makes a creation bear the trace of its creator.

By adhering to this myth, we risk forgetting that creation is also a cultural act, sometimes a political or subversive one. Yet artificial intelligence claims nothing, contests nothing, engages in no critical project. It has only statistical correlations between words, images, and sounds, without ever accessing the meaning or intention beneath them.

And this disembodiment of creativity prepares the ground for a third renunciation, perhaps the most pernicious: that of our capacity for action in the present.

The Myth of the Self-Fulfilling Promise or the Dispossession of Our Action

This is perhaps the most powerful and structuring of the three. It doesn’t describe what AI accomplishes today, but what it will accomplish tomorrow. It prophesies, anticipates, projects. We speak of “emergence,” “qualitative leaps,” “generalization.” And always in a near but indeterminate future that suspends our critical judgment.

Listen to the conversations at the coffee machine: “Why learn Excel? In six months, AI will do everything.” Or this business leader: “No point hiring; soon AI will replace the entire creative team.” Each time, a hypothetical future justifies present inaction.

This prospective myth subtly transforms our relationship to time and action. By projecting us into a horizon of imminent technological revolutions, it installs us in a posture of waiting that inhibits our capacity to intervene in the present. Why question the current limitations of a system when we’re assured they’ll soon be overcome? Why reflect on the ethical implications of a technology when we’re promised that tomorrow it will be more intelligent, safer, better aligned with our values?

Let’s observe the effect on our daily decisions. The training manager who hesitates to invest in human training “because AI will change everything.” The student who gives up deepening a discipline “because machines will do better.” The artist who doubts the relevance of their work “because AI will soon create everything.” In each of these cases, the promise of a radiant future produces present paralysis.

This prospective myth functions as a remarkably effective rhetorical device. It defers criticism in the name of what is “not yet accomplished.” It offers industry players an argument for circumventing regulation by invoking the urgency of innovation. It masks current limitations behind the promise of continuous, exponential improvement. But above all, it dispossesses us of our power to steer these technologies according to our own values.

Yet we forget a fundamental technical truth: AI is never a universal solution. It responds efficiently to circumscribed problems, but it always remains dependent on its training data and human framing. To consider it as indefinitely transposable is to deny its fundamental dependence on context and interpretation.

Multiplying biased data doesn’t neutralize bias; it refines it. Increasing computational power doesn’t dissipate the opacity of models; it deepens it. And we too often confuse performance demonstrated in controlled environments with general capabilities adaptable to any situation.

The prospective myth functions as a perpetual headlong rush that deprives us of our capacity for critical intervention. It projects us toward a horizon of promises, at once seductive and paralyzing. It favors a posture of waiting over present action, critical analysis, and thoughtful appropriation.

And this is perhaps the most serious: by installing us in waiting for a technological revolution, it makes us forget that we have the power, here and now, to define the conditions under which we want to live with these technologies.

Because by constantly yielding to these narratives, another shift occurs, even more silent: that of our imagination, our capacity to write our own story.

Once these three narratives are laid bare, what remains? The urgent need to take back the thread of our narrative autonomy.

Taking Back Power Over Our Representations

These three myths aren’t simple technical misunderstandings. They function as instruments of symbolic power that mold our collective representations, orient our decisions, and progressively redefine our conception of what still falls under human responsibility.

They act as structuring metaphors, to use George Lakoff’s expression. And like any metaphor, they carve up reality, make certain aspects visible while obscuring others. The user believes they master the system because they’re told about “natural interaction.” They give credit to the model because they’re told it “creates.” They suspend their critical judgment because they’re promised tomorrow will be different.

But these metaphors also work a deeper transformation in our relationship to ourselves. By valorizing only efficiency, speed, and mass production, they make us progressively doubt the value of our own cognitive processes. Our slowness becomes a flaw, our hesitations weaknesses, our detours wastes of time.

Yet it’s precisely in this slowness, these hesitations, these detours that our thinking humanity is rooted. Human intelligence isn’t linear, nor always efficient. It’s often muddled, intuitive, and that’s what makes it rich. It’s made of fertile dead ends, revealing detours, creative recoveries. It doesn’t flourish in speed, but in the depth of lived experience.

It’s precisely here that critical thinking becomes indispensable. Not to reject artificial intelligence, but to understand it beyond its narrative packaging. To distinguish what pertains to technical performance, media construction, and influence strategies. To keep alive our capacity to question, to doubt, to resist prefabricated evidence.

Because as sociologist Dominique Boullier emphasizes, “techniques never speak alone.” They’re always accompanied by discourse, staging, narrative. And it’s often this narrative that determines our beliefs and behaviors, much more than lines of code or algorithmic architectures.

But understanding these mechanisms isn’t enough. Once they are laid bare, the most difficult question remains: how, concretely, do we resist the pull of these narratives when they promise us so much efficiency?

Cultivating the Art of Doubt

This understanding brings us face to face with a more fundamental question: in a world that invites us to delegate ever more of our cognitive capacities, do we still have the will to think for ourselves? Actively. Freely. Slowly, sometimes.

Because that’s what it’s about: no longer just knowing if AI thinks in our place, but knowing if we still want to think. Not because a task requires it, but because an inner impulse drives us to it. Because we consider thought as constitutive of our human dignity, and not as a simple means to achieve a productive end.

This will to think isn’t decreed, it’s cultivated. It’s maintained in daily confrontation with received ideas, in the discomfort of doubt, in the patience of analysis. It’s rooted in attentive reading, contradictory debates, hours spent unraveling the true from the seductive. It supposes accepting effort, slowness, imperfection as irreducible dimensions of human experience.

We must relearn to write by hand, not out of nostalgia but because the gesture slows thought and allows it to unfold. We must relearn to wander in a book, not to understand at first glance, to start over, to search. We must claim the right to inefficiency as a form of resistance to the general acceleration of the world.

This resistance isn’t a privilege reserved for a few, but a common right: that of thinking freely, imperfectly, intensely. The right not to know, to search, to start again. The right to consider that the quest for understanding has value in itself, independent of its productive utility.

Because ultimately, demystifying these technological narratives isn’t just an intellectual exercise. It’s a political act that constitutes us as thinking subjects, capable of discernment, masters of our representations. It’s what allows us not simply to adhere, but to choose, to nuance, to resist.

The Choice of Intellectual Survival

If we yield too easily to the prefabricated narratives that accompany these technologies, we might well lose not only our capacity to deconstruct myths, but even the desire to do so. And that would be an irreparable loss, much more damaging than any technical failure.

Because it’s through the exercise of doubt that we remain free. That we keep our singularity. That we become, sometimes, lucidly insubordinate to the dominant narratives. In a world increasingly saturated with automatically generated content, instant responses, and algorithmic certainties, keeping this capacity for questioning alive becomes a matter of intellectual survival.

So let’s ask the question directly: in this world of seductive narratives and generalized efficiency, what are we still ready to devote time, attention, and vigilance to? Are we capable of claiming the effort of demystification as a necessary form of resistance?

The answer to this question could well condition the future of our intellectual autonomy. And perhaps even that of our conscious humanity, capable of thinking the world rather than simply enduring it.