Category: OPINION

“This AI writes better than I do!”

I hear this sentence at least three times a week. From a marketing director dazzled by ChatGPT. From a graphic designer fascinated by Midjourney. From a student who just discovered that a machine can solve their math exercises in seconds.

And every time, I think to myself: we’ve just crossed an invisible line.

Not the line of technical performance; that's just computing doing what it has always done: calculating fast and well. No, we've crossed the line of our own devaluation. The one where we start doubting our most human capabilities: thinking, creating, deciding.

As a mathematician who works with AI daily, I see three grand mythological narratives being constructed before our eyes. Three seductive stories that gradually make us abandon something precious: our intellectual autonomy.

The problem isn’t that AI is too capable. It’s that we’re becoming too gullible.

In the lines that follow, I invite you to dissect these three myths with me, myths that are silently redrawing the boundaries of our humanity. Because before asking what AI can do, it’s about time we remembered what we don’t want to lose.

Ready for a little collective exercise in lucidity?

OPINION

💬 It speaks well. It answers fast. It impresses… But it does not think.

Artificial intelligence is not what you believe it is. Not thinking, just predicting. Not a mind, but a statistical echo. And maybe the real danger isn’t AI itself, but what we stop doing because it exists.

🧠 AI doesn’t steal our intelligence. It simply relieves us of the need to use it. And in that relief, a slow erosion begins… one that eats away at our ability to question, to seek, to truly think.

This article is not a manifesto against technology. It’s a plea for thought. An invitation to lucidity. And a warning about what we might lose, without even noticing: our inner freedom.

📖 Read it. Share it. Start a conversation. This isn’t just a text about AI.
It’s a text about you.

OPINION

🔍 Not long ago, I spoke here about the danger of autophagy, that moment when artificial intelligence begins to feed on its own output, endlessly recycling the same ideas and impoverishing the diversity of knowledge.

👉 Cognitive autophagy, when humans feed on impoverished content: https://www.linkedin.com/pulse/cognitive-autophagy-when-humans-feed-impoverished-content-buschini-d1oje

and

👉 Autophagy, when AI feeds on itself: https://www.linkedin.com/pulse/ai-autophagy-when-feeds-itself-philippe-buschini-sydee

But there’s another, more intimate risk: the risk of losing even the desire to think.

Imagine a knowledge architect. Every day, they sketch, question, connect ideas. Then one day, a machine offers them the blueprints. Clear, fast, seductive. So they tweak them. They approve. But they no longer question.

AI is not attacking us. It’s helping. And that’s precisely where the shift happens. It spares us the effort — and that effort may be all we have left to remain truly human.

🧠 What if the real danger doesn’t lie in the tool… but in the combination of two phenomena?

– An AI looping endlessly on itself.
– Humans who no longer wish to produce anything different.

OPINION

What if our healthcare system wasn’t failing for lack of resources, but for lack of connection?

Picture this: an on-call doctor tries to retrieve the medical history of an unconscious patient. The information exists, somewhere. A clinic. A GP’s notes. A specialist’s report. But nothing flows. No system talks to another.

That’s the true cost of missing interoperability.

It may sound technical, but it’s actually one of the most powerful levers for transforming healthcare, sitting at the crossroads of medical ethics, digital sovereignty, and human-centered innovation.

In this article, I explore why making health data circulate securely and meaningfully is not a futuristic luxury, but a critical foundation. And more importantly, how we can start doing things differently, as caregivers, patients, institutions, and technologists.

OPINION

What if we’re becoming the zombies of corrupted knowledge?

Every day, without even realizing it, we scroll, click, like… feeding our minds a lukewarm stew of recycled information, mass-produced and stripped of its substance.

It’s no longer just AI looping endlessly through its own soup of synthetic content — it’s us. Our brains, once curious, agile, and eager for complexity, now settle for digital crumbs pre-chewed by machines.

The result?

A thought process that’s impoverished and standardized, slowly losing its ability to tell truth from falsehood, depth from superficiality.
