ETHICS

While 6-year-old Chinese children are learning to train AI models to recognize insects in their gardens, French kids the same age are discovering… how to open a word processor.

This gap isn’t just a detail. It’s the symptom of a strategic chasm opening before our very eyes.

On one side, China is deploying a plan of breathtaking ambition: twelve years of progressive AI education designed to turn every citizen into a “digital native.” The result? China already produces 50% of the world’s top AI researchers, versus 18% for the United States.

On the other, France has just decided “once and for all” after… 4 months of consultation that mobilized 500 contributions. Out of 1.2 million people in the national education system. That’s 0.04% of the educational community.

The French verdict? AI will be authorized only from 8th grade onward, with mandatory training of thirty minutes to an hour and a half, at most, to master the “basics of prompting.” Wedged between reminders about server water consumption.

While Beijing trains entire cohorts of children who will grow up with AI as their natural companion, Paris organizes consultations and offers hour-and-a-half micro-modules.

In 10 years, guess who will truly master this technology that’s already redefining global power balances?

History may judge us on our ability to transform a technological revolution… into administrative reform.

OPINION

“This AI writes better than I do!”

I hear this sentence at least three times a week. From a marketing director dazzled by ChatGPT. From a graphic designer fascinated by Midjourney. From a student who just discovered that a machine can solve their math exercises in seconds.

And every time, I think to myself: we’ve just crossed an invisible line.

Not the line of technical performance; that’s just computing doing what it has always done: calculating fast and well. No, we’ve crossed the line of our own devaluation. The one where we start doubting our most human capabilities: thinking, creating, deciding.

As a mathematician who works with AI daily, I see three grand mythological narratives being constructed before our eyes. Three seductive stories that gradually make us abandon something precious: our intellectual autonomy.

The problem isn’t that AI is too capable. It’s that we’re becoming too gullible.

In the lines that follow, I invite you to dissect these three myths with me, myths that are silently redrawing the boundaries of our humanity. Because before asking what AI can do, it’s about time we remembered what we don’t want to lose.

Ready for a little collective exercise in lucidity?

OPINION

💬 It speaks well. It answers fast. It impresses… But it does not think.

Artificial intelligence is not what you believe it is. Not thinking, just predicting. Not a mind, but a statistical echo. And maybe the real danger isn’t AI itself, but what we stop doing because it exists.

🧠 AI doesn’t steal our intelligence. It simply relieves us from using it. And in that relief, a slow erosion begins… one that eats away at our ability to question, to seek, to truly think.

This article is not a manifesto against technology. It’s a plea for thought. An invitation to lucidity. And a warning about what we might lose, without even noticing: our inner freedom.

📖 Read it. Share it. Start a conversation. This isn’t just a text about AI.
It’s a text about you.

OPINION

💡 What if AI biases were nothing more than our own… amplified?

Algorithms have no morals or intentions. But they learn from us. From our data. From our past decisions. And sometimes—without us even realizing it—they inherit our deepest prejudices.

In this excerpt, I invite you to dive into a cartography of our digital missteps: a journey through the invisible biases that quietly shape machine decisions… and already influence our lives. Hiring, credit, justice, healthcare—no sector is spared.

🔍 Whether it’s historical bias, representation gaps, or blind trust in automation, each algorithmic distortion acts like a funhouse mirror reflecting our society. This isn’t just a technical issue—it’s a matter of conscience.

And maybe, to build fairer AI, we first need to take a better look at ourselves.

CERISE & ADA

🔍 Not long ago, I spoke here about the danger of autophagy, that moment when artificial intelligence begins to feed on its own output, endlessly recycling the same ideas and impoverishing the diversity of knowledge.

👉 Cognitive autophagy, when humans feed on impoverished content!: https://www.linkedin.com/pulse/cognitive-autophagy-when-humans-feed-impoverished-content-buschini-d1oje

and

👉 Autophagy, when AI feeds on itself: https://www.linkedin.com/pulse/ai-autophagy-when-feeds-itself-philippe-buschini-sydee

But there’s another, more intimate risk: the risk of losing even the desire to think.

Imagine a knowledge architect. Every day, they sketch, question, connect ideas. Then one day, a machine offers them the blueprints. Clear, fast, seductive. So they tweak them. They approve. But they no longer question.

AI is not attacking us. It’s helping. And that’s precisely where the shift happens. It spares us the effort — and that effort may be all we have left to remain truly human.

🧠 What if the real danger doesn’t lie in the tool… but in the combination of two phenomena?

– An AI looping endlessly on itself.
– Humans who no longer wish to produce anything different.

OPINION