Tag: <span>ETHICS</span>

Last week, I told you about the ants—those quiet beings who hold the world together while others parade on stage. This week again, I won’t be talking about artificial intelligence, robots, algorithms, or generative AI…

Once more, I’m staying in this very human, very intimate vein. Still about us. Always about us. Because before understanding what machines do to our thinking, we might first need to understand what we’ve done to our own capacity to think.

This time, I’m taking you into more subtle, more troubling territory: our relationship with our own ideas. A silent shift that concerns us all, connected or not, technophiles or technophobes.

I promise, starting next week, I’ll resume my “AI in All Its States” series. But for now, let me tell you about this strange thing that happens to us when we stop inhabiting our own questions…

You type a question into your search engine. In 0.3 seconds, you have your answer. Satisfying, right?

Yet… something strange is happening. This bewildering ease might be hiding a deeper transformation in our relationship with thinking.

There was a time when searching was already an act in itself. When not knowing immediately wasn’t a problem to solve, but a space to inhabit. Today, we slide from one answer to the next, from one pre-digested content to another. We validate more than we choose. We apply more than we understand.

But what happens when thinking becomes optional? Between the seductive efficiency of our tools and our old habit of thinking for ourselves, a silent shift is taking place. Not brutal, not visible. Just… comfortable.

The question isn’t whether technology is good or bad. It lies elsewhere, more intimate: do we still recognize our own voice when we think?

OPINION COLUMN

What if one day, your car made a decision for you… and got it wrong?

A fictional trial once tried to answer a question that no longer feels like fiction: can we put an artificial intelligence on trial like we would a human being?

Behind this courtroom drama lies a deeper dilemma about our digital future: who’s to blame when a machine causes a disaster, but no one truly understands how or why?

Still think the infamous “red button” would save you?

Think again.

OPINION

_What if you could whisper into an AI’s ear, without anyone noticing?_

Some researchers did exactly that. Not in a novel, but on arXiv, one of the most respected scientific platforms. By inserting invisible messages into their papers, they discreetly influenced the judgment—not of human readers, but of the AI systems reviewing the submissions.

White text on a white background. Microscopic font size. Hidden instructions.
The reader sees nothing. The AI, however, obeys.

This isn’t just a clever technical trick. It’s a sign of the times.

Because in a world where AIs help us read, choose, and decide—what happens when the AI itself is being manipulated, without our knowledge?
And even more unsettling: what’s left of our free will, if even the information we read has already been preformatted… for the machine that filters our perception?

👉 This article explores a new kind of manipulation. Subtle. Sneaky. Invisible. Yet remarkably effective.

OPINION

260 McDonald’s nuggets in a single order. An Air Canada chatbot lying to a grieving customer. A recruiting algorithm that blacklists everyone over 40.

Welcome to 2024, the year artificial intelligence showed its true colors. And spoiler alert: it’s not pretty.

While everyone was gushing over ChatGPT, companies were brutally discovering a harsh truth: when your machines screw up, YOU pay the price.

Gone are the golden days when you could shrug and mutter “it’s just a computer glitch.” The courts have spoken: your algorithms, your responsibility. End of story.

Europe legislates with the AI Act (180 pages of bureaucratic bliss). The US innovates at breakneck speed. China controls everything. Meanwhile, our companies are discovering that building responsible AI is like flying a fighter jet blindfolded in a thunderstorm.

The most ironic part? This silent revolution won’t just determine who pays for tomorrow’s disasters. It will decide who dominates the global economy for the next 50 years.

So, ready to discover why your next nightmare might go by the sweet name of “algorithm”? 👇

OPINION

While 6-year-old Chinese children are learning to train AI models to recognize insects in their gardens, French kids the same age are discovering… how to open a word processor.

This gap isn’t just a detail. It’s the symptom of a strategic chasm opening before our very eyes.

On one side, China deploys a plan of breathtaking ambition: 12 years of progressive AI learning to transform every citizen into a “digital native.” The result? It already produces 50% of the world’s top AI researchers compared to 18% for the United States.

On the other, France has just decided “once and for all” after… 4 months of consultation that mobilized 500 contributions. Out of 1.2 million people in the national education system. That’s 0.04% of the educational community.

The French verdict? AI will be authorized starting from 8th grade only, with mandatory training of 30 minutes to 1.5 hours maximum to master the “basics of prompting.” Between reminders about server water consumption.

While Beijing trains entire cohorts of children who will grow up with AI as their natural companion, Paris organizes consultations and offers hour-and-a-half micro-modules.

In 10 years, guess who will truly master this technology that’s already redefining global power balances?

History may judge us on our ability to transform a technological revolution… into administrative reform.

OPINION