Tag: <span>ETHICS</span>

What if the rise of AI in medicine did not mark the end of doctors, but the beginning of a new era of care?

Since Hippocrates, physicians have drawn their legitimacy from knowledge. Yet, for the first time in modern history, they are no longer necessarily the ones who know the most. AI diagnoses faster, sees what the human eye cannot, and sometimes even drafts responses that patients find more reassuring than those of a professional.

So, should we fear the disappearance of doctors? Or should we rethink their place, their role, their unique value in a world where expertise is shared between human and machine?

OPINION

Last week, I told you about the ants—those quiet beings who hold the world together while others parade on stage. This week again, I won’t be talking about artificial intelligence, robots, algorithms, or generative AI…

Once more, I’m staying in this very human, very intimate vein. Still about us. Always about us. Because before understanding what machines do to our thinking, we might first need to understand what we’ve done to our own capacity to think.

This time, I’m taking you into more subtle, more troubling territory: our relationship with our own ideas. A silent shift that concerns us all, connected or not, technophiles or technophobes.

I promise, starting next week, I’ll resume my “AI in All Its States” series. But for now, let me tell you about this strange thing that happens to us when we stop inhabiting our own questions…

You type a question into your search engine. In 0.3 seconds, you have your answer. Satisfying, right?

Yet… something strange is happening. This bewildering ease might be hiding a deeper transformation in our relationship with thinking.

There was a time when searching was already an act in itself. When not knowing immediately wasn’t a problem to solve, but a space to inhabit. Today, we slide from one answer to the next, from one pre-digested content to another. We validate more than we choose. We apply more than we understand.

But what happens when thinking becomes optional? Between the seductive efficiency of our tools and our old habit of thinking for ourselves, a silent shift is taking place. Not brutal, not visible. Just… comfortable.

The question isn’t whether technology is good or bad. It lies elsewhere, more intimate: do we still recognize our own voice when we think?

OPINION COLUMN

What if one day, your car made a decision for you… and got it wrong?

A fictional trial once tried to answer a question that no longer feels like fiction: can we put an artificial intelligence on trial like we would a human being?

Behind this courtroom drama lies a deeper dilemma about our digital future: who’s to blame when a machine causes a disaster, but no one truly understands how or why?

Still think the infamous “red button” would save you?

Think again.

OPINION

_What if you could whisper into an AI’s ear, without anyone noticing?_

Some researchers did exactly that. Not in a novel, but on arXiv, one of the most respected scientific platforms. By inserting invisible messages into their papers, they discreetly influenced the judgment—not of human readers, but of the AI systems reviewing the submissions.

White text on a white background. Microscopic font size. Hidden instructions.
The reader sees nothing. The AI, however, obeys.

This isn’t just a clever technical trick. It’s a sign of the times.

Because in a world where AIs help us read, choose, and decide—what happens when the AI itself is being manipulated, without our knowledge?
And even more unsettling: what’s left of our free will, if even the information we read has already been preformatted… for the machine that filters our perception?

👉 This article explores a new kind of manipulation. Subtle. Sneaky. Invisible. Yet remarkably effective.

OPINION

260 McDonald’s nuggets in a single order. An Air Canada chatbot lying to a grieving customer. A recruiting algorithm that blacklists everyone over 40.

Welcome to 2024, the year artificial intelligence showed its true colors. And spoiler alert: it’s not pretty.

While everyone was gushing over ChatGPT, companies were brutally discovering a harsh truth: when your machines screw up, YOU pay the price.

Gone are the golden days when you could shrug and mutter “it’s just a computer glitch.” The courts have spoken: your algorithms, your responsibility. End of story.

Europe legislates with the AI Act (180 pages of bureaucratic bliss). The US innovates at breakneck speed. China controls everything. Meanwhile, our companies are discovering that building responsible AI is like flying a fighter jet blindfolded in a thunderstorm.

The most ironic part? This silent revolution won’t just determine who pays for tomorrow’s disasters. It will decide who dominates the global economy for the next 50 years.

So, ready to discover why your next nightmare might go by the sweet name of “algorithm”? 👇

OPINION