Tag: <span>BIASES</span>

We’ve painted the word kindness in so many pastel colors that it’s become unrecognizable.
Today, it’s more often a smokescreen than a value, a cover-up to justify inaction, weakness, even cowardice.

Saying NO is now suspicious, setting boundaries is seen as toxic, demanding effort is considered violent. The result? Blank pages get applause, nothingness is celebrated as brilliance, and we dare to call it kindness.

But if protecting, loving, educating, and working together still mean anything, then it’s time to remember that real kindness doesn’t always tell people what they want to hear. It protects by being clear-eyed, it builds by being demanding.

OPINION COLUMN

An AI doesn’t lie. But it doesn’t tell the truth either. It doesn’t know what is true or false — it only calculates probabilities. Its “reasoning” boils down to predicting which word is most likely to follow the previous one, based on the billions of sentences it has been trained on.

The result can be dazzling: fluid, elegant, convincing. Yet this fluency is nothing more than an illusion. What we read is not verified knowledge, but a sequence of words that “fit.” Sometimes accurate, sometimes wrong, sometimes neither — without the machine ever being aware of it.

The real danger is not the AI itself, but our very human reflex: confusing coherence with truth. In other words, mistaking the appearance of knowledge for knowledge itself. It’s this subtle, almost invisible shift that opens the door to confusion, born of ignorance about how it works, and of overconfidence in what merely “sounds right.”

OPINION

_What if you could whisper into an AI’s ear, without anyone noticing?_

Some researchers did exactly that. Not in a novel, but on arXiv, one of the most respected scientific platforms. By inserting invisible messages into their papers, they discreetly influenced the judgment—not of human readers, but of the AI systems reviewing the submissions.

White text on a white background. Microscopic font size. Hidden instructions.
The reader sees nothing. The AI, however, obeys.

This isn’t just a clever technical trick. It’s a sign of the times.

Because in a world where AIs help us read, choose, and decide—what happens when the AI itself is being manipulated, without our knowledge?
And even more unsettling: what’s left of our free will, if even the information we read has already been preformatted… for the machine that filters our perception?

👉 This article explores a new kind of manipulation. Subtle. Sneaky. Invisible. Yet remarkably effective.

OPINION

💡 What if AI biases were nothing more than our own… amplified?

Algorithms have no morals or intentions. But they learn from us. From our data. From our past decisions. And sometimes—without us even realizing it—they inherit our deepest prejudices.

In this excerpt, I invite you to dive into a cartography of our digital missteps: a journey through the invisible biases that quietly shape machine decisions… and already influence our lives. Hiring, credit, justice, healthcare—no sector is spared.

🔍 Whether it’s historical bias, representation gaps, or blind trust in automation, each algorithmic distortion acts like a funhouse mirror reflecting our society. This isn’t just a technical issue—it’s a matter of conscience.

And maybe, to build fairer AI, we first need to take a better look at ourselves.

CERISE & ADA

In a world where our smartphones understand us better than our loved ones, where our virtual assistants listen without ever yawning from boredom, a new form of escape has been born.

When applications promise us “a girlfriend who understands you perfectly, without the complications of real life” for $200 per month, isn’t it time to question what we’re really trying to escape from?

From digital infidelity to virtual hugs, let’s explore together this troubling frontier where our creations become our favorite creatures… and where, by interacting with them, we risk becoming machines ourselves.

OPINION COLUMN