Tag: <span>BIAISES</span>

Here is the third and final installment of my series on the privacy of our data. After exploring our own surrenders and the illusion of voluntary transparency, it’s time to ask the most unsettling question of all:

What are we leaving to our children? Not as a material inheritance, but as an inheritance of gaze.

For they are born into a world where the intimate fades before it has even existed, where surveillance dresses itself in the clothing of play, where freedom is confused with permanent connection. What was for us a loss is for them self-evident. Where we see an encroachment on privacy, they simply see life.

This article examines this silent shift: how do we pass down inner freedom to a generation that has never known secrecy? How do we teach depth to those we’ve accustomed to exposure? And above all, what will remain of freedom if we forget to teach it to them?

OPINION

Last week, I talked about a point that’s often misunderstood: for AI, truth doesn’t exist.

Today, I’m taking the reasoning one step further. Because there’s an even deeper misconception: believing that an LLM is a knowledge base. It’s not. A language model generates probable word sequences, not verified facts. In other words, it recites with ease, but it never cites.

That’s exactly what I explore in my new article: why this confusion persists, and how to clearly distinguish between parametric memory and explicit memory, so we can finally combine them the right way.

OPINION

We’ve painted the word kindness in so many pastel colors that it’s become unrecognizable.
Today, it’s more often a smokescreen than a value, a cover-up to justify inaction, weakness, even cowardice.

Saying NO is now suspicious, setting boundaries is seen as toxic, demanding effort is considered violent. The result? Empty papers get applause, nothingness is celebrated as brilliance, and we dare to call it kindness.

But if protecting, loving, educating, and working together still mean anything, then it’s time to remember that real kindness doesn’t always stroke in the right direction. It protects by being clear-eyed, it builds by being demanding.

OPINION COLUMN

An AI doesn’t lie. But it doesn’t tell the truth either. It doesn’t know what is true or false — it only calculates probabilities. Its “reasoning” boils down to predicting which word is most likely to follow the previous one, based on the billions of sentences it has been trained on.

The result can be dazzling: fluid, elegant, convincing. Yet this fluency is nothing more than an illusion. What we read is not verified knowledge, but a sequence of words that “fit.” Sometimes accurate, sometimes wrong, sometimes neither — without the machine ever being aware of it.

The real danger is not the AI itself, but our very human reflex: confusing coherence with truth. In other words, mistaking the appearance of knowledge for knowledge itself. It’s this subtle, almost invisible shift that opens the door to confusion, born of ignorance about how it works, and of overconfidence in what merely “sounds right.”

OPINION

_What if you could whisper into an AI’s ear, without anyone noticing?_

Some researchers did exactly that. Not in a novel, but on arXiv, one of the most respected scientific platforms. By inserting invisible messages into their papers, they discreetly influenced the judgment—not of human readers, but of the AI systems reviewing the submissions.

White text on a white background. Microscopic font size. Hidden instructions.
The reader sees nothing. The AI, however, obeys.

This isn’t just a clever technical trick. It’s a sign of the times.

Because in a world where AIs help us read, choose, and decide—what happens when the AI itself is being manipulated, without our knowledge?
And even more unsettling: what’s left of our free will, if even the information we read has already been preformatted… for the machine that filters our perception?

👉 This article explores a new kind of manipulation. Subtle. Sneaky. Invisible. Yet remarkably effective.

OPINION