Tag: AI

Last week, I talked about a point that’s often misunderstood: for AI, truth doesn’t exist.

Today, I’m taking the reasoning one step further. Because there’s an even deeper misconception: believing that an LLM is a knowledge base. It’s not. A language model generates probable word sequences, not verified facts. In other words, it recites with ease, but it never cites.

That’s exactly what I explore in my new article: why this confusion persists, and how to clearly distinguish between parametric memory and explicit memory, so we can finally combine them the right way.
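For readers who want to see what "combining them the right way" can look like in practice, here is a minimal sketch. It is not the article's implementation: the `generate()` stub is a hypothetical stand-in for whatever model you actually call, and a small dictionary plays the role of explicit memory.

```python
# Minimal sketch: combining parametric memory (the model's weights) with
# explicit memory (a store of documented, citable facts).
# generate() is a hypothetical stub standing in for a real LLM call.

EXPLICIT_MEMORY = {
    "boiling point of water": ("100 °C at standard atmospheric pressure", "CRC Handbook"),
}

def retrieve(question: str):
    """Look the question up in explicit memory; return (fact, source) or None."""
    for key, entry in EXPLICIT_MEMORY.items():
        if key in question.lower():
            return entry
    return None

def generate(prompt: str) -> str:
    """Stand-in for the LLM: parametric memory alone, fluent but unsourced."""
    return f"[model answer to: {prompt}]"

def answer(question: str) -> str:
    hit = retrieve(question)
    if hit is None:
        # Parametric memory only: plausible wording, no citation.
        return generate(question)
    fact, source = hit
    # Explicit memory grounds the wording and lets the answer cite its source.
    return generate(f"{question}\nUse this fact: {fact}") + f" (source: {source})"

print(answer("What is the boiling point of water?"))
```

The division of labour is the whole point: the explicit memory supplies the fact and the citation; the model only supplies the phrasing.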

OPINION

An AI doesn’t lie. But it doesn’t tell the truth either. It doesn’t know what is true or false — it only calculates probabilities. Its “reasoning” boils down to predicting which word is most likely to follow the previous one, based on the billions of sentences it has been trained on.

The result can be dazzling: fluid, elegant, convincing. Yet this fluency is nothing more than an illusion. What we read is not verified knowledge, but a sequence of words that “fit.” Sometimes accurate, sometimes wrong, sometimes neither — without the machine ever being aware of it.
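To make that "sequence of words that fit" concrete, here is a toy sketch of next-word prediction. The probabilities are hand-written assumptions, not a real model, but the mechanism is the same: pick whatever continuation scores as most plausible, with no notion of whether the resulting sentence is true.

```python
import random

# Toy next-token prediction: a hand-written probability table stands in for
# the model. The mechanism never checks truth, only plausibility.

NEXT_WORD_PROBS = {
    "The capital of Australia is": {"Sydney": 0.55, "Canberra": 0.40, "Melbourne": 0.05},
}

def next_word(context: str) -> str:
    probs = NEXT_WORD_PROBS[context]
    words, weights = zip(*probs.items())
    # Sampling by probability: a frequent-but-false continuation can easily win.
    return random.choices(words, weights=weights)[0]

print("The capital of Australia is", next_word("The capital of Australia is"))
```

In this toy table, the answer people mention most often beats the correct one more than half the time, and nothing in the mechanism notices.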

The real danger is not the AI itself, but our very human reflex: confusing coherence with truth. In other words, mistaking the appearance of knowledge for knowledge itself. It’s this subtle, almost invisible shift that opens the door to confusion, born of ignorance about how it works, and of overconfidence in what merely “sounds right.”

OPINION

Last week, I told you about the ants—those quiet beings who hold the world together while others parade on stage. This week again, I won’t be talking about artificial intelligence, robots, algorithms, or generative AI…

Once more, I’m staying in this very human, very intimate vein. Still about us. Always about us. Because before understanding what machines do to our thinking, we might first need to understand what we’ve done to our own capacity to think.

This time, I’m taking you into more subtle, more troubling territory: our relationship with our own ideas. A silent shift that concerns us all, connected or not, technophiles or technophobes.

I promise, starting next week, I’ll resume my “AI in All Its States” series. But for now, let me tell you about this strange thing that happens to us when we stop inhabiting our own questions…

You type a question into your search engine. In 0.3 seconds, you have your answer. Satisfying, right?

Yet… something strange is happening. This bewildering ease might be hiding a deeper transformation in our relationship with thinking.

There was a time when searching was already an act in itself. When not knowing immediately wasn’t a problem to solve, but a space to inhabit. Today, we slide from one answer to the next, from one piece of pre-digested content to another. We validate more than we choose. We apply more than we understand.

But what happens when thinking becomes optional? Between the seductive efficiency of our tools and our old habit of thinking for ourselves, a silent shift is taking place. Not brutal, not visible. Just… comfortable.

The question isn’t whether technology is good or bad. It lies elsewhere, more intimate: do we still recognize our own voice when we think?

OPINION COLUMN

What if one day, your car made a decision for you… and got it wrong?

A fictional trial once tried to answer a question that no longer feels like fiction: can we put an artificial intelligence on trial like we would a human being?

Behind this courtroom drama lies a deeper dilemma about our digital future: who’s to blame when a machine causes a disaster, but no one truly understands how or why?

Still think the infamous “red button” would save you?

Think again.

OPINION