
What if the rise of AI in medicine did not mark the end of doctors, but the beginning of a new era of care?

Since Hippocrates, physicians have drawn their legitimacy from knowledge. Yet, for the first time in modern history, they are no longer necessarily the ones who know the most. AI diagnoses faster, sees what the human eye cannot, and sometimes even drafts responses that patients find more reassuring than those of a professional.

So, should we fear the disappearance of doctors? Or should we rethink their place, their role, their unique value in a world where expertise is shared between human and machine?

OPINION

Last week, I talked about a point that’s often misunderstood: for AI, truth doesn’t exist.

Today, I’m taking the reasoning one step further. Because there’s an even deeper misconception: believing that an LLM is a knowledge base. It’s not. A language model generates probable word sequences, not verified facts. In other words, it recites with ease, but it never cites.

That’s exactly what I explore in my new article: why this confusion persists, and how to clearly distinguish between parametric memory and explicit memory, so we can finally combine them the right way.
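To make the distinction concrete, here is a toy sketch of my own (not code from the article): the "explicit memory" side is a tiny document store the system can actually quote, and the answer is refused when no source is found — the opposite of a model reciting from its parameters. The document texts and IDs are invented for illustration.

```python
# Toy sketch, not a real retrieval system: explicit memory is a store of
# identifiable documents; an answer either quotes a source or is withheld.
documents = {
    "doc1": "Aspirin was first synthesized in 1897.",
    "doc2": "The Eiffel Tower is 330 metres tall.",
}

def retrieve(question: str) -> list[tuple[str, str]]:
    """Explicit memory: return (id, text) pairs sharing words with the question."""
    q_words = set(question.lower().split())
    hits = []
    for doc_id, text in documents.items():
        if q_words & set(text.lower().rstrip(".").split()):
            hits.append((doc_id, text))
    return hits

def answer(question: str) -> str:
    hits = retrieve(question)
    if not hits:
        return "I don't know."  # no source found: refuse rather than recite
    # A real system would hand these passages to the LLM as context;
    # quoting the source directly makes the point: now it can cite.
    doc_id, text = hits[0]
    return f"{text} [source: {doc_id}]"

print(answer("How tall is the Eiffel Tower?"))
```

The key design choice is that every answer carries a document ID — the system recites *and* cites, which parametric memory alone cannot do.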

OPINION

An AI doesn’t lie. But it doesn’t tell the truth either. It doesn’t know what is true or false — it only calculates probabilities. Its “reasoning” boils down to predicting which word is most likely to follow the words that came before it, based on the billions of sentences it has been trained on.

The result can be dazzling: fluid, elegant, convincing. Yet this fluency is nothing more than an illusion. What we read is not verified knowledge, but a sequence of words that “fit.” Sometimes accurate, sometimes wrong, sometimes neither — without the machine ever being aware of it.
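The mechanism can be reduced to a toy you can run. This is my own illustration, not the article's: a word-pair probability table (a stand-in for billions of learned parameters) from which the program samples one likely next word at a time. Every transition is "probable", yet nothing checks whether the resulting sentence is true.

```python
import random

# Toy stand-in for a language model: a table of "which word tends to
# follow which", with no notion of truth attached to any entry.
bigram_probs = {
    "the": {"moon": 0.5, "doctor": 0.5},
    "moon": {"is": 1.0},
    "doctor": {"is": 1.0},
    "is": {"made": 0.5, "tired": 0.5},
    "made": {"of": 1.0},
    "of": {"cheese": 1.0},
}

def generate(word, steps, rng):
    """Sample `steps` next words from the probability table, one at a time."""
    out = [word]
    for _ in range(steps):
        options = bigram_probs.get(out[-1])
        if not options:
            break  # no known continuation: stop
        words, weights = zip(*options.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

rng = random.Random(0)
print(generate("the", 5, rng))  # fluent by construction, true only by accident
```

Depending on the sampled path, the same table yields "the doctor is tired" or "the moon is made of cheese" — equally probable to the machine, and that is the whole point.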

The real danger is not the AI itself, but our very human reflex: confusing coherence with truth. In other words, mistaking the appearance of knowledge for knowledge itself. It’s this subtle, almost invisible shift that opens the door to confusion, born of ignorance about how it works, and of overconfidence in what merely “sounds right.”

OPINION

Last week, I told you about the ants—those quiet beings who hold the world together while others parade on stage. This week again, I won’t be talking about artificial intelligence, robots, algorithms, or generative AI…

Once more, I’m staying in this very human, very intimate vein. Still about us. Always about us. Because before understanding what machines do to our thinking, we might first need to understand what we’ve done to our own capacity to think.

This time, I’m taking you into more subtle, more troubling territory: our relationship with our own ideas. A silent shift that concerns us all, connected or not, technophiles or technophobes.

I promise, starting next week, I’ll resume my “AI in All Its States” series. But for now, let me tell you about this strange thing that happens to us when we stop inhabiting our own questions…

You type a question into your search engine. In 0.3 seconds, you have your answer. Satisfying, right?

Yet… something strange is happening. This bewildering ease might be hiding a deeper transformation in our relationship with thinking.

There was a time when searching was already an act in itself. When not knowing immediately wasn’t a problem to solve, but a space to inhabit. Today, we slide from one answer to the next, from one pre-digested content to another. We validate more than we choose. We apply more than we understand.

But what happens when thinking becomes optional? Between the seductive efficiency of our tools and our old habit of thinking for ourselves, a silent shift is taking place. Not brutal, not visible. Just… comfortable.

The question isn’t whether technology is good or bad. It lies elsewhere, more intimate: do we still recognize our own voice when we think?

OPINION COLUMN