Philippe Buschini Posts

What if the rise of AI in medicine did not mark the end of doctors, but the beginning of a new era of care?

Since Hippocrates, physicians have drawn their legitimacy from knowledge. Yet, for the first time in modern history, they are no longer necessarily the ones who know the most. AI diagnoses faster, sees what the human eye cannot, and sometimes even drafts responses that patients find more reassuring than those of a professional.

So, should we fear the disappearance of doctors? Or should we rethink their place, their role, their unique value in a world where expertise is shared between human and machine?

OPINION

Last week, I talked about a point that’s often misunderstood: for AI, truth doesn’t exist.

Today, I’m taking the reasoning one step further. Because there’s an even deeper misconception: believing that an LLM is a knowledge base. It’s not. A language model generates probable word sequences, not verified facts. In other words, it recites with ease, but it never cites.

That’s exactly what I explore in my new article: why this confusion persists, and how to clearly distinguish between parametric memory and explicit memory, so we can finally combine them the right way.
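The pattern behind "combining them the right way" can be sketched in a few lines: explicit memory supplies verifiable sources, and the model is asked to answer from those sources rather than recite from its weights. This is a minimal illustration under assumed names (`DOCUMENTS`, `retrieve`, `build_prompt` are invented for the example); the retrieval is a toy keyword match and the model call itself is deliberately left out.

```python
# Minimal sketch: pairing parametric memory (the model's weights) with
# explicit memory (a store of checkable documents). The retrieval is a toy
# keyword match; the actual model call is omitted, since the point is the
# pattern, not any particular API.

DOCUMENTS = {
    "hippocrates": "Hippocrates of Kos is traditionally dated to c. 460-370 BC.",
    "transformer": "The transformer architecture was introduced in 2017.",
}

def retrieve(question: str) -> list[str]:
    """Explicit memory: return documents whose key appears in the question."""
    q = question.lower()
    return [text for key, text in DOCUMENTS.items() if key in q]

def build_prompt(question: str) -> str:
    """Ground the model: make it cite sources instead of reciting."""
    sources = retrieve(question)
    context = "\n".join(f"- {s}" for s in sources) or "- (no source found)"
    return f"Answer using ONLY these sources:\n{context}\n\nQuestion: {question}"

print(build_prompt("When did the transformer architecture appear?"))
```

The model still writes the fluent sentence; the explicit store is what makes the sentence checkable.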

OPINION

We’ve painted the word kindness in so many pastel colors that it’s become unrecognizable.
Today, it’s more often a smokescreen than a value, a cover-up to justify inaction, weakness, even cowardice.

Saying NO is now suspect; setting boundaries is seen as toxic; demanding effort is considered violent. The result? Empty papers get applause, nothingness is celebrated as brilliance, and we dare to call it kindness.

But if protecting, loving, educating, and working together still mean anything, then it’s time to remember that real kindness doesn’t always flatter. It protects by being clear-eyed; it builds by being demanding.

OPINION COLUMN

An AI doesn’t lie. But it doesn’t tell the truth either. It doesn’t know what is true or false — it only calculates probabilities. Its “reasoning” boils down to predicting which word is most likely to follow the previous one, based on the billions of sentences it has been trained on.

The result can be dazzling: fluid, elegant, convincing. Yet this fluency is nothing more than an illusion. What we read is not verified knowledge, but a sequence of words that “fit.” Sometimes accurate, sometimes wrong, sometimes neither — without the machine ever being aware of it.

The real danger is not the AI itself, but our very human reflex: confusing coherence with truth. In other words, mistaking the appearance of knowledge for knowledge itself. It’s this subtle, almost invisible shift that opens the door to confusion, born of ignorance about how it works, and of overconfidence in what merely “sounds right.”
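The mechanism described above can be shown with a deliberately tiny model. This bigram sketch (an assumption-laden miniature, not how any production LLM is built) only counts which word tends to follow which; fed a corpus where a false claim is more frequent than a true one, it fluently outputs the false one, because it optimizes for likelihood, not truth.

```python
from collections import Counter, defaultdict

# Toy bigram model: an LLM in miniature. It knows only which word tends to
# follow which -- it has no notion of true or false.
corpus = ("the moon is made of rock . "
          "the moon is made of cheese . "
          "the moon is made of cheese .").split()

# Count, for each word, how often each other word follows it.
following: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word: str) -> str:
    """Return the statistically likeliest continuation, true or not."""
    return following[word].most_common(1)[0][0]

# "of cheese" occurs twice, "of rock" once: the model confidently
# continues with the falsehood, because coherence is all it measures.
print(most_probable_next("of"))  # -> "cheese"
```

Scale the corpus up by a few billion sentences and the principle is unchanged: what comes out is the most probable continuation, which may or may not be a fact.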

OPINION

📌 Friday mood post 📌

BREAKTHROUGH: I’ve Discovered the Holy Grail of Disruptive Eco-Responsibility

My friends, we’re living in MAGICAL times.

I just witnessed a company that received the “Climatically Transcended Enterprise” label because they replaced plastic cups with… recycled cardboard cups… imported from Japan. By plane. In plastic packaging.

But wait, it gets BRILLIANT:

Their “Chief Happiness & Carbon Offset Officer” (yes, that’s a real title) explains that their 3D printer running 24/7 is now “carbon neutral” thanks to a “Symbiotic Impact Partnership” with a Bolivian farmer who promised NOT to cut down a tree.

Which one? We don’t know. Where? Trade secret.

And the cherry on top: their upcoming 47-person meeting in Dubai to discuss “Digital Sobriety” will be offset by purchasing “3.7 square meters of Amazonian forest benevolence.”

Via a mobile app, naturally.

OPINION COLUMN