Philippe Buschini Posts

How many times have you said this phrase while mindlessly accepting cookies on a website?

Yesterday morning, I watched my daughter check her phone. A simple, innocent gesture. Yet in just a few seconds, she had revealed her current mood, her sleep patterns, her location, and even her evening plans.

Without knowing it, she was feeding her “invisible digital portrait” – that silhouette made up of thousands of micro-traces we scatter every day.

The problem? This portrait no longer belongs to you. It circulates, gets sold, grows richer. It can predict your desires before you even feel them. And in the wrong hands, it becomes a formidable weapon.

The real question isn’t “What are you hiding?” but “Why should you give up your privacy?”

In a world where forgetting becomes impossible, where every click shapes your future, protecting your data is no longer an individual luxury: it’s the very condition of your freedom.

OPINION

What if the rise of AI in medicine did not mark the end of doctors, but the beginning of a new era of care?

Since Hippocrates, physicians have drawn their legitimacy from knowledge. Yet, for the first time in modern history, they are no longer necessarily the ones who know the most. AI diagnoses faster, sees what the human eye cannot, and sometimes even drafts responses that patients find more reassuring than those of a professional.

So, should we fear the disappearance of doctors? Or should we rethink their place, their role, their unique value in a world where expertise is shared between human and machine?

OPINION

Last week, I talked about a point that’s often misunderstood: for AI, truth doesn’t exist.

Today, I’m taking the reasoning one step further. Because there’s an even deeper misconception: believing that an LLM is a knowledge base. It’s not. A language model generates probable word sequences, not verified facts. In other words, it recites with ease, but it never cites.

That’s exactly what I explore in my new article: why this confusion persists, and how to clearly distinguish between parametric memory and explicit memory, so we can finally combine them the right way.
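The distinction between parametric and explicit memory can be caricatured in a few lines of code. This is a deliberately naive sketch, not a real system: the fact store, the questions, and the stand-in “model” are all illustrative assumptions. The point is only the shape of the idea: explicit memory recites *and* cites; parametric memory recites with ease but never cites.

```python
# Sketch: "parametric memory" (what a model absorbed into its weights,
# caricatured here as a fluent guesser) vs. "explicit memory" (a store
# of verifiable facts with sources). All names/data are illustrative.

# Explicit memory: facts paired with a citable source.
FACT_STORE = {
    "capital of france": ("Paris", "atlas, 2024 edition"),
}

def parametric_answer(question: str) -> str:
    """Stand-in for an LLM: always fluent, never sourced."""
    return "A plausible-sounding answer."  # probable words, no citation

def grounded_answer(question: str) -> str:
    """Combine both memories: prefer explicit facts, fall back to generation."""
    hit = FACT_STORE.get(question.lower().strip("? "))
    if hit:
        answer, source = hit
        return f"{answer} (source: {source})"  # recites AND cites
    return parametric_answer(question)  # recites, never cites

print(grounded_answer("Capital of France?"))
print(grounded_answer("Meaning of life?"))
```

Real systems do this with retrieval over document stores rather than a dictionary, but the division of labor is the same: the model supplies fluency, the explicit store supplies verifiability.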

OPINION

We’ve painted the word kindness in so many pastel colors that it’s become unrecognizable.
Today, it’s more often a smokescreen than a value, a cover-up to justify inaction, weakness, even cowardice.

Saying NO is now suspicious, setting boundaries is seen as toxic, demanding effort is considered violent. The result? Empty papers get applause, nothingness is celebrated as brilliance, and we dare to call it kindness.

But if protecting, loving, educating, and working together still mean anything, then it’s time to remember that real kindness doesn’t always tell you what you want to hear. It protects by being clear-eyed, it builds by being demanding.

OPINION COLUMN

An AI doesn’t lie. But it doesn’t tell the truth either. It doesn’t know what is true or false — it only calculates probabilities. Its “reasoning” boils down to predicting which word is most likely to follow the previous one, based on the billions of sentences it has been trained on.

The result can be dazzling: fluid, elegant, convincing. Yet this fluency is nothing more than an illusion. What we read is not verified knowledge, but a sequence of words that “fit.” Sometimes accurate, sometimes wrong, sometimes neither — without the machine ever being aware of it.

The real danger is not the AI itself, but our very human reflex: confusing coherence with truth. In other words, mistaking the appearance of knowledge for knowledge itself. It’s this subtle, almost invisible shift that opens the door to confusion, born of ignorance about how it works, and of overconfidence in what merely “sounds right.”
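The mechanism described above, predicting which word most likely follows the last one, can be made concrete with a toy model. This is a minimal sketch under stated assumptions: the vocabulary and probabilities below are invented for illustration, not taken from any real model. Notice that nothing in the loop checks whether the output is true; it only checks what is probable.

```python
import random

# Toy bigram "language model": for each word, assumed probabilities of
# the next word. All words and weights here are illustrative inventions.
NEXT_WORD_PROBS = {
    "the": {"moon": 0.6, "sun": 0.4},
    "moon": {"is": 1.0},
    "sun": {"is": 1.0},
    "is": {"made": 0.5, "bright": 0.5},
    "made": {"of": 1.0},
    "of": {"cheese": 0.7, "rock": 0.3},  # fluent, not necessarily true
}

def generate(start: str, max_words: int = 6, seed: int = 0) -> str:
    """Chain 'probable' next words together; no truth check anywhere."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(max_words - 1):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:  # no known continuation: stop
            break
        candidates, weights = zip(*options.items())
        words.append(rng.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. a fluent sequence like "the moon is made of cheese"
```

Real models work on billions of parameters and whole contexts rather than single word pairs, but the principle is the same: the output is a sequence of words that “fit,” never a claim the machine has verified.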

OPINION