Philippe Buschini Posts

Three centuries ago, a weaver fled hidden in a sack of wool. Today, we throw the doors of our lives wide open to those who want to weave in our place.

In 1733, the flying shuttle disrupted the world of craftsmanship. In 2025, it’s artificial intelligence that is reshaping our gestures, our professions, our identities.
But the real rupture might not be what we think. It’s no longer just our skills we delegate. It’s the very taste for thinking.

And this time, there may be no way back.

In 1733, we burned the machines. In 2025, we applaud them. But in both cases, it's the human being who gets sacrificed.

👉 Read on if you, too, feel that something essential is starting to melt away.

OPINION

📌 Friday mood post 📌

When everything burns, some wait for it to pass. Slowly. Very slowly.

While France suffocates under heat waves and waves of inaction, one man single-handedly embodies the art of governing without governing, of enduring without acting, of occupying without disturbing. François Bayrou, undisputed champion of strategic standstill.

Here's a deep dive into this shadow theater, where silences stand in for speeches, sluggishness for method, and limp hope for a political line. A real-time satire of screensaver governance, where the software still runs… without anyone knowing whether there's still someone in front of the screen.

Read this if you enjoy:

– Administrative oxymorons,
– The yoga of inaction,
– And plans without a plan.

Spoiler alert: we might have confused “stability” with “immobilism,” and “leadership” with “screensaver mode.”

OPINION COLUMN

260 McDonald’s nuggets in a single order. An Air Canada chatbot lying to a grieving customer. A recruiting algorithm that blacklists everyone over 40.

Welcome to 2024, the year artificial intelligence showed its true colors. And spoiler alert: it’s not pretty.

While everyone was gushing over ChatGPT, companies were brutally discovering a harsh truth: when your machines screw up, YOU pay the price.

Gone are the golden days when you could shrug and mutter “it’s just a computer glitch.” The courts have spoken: your algorithms, your responsibility. End of story.

Europe legislates with the AI Act (180 pages of bureaucratic bliss). The US innovates at breakneck speed. China controls everything. Meanwhile, our companies are discovering that building responsible AI is like flying a fighter jet blindfolded in a thunderstorm.

The most ironic part? This silent revolution won’t just determine who pays for tomorrow’s disasters. It will decide who dominates the global economy for the next 50 years.

So, ready to discover why your next nightmare might go by the sweet name of “algorithm”? 👇

OPINION

📌 Friday mood post 📌

Yesterday, I laughed like a dying seal at the CFO’s terrible joke.

Not because it was funny. But because everyone else was laughing.

And that, folks, is competitive-level FOPO: Fear Of People’s Opinions.

That little voice in your head that makes you:

– Validate absurd projects that are “purpose-driven”
– Fake enthusiasm for kraft paper ice-breakers
– Pretend you’re passionate about design thinking

Your reptilian brain still thinks that being rejected by the tribe = ending up naked in the forest, negotiating territory with an existentially confused wild boar.

Today, it’s just called “not being aligned with team dynamics.”

Dr. Michael Gervais even created a fancy acronym for your office paranoia. Because in 2025, even anxiety needs to be optimized.

But don’t worry: there’s an experiential workshop for that. Eco-friendly tote bag included.

Spoiler alert: the solution might not be in yet another personal development smoothie…

👇 Thread breaking down this new gourmet mental fatigue

OPINION COLUMN

HackAtari, or how a deceptively simple test brought the most sophisticated AIs to their knees.

They used to dominate video games. Boasted superhuman performance. And then one day, the games were made easier. The result? They collapsed.

Why? Because they never truly understood what they were doing.

And that’s where everything shifts.

In a study as brilliant as it is unsettling, Quentin Delfosse and his team expose a powerful illusion: that of systems that excel… as long as nothing changes.

They came up with HackAtari, a clever test built on simplified versions of classic Atari games. A test that, instead of making tasks harder, makes them easier — and yet it uncovers a glaring weakness. Because when you remove the obstacles, the AIs stumble. Where a human adapts and makes sense of the change, the machine falls apart.

What does HackAtari really tell us? That an AI can ace the exam… without ever understanding the questions. That it can repeat, optimize, correlate… without ever reasoning.

What if our AIs were, in truth, nothing more than top-of-the-class students — reciting without understanding?

👉 This isn’t a performance test, it’s a truth test. One that doesn’t measure what an AI does, but what it understands. And it leaves us with a quietly disturbing question: Do our AIs actually understand what they’re doing?
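To make the idea concrete, here is a small, self-contained sketch of the failure mode the study points to. Everything in it is invented for illustration: the toy "catch the ball" game, the "memorizer" and "reader" policies, and the scoring are assumptions of this sketch, not HackAtari's code or one of its game variants. It only reproduces the shape of the argument: make the game easier, and a system that memorized the original routine gets worse, while one that actually reads the state does not.

```python
# Toy illustration (NOT the HackAtari benchmark): score two policies on an
# original game and on a simplified variant of it.
#
# The game: at each step a ball sits in one of 5 columns and the paddle must
# be moved to that column to score a point.
# - In the ORIGINAL game the ball sweeps back and forth on a fixed cycle.
# - In the SIMPLIFIED variant the ball simply stays in the middle column.

def ball_position(t, simplified=False):
    """Where the ball is at time t."""
    if simplified:
        return 2                             # easier: the ball never moves
    cycle = [0, 1, 2, 3, 4, 3, 2, 1]         # original back-and-forth sweep
    return cycle[t % len(cycle)]

def play(policy, steps=80, simplified=False):
    """Return the fraction of steps on which the paddle caught the ball."""
    points = 0
    for t in range(steps):
        ball = ball_position(t, simplified)
        paddle = policy(t, ball)
        points += (paddle == ball)
    return points / steps

# Policy 1: the "memorizer" replays the timing it learned on the original
# game and never looks at the ball.
def memorizer(t, ball):
    cycle = [0, 1, 2, 3, 4, 3, 2, 1]
    return cycle[t % len(cycle)]

# Policy 2: the "reader" looks at the observation and follows the ball.
def reader(t, ball):
    return ball

for name, policy in [("memorizer", memorizer), ("reader", reader)]:
    print(f"{name:10s}  original: {play(policy):.0%}   "
          f"simplified: {play(policy, simplified=True):.0%}")
```

Run as-is, the memorizer catches every ball on the original sweep but only about a quarter of them once the ball stops moving; the reader stays perfect in both. That is the HackAtari intuition in miniature: the game got easier, and the system that never understood it got worse.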

OPINION