From the Weaver’s Shuttle to Artificial Intelligence

There’s something fascinating about watching history repeat itself before our eyes. Not identically, of course, but with troubling echoes that give the impression of an eternal recurrence. As if time weren’t the straight line we imagine, but a spiral that brings us back to the same questions in new forms. Perusing the archives of the Industrial Revolution these past months, I couldn’t help but see in today’s reactions to artificial intelligence a mirror of what unfolded nearly three centuries ago around weaving looms.

Let’s take a moment to transport ourselves to 1733. John Kay, an English inventor, developed the flying shuttle. The idea may seem trivial today, but it revolutionized the textile world. Until then, weaving wide fabrics required two people, one on each side of the loom, passing the shuttle from hand to hand. With Kay’s invention, a simple mechanism let a single worker send the shuttle “flying” from one end of the loom to the other with a flick of the wrist.

The result was spectacular. Not only did production accelerate considerably, but you could now weave pieces much wider than before. The first workshops to adopt this innovation saw their productivity soar. Some workshop owners got rich quickly, benefiting from this competitive advantage.

But very quickly, a nagging question emerged in weaving communities: if one man could do the work of two, what would become of all those artisans who would no longer be needed? This concern, first whispered in workshops, eventually spread throughout the entire profession.

The fear of replacement is a feeling as old as… innovation

Reactions weren’t long in coming. As early as September 1733, just months after the invention, the weavers of Colchester “were so worried about their livelihood that they petitioned the king to stop Kay’s inventions.” This petition addressed to King George II reveals the extent of their fears in the face of this innovation.

The weavers didn’t just fear losing their jobs; they saw this invention as an existential threat. They “accused him of wanting to take away their bread,” an expression that conveys the intensity of their fear. Being a weaver meant mastering a craft passed from father to son, belonging to a community, having a place in society.

Because to touch a trade is to touch a person’s very being. We don’t just do our work; we are who we are through it. When that identity wavers, the whole question of meaning arises: who am I if my gestures no longer have value? What becomes of a person when their skills become obsolete?

The hostility was such that Kay had to leave Colchester for Leeds, then return to Bury. In 1753, “a real riot broke out, the crowd entered his house and ransacked it.” The inventor had to flee, hidden in a wool sack according to legend, before going into permanent exile in France.

Today, when a lawyer discovers that an AI can analyze contracts faster than he can, when a teacher realizes that a program can grade papers and even propose personalized exercises, when a doctor sees an algorithm making diagnoses with troubling precision, don’t they experience the same vertigo? This sensation that their expertise, patiently built over years of study and experience, could become obsolete overnight.

It’s no longer just physical strength or manual skill that’s at stake, as was the case with the flying shuttle. It’s intelligence itself, or at least what appears to be intelligence. These professionals see machines accomplishing in seconds what took them hours of reflection, analysis, cross-referencing. The trouble is all the deeper because it touches the very essence of their professional identity.

This personal vertigo, multiplied by millions of individuals, eventually crystallizes into collective anxiety. And like their weaver predecessors, today’s professionals don’t remain silent in the face of what overwhelms them. Because ultimately, yesterday’s resistance, like today’s, often expresses the same thing: a demand for regulation, a call not to let technology alone dictate our societal choices. The weavers demanded protections, time to adapt. Isn’t this exactly what those who call for slowing AI development are asking for today, giving us time to understand its implications?

These concerns are legitimate. They testify to a collective wisdom that invites us to caution. And yet, by observing our behaviors toward AI more closely, a troubling difference emerges from the story we just told.

But here’s where the parallel reveals its troubling complexity: unlike the weavers of 1733 who resisted fiercely, we adopt AI eagerly. We invite it into our offices, our homes, our pockets. This difference isn’t trivial. It reveals something more subtle and perhaps more disturbing: a form of freely consented submission.

Where our predecessors clearly saw the threat and opposed it, we mostly perceive the promise. AI seduces us with its ease, charms us with its efficiency. It doesn’t impose itself, it proposes itself. And that’s precisely what makes it so powerful.

The Spanish-American philosopher George Santayana said: “Those who cannot remember the past are condemned to repeat it.” History teaches and informs us, but we tend to forget its most precious lessons. Yet, if we take the time to observe, a pattern emerges: trades don’t disappear, they transform. Yesterday’s weavers became today’s textile machine operators. Certainly, this transformation didn’t happen without friction; there was resistance, anger, what came to be called “Luddism”: workers breaking machines out of despair. But the movement was inevitable.

This lesson from history could reassure us. After all, if humanity survived the flying shuttle, steam engines, assembly lines, why would artificial intelligence be different? Why not simply trust this capacity for adaptation that has always carried us?

This is where the historical parallel reveals its limits. Because observing echoes from the past must not blind us to what is truly changing.

A revolution of a new kind

For the first time in our history, we’re not just transforming our relationship to work, we’re questioning our very way of thinking. It’s the factory of the mind that wavers, that slow alchemy of human thought where doubt, effort, time and error construct meaning. Because thinking isn’t just processing information. It’s inhabiting a question, carrying it within oneself until it becomes familiar, then strange again. It’s this intimacy with uncertainty that we risk losing. John Kay’s flying shuttle, like all the innovations that followed, mechanized the gesture, accelerated production, but preserved human intelligence. A weaver learned to use a new machine, but his judgment remained intact, his capacity for reflection preserved.

With AI, we cross an unprecedented threshold. For the first time in human history, we create tools that give the illusion of thinking. These “machines that guess the next word,” as we might describe them, don’t just assist us in our tasks. They relieve us… of thinking.

We are tired. Over-solicited. Constantly caught up in information flows, notifications, quick decisions to make. So when a machine offers a quick, clear, convincing answer, we settle for it. We breathe. We delegate. Gradually, effort retreats. Doubt becomes uncomfortable. Questioning becomes superfluous.

When calculators arrived in schools, we did indeed lose the taste for mental arithmetic. But in exchange, we learned to reason differently and strengthened other skills: logic, abstraction, modeling. With AI, the risk is of another order. It’s no longer a calculation we delegate, it’s part of our reasoning. It’s no longer an operation we externalize, it’s a question we stop asking ourselves.

This inner shift has very real consequences. When our intentions are steered, our professional choices, our aspirations, and our very ways of imagining work are reconfigured as well. Artificial intelligence doesn’t just modify our way of thinking; it profoundly redraws the contours of the working world, its hierarchies, its opportunities, its fractures.

We’re probably witnessing the same historical phenomenon with AI. Professions we couldn’t have imagined even five years ago now sustain entire ecosystems employing thousands of people. But beware of falling into blind optimism.

While each technological revolution generally creates more jobs than it destroys, it isn’t necessarily the same people or the same places that benefit. The mechanization of textiles gave birth to mechanical engineers, specialized foremen, and transporters carrying goods to ever more distant markets.

And then, while history repeats itself in broad strokes, it never repeats exactly. The current speed of transformation is unprecedented. Where it took decades for an innovation to spread in the 18th century, today it takes only a few years, a few months, even a few weeks. And we are not prepared for this at all.

But above all, we must ask ourselves: do these new jobs preserve our capacity for critical thinking? Or do they participate, despite themselves, in this “melting of knowledge” where we end up recycling syntheses that have never been truly thought through?

Here we are facing the paradox of our era: we freely choose the tools that could alienate us. This consented submission makes resistance infinitely more complex than in the time of the Luddites.

This question of recycling isn’t trivial. Because something unprecedented is happening before our eyes: for the first time in history, our thinking assistance tools feed… on their own productions. Texts generated by AI serve to train other AIs. Synthetic images feed new models. Automated analyses become the raw material for even more automated analyses.

This loop, seemingly technical at first glance, hides a formidable trap. Because what happens when an artificial intelligence learns by looking at itself in the mirror?

The reassuring illusion of a cultural veneer

This circular recycling hides a deeper trap: knowledge without anchor, knowledge without thought.

This phenomenon deserves our attention because it reveals something unprecedented in the history of technology. Never before have our tools had this capacity to feed on their own creations, to loop back on themselves. To understand where this leads, we must look elsewhere for examples, and it’s the living world that offers the most enlightening parallel.

Biology offers us a troubling parallel: autophagy. This is the process by which a cell recycles its own waste to survive. Useful, within certain limits. But when it gets out of hand, the cell devours itself. It’s no longer cleaning, it’s self-destruction.

Artificial intelligence reproduces this movement, on the scale of culture: it reformulates reformulations, recopies copies, smooths every roughness. With each cycle, it gains apparent coherence… and loses a little more substance. It’s no longer knowledge that circulates, but its spectral double. An illusion of depth. A simulacrum of thought.
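This loss of substance with each cycle can even be sketched numerically. What follows is a toy simulation, not a description of any real training pipeline: each “generation” fits a simple Gaussian model to the samples produced by the previous generation, then generates new samples from that fit. Every name in it is invented for illustration; in the machine-learning literature, this kind of degradation is studied under the name “model collapse.”

```python
import random
import statistics

def recycle(generations=1000, n_samples=20, seed=42):
    """Toy 'model collapse' loop: each generation is 'trained'
    (here: a Gaussian fit) on samples produced by the previous one."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0              # the original, human-made "culture"
    diversity = [sigma]
    for _ in range(generations):
        # produce "content" from the current model...
        samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        # ...then fit the next model on that content alone
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples)
        diversity.append(sigma)
    return diversity

d = recycle()
print(f"diversity at the start: {d[0]:.3f}; after {len(d) - 1} cycles: {d[-1]:.2e}")
```

Over the cycles, the estimated spread drifts downward: the copies of copies grow ever more uniform. The arithmetic is crude, but it captures the mechanism described above, in which a system fed only its own output tends to lose exactly what made the original varied.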

This perpetual recycling poses a dizzying question: what becomes of truth in a world where only its copies circulate? We may be witnessing the birth of an unprecedented reality: that of knowledge without origin, of a culture orphaned from its sources. A world where the authentic and the artificial are no longer distinguishable, not because the artificial has become perfect, but because we have forgotten the taste of the authentic.

What we call “content” today often no longer has an identifiable source. It has been produced by machines, from other machine productions. Doubt fades. Complexity retreats. Meaning is exhausted. And with it, our relationship to the world.

This is what Bernard Stiegler already sensed when he wrote: “Proletarians are not necessarily poor people, they are people who no longer understand anything.”

It’s no longer material poverty that threatens, but the loss of our capacity to understand. Not because AI thinks in our place, but because by dispensing us from it, it makes the effort to think useless. And that effort, once lost, is difficult to regain.

Staying human in a world amplified by machines

It would be tempting to dismiss the Luddites as blind traditionalists. That forgets that they posed legitimate questions: who benefits from these innovations? How can we ensure that technical progress serves the common good and not just a privileged few? These questions resonate strangely in our current debates about the concentration of power in the hands of tech giants.

Because history reminds us of a disturbing truth: every technological revolution creates winners and losers, and this distribution rarely follows the logic of merit. The first owners of textile machines grew considerably richer, while many artisans sank into poverty.

We risk reliving the same scenario with AI. Those who master these tools and possess the capital to deploy them could widen the gap with those who merely endure them. Hence the crucial importance of the political choices we make now: training, redistribution, regulation.

What strikes me most in this historical comparison is the remarkable capacity for adaptation shown by our predecessors. In one or two generations, entire societies shifted from agriculture to industry. Millions of people left their villages, learned new trades, adopted completely different rhythms of life. Certainly, this was done painfully, with considerable human costs that we must not minimize.

But I’m not certain we have the same capacity for adaptation today. First because the pace has considerably accelerated, our ancestors had decades to adjust, we only have a few years. Then because our society has become infinitely more complex and specialized. An 18th-century peasant could relatively easily become a worker; can we imagine that a fifty-year-old accountant could as easily retrain as a “prompt engineer”? Finally, and this is perhaps the most troubling, we have lost part of that collective resilience that allowed communities to face upheavals together.

The Industrial Revolution eventually gave birth to a new social contract: welfare state, labor law, public education. We are probably on the eve of comparable transformations. But this time, the contract to redefine is not just social, it’s anthropological.

Because the real question isn’t “What can AI do?” but “How do we want to remain human in a world amplified by machines?” The real danger isn’t that AI thinks in our place, it’s that we stop thinking ourselves.

What if thinking were our last gesture of freedom in the face of this submission so sweet that we adopt it without noticing? In a world that always offers an answer, choosing the question becomes an act of resistance against our own complacency.

It’s not about fighting AI, but refusing to let it take center stage. AI can assist, reveal, simplify. But it will never know what a silence means, what a doubt contains, what a real choice costs.

So perhaps we must give back to the word “intelligence” what it has that’s most human: the slowness of discernment, the vertigo of doubt, the joy of understanding.

Not becoming the shuttle

John Kay had fled England in 1753, hidden in a wool sack, chased by the very people his invention would enrich a few decades later. Today, no AI inventor flees in a sack. On the contrary, we applaud them on conference stages, we pre-order their products, we integrate their tools into our private and professional lives with an eagerness that would have stunned the weavers of Colchester.

This difference says everything. The weavers saw the danger coming and opposed it. We invite it to dinner.

But perhaps we can learn something from these weavers: not their destructive anger, but their lucidity. They had understood that a technical innovation is never neutral, that it reshuffles the deck, that it transforms not only the work but the person who works.

There’s a profound irony in our situation: we who invented machines to free ourselves from constraints discover that true freedom perhaps resided in those very constraints. In the effort to understand, in the necessity to choose, in the obligation to doubt. Sartre said we are “condemned to be free.” Today, we risk being freed… from being free.

So yes, it will take courage. But not the courage of the hero or the resistance fighter. The more modest courage of today’s weaver who, at his automated loom, still decides to understand how the stitch he makes works. The courage of the teacher who, despite the AI that grades in his place, continues to really read his students’ papers. The courage of the doctor who, faced with the diagnostic algorithm, still takes time to listen to what the patient says.

Because if we don’t want to end up like that flying shuttle, useful, efficient, but deprived of its own will, we must preserve what makes us irreplaceable: not our speed or our precision, but our capacity to doubt, to be wrong, to correct ourselves. To weave meaning, not just productivity.

In a world where machines learn to imitate intelligence, our last privilege might well be to remain intelligently human.