GPS corrects us before we even realize our mistake. Its synthetic voice, soft and sure, guides us back on track with the tenderness of a digital nanny. Spell-check polishes our sentences, erases our hesitations, replaces doubt with fluidity. Code assistants complete our thoughts before they fully form, guessing our intentions with unsettling precision. Everything seems simpler, faster, more efficient. These tools have become our invisible butlers, managing the details of our digital lives, smoothing out the rough edges, lubricating reality so it no longer resists. Through sheer convenience, effort disappears, replaced by the comfort of a click and the certainty of results.
But beneath this quiet perfection, a question takes root, at first faint, then insistent: what becomes of thought when it turns into a service? When reflecting, learning, creating, or deciding become transactions, priced by the task, what remains of the freedom to think for ourselves? There’s no revolt, no spectacular collapse. Just a shift, almost imperceptible: humans gradually cease being actors to become spectators of their own automatisms.
Philosopher Bernard Stiegler saw this tipping point coming long before artificial intelligence arrived. He read it as the sign of a profound process: the proletarianization of knowledge. The word evokes nineteenth-century factories, machines and soot. Yet it describes our connected era with almost painful accuracy: where machines once dispossessed workers of their tools, technology today dispossesses each of us of our thinking.
For Stiegler, this proletarianization isn’t just about economics or property. It’s a more intimate dispossession, almost invisible: the loss of our capacity to know, understand, and create by ourselves [1]. It’s not measured in capital, but in consciousness. It’s no longer just the tool being taken away—it’s the hand falling asleep, the mind delegating, the gaze turning away.
And without noise, without friction, we let the machine think in our place.
From Material to Cognitive Proletarianization
Before machines thought in our place, they first worked in our place. To understand what Bernard Stiegler calls the proletarianization of knowledge, we must return to the first scene of this long slide: where humans, facing their own invention, begin to dispossess themselves of what they know.
For Karl Marx, proletarianization isn’t just an economic term. It’s a story, one of upheaval: the artisan, master of his tools and his time, becomes a proletarian, a worker in a mechanism that no longer belongs to him. Here begins the double dispossession: he loses his means of production (land, machine, workshop) and then his freedom of action. Unable to create anymore, he sells his labor power, and in doing so, sells a bit of himself [2].
What Marx calls alienation is this progressive estrangement of man from his own gesture. The worker no longer recognizes his work, or even the meaning of what he does. His activity is fragmented, measured, repeated, dictated by the rhythm of machines and factory logic. Where work should be self-expression, it becomes commodity.
A century later, Harry Braverman would describe in Labor and Monopoly Capital the continuation of this dispossession: Taylorism and Fordism, by breaking down work into mechanical sequences, tore know-how from the worker’s body to transfer it to engineering office blueprints [3]. What the hand knew without saying becomes a diagram, a chart, a stopwatch. The gesture empties itself of thought, intelligence freezes in the machine.
But Stiegler would go further. What he saw was that this loss didn’t stop at factory gates: it seeped everywhere. Proletarianization is no longer just material, it becomes cognitive. It no longer touches just the producer, but the whole person.
Now, it’s not just the hand losing its knowledge, but memory, speech, thought itself. We’ve learned to delegate our gestures, then our ideas. And in this quiet delegation, another alienation settles in, softer, more invisible, but infinitely deeper.
When Knowledge Itself Is Expropriated
Bernard Stiegler takes up Marx’s legacy, but tilts it into another dimension. Because dispossession doesn’t stop at the factory: it spreads, silently, to society as a whole. What machines did to hands, screens now do to minds.
In the twentieth century, with radio, television, then digital technologies, a new type of alienation takes hold. Proletarianization spares no one: it affects consumers as much as producers. And it no longer attacks just gestures, but knowledge itself.
Stiegler speaks of generalized proletarianization [1]. To grasp the scope of this process, we must first understand the mechanism that makes it possible.
At the heart of this dispossession, Stiegler places a strange, almost technical word: grammatization. It’s the moment when a living flow (speech, gesture, thought) transforms into sequence, code, reproducible unit. Writing, for example, is a grammatization of speech: it breaks down voice into letters, makes it reproducible, transmissible. The assembly line is a grammatization of gesture. And today, algorithms grammatize our behaviors: our clicks, tastes, desires become data, decomposed, recomposed, predictable.
Behind this word hides a dizzying idea: humans have always built themselves by externalizing their knowledge. Each tool, each technical medium (flint, book, hard drive) is a tertiary retention, to use Stiegler’s term borrowed from Leroi-Gourhan [4]. These are our artificial memories, material extensions of our minds.
But when these media no longer belong to us, when they’re controlled by industries that capture our attention to resell it, the process reverses. What should individuate us dis-individuates us. What should connect us standardizes us. What should amplify our thinking replaces it.
We’re then caught in a strange mechanism: tools that should extend our intelligence become instruments of collective amnesia. Where once books liberated speech, algorithms imprison it in prediction. And in this shift, imperceptible but decisive, knowledge ceases to be a conquest—it becomes a byproduct of our own traces.
The Three Waves of Proletarianization
First there was the hand. Then life. Then thought. Three waves, three fractures. Each time, a piece of our knowledge detached from us, froze in the machine, like a skin we’d left behind. Bernard Stiegler saw in this slow dispossession the hidden thread of our modernity: as technology progresses, something in us falls asleep.
- The producer and the loss of know-how. Marx had already sensed the first wave coming, like a rumble from deep within the factories. It's the proletarianization of know-how. The artisan, who instinctively knew where to strike, when to adjust, how to feel the material, sees his gesture decomposed, timed, rationalized. Taylorism transforms him into a cog, Fordism into an appendage of the assembly line. What the hand knew through the body becomes procedure, protocol, script. The worker no longer understands the gesture he performs. He executes, and he no longer knows why. Marx spoke of the “servant of the machine.” We could say today: a human plugged into a system that thinks for him. It's the loss of craft as a language of the world, the disappearance of that practical intelligence connecting man to matter.
- The consumer and the standardization of know-how-to-live. Then came the second wave, more silent, more insidious. A wave that strikes not the hand, but the mind. In the mid-twentieth century, machines no longer just manufacture objects: they manufacture ways of life. Cultural industries invent a new factory, without walls or smoke: that of attention. Advertising no longer sells products, but desires. Marketing no longer addresses our reason, but our imagination. And gradually, what we once called savoir-vivre (that art of conversing, educating, transmitting) dissolves into a background noise of images and slogans. Television programs align millions of minds to the same rhythms, the same narratives, the same emotions. Stiegler speaks here of symbolic misery [5]: an impoverishment of meaning, an atrophy of our capacity to produce our own symbols. We no longer form community around a culture, but around a flow. Savoir-vivre, which should connect us, is reduced to a consumption script. And in this standardized universe, the individual stops inventing himself.
- The conceptualizer and the atrophy of know-how-to-theorize. The third wave is the most formidable. It doesn’t rumble: it whispers. It seeps into the silence of our keyboards, into the polished interfaces of our screens. It’s the proletarianization of know-how-to-theorize, one that touches the very faculty of thinking. Search engines find for us. Decision support systems decide for us. Artificial intelligences write, synthesize, recommend, predict. What we called judgment becomes statistics. The time saved is immense, but at what cost? By delegating to the machine the task of reasoning, we lose the habit of doubt. Critical thinking dulls. Studies confirm it: many executives, after an error, prefer to entrust the next decision to an AI rather than to themselves [6]. We shift from active participation to passive monitoring, from the role of conceptualizer to that of validator. It’s the proletarianization of thought. Knowledge is no longer an inner effort, but an external service. We no longer create our ideas: we consume them.
Why We Always Choose the Shortest Path
Daniel Kahneman, the psychologist who won the Nobel Prize in economics, gave us the key to this mutation. According to him, our brains operate at two speeds: System 1, fast and intuitive, and System 2, slow and effortful [13]. And like any living organism, we tend to choose the shortest path. Digital technologies speak to System 1: they flatter our need for immediacy. They offer satisfaction without effort, certainty without verification. The result: we accumulate a cognitive debt. We know more things, but we think about them less.
The Google effect had already revealed it: we remember less well what we know we can find again [15]. Today, studies show that 83% of AI users were unable to recall the text they had just generated [16]. Memory dissolves into the machine.
Comfort becomes dependence. And laziness, far from being a defect, becomes the norm.
Surveillance Capitalism and the Industrialization of Thought
The third wave of proletarianization, that of know-how-to-theorize, reaches its peak in an economic model that has managed to transform thought itself into raw material: surveillance capitalism. This capitalism no longer just sells objects or services, it industrializes our inner lives. Our emotions, doubts, impulses become data. What we are, what we love, what we seek—everything is recorded, calculated, valued.
Attention, New Frontier of Capital
As early as the 1990s, thinkers like Maurizio Lazzarato and Franco “Bifo” Berardi had perceived this shift. They already spoke of immaterial labor: work where productivity depends on our ability to communicate, create, stay attentive [7, 8]. Thus was born the cognitariat, this new class of knowledge workers, condemned to hyper-connection, drowning under information flows, always reachable, never truly present.
But what Shoshana Zuboff brought to light is the true heart of the system: attention is no longer just a scarce good, it’s a territory to colonize. Platforms don’t just capture our gaze; they reshape our behaviors to make them profitable. They transform our curiosity into resource, our distraction into profit.
In The Age of Surveillance Capitalism, Zuboff describes a capitalism of a new kind: a system that no longer feeds on human labor, but on human experience [9]. Digital giants (Google, Facebook, Amazon) capture each of our traces: clicks, searches, likes, movements, even silences. These fragments of our lives become behavioral data.
This data is then digested by artificial intelligences, capable of predicting what we’ll do tomorrow: what we’ll buy, where we’ll go, what we’ll perhaps think. These predictions, turned into commodities, are resold to other companies, on a gigantic market of the future.
But this economy has a price: to make our behaviors predictable, they must first be standardized. Algorithms don’t seek to open us to the world, but to lock us into what we already are. Each click refines our statistical portrait, each recommendation tightens our horizon. We think we’re choosing, but we’re being guided.
It’s the symbolic misery described by Stiegler: a poverty of meaning, where the symbols we consume no longer belong to us. We no longer participate in creating the world, we passively absorb it through the screen.
This mechanism of capture-prediction-standardization has become the dominant model of our era. It no longer just exploits our labor: it exploits our very existence. And it’s on this infrastructure that generative artificial intelligence now grafts itself, crossing an additional threshold.
When Thinking Becomes a Paid Service
With the arrival of generative artificial intelligences, this system crosses a new stage. We no longer just observe and are predicted: we actively delegate our minds. Writing, analyzing, creating, imagining: so many activities that, until recently, engaged our judgment, and which can now be purchased as a service.
Thinking becomes a subscription. Creativity, a premium option.
As Jaron Lanier pointed out, we feed these systems with our own knowledge without receiving the slightest compensation [10]. Our texts, images, conversations feed models that, in return, reproduce our ideas in product form. We contribute, unknowingly, to our own erasure.
Ultimately, this mechanism could draw a two-speed society: on one side, an elite capable of understanding and conceiving; on the other, a majority reduced to consuming thought services. What was a right (thinking, creating, judging) would become a privilege.
The symptoms are already there. Studies suggest that heavy reliance on GPS is associated with reduced engagement of the hippocampus, the brain region that underpins spatial orientation. Spell-check relieves us of lexical doubt. Code assistants complete our thoughts before they're fully formed. These tools, which opened this article as benevolent butlers, now reveal their double face: they relieve us, but they also amputate us. Others observe that intensive use of writing assistants erodes our ability to structure an idea, to argue, to nuance [6]. We no longer think our tools: we think with them, then through them, until they think for us.
Industrial capitalism had domesticated the hand. Cognitive capitalism now domesticates the mind. And in this new immaterial factory, it’s our thoughts that parade on the assembly line.
Three Scenes of Contemporary Proletarianization
To understand the scale of the phenomenon, we must come back down to earth, where the proletarianization of knowledge is no longer read in concepts but in everyday gestures. It’s seen in how we read, learn, heal.
Three scenes from our ordinary lives, three laboratories of assisted humanity, where technology reveals all its ambiguity: remedy and poison at once.
- The book, from attentive reader to captive reader. Once, reading was a slow crossing. You entered a book like entering a forest: with time, silence, a bit of inner disorder. You underlined, annotated, went back. The reader participated in the work, made it his own. Reading, Proust said, wasn't consumption, but conversation. Then screens took over. Our readings became flows. The reader transformed into a profile: his tastes analyzed, his next readings predicted, his attention horizon gradually narrowed by recommendations confirming what he already knows. It's the mechanism of surveillance capitalism applied to the very act of reading. Gone are the surprise of the randomly found book, the discovery of an unknown author, the fertile wandering between library shelves. The reader loses control of his intellectual horizon. But digital technology isn't condemned to be poison. Open libraries, knowledge commons, and contributory reading journals reinvent reading as a collective act. They transform technology into an instrument of shared knowledge. Wikipedia, for example, remains a rare success: a common memory that steals nothing from us, but connects us. Everything depends on how we inhabit it.
- Education, the student as knowledge producer. In the industrial school of the twentieth century, students received knowledge that was vertical, fixed, ready to consume. Teachers held knowledge, students absorbed it. This one-way transmission formed full memories, but passive minds. The connected school risks the opposite excess. Under the pretext of modernity, we entrust learning to platforms, tutoring AIs, automated quizzes. Notes become standardized, knowledge repeats itself. Students become consumers of information, minds under assistance. It's a new form of proletarianization: that of the know-how-to-live of debate, the know-how of research, the know-how-to-theorize of thought. Learning to learn, to doubt, to reformulate: all this disappears in the comfort of an immediate answer. Stiegler proposed another path: making school a contributory space. In Plaine Commune, in Seine-Saint-Denis, he launched the Learning Contributory Territories project. The idea was simple and revolutionary: that students, instead of receiving knowledge, build it, document it, and transmit it in turn. That digital technology be not a consumption tool, but a lever for collective individuation. There, technology changes nature. It stops being poison and becomes remedy.
- Medicine: from clinical knowledge to automated protocol. Medicine, too, is going through its knowledge crisis. Yesterday's doctor was both scientist and observer, technician and humanist. His knowledge was made of theory, but also of experience, doubt, and that intuition forged in contact with bodies and lives. Today, this practice is threatened by a subtle automation. Protocols replace reflection, diagnostic support software becomes prescriptive, algorithms “think” in place of practitioners. Artificial intelligence can detect a tumor better than a seasoned radiologist. But by delegating the gaze, we end up losing the hand. The danger isn't the machine, but the de-individuation of care. When a medical decision is reduced to an automated recommendation, the clinical gesture loses its humanity. Yet healing isn't just applying a protocol: it's interpreting a silence, understanding a look, sensing a fear. But here again, technology can change face. If it becomes a support for individuation (a tool for dialogue, shared memory, collective reasoning), it can strengthen knowledge rather than dissolve it. Medical AI, conceived as a critical ally, could become an extension of human intelligence rather than a substitute. The question isn't whether AI will replace the doctor, but what kind of doctor we want to become in the age of AI. One who follows the machine, or one who uses it to better understand the human.
Pharmacology or How to Regain Control
Bernard Stiegler wasn’t a thinker of disaster, but a thinker of care. Facing the generalized proletarianization he described, he preached neither rejection of technology, nor return to an idealized past. He invited us to lucidity.
Because technology, according to him, is never simply good or bad. It’s what the ancient Greeks called a pharmakon: both remedy and poison, depending on the hand that holds it and the use made of it.
Stiegler borrows this notion from Plato, who recounts in the Phaedrus how writing was received with mistrust: some saw in it a remedy against forgetting, others a poison that would kill living memory. Both were right. Writing can stupefy those who only copy, or emancipate those who use it to think further.
The pharmakon is this constitutive ambivalence of all technology. The printed book, for example, could standardize thought, but it also enabled intellectual emancipation, the circulation of knowledge, the birth of criticism. The Internet can trap us in filter bubbles, or open us to collective intelligence. AI can proletarianize the mind, or become a tool of de-proletarianization.
Everything depends on context, uses, policies that frame it. In other words: the question isn’t technology itself, but the political economy, culture, and social practices in which it inscribes itself.
Stiegler then proposes a bold answer: cure evil with evil. Make our digital tools, not instruments of disqualification, but means of de-proletarianization. It’s the project of a positive pharmacology: learning to transform technologies of capture into technologies of contribution.
The Economy of Contribution
For Stiegler, the first step is economic. We must exit the sterile alternative between wage labor, often alienating, and inactivity, often exclusionary. Between these two poles, he imagines an economy of contribution: a form of work that creates knowledge, connection, meaning.
In this model, value no longer resides in immediate profitability, but in the production of shared knowledge. Rather than a universal basic income that risks maintaining consumer passivity, Stiegler defends the idea of a contributory income: remuneration conditional on participation in collective projects, knowledge creation, transmission [11].
He tried to concretize this vision in Plaine Commune, in Seine-Saint-Denis, through the Learning Contributory Territories. There, researchers, residents, businesses, and communities collaborated to reinvent local life: ecological transition, education, health, culture. The goal wasn’t to innovate for innovation’s sake, but to recreate the knowledge chain: from doing to living, from living to thinking. Though unfinished, this project remains a model: that of an economy based on participation rather than consumption.
Taking Care of Attention Again
But de-proletarianization isn’t played out only at the economic level. It begins in the mind. In a world saturated with notifications, where every minute of attention is commercial property, we must relearn to think slowly.
Digital platforms train us toward permanent dispersion. They fragment time, dissolve concentration, replace reading with scrolling. Resisting here is an act of care. Regaining control of one’s attention is already reconquering part of one’s freedom.
Stiegler spoke, following Michel Foucault, of techniques of self and technologies of the mind: practices and tools that help us govern ourselves. This involves attention education from a young age, but also designing digital tools not based on capture.
Initiatives already exist: free software, digital commons like Wikipedia or OpenStreetMap, decentralized networks like Mastodon or the Fediverse. In these spaces, value isn’t extracted from users, it’s co-created with them. These are oases in the desert of the attention economy.
Taking care of one’s mind means refusing organized distraction. It’s reconstructing, facing speed, an ecology of slowness.
Toward a Politics of Technologies
But no reconquest can be purely individual. The struggle against the proletarianization of knowledge is above all political. As Andrew Feenberg reminds us, technology is never neutral: it’s the expression of social and economic choices [12].
Regaining control thus means weighing on the very design of the tools we use. Demanding algorithm transparency, defending collective data ownership, promoting interoperability and open standards. Refusing the “walled gardens” of digital giants, where knowledge transforms into rent.
But the political response can’t be limited to regulation or taxation. It must encourage the creation of alternatives: cooperative platforms, contributory laboratories, technological commons. Spaces where innovation isn’t dictated by market logic, but by that of meaning.
Because ultimately, technology isn’t destiny: it’s a battlefield. And true sovereignty doesn’t reside in possessing machines, but in the capacity to decide what they make of us.
Thus, Stiegler’s pharmacology is neither nostalgia nor utopia. It’s a strategy of the living. It reminds us that humans aren’t condemned to suffer their inventions, but that they can inhabit them, think them, reorient them. Curing evil with evil means restoring technology’s original vocation: not to replace man, but to extend his thought.
The Choice of Prometheus
The history of proletarianization, from the worker’s hand to the citizen’s mind, is one of slow displacement: the loss of gesture, then of judgment. Every era has known its share of fire. And each time, humans have found themselves facing the same question: what to do with what they invent?
Prometheus didn’t just steal fire from the gods, he gave humanity the responsibility to use it. That’s where we are today. We hold in our hands a new fire (that of artificial intelligence) whose heat we don’t yet fully measure.
Bernard Stiegler saw in cognitive proletarianization the ultimate risk: that of delegating not just our gestures, but our minds. When thought becomes a service, when decision reduces to validating a calculation, it’s our individuation that falters. This process by which each becomes a singular being, within a living collective, threatens to stop. And with it, the very capacity of a society to think its destiny.
Because thinking together is the heart of all democracy. If we lose this faculty, if we let algorithms organize our opinions, predict our desires, adjust our emotions, we renounce the very possibility of a common future. We cease being a people of citizens to become a market of behaviors.
But nothing is inevitable. Proletarianization isn’t fatality, it’s a historical process, therefore reversible. The paths that Stiegler opened (the economy of contribution, care of attention, politics of technologies) aren’t recipes, but open experiments. They sketch a path: that of a technology reoriented toward cooperation, slowness, shared thought.
The challenge isn’t knowing whether we should use artificial intelligence. It’s deciding how we’re going to live with it. Will we make it the instrument of new proletarianization, or the lever of collective de-proletarianization, where everyone regains the right to understand, create, judge?
This choice doesn’t belong to machines. It depends on the care we give to our own thought, the time we grant to discernment, the courage we have to say no to ease.
Perhaps that’s what Prometheus’s true fire is today: no longer the stolen flame, but the preserved light. The one we must maintain, patiently, against the speed of the world. The one that reminds us that thinking isn’t a service, but an act.
References
For meticulous minds, lovers of numbers and late nights verifying sources, here are the links that nourished this article. They recall one simple thing: information still exists, as long as we take the time to read it, compare it, and understand it. But in the near future, this simple gesture may become a luxury, because as texts entirely generated by AI multiply, the real risk is no longer disinformation, but the dilution of reality in an ocean of merely plausible content.
[1] Stiegler, B. (2012). États de choc : Bêtise et savoir au XXIe siècle. Mille et une nuits.
[2] Marx, K. (1867). Capital, Volume I.
[3] Braverman, H. (1974). Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century. Monthly Review Press.
[4] Leroi-Gourhan, A. (1964-1965). Le Geste et la Parole. Albin Michel.
[5] Stiegler, B. (2004). De la misère symbolique : Tome 1. L’époque hyperindustrielle. Galilée.
[6] Desveaud, K. (2025, May 5). L’IA au travail : un gain de confort qui pourrait vous coûter cher. The Conversation. https://theconversation.com/lia-au-travail-un-gain-de-confort-qui-pourrait-vous-couter-cher-253811
[7] Lazzarato, M. (1996). Le « travail immatériel ». Futur Antérieur, (35-36), 99-107.
[8] Berardi, F. (Bifo). (2011). The Soul at Work: From Alienation to Autonomy. Semiotext(e).
[9] Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.
[10] Lanier, J. (2013). Who Owns the Future?. Simon & Schuster.
[11] Stiegler, B. (2016). Le revenu contributif et le revenu universel. Multitudes, 63(2), 51-57.
[12] Feenberg, A. (1999). Questioning Technology. Routledge.
[13] Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
[14] Frederick, S. (2005). Cognitive Reflection and Decision Making. Journal of Economic Perspectives, 19(4), 25-42.
[15] Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips. Science, 333(6043), 776-778.
[16] Roxin, I. (2025, July 3). IA générative : le risque de l’atrophie cognitive. Polytechnique Insights. https://www.polytechnique-insights.com/tribunes/neurosciences/ia-generative-le-risque-de-latrophie-cognitive/
