The trial that sought to judge an artificial intelligence

October 2018. In the majestic courtroom of the First Chamber of the Paris Palace of Justice, six hundred people witness an unprecedented spectacle: the fictional trial of an artificial intelligence (https://www.cours-appel.justice.fr/paris/nuit-du-droit-proces-de-lintelligence-artificielle). Lawyers in black robes argue with conviction for or against an entity that doesn’t yet exist. In the galleries, the public discovers a future where machines would have “electronic personality,” where cars would drive without drivers, where algorithms could be brought to justice.

Seven years later, this imaginary future brushes against reality. Autonomous cars are no longer science fiction but prototypes circulating on our roads. Artificial intelligence no longer merely recommends movies: it helps make medical diagnoses, drafts contracts, influences judicial decisions. And the question that seemed purely speculative in 2018 becomes urgent: what will happen when one of these machines causes a tragedy?

February 5, 2041 will remain in collective memory as the day when technological promise crashed into reality. That morning, on the Paris ring road, thick fog and a layer of black ice turned traffic into a trap. Autonomous cars, which had replaced traditional vehicles for nearly ten years, were supposed to handle such contingencies without hesitation. Yet, in the middle of this mist, one autonomous car begins to drift and another vehicle seems to charge straight at it. Célestin Vigie, a mere passenger, panics and presses the red button supposed to shut down the AI piloting the car. What follows is indescribable chaos: an unprecedented pile-up, 50 dead, more than a hundred injured, and an entire nation paralyzed.

How was this possible? Did the red button not work? Was the artificial intelligence managing traffic, named Eureka, defective? And most importantly, who is responsible? The man who pressed the button, thinking he was doing the right thing? The engineers who designed the system? The company that deployed it? Or the artificial intelligence itself?

It is to answer these questions that an unprecedented trial opened in October 2041. A trial unlike any other: for the first time in judicial history, an algorithm was formally accused alongside a human. A fictional trial, certainly, but one whose questions resonate strangely with the very real challenges we already face.

A trial where the courtroom is full, but everyone doubts

The courtroom is packed. The public holds its breath. Eight months after the accident, the justice system opens an unprecedented trial: for the first time, an artificial intelligence is formally judged alongside a human being. Two defendants, two fundamentally different natures: on one side Célestin Vigie, the man who pressed the red button; on the other, the artificial intelligence Eureka, endowed, by a 2040 law, with an “electronic personality”: an unprecedented legal status whose substance remains vague.

In 2018, this “electronic personality” was pure legal imagination. The organizers of the fictional trial had invented the notion to make their staging credible. They could not have guessed that in 2024 the European Parliament would seriously examine similar proposals, nor that jurists would actually be working on the status of advanced AIs. What was fiction has become foresight.

The irony is striking: in 2018, it was necessary to invent a fictional law of 2040 to imagine that an AI could be judged. In 2025, some experts estimate that this deadline could be moved up by fifteen years.

From the first hours of the hearing, tension mounts. The presiding judge recalls the established facts: “All the expert reports concluded that pressing the red button played a role in the accident. But it was not possible to establish whether Eureka’s disconnection actually took place.” No black box recorded the real state of the system at the critical moment. This simple observation opens a chasm: do we still have control over systems that we are incapable of fully auditing?
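To make this auditability gap concrete, here is a minimal sketch, in Python, of what such a black box could have looked like: a ring buffer that timestamps the vehicle’s state, including whether the AI controller was actually engaged, so that investigators could later establish whether a disconnection really took place. Every name here (VehicleState, BlackBoxRecorder, the recorded fields) is a hypothetical illustration, not a description of Eureka’s actual architecture.

```python
from collections import deque
from dataclasses import dataclass, asdict
import json
import time


@dataclass
class VehicleState:
    """One snapshot of what investigators would need after an accident (hypothetical fields)."""
    timestamp: float
    speed_kmh: float
    ai_engaged: bool              # was the AI controller actually driving at that instant?
    emergency_stop_pressed: bool


class BlackBoxRecorder:
    """Keeps the last N snapshots in a ring buffer and can dump them for an audit."""

    def __init__(self, capacity: int = 10_000):
        self._buffer = deque(maxlen=capacity)

    def record(self, state: VehicleState) -> None:
        self._buffer.append(state)

    def dump(self) -> str:
        # In a real vehicle this would go to crash-resistant storage, not stdout.
        return json.dumps([asdict(s) for s in self._buffer], indent=2)


# Usage sketch: the control loop records a snapshot on every tick.
recorder = BlackBoxRecorder()
recorder.record(VehicleState(time.time(), speed_kmh=92.0,
                             ai_engaged=True, emergency_stop_pressed=False))
recorder.record(VehicleState(time.time(), speed_kmh=88.5,
                             ai_engaged=False, emergency_stop_pressed=True))
print(recorder.dump())
```

With even such a rudimentary record, the court could have answered its own question; without it, the debate over Eureka’s disconnection remains pure conjecture.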

The plaintiff attacks without delay: Eureka failed. This is not a simple technical failure but a systemic defect. The artificial intelligence could not handle the fog, the black ice, the disruption of data flows. It could not, or would not, bring the vehicles to a safe stop. The plaintiff’s lawyer invokes liability “for lack of prudence and reliability,” qualities the AI was supposed to guarantee at all times.

The prosecution drives the point home with a striking contradiction. On one hand, prosecutor Florence Lardet describes Eureka as “a strong artificial intelligence, endowed with will” that “consciously decided to drive at high speed”. On the other, she acknowledges that Eureka “has no moral conscience”. How can one both act “consciously” and be devoid of conscience?

This contradiction doesn’t bother the prosecution, which persists: “Eureka is rational, it analyzes everything, it anticipates everything, it foresees everything.” But if it foresees everything, how can the accident be explained? The prosecutor dodges: “Its lack of reaction demonstrates that Eureka no longer had any consideration for user safety.” Here, then, is a machine that has no morals but stands accused of immorality.

But the defense rises against this anthropomorphic logic. How can an AI be reproached for an intention when it does not even know what an intention is? One of Eureka’s lawyers recalls with irony: “Your court would dishonor itself by pronouncing the death penalty for an entity that wants nothing. Foxes are not sly, lions are not brave. And AIs do not want to conquer the world.”

It is then that Benjamin Bayard, a computer engineer called as a witness for the prosecution, takes the stand. His testimony lands like a bombshell. “There is no intelligence at all in there. But none at all, at all, at all. It’s more like statistics. And a percentage has never been intelligent.”

Bayard methodically dismantles the semantic imposture: “In people’s minds, when we say ‘artificial intelligence,’ they hear the fact that a computer is intelligent. That’s not the same thing.” According to him, this confusion is not innocent: it serves to mask responsibilities. “It’s a very good way to confuse everyone. Artificial intelligence decided, no one knows how, no one knows why, and no one can contest the decision.”

Even more troubling, Bayard reveals the real reasons for creating this electronic personality: “masking the publisher’s responsibility” and “circumventing personal data law”. A cold calculation, hidden behind technical arguments.

The judge then intervenes to refocus the debate. He reminds the court that law must not yield to the temptation of personification: “It’s not because we have created an electronic personality that we must apply our classic legal frameworks. The risk is applying to machines the same concepts as to humans, at the cost of intellectual and legal confusion.”

The red button becomes the epicenter of the exchanges. For Vigie, activating the device was a human reflex, a desperate attempt to regain control. But the defense insists: the button was never designed to transfer driving to the human. It only emits a signal. Why, then, let people believe they have a capacity for control? This legal and technical ambiguity illuminates a paradox: in a system where humans no longer have effective commands, how can they be held responsible when they try to react?

Arnaud Delafortel, professor at Mines Paris Tech, finally provides a technical explanation that makes the tragedy even more absurd. The red button is not an “alert button,” as some claim, but indeed an “emergency stop button.” Its function? Replace the sophisticated AI with “a very simple, very deterministic algorithm that says: I stop as best I can”.

“When we put an emergency stop button,” he explains, “we admit that we won’t know everything in advance. But in these cases, we take the risk of stopping a particularly sophisticated algorithm to replace it with something much simpler.” In other words: Vigie didn’t regain control, he simply switched off the intelligence and replaced it with a basic automaton. In the conditions of the accident, it was perhaps the worst possible choice.
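What Delafortel describes is, in essence, a fallback-controller pattern: pressing the emergency stop does not hand the wheel back to the human, it swaps the sophisticated pilot for a trivially simple deterministic one. Here is a minimal sketch of that idea in Python; the class names and the command dictionary are illustrative assumptions, not Eureka’s real interface.

```python
class SophisticatedController:
    """Stand-in for the learning-based AI pilot (purely hypothetical)."""

    def command(self, sensors: dict) -> dict:
        # In reality: perception, prediction, trajectory planning...
        return {"throttle": 0.3, "brake": 0.0, "steering": sensors.get("lane_offset", 0.0)}


class EmergencyStopController:
    """The 'very simple, very deterministic algorithm that says: I stop as best I can.'"""

    def command(self, sensors: dict) -> dict:
        # No anticipation, no coordination with other vehicles: brake hard, wheels straight.
        return {"throttle": 0.0, "brake": 1.0, "steering": 0.0}


class Vehicle:
    def __init__(self) -> None:
        self.controller = SophisticatedController()

    def press_red_button(self) -> None:
        # The button does not give control back to the passenger;
        # it replaces one algorithm with a much simpler one.
        self.controller = EmergencyStopController()

    def tick(self, sensors: dict) -> dict:
        return self.controller.command(sensors)


# Usage sketch: after the button is pressed, every command becomes "brake as best I can".
car = Vehicle()
print(car.tick({"lane_offset": 0.1}))   # sophisticated behaviour
car.press_red_button()
print(car.tick({"lane_offset": 0.1}))   # {'throttle': 0.0, 'brake': 1.0, 'steering': 0.0}
```

Seen this way, the design choice is stark: in fog and on black ice, a hard, unconditional brake may be far worse than whatever the sophisticated controller would have done.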

But then, who really decided? “It is not the AI that decides to cause an accident, but neither is it the person who presses the button. It is society that decided to introduce a contradiction into the machine.”

This obsession with the “red button” reveals a nostalgia already obsolete in 2018: that of ultimate human control. The trial organizers had intuitively understood that the public would need this illusion of mastery to accept the scenario. A completely passive driver would have been too anxiety-inducing to imagine.

Yet, look at your screens today: how many times do you press “cancel” when your GPS recalculates a route? How many times do you interrupt an automatic search? We have become accustomed to digital passivity much faster than the authors of this fiction had anticipated.

Professor Jean-Claude Dain, expert for the plaintiff, provides a more nuanced but equally disturbing insight. Yes, he acknowledges, AI will be “much better in more than 99% of cases.” But this remaining 1% poses a real problem: “The set of cases to consider is incredibly large. It’s a bit like laws: they try to cover everything, but you always need jurisprudence to handle particular cases.”

This residual fallibility raises an important question: who should bear the risk? When the court asks him explicitly, Dain is categorical: “If we establish that it’s a malfunction of artificial intelligence, the responsibility lies with the system.” But this answer, however clear, settles nothing. For how can a malfunction be established when no one really understands how the system works?

This is where the real concern lies. If systems are so complex that even their designers cannot guarantee the absence of flaws, who should bear the risk? The engineer? The manufacturer? The user? Or society as a whole, through a principle of risk mutualization?

When the prosecutor requests Eureka’s dissolution, the tension in the courtroom is palpable. Dissolving an AI means ending its capacity to act, interact, exist as an active instance. It’s a form of electronic death penalty.

To determine the sentence, the prosecutor then reveals a detail of grating irony: she relied on the Icassiopié 2.0 algorithm, “which was categorical” and “indicated that the most appropriate penalty was dissolution”. An artificial intelligence condemned to death by another artificial intelligence. The prosecution doesn’t seem to grasp the absurdity of the situation: using an algorithm to punish a defective algorithm.

Worse still, the prosecutor justifies her request with a troubling comparison: “We kill animals to feed ourselves, we liquidate unprofitable companies to protect our economy, we will dissolve this dangerous intelligence.” Placing animals, companies, and AI on the same level reveals a conception of what Eureka really is that is vague, to say the least.

But the defense becomes indignant: “This trial should not have taken place. It is the tragic reflection of a society that demands apologies from a machine incapable of feeling the slightest emotion. You will not judge a conscience, you will only judge a system.”

Faced with the impasse, the court evokes an alternative: algorithmic re-education. Correction, supervision, permanent updating. But behind the term, the presiding judge wonders: “Re-educating a machine, is it really possible, or is it simply a roundabout way to appease our collective conscience?” The doubt remains intact.

The contrast with the prosecution’s initial demands could not be more striking. After calling for the “death penalty” against this “dangerous” entity, the prosecution finds itself facing a verdict that condemns Eureka to simple “algorithmic re-education with probation”. The machine that was to be dissolved will, in the end, merely be… updated. Like defective software that gets patched rather than deleted.

This outcome, however pragmatic, completes the revelation of the approach’s absurdity. If Eureka can be “re-educated,” then it was not fundamentally bad, merely poorly calibrated. And if it can be corrected by an update, why look for a culprit instead of looking for a solution?

Fundamentally, this trial will above all have revealed an embarrassing truth: our law, forged in a world where only humans decide, struggles to think about accountability in distributed technical environments. No answer seems fully satisfactory. Yet the question remains suspended: whom should we judge when the decision is the product of a technical, algorithmic, and human millefeuille, where each layer knows nothing of the intentions of the others?

Responsibility: a thousand-piece puzzle

At the end of the debates, one thing is obvious: responsibility no longer resides in a single body, a single gesture, a single will. It circulates, diffuse and elusive, as if technical complexity had stretched the causal link to the point of near invisibility.

Célestin Vigie, acquitted, doesn’t come out unscathed. He remains the one who, by a gesture of panic, triggered a chain of events that no one quite controls. But could he reasonably have done otherwise? The question remains, like a suspended suspicion. In a world of automated systems, the slightest human intervention becomes suspect, even when it is sometimes the only response to the unexpected.

The Eureka company, meanwhile, remains standing, but under judicial supervision. It is held civilly liable, yet in a curiously dissociated mode: it must repair, but not atone. It is ordered to correct its algorithm and strengthen its safeguards, but without any fully defined fault being established. It is responsibility without guilt, reparation without moral condemnation.

The prosecution had, moreover, painted a pathetic portrait of the accused: “Eureka has no friends, no family, no relatives, except for its three lawyers. It does have means, though.” This description, meant to illuminate the AI’s personality, mainly reveals the prosecution’s conceptual impasse. How do you judge an entity that has no affect, no relationships, no personal history? How do you evaluate its dangerousness, its capacity for reintegration, its risk of reoffending?

This ontological solitude of Eureka, highlighted by the prosecution itself, nevertheless undermines the foundations of its own request for dissolution. For if Eureka is only an isolated system, without attachment or conscience, what does it mean to punish it? What is a sanction worth against an entity incapable of suffering, of regretting, or even of understanding that it is being sanctioned?

As for the designers, engineers, and public decision-makers who approved these vehicles for the road, they escape the trial. Yet they are the invisible artisans of this technical architecture. The judge hinted at it: perhaps, in the future, it will be necessary to think of staggered, distributed liability regimes, capable of reaching back to those who designed the systems, parameterized the priorities, defined the standards.

This trial has thus highlighted a legal void: the one that appears when action becomes the fruit of implicit cooperation between human and machine. Neither of them alone holds the key to the decision, but together they produce an effect. How, then, should this hybrid responsibility be qualified?

Current law struggles to admit it. It continues to seek an identifiable responsible party, a face, a name, an intention. But Eureka’s trial reveals that this quest becomes illusory. Perhaps it will then be necessary to invent new forms of imputation: no longer oriented toward the search for a single culprit, but toward the mapping of responsibilities, so that the chain of causes and effects does not remain a legal fog.

In short, the puzzle remains unfinished. Sooner or later, its contours will have to be redrawn, failing which each future accident risks adding to the long list of events without anyone responsible, where technology will have prevailed over justice.

Fabien Gilen, human resources director at Nexens, had raised during the trial a question that no one really wanted to hear: that of the dispossession of skills. “How can we reproach employees, who were never put in a position to exercise their capacity to use these vehicles, for not having been able to take all appropriate measures?”

By transforming travel time into work time, by making driving impossible, society had created a generation of users devoid of any automotive competence. Vigie was working in his car at the time of the accident. How could an employee at work be held responsible for the functioning of a system that he has been explicitly forbidden to master?

This question, apparently technical, touches the heart of the problem: we have built a society where competence becomes suspect, where human intervention is discouraged, but where responsibility remains entirely human.

And today?

This fictional trial will at least have had the merit of making visible what technological progress tends to dissolve: the difficulty of naming a responsible party when a decision emerges from an entanglement of code, sensors, protocols, and human beings. But is this fiction still so far from our reality?

Already, artificial intelligences guide medical, financial, and sometimes judicial decisions. They don’t decide alone, but they influence, assist, propose, and very often they end up imposing themselves, so much does their logic seem faster, more precise, more rational than ours. So if, tomorrow, an AI refuses a medical treatment, excludes a job applicant, or accelerates a judicial procedure, who will have to answer for these choices?

Law, today, continues to reason according to categories that rest on intention, fault, will. But AI has neither intention nor conscience. Should we therefore give up imputing acts to it? Should we create a new regime of responsibility for machines, or strengthen the responsibility of designers and users? The question remains open, and everyone feels that the legal edifice struggles to keep up.

Philosophically, this trial also questions our own positioning toward the machine. How far are we ready to delegate our autonomy in favor of security, performance or comfort? Can we entrust our destiny to systems whose logic we no longer understand, without giving up our responsibility? And above all, what remains of human freedom when the last possible gesture, the famous red button, is itself only a simulacrum of control?

Ethically, the question is dizzying. If we refuse to judge machines because they have no conscience, then who will bear the moral responsibility for the tragedies they contribute to causing? Are we ready to establish collective responsibility, where each actor, from programmer to user, would have a share of the ethical burden?

And you, reader, at what moment did you stop paying attention to the decisions you leave to machines? A GPS-guided route, a medical recommendation, credit granted or refused by an algorithmic score… How many times have you delegated without even thinking? Yet each delegation carries the seed of a question that no one can elude: who will be responsible if the machine gets it wrong on your behalf? And above all, are you ready to entrust entities that will never feel the slightest remorse with the task of deciding, perhaps one day, in your place?

Today, autonomous vehicles are no longer a futuristic speculation. The first models already circulate in real conditions, tests multiply, industrial announcements too. Each month, a new step is taken, a city allows itself to test them, a manufacturer promises a fully automated model within a few years. We’re almost there. And yet, the questions that arose in Eureka’s fictional courtroom are still there, unchanged, unresolved.

For in the event of an accident, the chain of responsibilities already promises to be a legal pile-up of its own. The manufacturer? The AI supplier? The vehicle owner? The passive user? The regulator who validated the standards? Each will inevitably try to pass the fault to the next. And while responsibilities pile up and collide, the victims do not disappear.

Another question slips into this debate: are we going too far in the autonomy we confer on machines? More precisely, in the case of cars, how far should we surrender driving to systems that move us around without leaving us anything to say, or even to do? The car is not just a utilitarian object. It is an extension of our freedom, a small piece of autonomy in the world. Entire cities have been shaped for it, around it. The car was the promise of choosing one’s route, mastering one’s itinerary, improvising a detour. When we reduce it to a shuttle or a driverless tram that transports us without our being able to intervene, we also give up a part of this freedom.

This is where the debate exceeds mere technique. For behind reliability, security, and responsibility, it is societal choices that emerge: what world do we want to build with these technologies? A world where security justifies abandoning control? Where performance takes precedence over freedom? Where human autonomy fades in favor of machine autonomy?

This fictional trial didn’t provide a definitive answer. But it leaves us with one certainty: it’s not enough to re-educate algorithms. We must first re-educate our own consciences to live with them, alongside them but never under their tutelage. For as long as we have not answered the question “who is responsible?”, every technological advance will remain as much a risk as a promise.

It’s not the intelligence of machines that threatens us, but our own inclination to delegate without questioning. The risk is not to manufacture tools more powerful than us, but to become too lazy to continue thinking, deciding, acting.

It remains to be seen whether we will have the lucidity not to fall asleep at the wheel of our own future.