In praise of imperfection

Error, bias, and artificial intelligence in the mirror of our humanity

The condition of fallible being

The human being is a creature of fumbling. We are born into a world we did not choose, equipped with a mind desperately seeking to make sense of it. In this quest, our most faithful companion is not certainty, but error. It is the trace of our steps, the proof that we tried, the scar that bears witness to our learning. Being wrong is not merely a right; it’s a vital function, the very rhythm of knowledge stumbling forward. It’s the necessary draft before the work, the sketch that precedes the sculpture.

Yet in our modernity, enamored with perfection and efficiency, error has a bad reputation. It has become synonymous with failure, an anomaly to be corrected, a weakness to be eradicated. And here we are, building machines in our image, artificial intelligences, lending them the fantasy of pure objectivity, cold and infallible logic. We dream of flawless automatons, hoping they will save us from our own wanderings.

This is a double mistake. First, because fertile error and unconscious bias have nothing to do with deliberate lies and organized deception that fracture our public space today. Confusing the two means confusing the researcher who makes a mistake with the forger who cheats. Second, because AI, born from our hands and fed by our data, can only be an heir. It is a powerful mirror, sometimes distorting, of our own thought patterns, our most buried prejudices. The challenge, therefore, is not to aim for an illusory mechanical perfection, but to cultivate a deeper human consciousness. It is not the machine that must be made perfect, but the human who must become more lucid.

Our invisible heritage: what is a cognitive bias?

Before even speaking of science or artificial intelligence, we must make a detour through our own minds. For it is there, in the folds of our brain, that everything begins. If we want to understand why our technological creations can be biased, we must first admit that we ourselves are biased, profoundly, systematically, and often unconsciously.

Shortcuts for survival

Imagine our distant ancestors in the savanna. A dark shape moves in the tall grass. Two options: meticulously analyze the size, speed, color of the shape to determine whether it’s a lion or a simple rock stirred by the wind; or instantly presume danger and flee. Those who survived and passed on their genes to us are those who chose the second option. They used a mental shortcut: dark moving shape = potential danger.

This shortcut is the very essence of a cognitive bias. It’s not a manufacturing defect, but an optimization strategy inherited from millions of years of evolution. Facing an infinitely complex world and a constant deluge of information, our brain has developed mechanisms to think fast, decide fast, and save precious energy. Psychologists Daniel Kahneman and Amos Tversky, pioneers in this field, described these mechanisms as “judgment heuristics.” They are simple rules, “rules of thumb” that work marvelously well in most everyday situations. The problem is that they can lead us to predictable and systematic errors of judgment in more complex contexts.

A few portraits of our inner demons

A bias is neither an opinion, nor an ideology, nor a voluntary error. It’s a reflex of thought, a natural slope that our mind takes without us even realizing it. Here are a few of the most famous:

  • Confirmation bias: This is probably the most powerful of all. We have a natural tendency to seek, interpret, and remember information that confirms our pre-existing beliefs, and to ignore or discredit that which contradicts them. Imagine you believe that black cats bring bad luck. Every time something negative happens to you after a black cat crosses your path, your brain registers: “Aha! More proof!” But all the times a black cat crosses your path and nothing happens? Your brain forgets them. And all the misfortunes that occur without any black cat? They don’t count in the equation. It’s not that you’re dishonest; it’s that your brain is economizing by retaining only what confirms what it already knows.
  • Anchoring bias: The first information we receive on a subject (the “anchor”) disproportionately influences our subsequent judgments, even if this information is arbitrary. The first price displayed during a negotiation, the first number cited in a meeting, the first impression a person leaves us with… all this casts an anchor in our mind that will be difficult to lift.
  • Representativeness bias: We tend to judge a situation or a person based on stereotypes or resemblances to known cases, rather than on real statistical analysis. If we meet a shy person wearing glasses who loves reading, our brain will be tempted to conclude that this is a librarian rather than a farmer, even though there are statistically far more farmers than librarians (a short worked example follows this list).
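
To see how heavily base rates ought to weigh against the stereotype, here is a minimal sketch of the librarian-versus-farmer judgment as a Bayes’ rule calculation. All the numbers (how common each profession is, how well each fits the profile) are illustrative assumptions, not real statistics.

```python
# Representativeness bias vs. base rates: a toy Bayesian check.
# All numbers are illustrative assumptions, not real statistics.

p_librarian = 0.02            # assumed base rate: librarians are rare
p_farmer = 0.98               # assumed base rate: farmers are common

# Assumed likelihood that each profession fits the "shy reader" profile:
p_profile_given_librarian = 0.80
p_profile_given_farmer = 0.10

# Bayes' rule: P(librarian | profile)
evidence = (p_profile_given_librarian * p_librarian
            + p_profile_given_farmer * p_farmer)
posterior = p_profile_given_librarian * p_librarian / evidence

print(f"P(librarian | profile) = {posterior:.1%}")
# ~14%: even a profile eight times more typical of librarians cannot
# overcome the base rate -- the shy reader is still probably a farmer.
```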

These biases, and dozens of others, are not the preserve of simple minds. They affect everyone, from the ordinary citizen to the seasoned scientist, from the judge to the doctor. They are the default operating system of our brain. Understanding this is not resignation, but taking the first step to free ourselves from them. It’s lighting a lamp in the corners of our own thought to flush out the automatisms. It is this same light that we must then turn toward science, and then toward our own creations.

The noble error, engine of knowledge

If biases are the undercurrents of our thought, error is the wave that sometimes washes up on the shore of reality. It is the visible event, the failed act, the prediction that doesn’t come true. And while our first reflex is to curse it, closer observation reveals that it is one of the most powerful engines of human progress. Knowledge is not built by stacking truths, but by eliminating errors.

The obstacle as starting point

The philosopher of science Gaston Bachelard had this brilliant formulation: “knowledge of reality is a light that always casts shadows somewhere.” For him, we never begin to learn from nothing. We always begin against prior knowledge, against the prejudices, images, and intuitions that form an “epistemological obstacle.” The mind that approaches science is never young; it is even “very old, for it has the age of its prejudices.”

The first error, according to Bachelard, is therefore not an accident along the way, but the obligatory starting point. Alchemy was not simple foolishness, it was a necessary obstacle to the emergence of chemistry. The belief that the Sun revolves around the Earth was not stupidity, but a powerful intuition, an obstacle that centuries of observations and calculations had to laboriously overcome. Learning is therefore correcting. Thinking is saying no to one’s own initial thoughts. Error is not a void to be filled, but an “overflow” to be deconstructed. It is fertile because it forces us to think against ourselves, to break our most comfortable certainties.

Science, or the art of seeking to be wrong

This dynamic reaches its apogee with the work of Karl Popper, one of the great philosophers of science of the 20th century. For Popper, what distinguishes a scientific theory from a simple belief is not its ability to prove itself true (an impossible task, he would say), but its ability to expose itself to refutation. A theory is scientific if and only if it can be tested, challenged, potentially falsified. It’s a radical reversal: science does not advance by accumulating confirmations, but by surviving attempts at refutation.

Take the famous example of the swans. For centuries, Europeans observed only white swans. “All swans are white” seemed an evident truth. But a single observation – the discovery of a black swan in Australia – was enough to refute the theory. Popper tells us: a million white swans will never prove that all swans are white, but a single black swan proves that they are not all white. Science therefore progresses by bold conjectures followed by rigorous attempts at refutation. The researcher doesn’t seek to be right; they seek to expose themselves to being wrong. And if their theory survives, it temporarily gains legitimacy, not because it is “true” (we can never say that with certainty), but because it has not yet been refuted.

This Popperian vision is not a manifesto of skepticism or nihilism. It’s a philosophy of intellectual humility and organized audacity. It tells us that error is not the opposite of knowledge; it is its most precious raw material. Every refutation is a step forward, every failed experiment is a lesson, every theory overthrown is a victory. Darwin was wrong on certain mechanisms of evolution, Newton on the exact nature of gravity, and yet their ideas propelled science forward. Error is the path, not the obstacle.

When machines inherit our shadows: AI and its biases

Now that we’ve surveyed our inner landscape, we can turn to artificial intelligence with clearer eyes. For AI is not born from nothingness. It is trained on our data, shaped by our choices, designed by engineers who carry their own cognitive biases. The result? A technology that, far from being the cold and objective oracle we sometimes imagine, is actually a formidable amplifier of our own biases. The mirror doesn’t lie, but it can distort. And when the mirror is used to make decisions that affect human lives—hiring, justice, access to credit—the distortions can become fractures.

Concrete scandals: when algorithms go off the rails

Recent history is already full of alarming examples. In 2018, Reuters revealed that Amazon had developed a secret recruitment algorithm to automatically sort resumes. The system, trained on the company’s hiring data from the previous ten years (a period dominated by male hires in tech), learned to systematically discriminate against women. Any resume mentioning the word “women’s” (as in “women’s chess club captain”) or coming from all-women’s colleges was automatically downgraded. Amazon had to scrap the project. The lesson is clear: the algorithm didn’t invent sexism. It learned it, by faithfully reproducing historical patterns.

Another striking case, documented by ProPublica in 2016, concerns the COMPAS algorithm, used in several American states to assess the risk of recidivism of defendants and guide judges’ decisions. The investigation showed that the algorithm systematically overestimated the risk for black defendants and underestimated it for white defendants, even when controlling for other factors. Here again, the machine didn’t create racism. It absorbed and automated the biases already present in the judicial and police data on which it was trained. The difference? A human judge can at least, in principle, be aware of their biases and try to correct them. An opaque algorithm makes its discriminations with the appearance of scientific objectivity.

Finally, research by Joy Buolamwini and Timnit Gebru revealed that commercial facial analysis systems misclassified the gender of darker-skinned women with error rates of up to 34.7%, versus at most 0.8% for lighter-skinned men. The explanation? The datasets used to train these systems massively overrepresented white faces and underrepresented others. The consequence? Technology that “sees” some people better than others, literally.

The mechanisms of algorithmic bias

These failures are not accidents. They are the logical result of how machine learning systems work. Let’s explain briefly. An AI that “learns” doesn’t reason like a human. It detects statistical patterns in massive quantities of data. If the data reflects a biased society (and which society is not?), the AI will reproduce these biases. If the data is incomplete (for example, predominantly white faces in a facial recognition database), the AI will be less competent outside this framework. And if the designers of the algorithm, unconsciously influenced by their own cognitive biases, don’t test their system on diverse enough populations, the problem perpetuates itself.
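
A minimal sketch can make this mechanism tangible. The toy model below is trained on synthetic “hiring” data whose historical labels favor one group; it never sees the group attribute itself, only an innocuous proxy feature correlated with it (think of a telltale word on a resume), yet it reproduces the disparity. Everything here is invented for illustration and assumes scikit-learn is available; it is a sketch of the mechanism, not of any real system.

```python
# A toy demonstration that a model trained on biased labels reproduces
# the bias through a proxy feature. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(size=n)                # the only legitimate signal
group = rng.integers(0, 2, size=n)        # demographic attribute (0 or 1)
proxy = group + 0.1 * rng.normal(size=n)  # innocuous feature correlated
                                          # with the group (a telltale word)

# Biased historical labels: hiring depended on skill AND on group.
hired = (skill + 1.5 * (group == 0) + rng.normal(size=n) > 1.0).astype(int)

# The model never sees `group` directly -- only skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"group {g}: predicted hiring rate = {rate:.1%}")
# Equally skilled groups get very different predicted rates: the model
# has rediscovered the historical bias through the proxy feature.
```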

It’s crucial to understand: the algorithm is not neutral. It’s the crystallization of human choices at every stage: the choice of data, the choice of the objective to optimize (maximize profit? minimize errors? but which errors and for whom?), the choice of performance criteria. Each of these choices embeds values, priorities, and yes, biases.

What’s particularly insidious is that AI can give the appearance of objectivity. Numbers, statistics, algorithms… all of this dresses decisions in a scientistic cloak that is hard to challenge. When a human recruiter rejects a resume, we can appeal, discuss, understand. When an algorithm does it, the decision seems implacable, neutral, as if emanating from a higher mathematical logic. It’s an illusion. Behind every algorithm, there are humans, and therefore fallibility.

Confronting our reflection: what to do?

So, what do we do with this sobering observation? Do we give up on AI, seeing it as a Pandora’s box of automated discrimination? Or do we naively embrace it, trusting that engineers will “fix” the biases like debugging code? Neither of these extremes is satisfactory. The truth, as often, is more nuanced and requires effort.

1. Technical solutions: necessary but not sufficient

The first level of response is, naturally, technical. Research in “Fairness in AI” is booming. Computer scientists are developing methods to detect and mitigate biases. For example:

  • Diversifying training data: If an algorithm performs poorly on dark-skinned faces, the solution is to train it on a more diverse dataset. This seems obvious, but it requires awareness, effort, and resources.
  • Fairness constraints: It’s possible to integrate “fairness” criteria into the very objective that the algorithm seeks to optimize. Instead of just maximizing global accuracy, we can force the model to have similar performance across different demographic groups (a minimal sketch of such a per-group check follows this list).
  • Explainability (XAI – Explainable AI): One major problem with modern AI systems, especially deep neural networks, is their opacity. They are “black boxes”: we put data in, decisions come out, but we don’t always understand why. The explainability field seeks to make AI decisions more transparent and interpretable. If we understand why an algorithm rejects a resume or grants a loan, we can spot unfair biases.
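
To make “similar performance across different demographic groups” concrete, here is a minimal sketch of a per-group audit on toy predictions: it compares selection rates (the quantity behind demographic parity) and false positive rates (one ingredient of equalized odds). The arrays are invented; in practice such a check would run on a model’s real validation predictions.

```python
# A minimal per-group fairness audit on toy predictions.
# The arrays below are invented for illustration.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0])  # actual outcomes
y_pred = np.array([1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0])  # model decisions
group  = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])  # demographic group

for g in (0, 1):
    m = group == g
    selection_rate = y_pred[m].mean()       # behind demographic parity
    fpr = y_pred[m & (y_true == 0)].mean()  # false positive rate
    print(f"group {g}: selection rate {selection_rate:.2f}, FPR {fpr:.2f}")

# Here both metrics differ across groups; worse, the two criteria can
# conflict, so deciding which one to enforce is precisely the political
# question raised below.
```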

These approaches are essential. However, they are not magic bullets. Defining what “fairness” means is already a philosophical and political problem, not a purely technical one. Should an algorithm be “blind” to protected categories (gender, ethnicity) and completely ignore them? Or should it, on the contrary, take them into account to correct historical inequalities (a form of “affirmative action” for algorithms)? These are ethical and societal questions that code alone cannot answer. Moreover, experience shows that technical fixes for one bias often let new, unforeseen biases emerge.

2. Regulation: a necessary but limited safety net

Faced with the risks, legislators have begun to react. The most striking example is the EU AI Act. This regulation, a pioneer at the global level, classifies AI systems according to their level of risk and imposes strict obligations (transparency, data quality, human oversight) on systems deemed “high risk” (justice, HR, health…). It’s an essential safety net to protect citizens and hold companies accountable. However, regulation has its limits. It often lags behind a technology evolving at breakneck speed. It can be circumvented, and above all, it cannot foresee everything. A law cannot replace discernment. It sets a framework, but does not create a culture. It is a necessary condition, but not a sufficient one.

3. Education: the true lever of sovereignty

The most powerful, most lasting solution is found neither in code, nor in law, but in our minds. It’s education, in the broadest sense of the term.

  • Understanding to avoid suffering: The first urgency is to demystify AI. It’s not necessary to be an engineer to understand the main principles. As Richard Feynman might have done, we can simply explain what a language model is: it’s not a consciousness, it’s an extremely sophisticated “stochastic parrot.” Imagine a parrot that has read the entirety of the Internet and that, when you start a sentence, guesses the statistically most probable next word. If you say “The sky is…”, it will probably answer “blue,” because it has seen this combination millions of times. It has no idea what the sky is, nor what blue is. It just recognizes patterns. That’s why it can “hallucinate” and invent facts with disconcerting confidence: it doesn’t know it’s lying, it’s simply generating the most probable sequence of words. Understanding this already guards against anthropomorphism and keeps us from attributing to the machine intentions it doesn’t have (a toy sketch after this list makes the principle concrete).
  • Developing digital hygiene: Facing a text, image, or recommendation generated by AI, we must activate the same reflexes as the Popperian scientist: methodical doubt. Where does this information come from? Can I verify it with an independent source? What are the potential biases of this system? It’s about acquiring critical thinking 2.0, a form of informational hygiene adapted to a world where content is no longer exclusively produced by humans. Never trust blindly, never delegate your thinking.
  • Training for the future: Finally, the battle will be won in the long term through training. It’s crucial to massively integrate ethics, social sciences, and philosophy courses into engineering and developer curricula, so they become aware of the societal impact of their creations. And in parallel, all citizens must be acculturated, from school onward, to digital issues, not just as users, but as enlightened citizens capable of understanding and questioning the tools that shape their world.
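
To make the parrot image concrete, here is a toy next-word predictor built from nothing but bigram counts over a tiny invented corpus. Real language models are neural networks trained on vast corpora of sub-word tokens, but the guiding principle, continuing with what is statistically frequent rather than with what is understood, is the same.

```python
# A toy "stochastic parrot": predict the next word purely from bigram
# counts. The corpus is invented; real LLMs are neural networks over
# sub-word tokens, but the core principle is the same.
from collections import Counter, defaultdict

corpus = (
    "the sky is blue . the sky is blue . the sky is cloudy . "
    "the sea is blue . the grass is green ."
).split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def next_word(word):
    """Return the most frequent continuation seen in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(next_word("is"))  # -> 'blue' (seen 3 times, vs. once for others)
# The parrot has no idea what a sky is; it only knows what usually follows.
```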

Toward an ethics of enlightened imperfection

We began this journey by celebrating error, that companion so human, so fertile. We distinguished it from its malevolent caricature, deception, which erodes trust and sabotages debate. We saw how our own biases, these mental shortcuts inherited from our past, find new life, amplified, in the circuits of artificial intelligence. The mirror that AI holds up to us is not flattering. It reflects our own contradictions, our past injustices, and our blind spots.

Facing this reflection, the temptation is great to fall into two traps: naive techno-solutionism, which believes that a few algorithmic adjustments will suffice to “repair” the machine, or technophobic rejection, which sees AI as a demonic threat to be unplugged. Both extremes miss the target. The real issue is not in the machine. It is within us.

Technical solutions are bandages, regulations are guardrails. They are useful, necessary, but they will not cure us of the root of the evil: our own lack of lucidity. The true revolution that AI imposes on us is not technological, it is humanistic. It summons us to become better humans, more aware of our own flaws, more critical of all forms of authority, including the frozen and peremptory authority of the algorithm.

The challenge, therefore, is not to build perfect machines, free from all bias – a quest as vain as that of the alchemist seeking the philosopher’s stone. The challenge is to use AI’s mirror to probe ourselves. It’s to develop an “ethics of enlightened imperfection”: accepting our fallibility as a strength, cultivating doubt as a virtue, and placing education and understanding above obedience and automation.

Artificial intelligence will be neither our savior nor our gravedigger. It will be what we decide to make of it. A tool of alienation that automates our worst tendencies, or an instrument of knowledge that, by revealing our biases, helps us overcome them. More than ever, the future is not to be written. It is to be deconstructed, corrected, learned. It is in the image of our own mind: a magnificent draft, always being rewritten.


References

Bachelard, G. (1938). The Formation of the Scientific Mind. Vrin.

Popper, K. (1934). The Logic of Scientific Discovery. Payot.

Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.

O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.

Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81, 77-91. http://proceedings.mlr.press/v81/buolamwini18a.html

European Commission. (2021). Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). https://eur-lex.europa.eu/legal-content/FR/TXT/?uri=CELEX:52021PC0206

The Alan Turing Institute. Explainable AI (XAI). https://www.turing.ac.uk/research/research-projects/explainable-ai-xai