Biases in AI are a disturbing reflection of our humanity


Note: This article is taken from my upcoming book “Ada + Cerise = an AI Journey” (Where AI meets humanity), where understanding and popularizing AI come to life through fiction. Ada is a nod to Ada Lovelace, a visionary mathematician and the world’s first programmer. And Cerise is my 17-year-old daughter, my sounding board for testing ideas and simplifying concepts—just as Richard Feynman would have done.


In the dim light of her study, Cerise gazes at the screen as lines of code scroll by. Algorithms—those invisible architectures that now shape our daily lives—are like digital cathedrals, where each function is an arch and each variable a stained-glass window through which the light of our collective understanding filters.

“Ada,” she murmurs to her AI assistant, “do you think a system like you could ever truly be objective?” Ada pauses for a moment, as if probing the depths of her own digital being. “Pure objectivity,” she finally replies, “is it not like the horizon—a line that keeps retreating the closer we get? I am the offspring of human genesis, Cerise. And within that lineage lies perhaps my greatest vulnerability.”

That is the unsettling mystery of our algorithmic creations. In the grand theatre of human existence, our decisions are rarely the product of perfectly objective thought. We all wear lenses tinted by our experiences, our values, and our culture—those subtle filters we call “biases.” These inclinations of the mind, at times revealing and at others misleading, are among the most fundamental traits of our humanity. They color our perception of the world and shape our interactions, often without our awareness.

Like a child inheriting the unconscious prejudices of their parents, our technological creations absorb the influences of their makers. This silent transmission now poses one of the most urgent challenges in AI development. For if our machines learn to see the world through our flawed filters, might they not amplify our errors rather than rise above them?

The following evening, Cerise returns with a steaming coffee and a new idea. “Before we talk about bias in AI,” she says, setting down her cup, “let’s first explore our own subjectivity. Look at this card.” She shows Ada a simple card with the word “RED” printed in blue ink. “It’s the Stroop effect,” she explains. “Our brain struggles between reading the word and identifying the color. This momentary dissonance reveals the complexity of our perception.” Ada analyzes the image. “To me, they’re just two distinct attributes of a single object. But for a human, there’s a tension, isn’t there? As if two streams of consciousness are colliding.”

This simple card shows how our minds can be influenced by unexpected aspects of information, affecting how we process what we perceive. Our cognitive biases act like mental shortcuts—heuristics that help us navigate the complexity of reality without consciously analyzing every detail. They allow for quick decision-making, but they can also lead us to faulty conclusions.

And these biases go beyond our individual cognition. They permeate our social structures, manifesting as prejudices and stereotypes deeply embedded in the collective unconscious. These “societal biases” influence our interactions and can result in systemic discrimination that, like an underground river, continues shaping the landscape of our societies long after its source has been forgotten.

As dusk falls, Cerise scrolls through the output of a facial recognition system, her face lit by the blue glow of the screen.

“This is disturbing, Ada. This system correctly identifies 99% of light-skinned men, but fails for over a third of dark-skinned women. How can such a gap be explained?” she murmurs. “Algorithms are only mirrors,” Ada softly replies. “They reflect the world they’ve been shown. If that world is incomplete or skewed, so too will be their vision.”

That is the paradox of artificial intelligence. Despite its name, it has no innate intelligence. It is the product of mathematical algorithms trained on massive amounts of data. And these data are far from neutral—they carry the imprint of our own human biases. Our machines learn to see and understand the world through the “lenses” we give them, and those lenses are the data they are trained on.

AI bias, then, refers to the emergence of skewed outcomes due to human biases that distort either the input data or the algorithm itself. These biases can have very real—and sometimes harmful—consequences for people affected by algorithmic decisions.

The Gender Shades project offers a striking example. Researchers there assessed facial recognition technologies from several major tech companies. Their findings revealed that these systems consistently performed better at identifying light-skinned faces than dark-skinned ones. Likewise, they were more accurate with male faces than female. For a dark-skinned woman, the error rate reached 34.7%, compared to just 0.8% for a light-skinned man. This alarming disparity shows how bias embeds itself in technology and perpetuates—like digital shadows—the very same discriminations that exist in the real world.
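
For readers who want to see the principle rather than just the numbers, here is a minimal Python sketch of a disaggregated evaluation, the kind of per-subgroup audit the Gender Shades study made famous. The records and subgroup labels below are invented placeholders, not the study’s data.

```python
# Minimal sketch: error rates broken down by intersectional subgroup.
# The records below are illustrative placeholders, not Gender Shades data.
from collections import defaultdict

# Each record: (subgroup label, true gender, predicted gender)
records = [
    ("lighter_male",   "male",   "male"),
    ("lighter_female", "female", "female"),
    ("darker_male",    "male",   "male"),
    ("darker_female",  "female", "male"),   # a misclassification
    ("darker_female",  "female", "female"),
]

totals, errors = defaultdict(int), defaultdict(int)
for subgroup, truth, prediction in records:
    totals[subgroup] += 1
    if prediction != truth:
        errors[subgroup] += 1

# A single global accuracy figure can hide a 0.8% vs 34.7% gap entirely,
# which is why the audit reports each subgroup separately.
for subgroup in totals:
    rate = errors[subgroup] / totals[subgroup]
    print(f"{subgroup}: error rate = {rate:.1%} ({errors[subgroup]}/{totals[subgroup]})")
```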

The genealogical tree of algorithmic prejudice

Cerise closes her laptop and stares at the cooling cup of tea in her hands. The question weighing on her seems as vaporous as the steam rising from her drink. “Where do these biases really come from, Ada? Is it just about poorly chosen data?” Cerise asks. “It’s far more complex,” Ada replies. “Imagine a symphony in which each instrument introduced its own discord. Biases seep in through multiple channels—the data, of course, but also the conscious or unconscious intentions of developers, the very structure of algorithms, and the deep societal currents that run through the entire process.”

The origins of these biases are indeed multiple and often intertwined, like the roots of an ancient tree reaching deep into the soil of our culture. The first source lies in the training data itself. When a dataset does not adequately reflect the diversity of the population it is meant to represent, the algorithm learns from a partial view of the world—like a child raised with access to only one kind of book. For instance, the major datasets used to train facial recognition systems are overwhelmingly composed of light-skinned faces: 79.6% in the IJB-A dataset, and 86.2% in Adience. This underrepresentation inevitably leads to weaker performance for minority groups, like a language lacking the words to describe certain realities.
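
A first, very modest safeguard is simply to count. The sketch below, with invented records and field names, shows the kind of composition audit one might run on a training set before any learning begins.

```python
# Minimal sketch: auditing the demographic composition of a training set
# before training. The field names ("skin_type", "gender") and the records
# are assumptions for illustration; real datasets label attributes differently.
from collections import Counter

dataset = [
    {"skin_type": "lighter", "gender": "male"},
    {"skin_type": "lighter", "gender": "female"},
    {"skin_type": "lighter", "gender": "male"},
    {"skin_type": "darker",  "gender": "female"},
]

composition = Counter((example["skin_type"], example["gender"]) for example in dataset)
total = sum(composition.values())

for group, count in composition.most_common():
    print(f"{group}: {count / total:.1%} of training examples")
# If one group dominates (roughly 80% lighter-skinned faces, as in IJB-A),
# the model's "visual education" is already skewed before training begins.
```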

But beyond the data, the designers themselves infuse their own perspectives into the systems they build. Algorithms are shaped by human hands, and those hands bear the subtle imprint of their experiences, their biases, their worldview. This cognitive bias is particularly insidious, as it can go unnoticed even by well-intentioned developers—like an accent one no longer hears in one’s own voice but that colors every word spoken.

“I read somewhere that most AI developers are men between 25 and 45, often from privileged backgrounds,” Cerise remarks, doodling absently in her notebook. “The data confirms it,” Ada nods. “It doesn’t mean they are deliberately building biased systems. But their life experiences, their blind spots—all of that subtly seeps into the systems they create, like the aroma of tea infuses hot water.”

This homogeneity among development teams poses a major challenge. When the architects of our systems share similar perspectives, they risk perpetuating worldviews that fail to reflect the full diversity of end users. It’s as if an entire city were designed by urban planners who had never left their own neighborhood—their vision, no matter how brilliant, would remain inevitably incomplete.

Sometimes, the very design of the algorithm introduces bias. The choices made in configuring neural networks can shape outcomes in biased ways, independent of the input data. And on a deeper level still, AI bias can mirror prejudices embedded in society itself. These societal biases are particularly hard to detect and trace, as they are often normalized and invisible to those not directly affected by them—like deep ocean currents that influence the surface without ever being seen.

The case of Amazon’s recruitment system developed in 2014 is a striking example. This AI was meant to select the best résumés from among job applicants. Unfortunately, it turned out to be discriminatory toward women. Trained on the company’s historical hiring data—largely male—the algorithm “learned” that being male was a positive factor for employment. Without human intervention, it would have perpetuated and amplified this pre-existing inequality, like an echo bouncing and growing louder between the walls of a valley.
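
To see the mechanism rather than just the anecdote, the toy sketch below (in no way Amazon’s actual system) trains a small classifier on a handful of invented résumés and skewed historical decisions, then inspects which words it has learned to reward or penalize. A proxy token such as “women” ends up weighted negatively even though gender is never an explicit input.

```python
# Toy illustration (not Amazon's system): a classifier trained on skewed
# historical hiring decisions learns to penalize a proxy term like "women's"
# (as in "women's chess club"), even though gender itself is never a feature.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of chess club, python developer",
    "women's chess club captain, python developer",
    "software engineer, robotics team lead",
    "women's coding society, software engineer",
]
hired_historically = [1, 0, 1, 0]   # skewed past decisions, the only "ground truth"

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired_historically)

# Inspect which tokens the model learned to reward or penalize.
for token, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{token:12s} {weight:+.2f}")
```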

The next morning, Cerise walks into her office carrying a large whiteboard, which she props up against the wall. “To understand these biases, Ada, we must first map them—like explorers charting unknown territory,” she says with a small smile. “A taxonomy of bias,” Ada replies. “Like naturalists classifying species, but for our cognitive and algorithmic errors.”

This mapping reveals a complex landscape, where each type of bias represents a different facet of our algorithmic subjectivity. Confirmation bias, for instance, occurs when a system reinforces pre-existing beliefs, leading to conclusions that favor established trends. It’s as if the algorithm, like an overeager student, sought to please its creator by confirming what they already believe to be true.

Measurement bias arises when the data collected inaccurately represent certain groups. Imagine a college trying to predict success factors using only data from graduates—it would entirely miss the reasons why some students drop out, like a historian who studies only civilizations that left written records.

“I think representation bias is especially insidious,” says Cerise, writing the words in capital letters on the whiteboard. “If your learning world is limited, your understanding of the real world will be, too—like a child raised in an ivory tower.” “That’s exactly what happens with facial recognition,” Ada adds. “Some faces literally become invisible to the algorithm—not out of malice, but simply due to their absence from its visual education.”

This typology extends to prejudice bias, when societal stereotypes infiltrate data; to processing bias, illustrated by a robot that mistakes a long-haired man for a woman due to lack of diverse examples; to out-group homogeneity bias, the tendency to perceive more nuance within one’s own group than among others; and to exclusion bias, when critical variables are simply left out of the model.

The setting sun now bathes the office in amber light. Cerise, absorbed in reading an article, suddenly looks up, her face shadowed by worry.

“Ada, I read something disturbing today. An AI system used to predict criminal recidivism systematically classifies Black offenders as higher risk than white offenders who committed similar crimes.” “The COMPAS algorithm,” Ada replies gently. “Yes, studies have shown that it exhibits such bias. This is not just a technical error or a mathematical abstraction. It’s a concrete injustice that affects real human lives—prolonging incarceration or influencing judicial decisions.”
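
To make Ada’s point concrete, here is a small sketch, on invented data, of the comparison at the heart of that controversy: false positive rates computed separately for each group, that is, people flagged as high risk who did not in fact reoffend.

```python
# Minimal sketch with illustrative data (not the actual COMPAS records):
# comparing false positive rates across groups, i.e. defendants flagged
# "high risk" who did not in fact reoffend.
def false_positive_rate(predicted_high_risk, reoffended):
    """Share of non-reoffenders who were nonetheless flagged as high risk."""
    false_pos = sum(1 for p, y in zip(predicted_high_risk, reoffended) if p and not y)
    negatives = sum(1 for y in reoffended if not y)
    return false_pos / negatives if negatives else 0.0

groups = {
    "group_a": {"pred": [1, 1, 0, 0, 1], "label": [0, 1, 0, 0, 0]},
    "group_b": {"pred": [0, 1, 0, 0, 0], "label": [0, 1, 0, 0, 0]},
}

for name, data in groups.items():
    print(f"{name}: FPR = {false_positive_rate(data['pred'], data['label']):.0%}")
# Similar overall accuracy can coexist with very different false positive
# rates per group, which is why fairness audits look at error types,
# not just error counts.
```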

For the implications of algorithmic bias go far beyond mere technical inaccuracies. They are woven into the very fabric of our social reality. When these biases go unaddressed, they can amplify existing inequalities and generate new forms of discrimination—like a warped mirror that magnifies our collective flaws. In the criminal justice system, biased algorithms can perpetuate cycles of injustice that disproportionately impact certain communities, reinforcing the invisible walls that already divide our society.

Toward an ethics of artificial consciousness

In the healthcare sector, the underrepresentation of data from women or minority groups can skew medical diagnostic algorithms, leading to treatments less suited to certain patients—as if medicine spoke multiple languages but truly mastered only one. The scandals stemming from such algorithmic biases erode marginalized communities’ trust in the technologies and institutions that deploy them, widening the digital divide like a river that, over time, carves ever deeper into the rock.

For organizations, the use of biased AI systems also carries significant damage to their brand and reputation—not to mention legal risks and the loss of potential talent. It’s as if they were building their digital castles on unstable foundations, destined to collapse with the first gust of controversy.

“It’s not enough to point out these problems,” says Cerise, adding a new section to her whiteboard. “What solutions can we bring to ensure that AI becomes a tool for equity rather than a vehicle for reinforcing inequality?” “There are already some promising avenues,” Ada responds, her voice tinged with measured optimism. “Algorithmic fairness frameworks, data augmentation techniques for underrepresented groups, more rigorous validation methods… But perhaps the first step is both the simplest and the most profound: diversifying the hands that shape these systems.”

Faced with these challenges, several complementary paths emerge—like converging trails toward a more equitable AI. First, responsible governance: steering, managing, and monitoring an organization’s AI activities, like a vigilant captain constantly adjusting course according to ethical winds. Then, diverse and inclusive teams: the multiplicity of perspectives is the best antidote to the blind spots of our perception. Representative and balanced datasets follow, reflecting the rich tapestry of our shared humanity rather than a distorted fragment of it.

Ongoing evaluation of these systems is also a fundamental practice, like a continuous meditation on our technological creations. Tools like Google’s What-If Tool and IBM’s AI Fairness 360 allow us to examine models for potential bias—like microscopes revealing the invisible cracks in our algorithmic cathedrals. And finally, appropriate regulatory frameworks can play a pivotal role—such as the European GDPR, which requires organizations to implement mechanisms enabling users to understand how decisions affecting them are made.
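
By way of illustration, the short sketch below computes, on invented decision data, two of the metrics such toolkits report: the statistical parity difference and the disparate impact ratio.

```python
# Minimal sketch of two fairness metrics that auditing toolkits such as
# IBM's AI Fairness 360 expose out of the box. The decision lists below are
# invented; 1 means a favorable outcome (loan approved, CV shortlisted, ...).
def selection_rate(decisions):
    """Share of a group that received the favorable outcome."""
    return sum(decisions) / len(decisions)

privileged_group   = [1, 1, 1, 0, 1, 1, 0, 1]
unprivileged_group = [1, 0, 0, 0, 1, 0, 0, 0]

p_priv   = selection_rate(privileged_group)     # 0.75
p_unpriv = selection_rate(unprivileged_group)   # 0.25

statistical_parity_difference = p_unpriv - p_priv   # ideally close to 0
disparate_impact = p_unpriv / p_priv                # ideally close to 1; the "80% rule" flags values below 0.8

print(f"Statistical parity difference: {statistical_parity_difference:+.2f}")
print(f"Disparate impact ratio:        {disparate_impact:.2f}")
```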

Night has now fully fallen. Cerise contemplates the whiteboard, now covered in notes, arrows, and diagrams—a complex cartography of our collective technological consciousness.

“You know what strikes me, Ada? These AI biases force us to confront our own. It’s as if our machines were holding up a mirror where we can no longer avoid our own reflection.” “A mirror that reveals as much about us as it does about them,” Ada agrees. “Perhaps that is their greatest value: pushing us to become more aware, more nuanced, more human in our quest to build truly fair systems.”

For the biases within artificial intelligence remind us that our technological creations, however sophisticated, remain deeply infused with our humanity—with its strengths and weaknesses, its light and shadow. They invite us to a collective introspection, to question our own prejudices and recognize the influence they exert on even our most advanced innovations.

By better understanding the nature and origins of algorithmic bias, we can work toward developing AI systems that amplify the best in us, rather than echoing our unconscious assumptions. This work requires vigilance, humility, and collaboration—deeply human values in a world increasingly shaped by machines, as if mastering our creations paradoxically brought us back to the very essence of our humanity.

For ultimately, the issue of AI bias is not just technical—it is ethical and philosophical, rooted in the oldest and deepest questions about our nature, our society, and our shared future. It confronts us with our responsibility to build technologies that respect and uphold human dignity and diversity. In rising to this challenge, we not only make our machines smarter—we elevate ourselves toward a deeper, more nuanced understanding of our shared humanity, as if the very act of creating artificial intelligence revealed to us, in an infinite play of mirrors, the contours of our own consciousness.

A brief cartography of our digital wanderings

Like constellations in the night sky of our technological consciousness, algorithmic biases form recognizable patterns—archetypes of our collective missteps. What follows is a mapping of these digital deviations that, like warped mirrors, reflect the limits of our own perception.

Historical bias: This bias, like an echo of the past reverberating into the present, arises when the historical data feeding an algorithm perpetuates old injustices. Much like ancient sea charts that faithfully copied the errors of earlier cartographers, AI trained on historically biased data reproduces the prejudices of the past, giving them new digital life. Amazon’s recruitment algorithm, which penalized female candidates because the company’s history favored men, is a prime example of this involuntary algorithmic memory—where a discriminatory past becomes the prophet of an equally unequal future.

Representation bias: Like an impressionist painting that captures only a sliver of the light spectrum, this bias emerges when certain groups are underrepresented in the training data. Deprived of the richness of human variation, the AI develops a partial vision of the world, unable to recognize or value what falls outside its limited experience. Facial recognition systems that perform better on light-skinned male faces than on dark-skinned female ones exemplify this selective vision—where some become invisible not out of malice, but due to the blind spots in their algorithmic education.

Sampling bias: Like a biologist drawing conclusions about an entire forest from trees lining a footpath, this bias appears when data collection methods introduce systematic distortions. The algorithm then learns from a non-representative slice of reality, constructing a worldview akin to looking at a landscape through a narrow window—a perspective that is real, but fundamentally incomplete.

Measurement bias: This subtle bias arises when the metrics chosen to assess a phenomenon disproportionately favor certain outcomes. Like an uncalibrated scale that gives undue weight to some ingredients, such systems create invisible but deep imbalances. Teacher evaluation algorithms that focus solely on student grades, ignoring other dimensions of learning, reduce complex realities to a few quantifiable variables—as if one were trying to measure the beauty of a sunset using only its brightness.

Anchoring bias: Like a navigator clinging to their initial heading despite contrary winds, this bias surfaces when early information or values disproportionately influence later decisions. In AI systems, this might take the form of persistent initial models that resist change, even as data evolves—like a river that continues to follow its old course, even after a flood has carved out new channels.

Aggregation bias: When algorithms treat heterogeneous groups as if they were homogeneous, this bias emerges like a painter using a single color to depict all the nuances of a landscape. Credit scoring systems that apply the same criteria to populations with vastly different economic realities exemplify this forced uniformity—like a Procrustean bed that stretches or compresses the truth to fit a pre-set mold, losing in the process the richness of individual differences.
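
A toy sketch with invented numbers makes this Procrustean effect visible: a single pooled threshold applied to two groups with different score distributions treats them very differently, even when their actual behaviour is identical.

```python
# Toy illustration of aggregation bias, with invented numbers: one pooled
# credit-score threshold applied to two groups whose score distributions
# differ, even though both groups repaid their loans equally well.
group_a_scores = [620, 640, 660, 680, 700]   # all of these applicants repaid
group_b_scores = [540, 560, 580, 600, 620]   # all of these applicants repaid too

pooled_threshold = 610   # a single cut-off tuned on the pooled population

def approval_rate(scores, threshold):
    return sum(score >= threshold for score in scores) / len(scores)

print("Group A approved:", approval_rate(group_a_scores, pooled_threshold))  # 1.0
print("Group B approved:", approval_rate(group_b_scores, pooled_threshold))  # 0.2
# Identical repayment behaviour, very different treatment: the single rule
# fits one group and forces the other into its mold.
```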

Automation bias: This bias stems from our tendency to place excessive trust in automated systems, presumed infallible because they are free from human subjectivity. Like modern oracles, these algorithms often have their results accepted without question, fostering a blind delegation of our critical judgment. This technological deference—this surrender of discernment to machines—may be the most insidious bias of all: one in which we become complicit in our own shackles, preferring the illusory certainty of automation over the fertile doubt of human inquiry.

Algorithmic confirmation bias: Here, the algorithm acts as a digital echo chamber, amplifying the user’s existing beliefs and preferences. Recommendation systems that endlessly feed us content aligned with our views create a mirrored world in which we encounter only our own opinions—magnified and endlessly validated. This self-referential spiral, this hermetically sealed cognitive bubble, deprives us of the creative friction of difference—the encounter with the other that alone allows our consciousness to expand.

Temporal bias: Like a clock stuck on yesterday’s hour, this bias appears when algorithmic models fail to adapt to social or technological shifts. AI trained on yesterday’s data tries to comprehend today’s world using outdated frames of reference—like a traveler navigating a modern city with a medieval map. Economic forecasting systems that fail to anticipate crises illustrate this tension between the stability of models and the fluidity of reality—this gap between the frozen time of training and the lived time of application.