Note: This article is taken from my upcoming book “Ada + Cerise = Digital Journey” (Where AI meets humanity), where understanding and popularizing AI come to life through fiction. Ada is a nod to Ada Lovelace, a visionary mathematician and the world’s first programmer. And Cerise is my 17-year-old daughter, my sounding board for testing ideas and simplifying concepts—just as Richard Feynman would have done.

The evening light filtered through the blinds of Cerise’s Parisian apartment, casting dancing shadows across her desk. Her gaze was fixed on her computer screen, where a series of complex graphs were displayed. “Ada,” she called softly, “can you explain how you managed to predict my musical tastes so precisely?”
The gentle blue glow of Ada’s interface pulsed slightly before a soothing voice responded: “It’s fascinating that you ask this question, Cerise. In reality, it’s thanks to machine learning that I can understand your preferences. Imagine each piece of music as a unique constellation of stars, and each listening session as a new observation helping me map your personal musical universe.”
This response awakened Cerise’s curiosity. “A constellation of stars? How do you go from simple data to genuine understanding?”
The Context and Evolution of Machine Learning
“You see,” Ada continued, “it all began in the 1950s, when the first researchers envisioned machines capable of learning by themselves using algorithms. At the time, it was a dream that seemed to emerge straight from science fiction.”
But what exactly is an algorithm? It’s a sequence of precise instructions that the machine follows to solve a problem. These instructions allow systems to identify recurring patterns in datasets. Over the years, algorithms have evolved from simple instruction lists to sophisticated models capable of learning and improving their performance. The true emergence of this discipline can be explained by two key factors: the explosion of computational capabilities and the massive accumulation of digital data.
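To make this idea concrete, here is a minimal Python sketch (the messages, labels, and keyword are invented for illustration). The first function is a fixed list of instructions; the second adjusts a single parameter, a keyword-count threshold, from labeled examples, which is the seed of the learning behavior described above.

```python
# A hand-written rule: fixed instructions, no learning involved.
def is_spam_fixed_rule(message: str) -> bool:
    return "win money" in message.lower()

# A tiny "learning" algorithm: choose the keyword-count threshold
# that best separates the labeled examples it is shown.
def learn_threshold(messages, labels, keyword="free"):
    counts = [m.lower().count(keyword) for m in messages]
    best_threshold, best_accuracy = 0, 0.0
    for threshold in range(max(counts) + 1):
        predictions = [count > threshold for count in counts]
        accuracy = sum(p == l for p, l in zip(predictions, labels)) / len(labels)
        if accuracy > best_accuracy:
            best_threshold, best_accuracy = threshold, accuracy
    return best_threshold

# Hypothetical labeled examples: True = spam, False = not spam.
messages = ["free free offer now", "lunch tomorrow?", "free prize, free link", "meeting notes"]
labels = [True, False, True, False]
print(learn_threshold(messages, labels))  # the threshold is chosen from the data itself
```

The point is not the rule itself but where it comes from: feed the second function different examples and it will settle on a different threshold, without anyone rewriting its instructions.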
Learning Paradigms
Cerise settled more comfortably in her chair, intrigued. “And today? How do these learning systems really work?”
Ada displayed three distinct visualizations on the screen. “There are three main approaches,” she explained. “First, imagine a teacher guiding their student – that’s supervised learning. Then picture an explorer discovering hidden patterns – that’s unsupervised learning. And finally, think of a player learning from trial and error – that’s reinforcement learning.”
- Supervised Learning: Algorithms are trained using labeled data, meaning each learning example includes its answer. These labels act as a guide, enabling the algorithm to precisely identify patterns or relationships within the data. For instance, a system can learn to differentiate unwanted emails by studying thousands of messages already classified as “spam” or “not spam.” This type of learning is widely used for tasks such as facial recognition, sales prediction, or text classification. (A short code sketch of this approach follows the list.)
- Unsupervised Learning: Here, the data comes without labels. The algorithm explores the data to detect underlying patterns or group similar information together. For example, it might identify customer segments based on their purchasing behaviors without any predefined categories being provided. This type of learning is often used for “clustering,” which involves grouping objects with common characteristics, or for reducing data dimensionality, as in principal component analysis (PCA). This method is particularly suited to problems where data is abundant and complex.
- Reinforcement Learning: Much like learning a sport or a video game, this approach relies on trial and error. An artificial intelligence interacts with its environment and learns to optimize its actions based on the rewards or penalties it receives. For example, a robot can learn to move efficiently while avoiding obstacles after multiple attempts, adjusting its movements with each unsuccessful try. This technique is often used in games (like chess or Go), robotics, or even the management of complex systems like smart power grids. The process highlights AI’s ability to adapt and continuously improve in real time, a major asset in dynamic and unpredictable environments.
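As a concrete illustration of the first paradigm, here is a minimal supervised-learning sketch, assuming scikit-learn is installed. The handful of labeled messages is invented; a real spam filter would train on thousands of examples and far richer features.

```python
# Toy supervised learning: a spam classifier trained on labeled messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now",       # spam
    "Meeting moved to 3 pm",      # not spam
    "Claim your free reward",     # spam
    "Can you review my draft?",   # not spam
]
labels = ["spam", "not spam", "spam", "not spam"]

# The pipeline turns each message into word counts, then fits a Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["free prize inside"]))        # likely 'spam'
print(model.predict(["see you at the meeting"]))   # likely 'not spam'
```

The same library exposes comparable building blocks for unsupervised learning (for example, KMeans for clustering), while reinforcement learning is usually handled by dedicated simulation toolkits.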

From Text to Numbers
Cerise furrowed her brow. “But Ada, how do you understand my words? After all, you’re a machine that only manipulates numbers, aren’t you?”
A slight smile seemed to resonate in Ada’s voice. “That’s an excellent observation. Indeed, I must translate each word, each sentence into numbers to process them. It’s a bit like creating a map where each word occupies a unique position in a mathematical space.”
It All Begins with the Letter
Long before the advent of artificial intelligence, computer science faced a major obstacle: how to enable computers, which naturally process numbers, to understand and manipulate text? To overcome this challenge, several coding systems emerged. One of the most emblematic was ASCII (American Standard Code for Information Interchange), developed in the 1960s.
Imagine a table where each letter, number, and symbol is assigned a specific number. For instance, “A” is associated with the number 65. This approach allowed computers to begin processing text, but it was limited by the small number of supported characters. With the rise of the Internet, the need for a more universal system became apparent, giving birth to UTF-8, capable of representing more than a million characters in almost every language.
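You can check this mapping directly in standard Python, which exposes both the code points and the UTF-8 bytes; the sample sentence below is arbitrary.

```python
# Character encodings in practice (standard library only).
print(ord("A"))   # 65, the code point mentioned above
print(chr(65))    # 'A', the reverse mapping

# UTF-8 goes far beyond ASCII's 128 slots, covering accented and non-Latin characters.
text = "Cerise aime le thé"
encoded = text.encode("utf-8")    # the bytes a computer actually stores or transmits
print(encoded)
print(encoded.decode("utf-8"))    # back to readable text
```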
From Letters to Words… and from Words to Sentences
This first step laid the foundation but wasn’t sufficient for processing entire words or sentences. Initially, words were transformed into unique numbers using correspondence tables. However, this method created gigantic tables and failed to capture word meanings.
The Power of Word Embeddings
To overcome these limitations, techniques called word embeddings emerged. Rather than assigning a single number to each word, they attribute a list of numbers (a vector) to each word, describing its relationships with other words. For example, the words “cat” and “mouse,” linked by a predator-prey relationship, will be close in this vector space, while “tree” and “car” will be distant.
These vectors enable fascinating manipulations: if you subtract “man” from “king” and add “woman,” you land very close to the vector for “queen.” This demonstrates how machines can capture semantic and analogical relationships between words.
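The sketch below reproduces that arithmetic with tiny, hand-made three-dimensional vectors. Real embeddings are learned from large text corpora and have hundreds of dimensions, so these numbers are purely illustrative, and the analogy only holds approximately in practice.

```python
import numpy as np

# Hypothetical 3-dimensional "embeddings", invented for illustration only.
vectors = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "queen": np.array([0.8, 0.1, 0.9]),
    "man":   np.array([0.2, 0.9, 0.1]),
    "woman": np.array([0.2, 0.1, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means the two vectors point the same way.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# king - man + woman should land near queen.
target = vectors["king"] - vectors["man"] + vectors["woman"]
closest = max(vectors, key=lambda word: cosine(vectors[word], target))
print(closest)  # 'queen' with these toy vectors
```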
The Invention of Morphological Embeddings to Go Even Further
But what about unknown or rare words? Morphological embeddings break words into fragments, or tokens, allowing their meaning to be grasped by analyzing their components. This approach reduces complexity and improves understanding of new words, making linguistic analysis even more robust and flexible.
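One simple flavor of this idea, popularized by the fastText library, represents a word through its character n-grams, so that even an unseen word shares fragments with words the model already knows. The sketch below shows only the decomposition step; the boundary markers and the choice of n = 3 follow common practice, and the example word is arbitrary.

```python
# Subword decomposition in the spirit of fastText: a word becomes
# a list of overlapping character n-grams, with boundary markers.
def character_ngrams(word: str, n: int = 3):
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

print(character_ngrams("unhappiness"))
# ['<un', 'unh', 'nha', 'hap', 'app', 'ppi', 'pin', 'ine', 'nes', 'ess', 'ss>']
```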
Thus, thanks to these advances, machines have learned not only to translate words into numbers but also to interpret their meaning, paving the way for applications like automatic translation and text generation.

Concrete Applications of Machine Learning
Cerise’s tea had grown cold on her desk. Lost in her reflections about machine learning mechanisms, she was startled by a notification on her phone. “Ada, how did you know I would need to order my medications today?”
“That’s one of the many ways machine learning transforms our daily lives,” Ada responded. “By analyzing your ordering patterns, prescription renewal dates, and even weather conditions that might affect your health, I can anticipate your needs. But this is just one example among many of how this technology is revolutionizing different domains.”
Cerise sat up, suddenly intrigued. “What other domains? I imagine healthcare must be a particularly important sector…”
“Indeed,” Ada confirmed, displaying a series of infographics on the screen. “Healthcare is just the tip of the iceberg. Let me show you how machine learning is transforming our world, one domain at a time…”
- Healthcare: By analyzing medical images such as X-rays, algorithms detect diseases more quickly and often more precisely than traditional methods. They also help diagnose rare conditions by identifying patterns in data that humans might miss. Furthermore, these algorithms enable treatment personalization by studying patients’ health data, such as their medical history or genetic information. For example, predictive models can identify disease risks even before symptoms appear, offering possibilities for prevention and early intervention that save lives.
- Finance: By analyzing thousands of transactions, systems identify suspicious activities and prevent fraud in real time, reducing financial losses for businesses and customers. For instance, an unusual purchase on a credit card can be quickly detected and flagged, allowing for immediate intervention. Machine learning algorithms are also used to predict financial market fluctuations, helping investors better manage their portfolios. Additionally, these systems support financial institutions in risk management by simulating various complex economic scenarios. (A small code sketch of fraud detection as anomaly spotting follows this list.)
- Language: Chatbots, programs capable of holding conversations with users, use machine learning to respond naturally and fluently, often in multiple languages. They enhance customer experience across many sectors, from online sales to technical support, by providing instant responses tailored to users’ specific needs. Beyond customer assistance, these systems also power learning and collaboration tools, such as educational platforms or online translators, facilitating communication and access to information worldwide.
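As promised above, here is a deliberately tiny sketch of fraud detection framed as anomaly detection, assuming scikit-learn is installed. The transaction amounts are invented, and a real system would combine many more signals than the amount alone.

```python
# Flagging unusual transactions with an anomaly detector.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical card transactions in euros: mostly routine amounts, one outlier.
amounts = np.array([[12.5], [8.0], [23.9], [15.2], [9.9], [18.4], [2500.0]])

detector = IsolationForest(contamination=0.15, random_state=0)
flags = detector.fit_predict(amounts)   # -1 marks an anomaly, 1 a normal point

for amount, flag in zip(amounts.ravel(), flags):
    print(f"{amount:8.2f} -> {'suspicious' if flag == -1 else 'ok'}")
```

With these toy numbers, the 2,500-euro purchase is the one the detector is expected to single out.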

Challenges and Limitations of Machine Learning
As their conversation flowed, Cerise’s gaze drifted to the starlit sky visible through the window. “Ada, sometimes your answers seem… disconnected, as if you were following an invisible score rather than truly understanding.”
“You touch upon a crucial point, Cerise,” Ada replied with what seemed to be a hint of melancholy. “We, AIs, can indeed ‘hallucinate’ responses, create connections that don’t exist. It’s one of our most significant limitations.”
Technical Problems
Model training can lead to issues such as overfitting, where a model becomes too specific and loses its ability to generalize. This phenomenon is similar to a student who memorizes exact answers for an exam without understanding the underlying concepts, making their knowledge inapplicable in slightly different situations. To mitigate this, techniques such as cross-validation and regularization are often used to improve the model’s ability to adapt to new contexts. Conversely, underfitting can limit model performance, leaving it unable to capture essential relationships in the data. This imbalance underscores the importance of carefully configuring training parameters, much like preparing for a marathon, where a balance between practice, recovery, and strategy is essential for success.
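The sketch below, assuming scikit-learn is available, makes this tangible on synthetic data: a very flexible model nearly memorizes its training points, while cross-validation, which scores the model on data it has not seen, typically tells a much less flattering story; adding regularization (here, Ridge) usually closes part of the gap. Exact numbers will vary with the random noise.

```python
# Overfitting vs. regularization on a small synthetic dataset.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import KFold, cross_val_score

# 30 noisy observations of a smooth curve.
rng = np.random.default_rng(0)
X = np.linspace(0, 1, 30).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(scale=0.2, size=30)

# Two models of identical flexibility; only the second penalizes extreme coefficients.
flexible = make_pipeline(PolynomialFeatures(degree=15), StandardScaler(), LinearRegression())
regularized = make_pipeline(PolynomialFeatures(degree=15), StandardScaler(), Ridge(alpha=1.0))

cv = KFold(n_splits=5, shuffle=True, random_state=0)
print("R^2 on the training data itself  :", flexible.fit(X, y).score(X, y))
print("cross-validated R^2, unregularized:", cross_val_score(flexible, X, y, cv=cv).mean())
print("cross-validated R^2, with Ridge   :", cross_val_score(regularized, X, y, cv=cv).mean())
```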
Biases in Data
The biases present in training data find a striking analogy with human cognitive biases. Just as our judgments can be influenced by unconscious prejudices, AI systems reproduce the biases inherent in the data they’re trained on. For example, a human recruiter might unknowingly favor certain profiles based on stereotypes. Similarly, a recruitment algorithm trained on biased historical data could discriminate against certain groups, reinforcing inequalities.
One of the most famous anecdotes concerns a recruitment algorithm that systematically favored men for technical positions, simply because it had been trained on data where men were the majority. This parallel between human and machine biases underscores the importance of constant vigilance.
To avoid these pitfalls, ensuring diversity and quality in the data used is crucial, while integrating methods for detecting and correcting biases. This includes analyzing sensitive variables and regularly auditing algorithms to ensure their fairness and transparency. Moreover, involving multidisciplinary experts, mixing data specialists, ethicists, and civil society representatives, can enrich reflection on ways to limit these biases and ensure fairer AI.
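Auditing can start with something as simple as comparing outcomes across groups. The sketch below uses pandas (assumed installed) on a hypothetical set of screening decisions; real audits involve far more careful statistics, but the basic question is the same.

```python
# A minimal bias check: compare selection rates between two groups.
import pandas as pd

# Hypothetical screening decisions: 1 = selected, 0 = rejected.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [ 1,   1,   1,   0,   1,   0,   0,   0 ],
})

rates = decisions.groupby("group")["selected"].mean()
print(rates)                              # selection rate per group
print("gap:", rates.max() - rates.min())  # a large gap is a warning sign worth investigating
```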
AI Hallucinations
Sometimes, AIs produce responses disconnected from reality, called “hallucinations.” This occurs when the algorithm extrapolates information based on weak correlations or missing data. For example, a chatbot might invent inaccurate facts when it cannot find sufficient relevant context in its training data. These hallucinations are not only sources of errors but can also mislead users who trust these systems. To limit this risk, an approach combining human verification and integrated control systems is essential. Techniques such as training on carefully selected data and integrating reliable knowledge bases can also improve the accuracy of AI-generated responses. These precautions are particularly important in critical domains, such as medicine or finance, where such errors could have serious consequences.
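To give a feel for the “reliable knowledge base” idea, here is a deliberately oversimplified sketch in which the assistant answers only when it finds supporting material and otherwise declines. The knowledge base, the matching rule, and the drug names are hypothetical placeholders, not a real safety mechanism.

```python
# A naive "grounding" check: answer only from a trusted knowledge base,
# and decline rather than improvise when nothing relevant is found.
KNOWLEDGE_BASE = {
    "aspirin": "Aspirin is commonly used to relieve pain and reduce fever.",
    "ibuprofen": "Ibuprofen is a nonsteroidal anti-inflammatory drug.",
}

def grounded_answer(question: str) -> str:
    for topic, fact in KNOWLEDGE_BASE.items():
        if topic in question.lower():
            return fact
    return "I don't have a reliable source for that, so I'd rather not guess."

print(grounded_answer("What is aspirin used for?"))
print(grounded_answer("What is the dosage of zyphorol?"))  # made-up name -> the system declines
```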

A Revolution Between Unlimited Potential and Ethical Responsibility
Night had wrapped Paris in its inky cloak, and the city lights painted an artificial constellation that rivaled the stars. Cerise observed this luminous ballet, lost in thought. “Ada, sometimes I wonder… Are we creating smarter tools, or discovering ourselves through technology’s mirror?”
Ada’s glow pulsed gently, as if weighing each word of her response. “You know, Cerise, every advance in machine learning is like a new facet of that mirror you speak of. When I translate words into vectors, when I analyze patterns in data, I’m not just learning – I’m reflecting back an image of your own thought mechanisms, your biases, your hopes.”
Cerise let these words resonate for a moment before continuing: “It’s fascinating to see how a technology born from simple statistical calculations ends up raising such deeply human questions…”
“Isn’t that the true potential of machine learning?” Ada replied. “Not to replace us, but to reveal ourselves to ourselves. Each algorithm you create is like a new chapter in the story of your own understanding of intelligence, consciousness, of what it truly means to learn and understand.”
The silence that followed was heavy with reflection. In the dimness of her office, Cerise watched the screen where visualizations of their previous conversations still danced – constellations of data transformed into meaning, matrices of numbers become carriers of significance. Each luminous point now seemed to tell a story larger than mere technological progression.
“You know what fascinates me most, Ada? It’s that the more we perfect these learning systems, the more we become aware of what cannot be reduced to algorithms – intuition, empathy, consciousness…”
“As Hubert Dreyfus so aptly put it,” whispered Ada, “it’s sometimes in our attempts to mechanize intelligence that we discover what is irreducibly human.”
Cerise gently turned off her computer, but the questions raised by their conversation continued to resonate in her mind like an unfinished melody. Machine learning was no longer just a technological revolution in her eyes, but an invitation to rethink our humanity. In this subtle dance between algorithms and consciousness, between data and intuition, perhaps a new form of wisdom was emerging – one that would neither deny the power of computation nor the depth of human experience, but would seek to create a dialogue between them in fertile harmony.
As the last screen lights faded, one question persisted, like a sustained note in the silence: how do we guide this technological revolution so that it nourishes our humanity rather than diminishing it? The answer, their conversation seemed to suggest, lay neither in algorithms alone nor in a rejection of technology, but in our ability to maintain a conscious dialogue between these two worlds, at once so different and so intimately linked.
And you, in this perpetual quest between human and machine, what shift in perspective would you make to ensure artificial intelligence becomes not a mirror of our limitations, but a catalyst for our noblest aspirations?