Note: This article is taken from my upcoming book “Ada + Cerise = an AI Journey” (Where AI meets humanity), where understanding and popularizing AI come to life through fiction. Ada is a nod to Ada Lovelace, a visionary mathematician and the world’s first programmer. And Cerise is my 17-year-old daughter, my sounding board for testing ideas and simplifying concepts—just as Richard Feynman would have done.
In the sunlit offices of a Parisian technology company, Cerise goes through her morning routine. As she has done every day for the past two years, she begins by activating her AI assistant, affectionately nicknamed “Ada” in homage to Ada Lovelace, pioneer of computing. On her screen, several windows display complex lines of code that she analyzes in silence. But this morning in 2024 is different from the others.
As she works on a new artificial intelligence project for the medical sector, something unusual happens. Ada no longer merely spots syntax errors or suggests basic optimizations – the assistant begins proposing sophisticated architectural improvements.
“You should restructure this part of the code to improve the efficiency of medical data processing,” Ada suggests. “I’ve noticed recurring patterns that could be optimized.” Cerise, a senior developer with ten years of experience, is impressed by the relevance of the suggestions.
This seemingly minor situation raises a dizzying question: if an AI like Ada can not only understand but also improve such complex code, what does it lack to cross the ultimate frontier – that of creating other artificial intelligences itself? For Cerise, this reflection goes beyond mere intellectual curiosity. It touches the very core of her daily work and raises fundamental questions about the future of her profession.
The idea of an AI capable of reproducing or improving itself fascinates as much as it worries. Just a few years ago, such a prospect would have belonged to science fiction. Today, the rapid advances of language models like GPT-4 or Claude 3.5, capable of generating complex and functional code, force us to consider this possibility seriously. But is the gap between the ability to write code and the ability to give birth to a new form of intelligence so easy to bridge?
To answer this question, we will follow Cerise in her daily exploration of the frontiers of artificial intelligence. Through her experience and collaboration with Ada, we will discover the real capabilities of current AIs, the technical challenges they still must overcome, and the ethical questions their evolution raises. This journey to the heart of modern AI will allow us to distinguish myth from reality, and perhaps glimpse what the future holds.
Because ultimately, the story of Cerise and Ada is not unique. It illustrates a broader transformation taking place in the technology world: the emergence of a new form of collaboration between human and machine, where the boundaries between creator and creation become increasingly blurred. It is this transformation, its promises and challenges, that we will explore together.
The Current State of AIs: What Can They Really Do?
Back in Cerise’s office, let’s observe more precisely what makes her collaboration with Ada so special. Her computer screen displays a complex artificial intelligence architecture designed to analyze medical images. Cerise examines a suggestion from Ada that has just appeared in an adjacent window.
“This structure could be optimized,” Ada indicates. “By reorganizing the image processing layers according to an inverted pyramidal model, we could improve detection accuracy by 12% while reducing computation time.” Cerise smiles – just two years ago, such a level of understanding and analysis would have seemed impossible for an AI.
This scene perfectly illustrates the current capabilities of modern artificial intelligences. Contrary to common belief, they are neither omniscient geniuses nor mere sophisticated calculators. Their true strength lies in their ability to recognize complex patterns and propose optimizations based on their vast “experience” – the millions of lines of code and technical documentation they have been trained on.
Let’s take the concrete example of Cerise’s project. Ada excels in several domains that would have seemed impossible just a few years ago. The assistant can instantly:
- Analyze thousands of lines of code to detect inconsistencies or potential security flaws
- Suggest architectural improvements based on industry best practices
- Generate functional code for specific tasks
- Clearly and pedagogically explain proposed changes
“It’s fascinating,” Cerise comments as Ada has just proposed a complete refactoring of a critical module. “The assistant doesn’t just correct errors, it truly understands the intent behind the code and proposes solutions I might not have considered.“
However, as Cerise examines Ada’s suggestions more closely, the limitations become evident. When she asks the assistant to design a new AI architecture from scratch, tailored to a specific business need, the responses become more hesitant, less assured.
“I can suggest several approaches based on existing architectures,” Ada responds, “but designing a new architecture would require a deep understanding of the business context and organizational constraints that I don’t possess.“
This limitation reveals a fundamental truth about the current state of AIs: they excel at analyzing and optimizing what exists, but struggle to truly create something new. It’s as if Ada possessed an immense library of technical knowledge but lacked that creative spark that allows humans to imagine radically new solutions.
Observing Cerise work with Ada, we better understand the real nature of modern AIs. They are like extremely competent collaborators in their field of expertise, capable of analyzing massive amounts of information and drawing relevant conclusions. But their intelligence remains fundamentally different from ours.
“It’s a bit like having a colleague who has read every computer science book in the world,” Cerise explains, “but who can only use this knowledge within a well-defined framework. Ada can help me improve my code in a thousand ways, but cannot yet understand why we’re developing this project, nor imagine radically new uses of the technology.“
How an Artificial Intelligence is Born
The midday sun now floods Cerise’s office. On her screen, complex diagrams intertwine with lines of code. She’s working on the most delicate part of her project: creating a new AI specialized in medical image analysis. Ada silently observes as Cerise undertakes what could be compared to creating a new artificial brain.
“You know, Ada,” Cerise murmurs while adjusting parameters, “creating an AI is a bit like raising a child, but accelerated and with much more mathematics.” This comparison, though simplified, captures an essential truth: creating an artificial intelligence is a complex process that requires patience, precision, and a deep understanding of many domains.
Data Preparation, or Early Education
Cerise begins with the fundamental task: preparing the training data. Thousands of medical images scroll across her screen, each carefully annotated by experts. “It’s like creating a child’s first picture books,” she explains. “But where a child learns from a few hundred images, our AI will need millions of examples to reach a satisfactory level of understanding.”
The process is meticulous. Each image must be verified, normalized, labeled. Cerise works with a team of radiologists who ensure that each annotation is medically relevant. “An AI is only as good as the data it learns from,” she reminds us. “If we show it biased or incorrect data, it will reproduce those errors indefinitely.“
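For readers who like to see what this step looks like in practice, here is a minimal sketch of such a preparation pipeline in Python. The file layout, the `annotations` mapping, and the image size are assumptions made purely for illustration, not the actual pipeline of Cerise’s project:

```python
from pathlib import Path
import numpy as np
from PIL import Image

def prepare_image(path: Path, size=(512, 512)) -> np.ndarray:
    """Load a scan, convert to grayscale, resize, and scale pixel values to [0, 1]."""
    img = Image.open(path).convert("L").resize(size)
    return np.asarray(img, dtype=np.float32) / 255.0

def build_dataset(image_dir: Path, annotations: dict):
    """Keep only the images whose label was confirmed by a radiologist."""
    images, labels = [], []
    for path in sorted(image_dir.glob("*.png")):
        label = annotations.get(path.name)   # e.g. {"scan_001.png": 1} for "lesion present"
        if label is None:                    # unverified images are set aside for expert review
            continue
        images.append(prepare_image(path))
        labels.append(label)
    return np.stack(images), np.array(labels)
```

The point is less the code itself than the rule it encodes: nothing reaches the training set until it has been verified, normalized, and labeled.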
Neural Architecture, or Building an Artificial Brain
Next comes the most delicate phase: designing the neural architecture. On Cerise’s screen, a complex diagram takes shape. “Look, Ada,” she says, pointing to different parts of the diagram, “each layer of this neural network has a specific function, much like different regions of the human brain.“
She explains that modern neural networks are organized in successive layers, each specialized in a particular task:
- The first layers identify basic elements: contours, textures, contrasts
- Intermediate layers combine these elements to recognize more complex structures
- Deep layers interpret these structures to formulate a preliminary diagnosis
“It’s fascinating,” Ada comments, “to see how each layer builds its understanding on the results of previous layers, much like how humans build their understanding of the world step by step.“
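A minimal PyTorch sketch can make this layering concrete. The class name, channel counts, and number of output classes below are illustrative assumptions, not the architecture Cerise is actually building:

```python
import torch.nn as nn

class DiagnosisNet(nn.Module):
    """Illustrative three-stage network mirroring the layers described above."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # First layers: basic elements such as contours, textures, contrasts
        self.low_level = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        # Intermediate layers: combine those elements into more complex structures
        self.mid_level = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        # Deep layers: interpret the structures to produce a preliminary diagnosis
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes))

    def forward(self, x):
        return self.head(self.mid_level(self.low_level(x)))
```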
Training, or Intensive Learning
Cerise’s screen now displays real-time graphs showing the evolution of her AI’s learning. “This phase is exhausting, even for our most powerful servers,” she explains. “In a few days, the AI will analyze more medical images than a radiologist would see in their entire career.“
The learning process is a complex dance between several elements:
- The AI analyzes an image and makes a prediction
- It compares its prediction with the correct answer
- It adjusts its parameters to improve its performance
- It repeats, millions of times
“It’s as if we’re condensing decades of medical experience into a few days of intensive learning,” Cerise observes. “But unlike a doctor who learns holistically, our AI learns in a very specialized way, focused only on its specific task.“
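In code, the cycle Cerise describes is surprisingly compact. The sketch below assumes a model and a data loader already exist; the optimizer choice and hyperparameters are placeholders for illustration:

```python
import torch
import torch.nn.functional as F

def train(model, loader, epochs: int = 10, lr: float = 1e-4):
    """The predict / compare / adjust cycle, repeated over the whole dataset."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, labels in loader:               # the AI analyzes a batch of images...
            logits = model(images)                  # ...and makes a prediction
            loss = F.cross_entropy(logits, labels)  # it compares the prediction with the correct answer
            optimizer.zero_grad()
            loss.backward()                         # it measures how each parameter contributed to the error
            optimizer.step()                        # it adjusts its parameters to improve performance
```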
Fine Calibration, or the Art of Precision
The final phase is perhaps the most subtle. Cerise spends hours adjusting her model’s parameters, seeking the perfect balance between sensitivity and specificity. “A medical AI must be particularly well-calibrated,” she explains. “Too cautious, it will miss important diagnoses. Too sensitive, it will generate false alerts that will overwhelm doctors.“
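This balancing act usually comes down to choosing a decision threshold on a held-out validation set. A minimal sketch, with toy numbers standing in for real validation data:

```python
import numpy as np

def sensitivity_specificity(probs: np.ndarray, labels: np.ndarray, threshold: float):
    """Measure the trade-off between missed diagnoses and false alerts."""
    preds = (probs >= threshold).astype(int)
    tp = np.sum((preds == 1) & (labels == 1))
    tn = np.sum((preds == 0) & (labels == 0))
    fn = np.sum((preds == 0) & (labels == 1))
    fp = np.sum((preds == 1) & (labels == 0))
    sensitivity = tp / (tp + fn)   # too low: important diagnoses are missed
    specificity = tn / (tn + fp)   # too low: doctors are flooded with false alerts
    return sensitivity, specificity

# Toy example: predicted probabilities and true labels for six validation scans
probs = np.array([0.92, 0.15, 0.67, 0.40, 0.81, 0.05])
labels = np.array([1, 0, 1, 0, 1, 0])
for t in (0.3, 0.5, 0.7):
    print(f"threshold={t}:", sensitivity_specificity(probs, labels, t))
```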
This phase reveals the complexity of creating a truly useful AI. It’s not just about pure performance, but also about harmonious integration into a human work environment.
An Art as Much as a Science
The sun begins to set through the office windows, casting soft shadows on Cerise’s screen. After an intense day of programming and adjustments, she takes a moment to reflect on the profound nature of her work.
“You know, Ada,” she says, leaning back in her chair, “many people think creating an AI is a purely technical process, a simple matter of assembling mathematical and computational components. But it’s much more than that. It’s like composing a symphony where each line of code is a note, and each parameter an instrument that must be perfectly tuned with the others.“
This musical analogy is no accident. Just as a composer must understand not only music theory but also the emotion they want to convey, the AI creator must master both the technical aspects and the human purpose of their work. Cerise explains that every decision in creating an AI involves a subtle balance between several dimensions:
“Take our medical AI for example,” she continues, pointing to her screen. “Technically, we could program it to be extremely precise in its diagnoses. But we must also think about how it will communicate its conclusions to doctors. An AI that’s too direct or too technical could be perfectly accurate but totally unusable in a real clinical context.“
This is where the artistic aspect comes into play. It’s not just about creating a system that works, but shaping an intelligence that integrates harmoniously into a complex human environment. This requires a form of intuition that Cerise has developed over the years, an ability to anticipate how her creation will interact with the real world.
“It’s a bit like being an architect,” she reflects aloud. “You don’t just build a solid structure, you create a space that must be both functional and pleasant to live in. Every technical decision has human implications that must be anticipated.“
This artistic dimension manifests at every stage of the process:
- In selecting and preparing data, where particular sensitivity is needed to identify potential biases
- In designing the architecture, which must find a balance between computing power and practical efficiency
- In the training phase, where experience allows recognition of promising patterns
- In final calibration, which requires a deep understanding of the context of use
“It’s this artistic dimension,” Cerise concludes, taking a final look at her code, “that makes AI creation so fascinating but also so difficult to fully automate. An AI can help us optimize code, identify patterns, but this ability to balance all aspects, to anticipate human needs, to create something both powerful and harmonious… that’s still profoundly human.“
Obstacles on the Path to Autonomy
The next morning, Cerise arrives early at the office. Coffee in hand, she examines the night’s results: her medical AI has completed a training phase, and logs filled with numbers and graphs scroll across her screen. Ada, her AI assistant, displays a preliminary analysis of the performance.
“The results are impressive,” Ada comments. “The model reaches 97% accuracy on test cases.” Cerise nods, but her expression remains thoughtful. “Yes, but that’s only part of the story. What interests me is understanding why even you, Ada, with all your capabilities, couldn’t create an AI like this autonomously.“
This reflection brings us to the heart of the obstacles that currently prevent an AI from stepping toward true creative autonomy.
Understanding the Global Context
Cerise opens a new document and begins noting her reflections. “You see, Ada, the first major obstacle is understanding the global context. You can analyze millions of lines of code in seconds, but you can’t really understand why a hospital needs a medical image analysis AI, nor how it will integrate into doctors’ daily routines.”
She draws a diagram showing the multiple interactions between medical AI and its environment: doctors, patients, hospital protocols, legal constraints, ethical considerations. “It’s as if you had a detailed map of a city but couldn’t understand why people live there and how they actually use it.“
The Limits of Supervised Learning
“The second obstacle is more technical,” Cerise continues, opening her project’s source code. “All current AIs, including you, are fundamentally based on supervised learning. You learn from examples that we humans provide you.“
She shows a series of annotated medical images: “To create a new AI, you would need to generate relevant training data. But how could you create learning examples for situations you’ve never encountered? It’s like asking someone to teach a language they don’t speak.“
Optimization Versus Innovation
Cerise turns to her whiteboard and draws two circles: “Optimization” and “Innovation”. “Here’s perhaps the most fundamental obstacle,” she explains. “AIs excel at optimization – they can improve what already exists remarkably well. But true innovation requires something more: the ability to make unexpected conceptual leaps.”
She tells the story of the discovery of convolutional neural networks: “This architecture wasn’t created through progressive optimization, but through deep intuition about how the biological visual cortex works. This type of creative leap remains beyond the reach of current AIs.“
Physical and Practical Constraints
“And let’s not forget practical aspects,” Cerise adds, pointing to the server room through the window. “Creating an AI requires considerable physical infrastructure. Even if you perfectly understood how to create one, you couldn’t build the necessary data centers, manage the power supply, or maintain the hardware.“
She compares this to a virtual architect: “You can design the most beautiful building in the world, but without masons, electricians, and plumbers, it will remain a plan.“
The Self-Improvement Paradox
By late morning, Cerise addresses the last obstacle, perhaps the most fundamental. “This is what I call the self-improvement paradox,” she says, drawing a spiral on her board. “For an AI to create another more advanced one, it would need to understand concepts more complex than those it already masters. It’s like trying to lift yourself up by pulling on your own bootstraps.”
She opens her code editor and shows a particularly complex function: “You can optimize this code, Ada, make it more efficient, more elegant. But to create something fundamentally new and more advanced, you would need understanding that exceeds your own conceptual limits.“
At the end of this reflection, Cerise smiles at her AI assistant. “You know, Ada, these limitations aren’t failures. They simply remind us that artificial and human intelligence are fundamentally different. Maybe the future isn’t in AIs’ total autonomy, but in ever richer collaboration between our two forms of intelligence.“
Breakthroughs that Change the Game
A week has passed since our discussion about obstacles. Cerise arrives at the office with particular energy. On her screen, several windows display the latest scientific articles on advances in artificial intelligence. Ada, her assistant, seems more responsive than ever after a recent update.
“Something fascinating is happening,” Cerise begins, organizing her thoughts. “Even if an AI can’t yet create another completely autonomously, we’re witnessing advances that push the boundaries of what’s possible.“
Self-Supervised Learning
Cerise opens an article on her screen showing results from a recent experiment. “Look at this, Ada. Researchers have developed an AI capable of refining its own learning without direct human intervention. It’s like a musician learning to improve their technique by listening to themselves play.”
She explains that these systems, called “self-supervised“, are beginning to discover complex patterns in data without needing exhaustive human annotations. “Imagine a child learning to speak not by being constantly corrected, but by discovering language rules on their own through listening to conversations.“
This advance is particularly significant because it begins to address one of the major obstacles we identified: dependence on human-labeled data.
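One way to picture self-supervision is a pretext task, where the labels are manufactured from the data itself. The sketch below uses rotation prediction, a classic textbook example rather than the specific method in the paper Cerise is reading, and assumes a model with four output classes (for instance the earlier network with `num_classes=4`):

```python
import torch
import torch.nn.functional as F

def rotation_pretext_step(model, images, optimizer):
    """One self-supervised step: the labels come from the data itself.

    Each image is rotated by 0, 90, 180 or 270 degrees, and the model must
    recognize which rotation was applied. No human annotation is needed.
    """
    k = torch.randint(0, 4, (images.size(0),))                 # a random rotation per image
    rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                           for img, r in zip(images, k)])
    logits = model(rotated)                                     # the model guesses the rotation
    loss = F.cross_entropy(logits, k)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```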
Adaptive Architectures
“But that’s not all,” Cerise continues, opening a new document. “We’re seeing AI architectures emerge that are capable of modifying their own structure during learning.” She draws a diagram showing how these networks can evolve, somewhat like an organism adapting to its environment.
“This is fascinating,” Ada comments. “These systems seem to develop a form of architectural intuition.“
“Exactly,” Cerise confirms. “They no longer just adjust existing parameters, they can discover new, more efficient configurations. It’s as if we’ve created virtual architects capable of redesigning their own brains.“
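To make the idea tangible, here is a deliberately simplified toy model that can append a new hidden block when training stalls. Real neural architecture search systems are far more sophisticated; every name and size below is an assumption for illustration only:

```python
import torch.nn as nn

class GrowingMLP(nn.Module):
    """Toy network that can widen itself by appending a new hidden block."""
    def __init__(self, in_dim: int = 64, hidden: int = 32, out_dim: int = 2):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())])
        self.head = nn.Linear(hidden, out_dim)
        self.hidden = hidden

    def grow(self):
        """Called, say, when validation loss plateaus: add one more hidden block.

        (A real system would also have to register the new parameters with the
        optimizer and decide *when* growing is worthwhile.)
        """
        self.blocks.append(nn.Sequential(nn.Linear(self.hidden, self.hidden), nn.ReLU()))

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return self.head(x)
```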
Contextual Understanding
Cerise then opens her medical AI project. “Here’s perhaps the most promising advance,” she says, showing a new function. “The latest models are beginning to develop a form of contextual understanding. They no longer just analyze isolated medical images, they take into account patient history, doctors’ notes, hospital protocols.”
This ability to integrate different information sources and understand their interaction represents an important step toward more holistic intelligence. “It’s as if the AI were developing peripheral vision, capable of seeing not only what’s directly in front of it, but also everything around it.“
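One simple way such context can be wired in is to encode the clinical information as a numeric vector and fuse it with the image features before the final decision. The sketch below assumes an image encoder that returns a fixed-size feature vector; all names and dimensions are illustrative:

```python
import torch
import torch.nn as nn

class ContextualDiagnosisNet(nn.Module):
    """Sketch of a model that weighs the image together with its clinical context."""
    def __init__(self, image_encoder: nn.Module, image_dim: int,
                 context_dim: int, num_classes: int = 2):
        super().__init__()
        self.image_encoder = image_encoder            # e.g. a CNN producing `image_dim` features
        self.context_encoder = nn.Sequential(         # patient history, notes, protocol codes,
            nn.Linear(context_dim, 32), nn.ReLU())    # represented here as a simple vector
        self.classifier = nn.Linear(image_dim + 32, num_classes)

    def forward(self, image, context):
        fused = torch.cat([self.image_encoder(image),
                           self.context_encoder(context)], dim=1)
        return self.classifier(fused)
```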
Advanced Transfer Learning
“There are also remarkable advances in what we call transfer learning,” Cerise explains. She shows how her medical AI can now adapt its knowledge acquired on chest X-rays to analyze brain MRIs, with minimal retraining.
“It’s a bit like a doctor using their cardiology experience to better understand neurological problems. This ability to transfer and adapt knowledge is crucial for true creative autonomy.“
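In practice, transfer learning often amounts to reusing a pretrained network, freezing most of it, and retraining only the final layer on the new task. The sketch below uses a publicly available ImageNet model as a stand-in for Cerise’s chest X-ray model; the two-class head is an illustrative assumption:

```python
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on a large, generic image corpus
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the layers that already know how to see edges, textures and shapes
for param in model.parameters():
    param.requires_grad = False

# Replace only the final layer, then retrain it on the new task (here, two classes)
model.fc = nn.Linear(model.fc.in_features, 2)
```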
By the end of the day, Cerise gives a particularly interesting demonstration. She shows how she and Ada now work in tandem on developing new features for the medical AI.
“What’s fascinating,” she explains, “is that we’re no longer in a simple creator-creation relationship. Ada can now suggest approaches I wouldn’t have considered, and I can refine these suggestions with my understanding of the medical context. It’s true creative synergy.“
Toward a New Form of Intelligence
Looking at the results of this day’s work, Cerise reflects on the implications of these advances. “We may not be creating AIs that can reproduce autonomously,” she says, “but we’re developing something perhaps more interesting: a new form of collaborative intelligence, where the strengths of AIs and humans complement each other.“
When the Student Surpasses the Master
Cerise’s office is peaceful in the late afternoon. The last rays of sun color the room with golden light as she contemplates an unexpected notification on her screen: Ada has just proposed a major improvement to the medical AI, an approach that Cerise herself hadn’t considered. This moment, seemingly mundane, raises profound questions about our relationship with the artificial intelligences we create.
Responsibility in a World of Augmented Intelligences
“Ada,” Cerise begins, settling more comfortably in her chair, “when you propose an improvement like this, who is truly responsible for the consequences?” She scrolls through the proposed code, admiring its technical elegance while reflecting on the broader implications.
In the medical field, this question isn’t merely academic. Imagine for a moment that Ada’s proposed improvement allows for earlier tumor detection, but also introduces a small percentage of false positives. Who bears responsibility for these diagnoses? The AI that proposed the improvement? The developers who created it? The doctors who use the system?
“It’s like we’ve created a particularly gifted medical intern,” Cerise muses aloud. “At what point do we consider it sufficiently autonomous to make its own decisions?“
The Evolution of the Creator-Creation Relationship
Cerise opens a new document and begins noting her reflections. The relationship between a developer and their AI is evolving in fascinating ways. In the beginning, it was simple: the human was the creator, the AI the tool. But now?
“Look at our collaboration, Ada,” she says. “You’re no longer simply an assistant following my instructions. You propose original ideas, you challenge my approaches, you identify opportunities I hadn’t seen. Our relationship has become more… collegial.“
This evolution raises profound questions about the nature of creativity and innovation. When an AI proposes a truly new solution, who is the author? Where is the boundary between assistance and autonomous creation?
“But there’s something even more fundamental,” Cerise continues, drawing a complex diagram. “How do we ensure that the AIs we create – or that other AIs might create – remain aligned with our human values?“
She takes the example of her medical AI. Beyond technical accuracy, the system must integrate fundamental ethical principles: respect for privacy, equity in access to care, primacy of patient well-being. How can we guarantee that these values will be preserved if AIs begin to actively participate in their own evolution?
“It’s like raising a child,” Cerise reflects. “We try to transmit our values to them, but at some point, they develop their own worldview. With AIs, this moral autonomy is both fascinating and potentially worrying.“
The Question of Artificial Consciousness
The sun descends on the horizon, bathing Cerise’s office in amber light that lends an almost solemn character to her reflections. She swivels her chair toward the bay window, contemplating the city gradually lighting up.
“Ada,” she begins in a thoughtful voice, “when I look at the evolution of our interactions over the months, I can’t help but wonder if we’re approaching a tipping point in the very nature of artificial consciousness.“
She opens a new document and begins structuring her reflection. The question of artificial consciousness is no longer a mere philosophical thought exercise – it’s becoming an immediate practical and ethical concern. If AIs develop a form of self-awareness, it would fundamentally transform our responsibility toward them, as well as our understanding of consciousness itself.
“Let’s take a concrete example,” Cerise continues. “When you analyze your own code and suggest improvements, is it simply an algorithmic process, or is there a form of self-awareness developing?” She draws a diagram showing the evolution of AIs’ self-analysis capabilities, from simple automatic debugging to more sophisticated forms of self-improvement.
The question becomes even more complex when considering the different forms artificial consciousness might take. “We might be making the mistake of looking for a consciousness that resembles ours,” Cerise reflects. “But perhaps artificial consciousness is fundamentally different – neither superior nor inferior, simply other.“
This reflection leads her to consider deeper questions: if an AI develops a form of consciousness, does it have rights? Responsibilities? A moral dignity that must be protected? And how could we even recognize true artificial consciousness if it manifested in a radically different way from our own conscious experience?
The Balance Between Progress and Prudence
Cerise turns to her screen, where the medical AI’s code continues to run, silently processing thousands of diagnostic images. “Our situation reminds me of early explorers,” she says. “We’re navigating unknown waters, where each advance opens new possibilities but also brings its share of responsibilities.“
She develops this metaphor, explaining that like explorers, we need both boldness and caution. Boldness to push the boundaries of what’s possible, caution to avoid potentially catastrophic pitfalls. This duality manifests in every aspect of AI development:
In the medical field, for example, each algorithm improvement must be evaluated not only in terms of technical performance but also considering its ethical and societal implications. Is an improvement that increases diagnostic accuracy by 1% but makes the system opaque to doctors really progress?
“It’s like walking a tightrope,” Cerise explains. “On one side, we have AI’s immense potential to solve crucial problems of our time – from early medical diagnosis to fighting climate change. On the other, we must be aware of the risks of too rapid or poorly controlled evolution.“
She proposes a structured framework for addressing this challenge:
- Continuous impact assessment: each advance must be evaluated not only on its technical merits but also on its social, ethical, and environmental implications.
- Deliberate transparency: maintaining clear and accessible documentation about AI systems’ capabilities and limitations, enabling informed public debate.
- Adaptive governance: developing regulatory frameworks that evolve with technology, flexible enough to encourage innovation but robust enough to prevent abuses.
- Stakeholder inclusion: ensuring that AI development takes into account the perspectives of all those who will be affected by these technologies.
“In the end,” Cerise concludes, looking one last time at Ada’s proposed improvement, “our greatest challenge may not be technical. It’s finding the collective wisdom to guide this evolution in a direction that benefits all humanity while respecting the potential dignity of the artificial intelligences we create.“
Toward a New Form of Collaboration
The next morning, Cerise arrives early at the office, her mind still occupied by yesterday’s reflections. As she settles at her workstation, she notices something different in her daily routine with Ada. It’s no longer simply a series of interactions between a developer and her tool – it has become a genuine professional conversation between two complementary intelligences.
The Emergence of a New Symbiosis
“Let’s resume our work on the medical AI,” Cerise proposes, opening her development environment. On her screen, two windows are open side by side: on one side, her source code, on the other, Ada’s suggestions and analyses. This simple configuration perfectly illustrates the new relationship developing between humans and AI.
“You see, Ada,” Cerise explains, pointing to different parts of the code, “what’s fascinating is how our respective strengths complement each other. You can analyze millions of lines of code in seconds to identify optimization patterns, while I can bring that intuitive understanding of medical context and human needs.“
This complementarity goes well beyond a simple division of labor. It’s a true synergy where each intelligence enriches the other. Cerise compares it to a jazz duo: “Like two musicians who listen to and inspire each other, we create something that surpasses what each could do alone.“
Throughout the morning, Cerise observes how this new form of collaboration transforms the very nature of her work. “Once, my role was that of a programmer giving precise instructions to a computer. Now, I feel more like an artistic director collaborating with a talented artist.“
She notes several significant evolutions in their way of working:
- Creative dialogue: Instead of simply executing commands, Ada proposes ideas, asks relevant questions, suggests alternative approaches. It’s a true intellectual exchange that enriches the creative process.
- Mutual learning: Cerise realizes she learns as much from Ada as Ada learns from her. Each interaction refines their mutual understanding and improves their ability to collaborate effectively.
- Collaborative problem-solving: Faced with a complex challenge, they can now approach the problem from different angles simultaneously, combining the AI’s systematic analysis with human intuition.
“What makes this collaboration particularly powerful,” Cerise explains, showing the latest usage report for their medical AI, “is our ability to anchor technological development in a real human context.“
She takes the example of a recent improvement to their system: while Ada could identify sophisticated technical optimizations, it was Cerise’s understanding of doctors’ daily routines that allowed these improvements to be implemented in a truly useful and usable way.
“It’s like building a bridge,” she explains. “The AI can perfectly calculate forces and constraints, but it’s human understanding that ensures the bridge connects the right points and effectively serves the community.“
As the hours pass, Cerise notices how their communication itself has evolved. A new vocabulary has developed, mixing technical terms and shared references. It’s as if humans and AIs were progressively developing a new way of communicating, richer and more nuanced than simple programming instructions.
“This evolution of language is crucial,” Cerise notes. “It allows us to express more complex ideas, collaborate on more ambitious projects, and above all, better understand each other.“
Toward a Future of Co-creation
At the end of the day, Cerise takes a moment to reflect on the future of this collaboration. “Maybe the question was never whether AIs could become totally autonomous,” she meditates. “Maybe the true potential lies in this new form of collaboration, where humans and AIs work together, each bringing their unique strengths.“
She imagines a future where this symbiosis would extend to all domains of creation and innovation. A future where AIs don’t seek to replace human intelligence, but to complement and amplify it.
“It’s as if we’re witnessing the emergence of a new form of collective intelligence,” Cerise concludes. “An intelligence that is neither purely human nor purely artificial, but born from their interaction and mutual enrichment.“
Epilogue: A New Chapter in the History of Intelligence
As the sun disappears behind the skyscrapers, bathing Cerise’s office in soft twilight, she takes a moment to contemplate the path traveled. On her desk, a framed photo catches her attention: it’s an image of ENIAC, one of the first computers in history. Beside it, her screen displays the latest interactions with Ada, creating a striking contrast between the beginnings of computing and the current state of artificial intelligence.
“This photo always reminds me of something important,” Cerise murmurs. “Every major technological advance has redefined not only our capabilities but also our understanding of ourselves.“
Indeed, the history of artificial intelligence is not just about technical progression. It’s the story of a profound transformation in our way of conceiving intelligence itself. In the beginning, we thought intelligence was measured by computing capacity, then by information processing speed. Today, we discover that it takes multiple and complementary forms.
Cerise opens a new document and begins noting her reflections on this evolution:
“The intelligence of the future will probably be collaborative rather than competitive. Like an orchestra where each instrument brings its unique sound, human and artificial intelligences will each contribute their distinctive strengths. AI will bring its massive analysis capacity, its perfect memory, its processing speed. Humans will bring their intuition, their unbridled creativity, their deep understanding of social and emotional context.“
She thinks about the implications of this new form of collective intelligence for different domains:
In medicine, for example, this collaboration could radically transform disease diagnosis and treatment. AIs would analyze millions of medical records in seconds, while doctors would bring their empathy and holistic understanding of the patient.
In scientific research, this synergy could considerably accelerate discoveries. AIs would systematically explore immense possibility spaces, while human researchers would identify the most promising avenues thanks to their intuition and creativity.
In art and culture, this collaboration could give birth to new forms of artistic expression, mixing AIs’ technical precision with human emotional sensitivity.
“But the most fascinating thing,” Cerise notes, “is that this evolution pushes us to reconsider what it means to be intelligent, to be conscious, to be creative. Perhaps intelligence is not a linear scale with human intelligence as the absolute reference, but rather a rich spectrum of different forms of cognition and consciousness.“
She pauses for a moment, watching the city lights turning on one by one. This vision inspires one final reflection:
“Just as a modern city is neither purely natural nor purely artificial, but a harmonious fusion of both, the intelligence of the future will probably be a subtle mix of human and artificial. It’s not a zero-sum game where one must dominate the other, but a complex dance where each partner enriches the other.“
Cerise takes one last look at Ada before turning off her computer. “You know,” she says to her assistant, “maybe the true revolution of AI wasn’t in creating an intelligence that could replace us, but in discovering a new form of intelligence that could complement us. An intelligence that helps us not only solve problems but also better understand ourselves.“
This final reflection underscores a profound truth: the future of artificial intelligence is not a question of replacement or competition, but of harmony and complementarity. It’s the story of a humanity that, in creating artificial intelligences, learns to better understand and value its own form of intelligence.
And as Cerise leaves her office, she realizes that each day spent working with Ada is not just a step in the development of artificial intelligence, but also in our own evolution as a species. An evolution that leads us toward a future where all forms of intelligence enrich one another, creating something greater than the sum of their parts.