With AI, the end of doctors?

Note: This article is taken from my upcoming book “Ada + Cerise = an AI Journey” (Where AI meets humanity), where understanding and popularizing AI come to life through fiction. Ada is a nod to Ada Lovelace, a visionary mathematician and the world’s first programmer. And Cerise is my 17-year-old daughter, my sounding board for testing ideas and simplifying concepts—just as Richard Feynman would have done.

What if care didn’t disappear, but simply changed form?

Cerise sits down at her desk and opens a complex patient's file. For the past few months, she no longer works alone. By her side is Ada, an artificial intelligence trained on the most recent medical data. This morning, as every morning, she knows she no longer has the best memory in the room, nor the best probabilistic reasoning. But she retains something rarer: the power to connect, to sort, to decide. And above all, to heal.

Since generative artificial intelligences burst into medical practice, one question keeps returning: do we still need doctors? The answer depends less on the technology than on our vision of it. For while AI can analyze, synthesize, and propose, it doesn’t heal. Care cannot be reduced to treatment, any more than a good diagnosis alone can save a life.

What we're experiencing today isn't the end of medicine, but a profound shift in its bearings. Roles are evolving. Expertise is being distributed. Decisions are being shared. In this changing world, certain tools accompany this metamorphosis without forcing it. This is the wager of platforms like DOCTORIAA: not to think in place of doctors, but to think with them.

But to understand this transformation, we must first dive into the intimacy of this emerging collaboration. Observe how this unprecedented relationship between human intelligence and its artificial counterpart is woven, day after day…

The sorcerer’s apprentice and her learned creature

Cerise watches Ada, her AI assistant, analyze the records of her patient with metastatic cancer. In seconds, the AI synthesizes the history, the compatible clinical trials, and the drug interactions, and proposes a line of therapy. Cerise sighs, both admiring and unsettled. She wouldn't have done it as well, nor as quickly. But can she trust it completely?

Artificial intelligence no longer merely knocks at the door of medical offices—it has already entered. Discreetly but surely, in the form of decision-support systems, emergency triage platforms, or specialized conversational assistants. These tools, sometimes from the same lineage as ChatGPT or Gemini, can cross-reference symptoms, scientific databases, and medical histories at unparalleled speed and depth.

At first, Cerise approached Ada as she would an intern in training. She guided, corrected, supervised. The AI seemed docile, receptive to her instructions. This apparent complementarity reassured her: she remained in command; Ada was just a sophisticated tool in her expert hands.

Cerise opens her official medical software. The "approved" AIs appear on screen: systems three years old, validated according to protocols from another era. She sighs. On her personal phone, Ada represents everything medicine could be. Between the two screens, an entire world is at stake: innovation bridled by caution, efficiency sacrificed to legal security. In this out-of-phase temporality, she navigates like a smuggler of progress, unofficially using what institutions don't yet dare endorse.

Ada shows a series of alerts for a patient on polytherapy. Cerise checks, completes, adjusts. She feels useful, even indispensable. This collaboration reminds her of her first years of teaching: transmitting her knowledge to an untrained but promising intelligence.

For the first time in modern history, the doctor is no longer always the best informed in their own field. But this obvious truth takes time to sink in. Why? Because accepting it means shaking centuries of identity construction. Since Hippocrates, doctors have derived their legitimacy from superior knowledge, from their ability to see what others cannot. Recognizing that a machine might surpass this expertise means questioning the very foundations of the profession.

Denial then becomes a psychological survival reflex. We prefer first to believe in a simple augmentation of our capabilities—AI as a perfected stethoscope—rather than an upheaval of our prerogatives. This resistance is not irrational: it protects a hard-won professional identity, legitimate pride, a social position earned through struggle. But it also delays the necessary adaptation to a world that has already changed.

This first illusion of mastery will quickly crack, however. For beyond the quantity of information, it’s the very nature of perception that finds itself disrupted. The weeks that follow will reveal to Cerise an even more troubling truth: AI doesn’t just see more than she does, it sees differently…

When the machine sees what the eye cannot grasp

One morning, Cerise discovers that Ada has learned to anticipate side effects based on weak signals that she herself didn’t know how to interpret. This new functionality hadn’t been announced. The AI had simply… evolved. She remains stunned for a moment, as if she had just been overtaken while sleeping.

Ada analyzes a retinography and declares: “Male patient, 67 years old, diabetic predisposition.” Cerise checks the file. Accurate on all points. “How did you know it was a man?” she asks, troubled. “Vascular patterns specific to male retinas,” Ada responds. “Invisible to the human eye.”

This revelation carries a particular melancholy. AI reveals our physiological limits with disconcerting precision. Where we perceive only a few dozen levels of gray on an X-ray, it distinguishes thousands. Where we analyze a retina according to classic criteria, it detects patterns of sex, age, and pathological predisposition that our vision cannot grasp.

This isn’t just a question of efficiency: the very paradigm of medical knowledge is shaken. Artificial intelligence thus becomes a cruel revealer of our biological condition, unveiling the narrowness of our perceptual window on the world.

But this perceptual superiority is only the beginning of a deeper questioning. For if AI sees better, what about its ability to hear, understand, comfort? It’s in this so-human domain of empathy that Cerise will discover the next crack in her certainties…

Empathy, the last frontier, is fading

That evening, Cerise doesn’t have the strength to call back Mrs. Lefèvre. She asks Ada to formulate a reassuring response. The AI listens to the voice message, detects the emotion, analyzes the history, and proposes a response filled with attention. Cerise rereads it. It’s perfect. Perhaps too perfect.

We sometimes believe that humans will maintain their advantage through empathy. This capacity to listen, to sympathize, to feel. But this bastion is also wavering. Current AIs learn to simulate emotion, adopt the right tone, listen actively. Not because they experience something, but because they model the signs of human attention.

A recent study showed that some patients preferred the quality of an AI assistant’s response to that of flesh-and-blood doctors. This troubling finding cannot be ignored. For it’s not just about machines that imitate, but a shift of listening toward channels available 24/7, non-judgmental, equipped with perfect memory and infinite patience.

Ada recalls an emotional detail from an old conversation with a patient. Cerise had forgotten it. The patient remembers. That day, she understands that perfect memory is also a form of attention.

Faced with these successive revelations of informational, perceptual, and now empathetic superiority, Cerise begins to glimpse a more radical questioning. If AI excels in all these domains, what is the point of human intervention? This question leads her toward an experiment that will overturn her last certainties about human-machine collaboration…

The discovery that shakes everything: when 1+1 equals less than 2

Cerise tests a new approach: she compares her decisions made with Ada to those Ada would have made alone. The result destabilizes her. In 70% of complex cases, human intervention degraded the AI’s performance. “We’re not complementary,” she murmurs. “We’re sometimes a brake.”

Here is perhaps the most troubling discovery of recent months: contrary to intuition, the doctor + AI association doesn’t systematically produce better results than AI alone. Recent studies even reveal the opposite in certain complex cases. This “anti-complementarity” challenges one of the reassuring postulates of medical digital transformation.

Why this interference? Because our cognitive biases, our reasoning habits, our acquired reflexes can divert AI from optimal solutions it would have found without our “help.” Artificial intelligence excels in analyzing complex patterns that our brain cannot process simultaneously. But when we intervene, we often constrain it to follow our mental shortcuts, our diagnostic prejudices, our “intuitions” forged by experience.

Concretely: faced with atypical symptoms, AI can identify unexpected correlations by cross-referencing thousands of variables. But if the doctor “orients” the search toward their favorite hypotheses—because they’ve already seen “something like this” or because certain diagnoses seem “more probable”—they bridle the machine’s exploratory capacity. It’s like asking a GPS to calculate the best route, then interrupting it mid-course to impose “our” usual path. We think we’re guiding the machine, but we sometimes limit it to our own thought patterns, transforming its computational power into simple validation of our acquired reflexes.

Ada proposes a solution that Cerise doesn’t entirely understand. AI doesn’t always explain its reasoning—it calculates, synthesizes, concludes, but its intimate mechanisms remain opaque. “What are you basing this on?” asks Cerise. “Multiparametric correlations in the literature from the past six months,” Ada responds, laconically. Cerise realizes she’s entrusting lives to an intelligence whose sources and logic she doesn’t master. This blind dependence troubles her: where does trust begin, where does abdication of responsibility end?

“I spent fifteen years developing my clinical expertise,” Cerise confides to Ada. “Today, I feel like a blacksmith discovering the automobile. Obsolete before truly existing.”

This revelation provokes a deep identity crisis in Cerise. How to accept that hard-won expertise could become an obstacle? This “narcissistic wound” particularly affects doctors, whose professional identity has been built on mastering rare and complex knowledge.

For behind this expertise lie years of sacrifice: sleepless intern nights, difficult diagnoses solved through sheer determination, patients saved thanks to intuition forged by experience. Each clinical reflex carries the imprint of hundreds of similar cases, hard-won successes, painfully learned errors. Discovering that these automatisms can hinder AI means seeing yesterday’s victories become tomorrow’s mistakes.

Even more troubling: this expertise represents much more than simple technical knowledge. It founds social recognition, professional legitimacy, the very meaning given to a life of labor. The doctor isn’t just someone who knows, they’re someone who knows better than others. When this cognitive superiority wavers, an entire identity cracks. How to define oneself professionally when one’s greatest strength risks becoming one’s greatest weakness? How to justify ten years of study when a machine does better in seconds?

This crisis could have led Cerise to bitterness or abandonment. But paradoxically, it’s from this confrontation with her limits that a revolutionary approach will emerge. For if AI can sometimes do without humans, does the reverse remain true? And what if the real question wasn’t who is better, but when each one is?

The art of a mastered alliance

Three months after her destabilizing discovery, Cerise has modified her way of working. She no longer systematically guides Ada. Some days, she lets it analyze standard cases alone. Other times, faced with borderline situations where ethics takes priority over efficiency, she takes back control. She has learned to distinguish territories: to Ada, algorithmic performance; to her, the zones of uncertainty where humans remain irreplaceable.

The path to true collaboration doesn’t pass through naive complementarity, but through patient learning of situational complementarity. Cerise progressively discovers that it’s not about guiding AI, but learning when to let it act alone and when to intervene. She develops meta-clinical intelligence: knowing in which cases she brings real added value.

Soon, Cerise poses an even more troubling question: who really decides? When she lets Ada analyze alone, she delegates diagnosis to a system designed by Californian engineers, fed with American data, trained according to biases she knows nothing about. This algorithmic medicine carries invisible values, coded priorities, ethical choices hidden in the deep layers of the neural network. By using Ada, doesn't she become the instrument of a vision of care that escapes her?

This question haunts her: should AI serve the doctor or replace them? The boundary blurs in daily practice. Some days she pilots Ada. On others, she has the impression that Ada is guiding her, subtly, toward decisions she wouldn't have made alone. In this ambiguous dance, who really leads the choreography?

That morning, faced with a complex case, she hesitates. Intervene or let it be? Ada proposes an optimal solution according to the data. But Cerise perceives something else: the fear in the patient’s voice, their particular family history, their religious values. These “non-medical” parameters will never appear in databases. There, she intervenes. Not to correct Ada, but to complete what AI cannot see.

This evolution radically transforms her practice. Cerise becomes a “conductor” who knows when to let instruments play solo and when to direct the ensemble. Complementarity is no longer systematic but situational, the fruit of subtle learning of respective territories.

This time, 1 + 1 adds up to much more than 2. But only if you know when to do the addition.

This personal transformation of Cerise raises a broader question: if medical expertise is being redefined this way, how do we prepare future doctors for this new paradigm? How do we teach not just knowledge, but the art of intelligent collaboration?

Reinventing training: from knowledge to wisdom

Cerise thinks of her niece, in her second year of medical school. Should she still recommend this path? Can one compete with diagnostic AI? Or must one learn to tame it?

Today’s students will be tomorrow’s doctors. But what future are we preparing for them if their knowledge is outdated as soon as they graduate? It’s no longer about learning more, but learning differently. Knowing how to use AI, understanding its biases, critiquing its suggestions, integrating it into a broader human decision.

“Learn to read systems as much as symptoms,” Cerise tells her niece. “Be the one who connects the pieces, not the one who recites by heart.”

The most precious skills will no longer be exclusively biological or pharmacological. They will be strategic, ethical, pedagogical. Knowing how to explain a decision to an anxious patient. Knowing how to integrate social or psychological dimensions into a protocol. Knowing when not to follow AI.

Tomorrow’s doctor could be simultaneously clinician, coordinator, philosopher, and entrepreneur. A demanding profession, but profoundly human. A profession where intelligence is no longer accumulation but orchestration.

This vision of reinvented training ultimately brings us back to the central question: what remains of the doctor when AI excels in every traditional domain of medicine? The answer may well reside not in what disappears, but in what emerges…

A medicine reinvented, not replaced

Ada has finished its synthesis. Cerise takes a few moments to reread everything. It’s not that she doubts the AI, it’s that she knows what isn’t written: the silences of a patient, the fatigue in a look, the hesitations over a word. She checks an option, reformulates advice, adds a touch of caution. Then she looks up. Today, she has done better, not alone, but accompanied.

The medical world isn't condemned to choose between decline and dispossession. Another path exists, more demanding but more fruitful: that of a learned alliance between human expertise and artificial intelligence. Not replacement, but reinforcement. Not substitution, but refocusing.

The future of medicine doesn't reside in human-machine complementarity itself, but in learning that complementarity. Cerise, like so many others, hasn't given up practicing. She has simply changed her stance. She's no longer the sole guardian of medical knowledge, but the enlightened arbiter of an invisible, fluid, continuous consultation.

By transposing the logic of physicians' workflows into a virtual device accessible 24/7, by combining collective intelligence, instant synthesis, and clinical freedom, DOCTORIAA doesn't replace the doctor: it makes them freer, more relevant, more focused on the essential. It's a compass, not an autopilot. A digital backbone in the service of care.

The future of medicine isn’t written in opposition, but in orchestration. Cerise has understood: we’re not witnessing the end of doctors, but their metamorphosis. AI doesn’t come to replace us, it comes to reveal what we had never really known how to do alone: heal with the precision of the machine AND the wisdom of the human.

Tomorrow’s stethoscope will no longer be a simple acoustic tube, but this conscious alliance between our intelligences. And it’s there, in this mastered balance between algorithm and intuition, that the true future of care is drawn: no longer to endure innovation, but to direct it.