Between Progress and Prudence
Artificial Intelligence and Medicine
The emergence of artificial intelligence in medical practice is no longer a futuristic scenario, but a reality transforming consultations, diagnoses, and clinical decisions. Its enormous potential, however, coexists with ethical challenges that demand prudence, transparency, and a firm defense of the doctor-patient relationship. A recent conference at the Royal Academy of Medicine and Surgery of Seville addressed the keys to this balance: how to leverage AI without abandoning the most essential aspects of medicine—clinical judgment, moral responsibility, and compassionate patient care.
Last week I had the opportunity to give a lecture at the Royal Academy of Medicine and Surgery of Seville on artificial intelligence and medical ethics, a topic that no longer belongs to the future, but to the immediate present. We doctors perceive it every day: artificial intelligence is entering our consultations, triage systems, image interpretation, and very soon, even deeper areas of clinical practice. And if medicine changes, the doctor-patient relationship inevitably changes as well.
The initial question is simple: what exactly is artificial intelligence? As I mentioned in the presentation, artificial intelligence refers to computerized systems capable of mimicking certain human cognitive behaviors: seeing, reading, learning, or making decisions. In other words, mechanisms that process data to generate information and knowledge—something that, on the surface, resembles wisdom. But that resemblance is far from equivalence.
Because processing thousands of X-rays in a few minutes is not the same as exercising clinical judgment. Judgment involves discernment, experience, sensitivity, and, above all, moral responsibility. As Aristotle pointed out with his concept of nous, knowing and understanding is much more than calculating. And as Heidegger warned, thinking is not simply reasoning: it’s about engaging with reality, not reducing it to an algorithm.
Artificial intelligence systems lack all of that. They don’t think, they don’t judge, and they certainly don’t have a moral conscience. They execute processes, but they cannot respond to an ethical dilemma. Likewise, they are not moral agents: they cannot choose between good and evil. They may be extraordinary instruments, but they are not responsible subjects.
Today, artificial intelligence surpasses humans in many respects: speed, accuracy, the ability to handle vast amounts of data, and mathematical consistency. It can detect lung nodules with high precision, predict risks, design personalized treatments, and automate administrative procedures.
In fact, some algorithms designed by artificial intelligence already function as heuristic support systems, capable of suggesting diagnostic paths based on thousands of previous cases.
However, medicine is not solely technical. If it were, a surgeon could operate simply by following a plan, and an intensivist could manage an ICU based solely on the data provided by monitoring systems. But this is not the case. Medicine is an ethical practice that requires understanding the patient’s personal context, listening, prudence, and empathy. The humanistic practice of medicine cannot be replaced by any mathematical model.
That’s why it’s important to warn about the risks. The first is opacity. Many algorithmic systems function as ‘black boxes’: they offer a proposal without explaining how they arrived at it. That is not ethically acceptable when a decision affects a person’s life or health.
Another risk of algorithmic medicine is bias: if the data used to train a model is biased (by sex, age, ethnicity, or socioeconomic status), the result will also be biased. Algorithmic inequity is not science fiction; it exists and has already been denounced by several studies.
A third risk is dehumanization. If patients observe that those caring for them are looking more at the screen than into their eyes, we will gradually lose the relational space that constitutes the essence of medicine. The clinical relationship is an emotional palimpsest. Doubts, fears, hopes, and decisions are all inscribed upon it. Erasing it is an irreversible mistake.
But no less important is responsibility. Who is responsible when the system fails? The doctor who relied on the recommendation? The hospital that implemented it? The company that designed it? Spanish medical ethics has already begun to address these questions. The 2022 Code of Medical Ethics requires transparency, traceability, and ethical oversight in the development and use of artificial intelligence. And it emphasizes: algorithms can be helpful, but they never replace the duty of good medical practice.
In short, artificial intelligence is neither an enemy nor an oracle. It can make medicine more accurate and accessible, as long as we don’t relinquish ethical control to systems that, however intelligent they may be, lack the human element: the understanding of others.
Perhaps the medicine of the future will aspire to technological ataraxia (that tranquility sought by the ancient Greeks), but we shouldn’t confuse serenity with naïveté. Artificial intelligence is a powerful tool. The intelligent thing to do, paradoxically, is to use it prudently.
Jose Maria Dominguez Roldan. Member of the Bioethics Observatory. Institute of Life Sciences. Catholic University of Valencia