What began as a distraction during my son’s illness matured into a professional obligation to understand this technology fully and to help ensure that clinicians are positioned to use it safely and effectively, writes Prof Ray O’Sullivan

“Life is like a box of chocolates; you never know what you’re going to get.”

It’s not a line that opens a clinical technology column often, but it is the only adequate way to begin this one. Because when I found myself on paid ‘gardening leave’ from the HSE, I had little idea how radically life was about to pivot. My 21-year-old son had just been diagnosed with severe aplastic anaemia. He required red-cell and platelet transfusions two or three times a week, and in the earliest days of the pandemic, with clinics stripped back and the city shuttered, I spent hour after hour in the car-park of St James’s Hospital while he received his day-long treatments.

The haem-onc teams were exceptional, and after several gruelling years he eventually went on to a successful bone-marrow transplant (and a music career). But in those first months, with no cafés open, no waiting-rooms accessible, and the sun shining, I found myself, quite literally, sitting in my car with nothing to do. So, like many others in that era, I turned to online courses. The first one I clicked on, setting my inner nerd free, was Artificial Intelligence in Healthcare from Stanford University.

That moment, in a bleak car-park, became the start of many things that have since followed. Today, I lecture in AI at the Royal College of Surgeons, I have founded AI-leveraging companies, and I use AI daily in clinical practice. This includes non-invasive AI systems analysing the earliest stages of embryonic development for chromosomal abnormalities in our IVF laboratory. What began as a distraction during my son’s illness matured into a professional obligation: to understand this technology fully, and to help ensure that clinicians are positioned to use it safely and effectively.

A brief history of medical AI
While ideas about ‘thinking machines’ long predate digital computing, medical AI formally took shape in the 1970s with early diagnostic systems such as INTERNIST-1 in Pittsburgh and the rule-based MYCIN project at Stanford. These attempted to codify clinical reasoning and generate differential diagnoses.

Their promise was short-lived. They were brittle, labour-intensive to maintain, and struggled to accommodate the ambiguity of real-world clinical work. The field re-awakened in the 2000s with the arrival of machine learning, and again in the last decade with deep learning and natural-language processing, which allow models to interpret unstructured data and generate clinically relevant recommendations.

Today, AI extends beyond diagnosis to imaging, risk stratification, workflow automation, triage, documentation and personalised medicine. The evidence base continues to grow, and the applications relevant to clinical practice are expanding rapidly.

Why this matters for medical practice in Ireland
Irish healthcare is under sustained pressure: workforce shortages, rising multi-morbidity, and the relentless weight of administrative documentation. AI has the potential, if carefully integrated, to mitigate some of these pressures.

Digital scribes
Generative-AI documentation tools can reduce the time spent on referral letters, follow-up notes, safety-netting instructions and chronic-disease reviews.

Decision-support
Machine-learning models can help identify early deterioration, improve risk-stratification and support preventive medicine.

Diagnostics and imaging
In fertility medicine, non-invasive AI systems now assess embryos and endometrial patterns without biopsy. Comparable tools will inevitably move into general practice diagnostics.

But there is a regulatory reality
Under the European Union Artificial Intelligence Act (Regulation (EU) 2024/1689), AI systems used in healthcare are classified as ‘high-risk’, requiring traceability, transparency, monitoring, explainability and human oversight.

Many commercial AI tools currently marketed to healthcare providers do not meet these requirements. They are black-box systems that are not interpretable, auditable or observable. Under the Act, systems lacking these characteristics cannot be deployed in high-risk settings such as medicine. The same legislation also mandates training and competence requirements for professionals who use AI in high-risk domains, including clinicians (EU AI Act, Articles 6–7, Annex III).

This is not a bureaucratic footnote. It is now a clinical-safety and medico-legal obligation. Clinicians will be held responsible for the tools they use, even when those tools are algorithmic.

The case for AI literacy in medicine
AI literacy is no longer optional. It is central to patient safety. That is why my work at the Royal College of Surgeons in Ireland, at VoxMedical, and in other healthcare settings focuses on building a baseline understanding of AI among clinicians before widespread deployment happens.

Healthcare staff must know:

what these systems can and cannot do;
how model outputs are generated;
where risks lie; and
how to use AI ethically, transparently, and within regulatory limits.

Without this, we risk repeating the failures of earlier technological revolutions: adoption without understanding, and deployment without governance.

Looking ahead
This column will, over the coming months, explore the realities of AI in Irish healthcare practice: what works, what does not, what is safe, what is hype, and where regulatory boundaries fall. My aim is to provide clear explanations, balanced analysis, and clinically grounded guidance on a fast-moving field.

If a car park in Dublin could launch one clinician into AI advocacy, perhaps this column can help guide the wider profession through the next chapter.

