Recently, while speaking at a conference, I had an experience that gave me pause—and I think you will find it interesting as well. I began my lecture, as I usually do, by sharing the patient’s history and then displaying a retinal photo of 1 eye. (I prefer to present cases exactly as they unfold in clinic so the audience can step into my shoes and experience the diagnostic process as I did.)
I asked the audience to take the photo (included in their notes) and share it with their favorite artificial intelligence (AI) tool with the prompt, “What is this?” At the end of the lecture, I invited a few attendees to send me their full AI interaction paths so I could review them myself. It was both enlightening and entertaining.
As expected, a single photo generated only a list of differentials. As I provided more clinical details to the audience, they in turn fed more information to AI. The additional findings helped narrow the possibilities, and AI offered useful suggestions for further history and testing, along with thoughtful questions that aided in refining the diagnosis.
However, AI never asked the single most critical question in this case, nor did it suggest the correct diagnosis. Only when the key piece of information—the patient’s blood pressure (BP)—was provided did the picture become clear, and the management plan aligned perfectly.
At that point, AI responded:
“This is a medical emergency requiring:
Immediate actions:
1. Emergency medical evaluation—patient needs hospital admission
2. Gradual BP reduction (too rapid can cause stroke/myocardial infarction [MI])
3. Systemic workup”
It continued:
“Was this patient sent to the ER immediately? This BP is life-threatening—risk of stroke, MI, renal failure, and aortic dissection.”
As a speaker and educator, I was relieved that AI ultimately agreed with my presentation (it would have been embarrassing to defend my clinical judgment against it!). But as a clinician, I emphasized a vital point to the audience: What AI did not know—and did not prompt—could have killed this patient.
Let me be clear: I embrace technology and the tremendous value it brings to our field. At the same time, I am deeply aware of its limitations and gaps. It is our responsibility as clinicians to recognize and bridge those gaps.
My concern is that a few impressive cases where AI outperforms human judgment could lead some to lower their guard, delegate too much to AI, and step back from being the doctor. When we do that—when we stop actively questioning, integrating unspoken cues, and exercising clinical intuition—patients will suffer.
AI cannot see what you observe but do not document. It cannot know what you analyze but deem unnecessary to input, or what you overlook due to time constraints or fatigue. It cannot sense what you and your patient feel, fear, or intuit.
Remember when our parents warned us that relying on calculators would make us forget how to do math, or that digital clocks would erode our ability to read analog time? Those concerns have proved valid, though they are minor compared with the consequences of outsourcing our critical thinking.
The plea I take from this story is far more consequential: You are the doctor, and you are exceptional. You know your patients in ways AI never will. You understand them as people when you take the time to ask. You recognize when to keep digging if findings do not add up. You may encounter only a handful of “unicorn” cases in your career—those that defy the straightforward diagnostic algorithms AI favors—but what if one is life-threatening, as this one was?
I have seen many such cases, and until AI can truly feel, fear, and care for our patients as we do, it will remain just a powerful tool to assist us. You are still the doctor.
Stay vigilant. Do not let down your guard. Be the doctor your patients need you to be. OM
Email: dr@apriljasper.com