Large Language Model (LLM) computer programs should be of interest to you, as they can assist in drafting patient-education materials, such as defining dry eye disease. LLMs can also support staff onboarding and training by generating sample dialogues or educational summaries. The bottom line: LLMs can create efficiencies and free providers to better focus on the human beings they’re caring for.
Here, I define LLMs and provide some caveats to help you make informed decisions about how and when to use them.
LLMs Defined
LLMs use complex algorithms and mathematical functions designed to process and generate text. Popular examples include OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude. These computer programs are trained on enormous collections of public material (ie, text, images, and videos).
LLMs identify connections among the data, such as trends and patterns. Regarding the latter, they analyze how and when words typically appear together, learn the context and relationships among them, and use that knowledge to predict what is likely to come next in a sentence.
For example, if you asked an LLM to complete this sentence: “The garden is full of beautiful _______ ,” it would likely respond with “flowers.” This is because it has learned, from its training data, what a “garden” is, what a “garden” contains, and how people use “garden” in sentences. The LLM then offers the word that best fits the pattern.
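The prediction idea above can be illustrated with a toy sketch. To be clear, this is a simplified illustration, not how real LLMs work: actual models use neural networks trained on vast corpora, while this example merely counts how often one word follows another in a tiny sample text and predicts the most frequent follower.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction from patterns in text.
# A sample "training" text in which "beautiful flowers" appears
# more often than "beautiful roses."
training_text = (
    "the garden is full of beautiful flowers "
    "the garden is full of beautiful flowers "
    "the vase is full of beautiful roses"
)

# Count how often each word follows another (bigram counts).
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("beautiful"))  # prints "flowers"
```

Because “flowers” follows “beautiful” more often than “roses” in the sample text, the program predicts “flowers,” just as an LLM favors the completion it has seen most often in context.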
LLM Caveats
While these computer programs can certainly provide practice efficiency and generate value for our practices, it’s important to be aware of the following:
• LLMs do not form opinions or make judgments. Their output reflects patterns in the data they were given. Hence, their answers can sound knowledgeable but may not always be correct.
• LLMs can overlook subtle or context-dependent details that a clinician would naturally catch. This is because their output is only as good as their input. In other words, the more context one provides when interacting with an LLM, the more likely it is to respond with an accurate output.
• LLMs are not trained or experienced health care providers. Any medically relevant statement produced by an LLM should be verified against trusted clinical sources. In fact, due to inconsistencies in answers, OpenAI recently announced that its LLM (ChatGPT) will no longer answer medical prompts for personalized advice.
Understanding Appropriate Utilization
Optometrists who understand the strengths and limitations of LLMs are better positioned to benefit from them. By viewing these tools as clinical support rather than as sources of guidance and truth, ODs can use them safely and effectively. OM


