Article Date: 11/1/2011


In Search of Something New

How can we tell whether a new test or treatment will make a difference?

Kelly Nichols, O.D., M.P.H., Ph.D.

What does it take to make something new? We can argue about this, but I think it can be boiled down to three things: inspiration, motivation, and persistence. Oh, and a little extra cash flow helps. Realistically, how often do any of us create something new — a new method of doing something, or a new “something”?

Out with the old?

We are all faced with decisions daily regarding something “new.” The TV constantly bombards us with advertisements for new therapeutics (direct-to-consumer marketing), electronics (iPhone — bet you want a new one, too), toys, food, you name it.

Does that mean we should always get rid of the old in favor of the new? Not necessarily. In the field of dry eye, for example, circa mid-to-late 1980s, the concept of meibomian gland involvement in dry eye was “new.” As recently as 2007, a large number of clinicians and scientists believed the leading mechanism in dry eye was inflammation, so meibomian gland disease (MGD) seemed less “new.” And now, MGD is experiencing a resurgence of sorts and has been called perhaps the largest contributor to dry eye disease — “new” once again.

The “new” view

There are opposing points of view relative to “new.” First, “new” can be better, but sometimes “new” is only ever so slightly better (or different) than the predecessor. It then becomes our job to assess whether the “new” concept/drug/equipment is worth making a practice change. Take, for example, running shoes or automobiles. The new model may be the same as the old, with perhaps a change in color. Are you really going to pay that much more to have green vs. blue stripes on your running shoes?

Some would argue that the color does indeed make a difference. After all, choice is personal preference. On the other hand, others would buy the green-striped shoes only if they led to faster running times, caused less foot pain and looked cooler than the blue-striped pair.

Understanding the “new”

In all seriousness, every day we are faced with opportunities to change our practice patterns by adding something “new” based on medical information regarding instruments and therapeutics. How can we tell whether the new test or treatment will make a difference to our patients or the bottom line in our practices? Here are four ideas:

Listen to industry reps in exhibit halls and when they come to your office. Collect the information they provide, and take time after they leave to consider how strong the evidence is in favor of making a practice change.

Listen to your colleagues at professional meetings and continuing education events. After the lecture, follow up with colleagues in the exhibit hall, ask “what works for you?” and discuss.

Run your own mini “clinical trial.” Try the new therapy or instrument on patients in your practice. Systematically attempt to keep track of the outcomes in comparison with your previous treatment algorithm or device protocol.

Read the literature. Start with the trade journals. If you don't already get them, sign up online or in any exhibit hall to start receiving them. Skim the journal within a week of getting it, and tab pages (or rip them out) for further scrutiny. Place the journal or pages into an “evidence folder,” and spend one or two evenings a month going through the folder while at a computer to search for more studies or information on the topic.
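The mini “clinical trial” idea above can be made systematic with very little effort. The following is a minimal sketch, not a method from this article: it assumes you record one outcome per patient (here, hypothetical tear break-up time in seconds) under the old and the new protocol, and uses a paired t statistic — one common way to check whether the average change is larger than chance.

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical TBUT values (seconds) for the same patients
# under the previous protocol ("before") and the new one ("after").
before = [4.1, 5.0, 3.8, 6.2, 4.5, 5.5, 3.9, 4.8]
after = [6.0, 6.8, 4.2, 7.9, 5.1, 7.0, 4.4, 6.5]

# Per-patient change; a paired design compares each patient to himself/herself.
diffs = [a - b for a, b in zip(after, before)]
mean_change = mean(diffs)

# Paired t statistic: mean difference divided by its standard error.
t_stat = mean_change / (stdev(diffs) / sqrt(len(diffs)))

print(f"mean change: {mean_change:.2f} s, paired t = {t_stat:.2f}")
```

A large t statistic (roughly above 2 for this sample size) suggests the improvement is unlikely to be chance alone, though an informal in-office comparison like this is never a substitute for a controlled trial.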

What should I look for?

After lectures, I often get asked what elements of a study are important. This is a hard question to answer, especially if a practitioner is not that familiar with statistics and epidemiology. But even without a master's or Ph.D. degree in statistics, a clinician can pull critical information from a study and make a decision regarding the quality and impact of the work. Use these steps to evaluate a study:

Evaluate the authors. Are the authors well known in the field? Are they from a reputable location? Specifically look at the first author and the last author — the two most important authors on a manuscript. In many instances, the study lists the senior author last, while a more junior author is first.

Evaluate the sponsor. Let's face it, it takes money to do research. The financial support can come from a variety of sources, including industry, government (e.g. NIH, Veterans Administration), non-profit organizations (e.g. the Sjögren's Syndrome Foundation), or universities (e.g. start-up funds). Every funding source has the potential to influence the investigator or create bias. However, that does not mean it will.

Good and bad studies, and bias, can occur regardless of where the money for the study comes from. The important aspect is disclosure. A statement such as “unrestricted grant” indicates the sponsor did not solely direct the project. If company employees are listed as authors, there was likely more industry involvement in the project. Still, it does not mean the science is any less sound. That aspect needs to be evaluated separately and is not necessarily linked to “who paid for it.”

Evaluate the journal. The decision of where to publish a manuscript is not that complicated. In short, the authors submit the manuscript to the highest-ranked journal appropriate to the article's content. If it gets rejected, the authors move down the list until it gets published. If the article is published in a journal you have never heard of, there may be a scientific flaw. Alternatively, if the journal is from another field (e.g. proteomics), it could be highly rated in that field. ISI journal rankings are published annually and are based on the number of times articles from that journal are cited in other published manuscripts.

Evaluate the science. Regardless of who did it, who paid for it and where it is published, the science always speaks for itself. What are the important parts? While this is not a perfect approach by any stretch, you can glean important information from the scientific abstract of a published paper. Because the abstract is in the authors' own words, it is usually of higher quality than the summary of the article you might read in a trade journal or online.

Also, most clinicians do not have access to the journals where eye research is published. You can always e-mail the corresponding author (usually the most senior author) and ask for a PDF of the manuscript. In reviewing an abstract, assess:

1. Study design. Ask yourself: Is the study design appropriate to address the stated purpose or goal of the research?

2. Study question. Will the way the study was performed and its outcomes (tests/symptoms) answer the study question? It should make sense clinically.

3. Study logistics. Do you trust the clinic and the doctors to get the study done? (Big names don't always mean the study was done correctly, however.)

4. Study findings. Are the study's claims supported by the data presented? This is very important to making clinical decisions.

5. Study discussion. If you have access to the full paper, this section is of great value because it's where the controversy is generally defined. If you don't agree with the abstract, seek out the full paper, and read the discussion section. Were the flaws identified? Did the authors carefully compare the study findings to the existing literature? A well-written discussion section is usually the best source to aid in determining the clinical value of a study, and thus, the compelling evidence to add something “new” to your practice.

Study design cheat sheet

For those of you who are unfamiliar with study terminology, the following “cheat sheet” of common terms may prove helpful.

Randomization. Patients are enrolled and assigned to groups based on a pre-determined randomization scheme. This reduces selection bias.
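As an illustration (mine, not the article's), a pre-determined randomization scheme can be as simple as shuffling a balanced list of group labels before enrollment begins, so that no one can influence which group the next patient joins:

```python
import random

def make_randomization_scheme(n_per_group, groups=("treatment", "control"), seed=2011):
    """Build a balanced assignment list that is fixed before enrollment starts."""
    # One label per planned patient, balanced across groups.
    scheme = [g for g in groups for _ in range(n_per_group)]
    # A fixed seed keeps the pre-determined scheme reproducible and auditable.
    random.Random(seed).shuffle(scheme)
    return scheme

scheme = make_randomization_scheme(5)
# Patients are then assigned in enrollment order: patient 1 gets scheme[0], etc.
print(scheme)
```

Real trials use more elaborate schemes (blocking, stratification), but the principle is the same: the assignment sequence exists before the first patient walks in.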

Clinical trial. This is a study design in which groups are administered a therapy and followed for efficacy and/or safety.

Controlled. This is when an accepted comparator is used. This can be a placebo or what is perceived as standard of care. For example, the drug vehicle, saline or artificial tears have been used as the control in ocular surface disease studies.

Open-label. This is when the patient knows what treatment he/she is receiving. Usually, the medication is provided in the commercially available form. The patient is also aware that the treatment in question is supposed to help his/her condition. Bias is possible with this study design; however, new insight into a treatment can be gained in a pilot study.

Masking. This is when the patient is unaware of the treatment he/she is receiving. In a double-masked study, the doctor and the patient are unaware of treatment assignments.

Outcomes. These are the pre-selected variables that drive the statistical analysis. For example, the primary outcome in a dry eye trial may be clearance of corneal staining or a five-second improvement in fluorescein tear break-up time. Secondary outcomes are additional variables that are analyzed, such as a seven-unit improvement in OSDI score. In a clinical trial for FDA approval, discussions with the FDA and preliminary data are used to select primary and secondary outcomes for further trials.
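To make the outcomes idea concrete, here is a hypothetical sketch (invented numbers, not from any actual trial): the primary outcome threshold — say, the five-second TBUT improvement mentioned above — is written down before the data are collected, and the analysis simply measures against it rather than hunting for whatever looks best afterward.

```python
# Hypothetical per-patient TBUT improvement (seconds) in the treated group.
improvements = [5.5, 6.2, 3.1, 7.0, 4.9, 5.8]

# Pre-selected primary outcome threshold, fixed before the trial began.
PRIMARY_OUTCOME_THRESHOLD = 5.0

# A "responder" meets or exceeds the pre-selected threshold.
responders = [x for x in improvements if x >= PRIMARY_OUTCOME_THRESHOLD]
responder_rate = len(responders) / len(improvements)

print(f"responder rate: {responder_rate:.0%}")
```

The discipline is in the ordering: because the threshold is chosen first, the responder rate is an honest answer to a pre-specified question.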

Inspiration, motivation and persistence

Let's face it, adding a “new” clinical test or treatment to your practice is creating something “new.” You will be changing your workflow, staff expectations and patient perceptions. So it requires inspiration, motivation and persistence. And some changes, in particular those that require staff training or new assignments, can be challenging but ultimately have a positive result. You may be surprised how your office colleagues respond to a change in favor of improved patient care. Let the evidence be your guide, followed by your gut instinct. And oh, a little extra cash flow helps, which is universal. OM


Optometric Management, Issue: November 2011