In my last blog, I introduced the concept of evidence gathering, the differences in pace and mindset between innovators and early-stage companies on the one hand and the scientific community on the other, and some reasons why rigorous, high-quality evidence is preferable to common sense and real-world observation.
This time, I want to explore the pros and cons of different types of clinical research and share some examples from the recent literature that can make a real difference in telehealth adoption.
- Observational study. This is the most basic form of gathering evidence. As discussed in my last post, this is great for hypothesis generation but tells us nothing about cause and effect. Closely related are the longitudinal observational study and the before/after study. Any conclusions based on these designs can be contaminated by regression to the mean (a statistical phenomenon that can make natural variation in repeated data look like real change) and learning bias (among others).
- Case-control design. This method is pragmatic and efficient: it compares a group of subjects who experience an intervention with an individually matched group that does not. With access to a database (e.g., an EMR database), one can use various demographic criteria to create a pseudo-control group that is closely matched on as many variables as possible but has never experienced the intervention; a minimal sketch of this matching logic appears after this list. This is a reasonable way to gather evidence on digital interventions.
- Double-blind study. This is the gold standard. The classic example comes from the pharmaceutical industry, where a new chemical entity is tested by giving it to half of the subjects and a placebo to the other half. As a subject, which “arm” of the study you are assigned to is random, hence the term “randomized, double-blind, placebo-controlled” trial. This is rarely possible with digital interventions because there is usually no convincing placebo. The next best thing, and what many consider the pinnacle of evidence in digital health, is randomizing individuals into groups that compare the intervention with usual care.
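To make the case-control idea concrete, here is a minimal sketch of how a pseudo-control group might be assembled from an EMR-style table. Everything in it is an assumption for illustration: the column names (patient_id, age, sex, used_telehealth) and the match-on-sex, nearest-age rule are placeholders, not a description of any particular study or dataset.

```python
import pandas as pd

def match_pseudo_controls(df: pd.DataFrame) -> pd.DataFrame:
    """For each patient who used the intervention, pick one unexposed patient
    of the same sex with the closest age (an illustrative matching rule only)."""
    exposed = df[df["used_telehealth"] == 1]
    candidates = df[df["used_telehealth"] == 0].copy()

    pairs = []
    for _, case in exposed.iterrows():
        pool = candidates[candidates["sex"] == case["sex"]]
        if pool.empty:
            continue  # no suitable control remains for this case
        best = (pool["age"] - case["age"]).abs().idxmin()
        pairs.append((case["patient_id"], candidates.loc[best, "patient_id"]))
        candidates = candidates.drop(best)  # match without replacement

    return pd.DataFrame(pairs, columns=["case_id", "control_id"])

# Toy, made-up records
records = pd.DataFrame({
    "patient_id":      [1, 2, 3, 4, 5, 6],
    "age":             [34, 60, 45, 36, 58, 47],
    "sex":             ["F", "M", "F", "F", "M", "F"],
    "used_telehealth": [1, 1, 0, 0, 0, 0],
})
print(match_pseudo_controls(records))
```

Real studies match on many more variables (often via propensity scores), but the logic is the same: each exposed patient is paired with a demographically similar patient who never experienced the intervention.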
Beyond that (and beyond the scope of this essay) are matters such as calculating the proper sample size: with too few subjects, a study can wrongly conclude that an intervention has an effect, or wrongly conclude that it does not; both errors are possible.
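To give a rough sense of the sample-size point, the standard two-arm approximation for comparing means takes only a few lines. The effect size, alpha, and power values below are arbitrary placeholders chosen for illustration, not recommendations for any real trial.

```python
from math import ceil
from scipy.stats import norm

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate subjects per arm for a two-sided comparison of two means,
    where effect_size is the standardized difference (Cohen's d)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # ~0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(n_per_arm(0.5))   # a "medium" standardized effect: roughly 63 per arm
print(n_per_arm(0.25))  # halve the effect size and the requirement roughly quadruples (~252)
```

Enroll too few subjects and both of the mistakes mentioned above become more likely.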
So how does all this relate to the real world? Here are a couple of stories that illustrate.
Is it as good?
If you’ve been around the telehealth world for any time, you’ve heard the age-old conundrum, “Is it as good as in-person?” This has been answered multiple times, and the true answer is, “it depends on the clinical scenario.” But a recent paper from colleagues at Mayo Clinic showed, once again, that video telehealth was of comparable diagnostic quality to in-person care. This type of evidence helps in conversations with health system executives, payers, and others, because it offers objective findings rather than an emotional argument.
Another question these days is whether telehealth is additive to in-person care or substitutive. Payers fear it is the former and thus want to limit access. A recent paper in npj Digital Medicine (where I serve as Editor-in-Chief) made the case that adding telehealth services in a primary care setting did not increase utilization.
The behavioral health factor
Perhaps the best example of how evidence can enable clearer heads to prevail involves another recent controversy in the behavioral telehealth world. Numerous news stories have suggested that widespread misuse of video telehealth is responsible for over-prescribing of stimulants for ADHD and other controlled substances.
The evidence suggests otherwise. As long ago as 2005, authors extensively examined the evidence and concluded that video-based behavioral health visits were equivalent in quality and outcomes to in-person care. More recently, a study showed that telehealth treatment of patients with opioid use disorder was associated with fewer overdoses than among patients who did not have access to it.
To wrap up this segment, our brains are wired in a way that forces us to make associations and jump to conclusions. Sometimes there are unknown variables responsible for the effects we observe. Thus, the main goal of science and evidence gathering is to tease apart these variables and allow us to speak with confidence regarding the associations that are meaningful versus those that are not.
If evidence-based care and best practices for hybrid care delivery intrigue you, I invite you to delve further in two ways. One is to join us at ATA in March 2023, which will feature detailed presentations of some of the latest research in the field. The second is to visit the npj Digital Medicine website (which is open access) and read through papers, comments, perspectives, and editorials.
What do you think are the top research priorities for telehealth in the coming year?
This piece was written by Joseph Kvedar, MD, Senior Advisor of Virtual Care at Mass General Brigham, and Professor of Dermatology at Harvard Medical School. It was originally published on his blog page, Reinventing Healthcare.