Health Affairs recently published an interesting article that may have gone unnoticed by many in the industry. The article, entitled “Mixed Results in the Safety Performance of Computerized Physician Order Entry”, was authored by Jane Metzger and Dr. David Classen of CSC, Dr. David Bates and Stuart Lipsitz of Partners Healthcare, and Emily Welebob, a nurse informaticist. It appeared in the April 2010 issue.
As the industry embarks on its journey in search of the holy grail of “meaningful use”, CPOE has arguably emerged as one of the most critical, yet thorniest, criteria hospitals must meet to qualify for HITECH incentive payments. But those in DC who framed the criteria, and those positioning themselves to qualify for payment, know that at its core this is an issue of patient safety and care quality, not money.
David Bates is one of the most prolific CPOE researchers and tireless advocates in the industry. He underscored the importance of incorporating clinical decision support (CDS) into the medication ordering process in the landmark 2008 study “Saving Lives, Saving Money: The Imperative for Computerized Physician Order Entry in Massachusetts Hospitals”. That study, conducted at several Massachusetts hospitals, found that “the average baseline rate of preventable adverse drug events was 10.4 percent.” The study was a major influence on legislation that mandates CPOE in Massachusetts hospitals by 2012 as a condition of continued licensure.
The Health Affairs article summarized the results of a simulation study conducted at 62 US hospitals between April and August of 2008. The methodology consisted of entering a simulated set of medication orders designed to assess the ability of the hospitals’ operational medication order entry systems to detect inappropriate orders for a set of test patients whose attributes (such as age, sex, weight, allergies, and other active medications) made the orders likely to trigger an adverse drug event (ADE). In most cases, the hypothetical test orders were derived from prior research into actual, “real-life” ADEs that caused serious harm to a patient.
The report took, as its baseline, the Leapfrog Group’s standards that at least 75% of orders should be entered using CPOE and that the CDS rules embedded in the CPOE functions should be able to avert at least 50% of “common, serious prescribing errors.” The results were troubling indeed, and pointed to the need for improvements in both software design and implementation.
Across the 62 individual hospitals, detection of the potential consequences of the orders ranged from a low of 10% to a high of 82%. The scores for the top 10% of hospitals ranged from 71% to 82%, while the scores for the bottom 10% ranged from 10% to 18%. The overall median detection score was only 44%. The study further divided the orders into two broad categories: those whose detection was dependent on basic CDS rules – essentially those readily available “out of the box” — and those that required more sophisticated configuration or customization of the decision support logic. There was a pronounced difference. For those judged as likely to be detectable with “basic” capabilities, the median detection score was 62%; for those in the “advanced” category, the median detection score was only 19%.
While drug-allergy interactions were detected 77% of the time, results for other, less trivial but still basic contraindications were not as positive. Inappropriate single-dose orders were detected only 38% of the time. Both therapeutic duplication and drug-drug interactions were detected only 44% of the time. In the more sophisticated category, detection rates were even lower. For example, inappropriate cumulative daily dosing was detected 29% of the time; inappropriate dosing based on patient weight, 28% of the time; age contraindications, only 8% of the time; and drug-diagnosis contraindications, only 9%. Corollary orders, such as laboratory testing, that should be automatically generated to monitor patient reaction to a drug regimen were triggered in only 20% of the test cases.
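The basic-versus-advanced split above is easier to see with a toy sketch. The code below is purely illustrative and not drawn from the study or any actual CPOE product: the rule tables, drug names, and mg/kg limit are invented. It shows why a drug-allergy check that ships with vendor content can fire out of the box, while a weight-based dosing check silently passes until the hospital builds and maintains its own dosing limits.

```python
# Toy illustration of "basic" vs "advanced" CDS checks.
# All tables, drugs, and limits here are hypothetical.

# Basic check: vendor-supplied interaction content works as delivered.
DRUG_ALLERGY_TABLE = {("penicillin", "penicillin allergy")}  # hypothetical vendor content

def drug_allergy_alert(drug, patient_allergies):
    """Fire an alert if the ordered drug matches any recorded allergy."""
    return any((drug, allergy) in DRUG_ALLERGY_TABLE
               for allergy in patient_allergies)

# Advanced check: the mg/kg limits must be configured by the site.
# An empty table means the check never fires -- the failure mode the
# study's low "advanced" detection scores suggest.
WEIGHT_BASED_MAX_MG_PER_KG = {}  # empty until the hospital configures it

def weight_dose_alert(drug, dose_mg, weight_kg):
    """Fire an alert if the dose exceeds the configured mg/kg ceiling."""
    limit = WEIGHT_BASED_MAX_MG_PER_KG.get(drug)
    if limit is None:
        return False  # no rule configured, so the bad order sails through
    return dose_mg > limit * weight_kg

# A simulated order set like the study's test patients:
print(drug_allergy_alert("penicillin", ["penicillin allergy"]))  # basic rule fires
print(weight_dose_alert("gentamicin", 700, 70))                  # unconfigured: no alert
WEIGHT_BASED_MAX_MG_PER_KG["gentamicin"] = 7  # site adds a 7 mg/kg cap (hypothetical)
print(weight_dose_alert("gentamicin", 700, 70))                  # now 700 > 490: alert
```

The point of the sketch is the last three lines: the same order against the same system produces an alert only after local configuration work, which is the kind of implementation effort the study authors argue separates high- and low-scoring hospitals.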
The 62 hospitals studied used a total of 8 different CPOE products. None were identified by name. Seven were commercial products; one was internally developed. Given the authors and their affiliations, it is quite possible that the latter was the CPOE system at Partners. The commercial products exhibited wide variation in their detection scores. The best-performing product’s scores ranged from 35% to 80% across all orders simulated. Two products, installed in only one sample site each, scored 82% and 50%, respectively. The scores for the remaining five ranged from 20% to 78%, 30% to 80%, 30% to 70%, and 18% to 58%, while the one with the widest variability detected a low of 10% and a high of 75%. Although some of the variability in the study’s findings could be attributable to differences in the capabilities inherent in the commercial products represented, the study authors concluded that this alone was unlikely to account for the wide range in detection scores. The more likely explanation lay in the manner in which the seven commercial products were configured and implemented.
So what’s the key takeaway here? I think it’s that, as the saying goes, “the devil is in the details”. Buying a CPOE product is just the start of a long journey; one in which the journey is the destination – and one that should really never end. To steal a lyric from Eric Clapton: “it’s in the way that you use it.” Food for thought, don’t you think?