This is a follow-on to my previous post regarding the safety issues of EHRs. But first, one comment and point of emphasis before I dive into the Top 10 details: tightly integrated, monolithic EHR solutions that rely on a single data model are much less prone to safety-risk scenarios than loosely integrated, best-of-breed systems glued together with HL7.
That’s my personal observation; no lengthy study is required to prove it to me. There are, of course, other drawbacks to purchasing a monolithic EHR from a single vendor, but from a safety standpoint they are a better choice than buying disparate pieces and trying to glue them together with HL7. The unfortunate downside to HL7 is its fragility: errors occur easily and frequently, leading to mismatched patient records, delayed delivery of vital results, or results lost outright. I’m not a fan of message-oriented architectures (MOA), and never have been, but especially so in healthcare, where HL7 has made it insidiously easy to hang onto old-school, unreliable MOA designs.
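To make that fragility concrete, here is a minimal sketch of the kind of hand-rolled HL7 v2 parsing that interface code often contains. The message content, function names, and the specific failure mode are my own invented illustration, not anything from a particular product; real interfaces use interface engines and conformance profiles, but the underlying pipe-delimited positional format is the same.

```python
# Illustrative sketch of naive HL7 v2 parsing (invented example).
# HL7 v2 fields are positional and pipe-delimited, so a sender that
# collapses empty fields silently shifts every later field left.

def medical_record_number(message: str) -> str:
    """Return the first component of PID-3 (patient identifier list)."""
    for segment in message.split("\r"):
        fields = segment.split("|")
        if fields[0] == "PID":
            return fields[3].split("^")[0]
    raise ValueError("no PID segment found")

# Well-formed: empty fields (PID-2, PID-4) are still delimited.
good = ("MSH|^~\\&|LAB|HOSP|EHR|HOSP|20230101||ORU^R01|1|P|2.3\r"
        "PID|1||12345^^^MRN||DOE^JOHN||19600101|M")
# Malformed: a sloppy sender drops the empty fields entirely.
bad = ("MSH|^~\\&|LAB|HOSP|EHR|HOSP|20230101||ORU^R01|1|P|2.3\r"
       "PID|1|12345^^^MRN|DOE^JOHN|19600101|M")

print(medical_record_number(good))  # 12345
print(medical_record_number(bad))   # DOE -- a name, silently treated as an MRN
```

Nothing raises an exception in the second case; the parser happily returns a surname as a record number, which is exactly the silent, mismatched-patient failure mode that makes HL7 interfaces a safety concern.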
Now on to the Top 10 …
My information systems and software development background started in the U.S. Air Force, an environment in which software safety (and security) is an embedded part of the culture. Military weapons control and targeting systems, flight control systems, and intelligence data processing systems are all developed with an inherent sense of safety. That background predisposed me toward an interest in the subject, especially as I watched software and firmware play a larger and larger role in areas of our lives that were once controlled by levers, gears, and links. I take a gloomy self-satisfaction in knowing that I predicted, and tried on numerous occasions to prevent, the software system failures that affect many of our computerized voting systems and now seem also to affect our automotive control systems. I find myself in yet another Groundhog Day moment with this topic and EHRs.
As described by Debra Herrmann in “Software Safety and Reliability,” software is safe if …
- It has features and procedures that ensure it performs predictably under normal and abnormal conditions.
- The likelihood of an undesirable event occurring in the execution of that software is minimized.
- If an undesirable event does occur, the consequences are controlled and contained.
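Herrmann’s three properties can be illustrated with a single operation. The sketch below (my own invented example, with made-up function names and data) shows a result-posting routine that behaves predictably on the normal path, anticipates an abnormal condition rather than guessing, and contains the undesirable event by quarantining the message instead of corrupting a record.

```python
# Invented illustration of Herrmann's three safety properties
# applied to one operation: posting a lab result to a record.

def post_result(record_store, quarantine, mrn, result):
    """Post a result predictably; contain the failure if we cannot."""
    try:
        if mrn not in record_store:          # abnormal condition, anticipated
            raise KeyError(f"unknown MRN {mrn}")
        record_store[mrn].append(result)     # normal, predictable path
        return True
    except Exception as exc:
        # Undesirable event: contain the consequences. Quarantine the
        # message for human review rather than guess at a patient match.
        quarantine.append((mrn, result, str(exc)))
        return False

records = {"12345": []}
quarantined = []
post_result(records, quarantined, "12345", "K 4.1 mmol/L")  # posted normally
post_result(records, quarantined, "99999", "K 4.1 mmol/L")  # contained
```

The design choice worth noting is the failure path: the unsafe alternative, silently creating a record or attaching the result to the closest match, is what turns a software error into a patient-safety event.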
Below are the Top 10 Characteristics of a Safe Software Environment, at the CIO level, based on my 20-plus years of watching and studying this topic. There are numerous lower-level programming techniques that I might cover later, but these are the top characteristics that a healthcare CIO can and should directly lead. If you were a Software Safety Auditor for ONCHIT, CCHIT, HHS, or the FDA, you could use this checklist as a preliminary assessment of an EHR vendor or of a healthcare organization undergoing an EHR implementation.
- Does the organization have an internal committee to whom software safety concerns and events can be escalated and reported?
- Is there an Information Systems Safety Event Response process documented and well-known in the culture?
- Is there a classification and metrics system in use for categorizing and measuring the events and risks associated with software safety?
- Are software applications, functions, objects, and system interfaces classified and tested according to their potential impact on safety risk scenarios? Such as:
- Data which appears timely and accurate, but is not
- Data that is not posted — incomplete record
- Valid data that is accidentally deleted
- Data posted to the wrong patient record
- Errors in computerized protocols and decision support tools and reports
- Is safety-critical software subjected to independent verification and validation (IV&V)?
- Is there a process of certification for programmers who are authorized to work on safety-critical software?
- Are fault tree and failure modes-effects analysis an embedded element of the software development and software-safety culture?
- Is scenario-based software testing utilized?
- Are abnormal conditions and/or inverse requirements testing performed?
- Is there a more risk-averse software change control and configuration management process in place for safety-critical software?
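Several checklist items above (scenario-based testing, abnormal conditions, and inverse-requirements testing) come down to testing what the software must *never* do, not just what it should do. The sketch below is my own invented example of that style, for a hypothetical result-display function; the function name, thresholds, and test cases are illustrative assumptions, not from any real system.

```python
# Invented illustration of abnormal-conditions / inverse-requirements
# testing for a hypothetical lab-result display function.

def render_potassium(value_text: str) -> str:
    """Render a potassium result, refusing to display implausible data."""
    try:
        value = float(value_text)
    except ValueError:
        return "RESULT UNREADABLE - verify with lab"
    if not 0.5 <= value <= 12.0:  # outside physiologic possibility
        return "RESULT OUT OF RANGE - verify with lab"
    return f"K {value:.1f} mmol/L"

# Normal-path test: the stated requirement.
assert render_potassium("4.1") == "K 4.1 mmol/L"

# Inverse-requirement tests: what must NOT happen. Garbage or
# implausible input must never render as a believable result.
assert "verify with lab" in render_potassium("4.1.7")  # malformed data
assert "verify with lab" in render_potassium("410")    # upstream unit error
```

The point of the last two assertions is that they would still pass or fail meaningfully even if the rendering logic were rewritten: they pin down the safety property, not the implementation.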
I share these Top 10 Characteristics, based on my experience, as an attempt to contribute to the solution rather than just complain about the problem. If you have any thoughts, suggestions, or personal lessons, I’d enjoy hearing from you.
BobColiMD says
In my clinical experience, to avoid unnecessary testing and patient safety errors, physicians must overcome two big problems with patient diagnostic test results.
One, as you noted, is that the status of the test results data quality (completeness and validity) in the EHR is often “the unknown that leaves clinicians not knowing what they don’t know.”
The second, overlooked by both HIT vendors and policy makers to date, is that EHR, PHR and HIE platforms continue to display test results as incomplete, fragmented data in variable reporting formats. This makes it impossible for physicians and patients to view and share them efficiently.
The practical solution is a content exchange standard reporting format that can display the results of all 6,000 different tests as clinically integrated, easy-to-read information on up to 80 percent fewer screens and pages.
Dale Sanders says
I agree completely. The way we display data to our poor physicians and nurses is a crime… our user interfaces on EHRs also contribute to patient safety events. I wouldn’t give a nuclear power operator an EHR-quality user interface, that’s for sure.
I posted a satirical blog post several months ago imagining Amazon.com building an EHR… I wish we could lure that type of talent to healthcare IT.