For the past few decades, there’s been a huge emphasis on collecting data electronically, with the ultimate goal of turning patient information into valuable insights. And while quantity has not been an issue, the quality of clinical data has suffered due to inconsistencies in how it’s captured and transferred, as well as a lack of standardization.
It’s a trend that needs to be reversed, according to the panelists on a recent healthsystemCIO webinar. During the discussion, panelists Oscar Marroquin, MD (Chief Healthcare Data and Analytics Officer at UPMC), Mark Mossel (VP of Enterprise Data Management at MaineHealth) and Dale Sanders (Chief Strategy Officer at Intelligent Medical Objects) shared insights on key factors determining data quality, and the strategies they’re leveraging to make data more complete, consistent, and accurate.
“We need to make it a lot easier and a lot more transparent for people entering data,” said Sanders, and that doesn’t mean putting more in front of clinicians. “We don’t need more dashboards. We need to give physicians the data they need in a tight, succinct way, and to meet them where they are.” And that means forming partnerships with key players to make sure their needs are being met.
Driving “better data quality”
Before that can happen, however, it’s important to understand the root causes of quality breakdowns. Perhaps the most significant, according to Marroquin, is a lack of standardization.
“When you start to surface the data for analytical purposes and make it consumable, it becomes evident that there’s tremendous heterogeneity,” he noted — at least, for some data sets. “For clinical data, most health systems or hospitals have created their own standards as to how it’s stored.” What works for one organization isn’t necessarily transferable when data comes in from outside the organization.
Mossel has found the same at MaineHealth, which is in the midst of a major EHR migration. One of the biggest hurdles is dealing with inconsistencies, which could potentially be solved by validating data upon entry. “If you have that as part of the process, it can certainly drive better data quality,” he said. The problem is that it places even more burden on the care team, which is the last thing any leader wants to do. “There are already so many constraints on their time.”
As a result, it’s more likely the data will be incomplete or contain errors, especially if caregivers aren’t able to document until later in the day. “That presents challenges,” said Mossel. “We’re looking at options to automatically generate information. If we can pull information from an inpatient use case using interfaces, that’s certainly preferable.”
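Pulling data over an interface instead of re-keying it sidesteps manual-entry errors entirely. As a rough sketch of what that can look like, assuming a FHIR-capable EHR (the endpoint below is a hypothetical placeholder, not MaineHealth’s actual setup):

```python
import requests

FHIR_BASE = "https://ehr.example.org/fhir"  # hypothetical FHIR endpoint

def latest_observations(patient_id: str, loinc_code: str) -> list[dict]:
    """Fetch coded observations straight from the EHR so nothing is re-typed."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={
            "patient": patient_id,
            "code": loinc_code,  # e.g., "2345-7" for serum glucose
            "_sort": "-date",    # newest first
            "_count": 10,        # cap the page size
        },
        timeout=10,
    )
    resp.raise_for_status()
    bundle = resp.json()
    return [entry["resource"] for entry in bundle.get("entry", [])]
```

Because each observation arrives already coded and timestamped at the source, nothing is lost to transcription or delayed documentation.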
Sanders agreed, noting that any tasks that compound the burden on clinicians should be avoided at all costs. “Whatever we do to improve data quality in healthcare, we can’t add additional clicks,” he said, noting that the small window of time clinicians have for documentation should be focused on “the highest priority, most useful data.”
Streamlining data capture
That’s where organizations like IMO come in, providing tools to transform information into structured data points based on clinicians’ needs. One way they’re able to do that is by translating terminologies that aren’t clinically friendly.
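To make that translation step concrete, here is a toy sketch; the handful of hand-written mappings below stands in for a full terminology service, which maintains millions of such relationships:

```python
# A tiny stand-in for a real terminology service, mapping clinician-friendly
# phrases and shorthand to standard codes (ICD-10-CM in this example).
TERM_MAP = {
    "heart attack": ("I21.9", "Acute myocardial infarction, unspecified"),
    "mi": ("I21.9", "Acute myocardial infarction, unspecified"),
    "high blood pressure": ("I10", "Essential (primary) hypertension"),
    "htn": ("I10", "Essential (primary) hypertension"),
}

def normalize_term(free_text: str):
    """Map a clinician's shorthand to a standard code and preferred label."""
    return TERM_MAP.get(free_text.strip().lower())

print(normalize_term("HTN"))  # ('I10', 'Essential (primary) hypertension')
```

The payoff is downstream: once “HTN” and “high blood pressure” resolve to the same code, queries and quality measures no longer depend on how each clinician happened to phrase the diagnosis.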
With the help of workflow tools, “those terms make sense,” Marroquin noted. And so, “when I’m preparing to see a patient, I don’t have to spend a whole lot of time looking at the screen. I already have an idea of what that phenotype is because I can leverage the tools that are available.”
MaineHealth is looking to do the same by testing a solution that provides a consistency that isn’t necessarily present when individual providers enter their own information or clinical notes. “If it’s being documented through a model, I would think that could add some consistency and help with data quality,” Mossel said.
NLP’s potential
Another concept is ambient AI, which is being utilized by Abridge to create summaries for both patients and providers, said Marroquin. The idea behind Abridge — which emerged from the Pittsburgh Health Data Alliance, a collaboration among UPMC, Carnegie Mellon University and the University of Pittsburgh — is to leverage language processing to “decrease the burden that we put on providers to make more clicks and spend the majority of their time looking at the screen, rather than having meaningful conversations with patients.” AI models are “trained to be able to transform terms into structured data points, which are then used to generate terms that make clinical sense,” he added.
It should come as no surprise to healthcare leaders, as NLP has been used for a number of years to produce discrete data for reporting and analytics. Now, however, it’s gaining momentum for the potential it offers in improving data quality. “It gives you a more complete picture” by providing an opportunity to “look for inconsistencies,” said Mossel.
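To show what producing discrete data from free text means in practice, here is a deliberately simple, regex-based sketch; production clinical NLP handles negation, abbreviations, and context far more robustly than this:

```python
import re

NOTE = "Echo today: LVEF 35%. Pt denies chest pain. BP 142/88 on lisinopril."

def extract_discrete(note: str) -> dict:
    """Pull a few discrete data points out of a free-text clinical note."""
    data = {}
    # Left ventricular ejection fraction, e.g., "LVEF 35%".
    if m := re.search(r"LVEF\s*(\d{1,2})\s*%", note, re.IGNORECASE):
        data["lvef_percent"] = int(m.group(1))
    # Blood pressure, e.g., "BP 142/88".
    if m := re.search(r"\bBP\s*(\d{2,3})/(\d{2,3})", note):
        data["systolic_bp"] = int(m.group(1))
        data["diastolic_bp"] = int(m.group(2))
    return data

print(extract_discrete(NOTE))
# {'lvef_percent': 35, 'systolic_bp': 142, 'diastolic_bp': 88}
```

Comparing values extracted this way against what sits in the structured fields is one way to “look for inconsistencies,” as Mossel put it.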
Sanders agreed, but advised leaders to choose NLP models that are “more fine-tuned for healthcare. Large language models are fascinating and they’re going to have an impact,” he said. “But healthcare has its own language.”
Clinicians as “active participants”
Another major component in data quality improvement is the human element, according to Marroquin. As a practicing clinician, he can attest that the partnership between the data and analytics team and frontline workers “is a critical step in being able to achieve success,” he said. “Having all of this digitized data and computing power affords us the ability to derive insights that can drive change for the better. Otherwise, we’re just doing data and analytics for the sake of doing it, and not seeing the impact it could have if we had a close partnership with those on the front lines.”
His team has been able to cultivate that relationship by involving care providers from day one. In fact, any analytics initiative at UPMC starts with an exploration and discovery session, which often marks the first time clinicians and administrators have the opportunity to look at the data together, “surfaced in a way that allows for easier identification of issues than by simply viewing a data extract,” Marroquin said.
By providing their subject matter expertise, clinicians play an active role in determining the quality and accuracy of the dataset, which is beneficial for both sides. “We’ve found that to be a very important aspect in allowing us to identify issues and be able to make changes in how we collect data moving forward,” he said.
What’s just as important, however, is ongoing participation, according to Mossel. At MaineHealth, data and analytics personnel are embedded into business units. Doing so has helped “grow their data literacy” while also helping subject matter experts to identify quality issues. “If you really want to positively impact data quality, don’t centralize those resources,” he said. Instead, “think of them as franchises of your analytics teams and aim toward transparency as you drive toward standardization, whether it’s toolsets, processes, or techniques for validation.”
The final piece is attending to the “soft skills” of those entering data by aligning incentives, said Sanders. “Everyone is motivated by mastery, autonomy, and purpose.” Therefore, the quality of the data being entered should have an impact on those three factors. “If you can’t convince the people who are entering data that quality is important to their job, you’re not going to have a long-term data quality strategy that works,” he added. “And so, I think it’s really important to address these soft issues.”
For those on the analytics side, the motivation is to improve the data collection process — or at the very least, “not introduce any further errors,” Sanders noted. And ultimately, “make it easy for folks to enter data the right way with validity checks upfront and minimize the complexity of entering data.”
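As a minimal illustration of the “validity checks upfront” that both Sanders and Mossel described, the sketch below uses hypothetical field names and plausibility rules, not any vendor’s actual logic:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class VitalsEntry:
    """A hypothetical vitals record captured at the point of care."""
    patient_id: str
    systolic_bp: int       # mmHg
    diastolic_bp: int      # mmHg
    recorded_at: datetime  # assumed timezone-aware (UTC)

def validate_vitals(entry: VitalsEntry) -> list[str]:
    """Return a list of problems; an empty list means the entry passes."""
    errors = []
    if not entry.patient_id:
        errors.append("missing patient_id")
    # Plausibility ranges catch transposed or mistyped values at entry time.
    if not 50 <= entry.systolic_bp <= 300:
        errors.append(f"systolic_bp {entry.systolic_bp} outside plausible range")
    if not 20 <= entry.diastolic_bp <= 200:
        errors.append(f"diastolic_bp {entry.diastolic_bp} outside plausible range")
    if entry.diastolic_bp >= entry.systolic_bp:
        errors.append("diastolic_bp should be lower than systolic_bp")
    # Flag future-dated documentation, a common late-entry mistake.
    if entry.recorded_at > datetime.now(timezone.utc):
        errors.append("recorded_at is in the future")
    return errors
```

Checks like these run instantly and surface the error while the person who knows the right answer is still at the screen, rather than weeks later in an analytics pipeline.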
To view the archive of this webinar — Getting Data Ready for Next-Level Uses (Sponsored by Intelligent Medical Objects) — please click here.