Perhaps it’s time to stop referring to AI as a ‘buzzword.’ Yes, talk about artificial intelligence consistently generates a great deal of interest, particularly at events like HIMSS and ViVE. But in this case, the excitement is real, said John Halamka, MD, president of Mayo Clinic Platform.
“AI is going to provide care pathways, care navigation, and triage to an environment that has fewer practicing clinical and health professionals because of retirements, resignations, and realignments,” he said during a recent HIT Policy Update (which also featured Jay Sultan, VP of Healthcare Strategy at LexisNexis). “These last couple of years have caused such stress, anxiety, burnout, and loss of satisfaction that they’re leaving the profession.”
As a result, he expects to see an increased reliance on AI — not to replace providers, but to enable them to “see the right patient, with the right acuity, in the right setting, at the right time,” which can go a long way toward increasing satisfaction. “We’re going to do some great things as we work on AI.”
Training AI models
Before that can happen, however, there are hurdles that need to be cleared. “We’re going to have to educate people as to when they should believe AI and how they should use it appropriately in their practice,” said Halamka, who believes a “standard set of practices” should be established.
A common issue with AI is that algorithms proven successful in one patient population can’t always be applied to another. For example, an algorithm for predicting pulmonary hypertension among patients in Minnesota may not be effective for those living in New Jersey. “We need to train our AI models on more data that is geographically diverse and includes social determinants of health,” which requires access to larger data sets.
And although organizations like Truveta hope to make headway by structuring clinical data into a FHIR model, there are still limitations, Halamka noted. “Maybe you can send 50 variables of deidentified data, but you probably can’t do it for the entire corpus of data that exists in the EHR from birth to death. It’s hard to put everything in a central unified data model.” There are also questions surrounding privacy and consent, especially when it involves behavioral health information.
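The “50 variables” point is easiest to see in code. The sketch below is a minimal illustration of the general pattern (selecting an agreed variable subset and stripping direct identifiers before anything leaves the institution); it is not Truveta’s pipeline, and the record fields and `SHARED_VARIABLES` list are invented for illustration.

```python
# Minimal sketch of sharing a limited, deidentified variable set.
# Field names and the variable list are hypothetical; a real pipeline
# would follow HIPAA Safe Harbor or Expert Determination rules.
import hashlib

SHARED_VARIABLES = {"birth_year", "sex", "zip3", "diagnosis_codes", "smoking_status"}

def deidentify(record: dict, salt: str) -> dict:
    """Keep only the agreed variable subset and replace the MRN with a
    salted one-way hash, so records can be linked but not traced back."""
    shared = {k: v for k, v in record.items() if k in SHARED_VARIABLES}
    shared["pseudo_id"] = hashlib.sha256((salt + record["mrn"]).encode()).hexdigest()[:16]
    return shared

record = {
    "mrn": "12345678",             # direct identifier: never leaves the firewall
    "name": "Jane Doe",            # dropped entirely
    "birth_year": 1961,
    "sex": "F",
    "zip3": "554",                 # truncated ZIP, a common Safe Harbor convention
    "diagnosis_codes": ["I27.0"],  # pulmonary hypertension, echoing the article's example
    "smoking_status": "former",
}
print(deidentify(record, salt="site-secret"))
```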
Opening the datasets
One option is a distributed data model, in which the data remains behind a firewall at the institution that generated it. Mayo, which recently began using this method, takes a zero-trust cryptographic approach: the organization operates a distributed data network in which neither the data nor the algorithm ever leaves the firewall. This, Halamka said, allows the data and AI to interact without putting patient privacy or the algorithm’s intellectual property at risk. “We’re going to see a lot more AI. A lot of it is going to be bad AI, and there’s going to be a lot of pressure to open up data sets that are more heterogeneous and representative of a local population, and to provide continuous monitoring if there are shifts in the data that make the algorithm less useful.”
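Halamka doesn’t detail Mayo’s zero-trust cryptographic implementation, but the distributed pattern he describes (compute travels to the data; only aggregates travel back) can be sketched in miniature. Everything below is an illustrative assumption rather than Mayo’s system: two synthetic “sites,” a toy logistic model, and plain federated averaging standing in for the cryptographic machinery.

```python
# Federated-averaging sketch: each site fits a model update locally,
# behind its own firewall, and only the coefficient vector (never
# patient-level data) is sent back for aggregation.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One site's contribution: gradient steps computed on local data only."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))       # logistic predictions
        w -= lr * X.T @ (p - y) / len(y)   # gradient descent step
    return w                               # only this small vector leaves the site

# Two hypothetical sites; the patient-level arrays stay behind each firewall.
site_data = [
    (rng.normal(size=(200, 3)), rng.integers(0, 2, size=200)),
    (rng.normal(size=(150, 3)), rng.integers(0, 2, size=150)),
]

weights = np.zeros(3)
for _ in range(5):  # five federation rounds
    updates = [local_update(weights, X, y) for X, y in site_data]
    sizes = [len(y) for _, y in site_data]
    weights = np.average(updates, axis=0, weights=sizes)  # size-weighted merge

print("aggregated coefficients:", weights)
```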
Like any challenge in healthcare, this one will require collaboration. Fortunately, it’s already happening through the Artificial Intelligence Industry Innovation Coalition, which includes experts from Intermountain Healthcare, Novant Health, and the Brookings Institution, among other organizations. The goal, according to Halamka, is to create a national standard for evaluating and labeling algorithms that provides information on sensitivity, specificity, and positive predictive value for specific patient groups. “Having that metadata about algorithms exposed can help provide the right decision support that you need for the patient in front of you.”
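The “metadata about algorithms” Halamka describes comes down to standard confusion-matrix arithmetic reported per cohort. Here is a minimal sketch, with counts invented purely to illustrate the kind of algorithm label being proposed:

```python
# Sensitivity, specificity, and positive predictive value per patient
# group, computed from confusion-matrix counts. All counts are made up
# for illustration; cohort names echo the article's Minnesota/New Jersey example.
def algorithm_label(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),  # of true cases, share the model catches
        "specificity": tn / (tn + fp),  # of non-cases, share correctly cleared
        "ppv":         tp / (tp + fp),  # of positive calls, share that are right
    }

cohorts = {
    "Minnesota cohort":  algorithm_label(tp=90, fp=10, tn=880, fn=20),
    "New Jersey cohort": algorithm_label(tp=60, fp=35, tn=850, fn=55),
}
for name, metrics in cohorts.items():
    print(name, {k: round(v, 2) for k, v in metrics.items()})
```

Because PPV in particular shifts with local disease prevalence, the same model can look strong in one cohort and weak in another, which is exactly why a per-population label matters.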
Identifying inequities
For this to happen, however, it’s vital that the data being used are representative of the patients being seen, noted Sultan. “The old saying ‘healthcare is local’ is absolutely true. In a lot of the datasets that you wouldn’t think are necessarily geographically focused, there are all kinds of biases.” Understanding the source of the data is becoming increasingly important, particularly when dealing with data involving race, ethnicity, or LGBTQ status. “We have to be super transparent with information and very aware of biases that can come from how and why the data were collected,” he said. “You have to have good testing of data models with a view of factors that can cause bias.”
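One concrete form of the testing Sultan calls for is comparing the make-up of a training data set against the population a model will actually serve. A minimal sketch, with proportions invented for illustration:

```python
# Compare a training set's demographic mix against the target population.
# The groups and proportions are hypothetical; a real audit would also
# examine missingness, outcome rates, and collection methodology per group.
training_mix = {"group_a": 0.72, "group_b": 0.18, "group_c": 0.10}
target_population = {"group_a": 0.55, "group_b": 0.25, "group_c": 0.20}

for group in target_population:
    ratio = training_mix[group] / target_population[group]
    flag = ("UNDER-represented" if ratio < 0.8
            else "over-represented" if ratio > 1.2
            else "ok")
    print(f"{group}: training share is {ratio:.2f}x its population share -> {flag}")
```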
To that end, Halamka believes healthcare will see more policy focused on how algorithms are deployed, and more transparency around processes. “People aren’t looking to buy an algorithm; they want a service,” he said. “I think we’re going to see a marketplace for AI where provider organizations can go to purchase solutions to clinical problems, workflow challenges, and quality issues, with ongoing monitoring and oversight. We’re going to see more organizations produce standards for evaluating them, secure mechanisms for bringing data and algorithms together in an IP- and data-preserving fashion, and the decision support needed to solve a clinical staffing or other workflow problem.”
As this is happening, however, it’s critical that CIOs and other leaders understand that the majority of software and services already have AI models baked in, noted Sultan. “You want to be a smart shopper and recognize that part of your diligence is to investigate the machine learning and AI inside of those solutions. You need to ask how confident you are that they did a proper job of training and testing so that you’re not introducing bias into the process.”