In today’s healthcare IT world, there’s a whole lot of talk about interoperability — and unfortunately, there’s also a whole lot of misunderstanding, says Chuck Christian, VP of Technology and Engagement with the Indiana Health Information Exchange. He believes that if the industry wants to make real strides in achieving this Holy Grail, it’s time to start clearing the air.
Recently we spoke with Christian, who has more than 20 years of CIO experience under his belt, about the difference between what’s being reported about interoperability and what’s really happening in the trenches. We also discussed the most common requests IHIE receives from providers (and how they’re working to fulfill them); how his organization is leveraging the knowledge of students to glean insights from de-identified data; the discussions he believes CIOs need to have with vendors; and why, all things considered, he’s still optimistic about the future of healthcare IT.
- Geo-mapping clinical data with SDOH to allocate resources
- “There’s a really good opportunity.”
- Indiana HIMSS’ “datathon”
- Interoperability myths: “Data is being moved in a variety of ways”
- The fax-to-EHR option
- “Is the information coming from a trusted source?”
- Liability concerns: “We need to have those conversations.”
Gamble: The use of social determinants of health data is really interesting, and it’s something I think every organization is going to want to be able to do, but it requires getting all the steps in place first.
Christian: Right, and I think we still have a lot to learn. Brian Dickson, a researcher at the Indiana School of Public Health who also has a joint appointment at the Regenstrief Institute, did some work last year geo-mapping clinical data with social determinants of health to see whether you could identify where public health resources might be needed. I think that’s another really good opportunity for us: looking at these multiple datasets we have and being able to make those determinations.
I’ll give you an example. The Indiana chapter of HIMSS was the host chapter this year for the Midwest conference, where all the Midwest states get together every year. One of the things we have in Indiana is the MPH, or Management Performance Hub, a department of the Governor’s Office that was recently codified by the Indiana State Legislature. They’re gathering data from a variety of state agencies and creating a data repository.
They created 25 different de-identified datasets, and we held a datathon where we asked students from the various universities and some of the professional groups in town to take a look at them. The entries were grouped into two categories: one was data visualization and the other was data analytics. We asked the teams to look at the datasets and see what they could determine. It was really interesting to see what these groups of really bright folks, particularly the students, came up with.
We had two winners. One was a local company that helps employers manage their employee populations around health; they also run clinics for some of the larger employers. They did a data visualization, which I thought was extremely interesting, to help employers get a view of what is happening with healthcare for those populations.
But the other one I found equally interesting was a piece of analytics that a group of graduate students did. They took the de-identified opioid dataset that was presented, overlaid it with where mental health and treatment services are located in the State of Indiana, and very quickly identified a region of Indiana that was underserved.
Those are the types of things we should be able to do with this level of data. It doesn’t have to be identified; it can be de-identified. You start looking at where services need to be, what services need to be offered, where the patient populations are that could benefit from another clinic, and whether we’re putting the limited dollars we have to spend in public health in the right places. I think the data will help us visualize that and either confirm we’re in the right spots or help us shuffle things around.
It really amazes me that with the healthcare and technology resources we have in Indiana, we still rank at the low end of the totem pole for a lot of the health stats in the country. There are some places where we’re at 38 and 41, which is just absolutely insane to me. In looking at some of the data that Brian found, I was actually shocked to see that there are food deserts within the metropolitan area of Indianapolis. Part of that is related to the populations in those areas, who may be elderly, lack their own transportation, and sit at the lower end of the socioeconomic scale; those are the most at-risk groups. Because there’s not a grocery store within a mile of where they live, somebody has to pick them up to take them to the grocery store. The same goes for drugstores. I find it pretty amazing that for all the pharmacies I see, where it looks like they’re dueling on opposite street corners, there are places in Indianapolis where you don’t have a pharmacy within a mile, or two, or three. Helping us determine where those challenges are is a really good way of using this information to improve care, particularly for the most vulnerable segments of our society.
Gamble: Right. That’s a really interesting look at the power of data. It’s looking at these other factors, and that’s where I think this is going to get really interesting in the next few years.
Christian: It’s one of the interesting things we’re able to do here, because we’re pulling in data from about 119 different facilities, and we’re now getting more and more information from physician practices, post-acute care, and those types of things. It’s not that we’re looking at the data; we don’t. That’s not our mission. We curate that information so it can be made available appropriately for those kinds of studies.
We do provide some services to the Indiana State Department of Health around syndromic surveillance, notifiable conditions and those types of things, and we’re starting to have more conversations with them about how we can help with determining where those resources need to be. But again, we’re just the curators of the data. If there is a need to access information, we’ll take it to the management council and make sure that access is approved.
Gamble: Right. So now I’d like to talk about interoperability — I feel like if I don’t bring it up, I’m not doing my job. The White House recently held the Interoperability Summit. And the goal, as always, is to try to get some answers, but there are some who think that there’s too much focus on trying to figure out what’s working for some organizations and expanding that out. What are your thoughts on what they’re trying to do with these types of events?
Christian: There was a large crowd, I think about 35 people at the White House, from a variety of organizations. They had the large corporations there: Microsoft, Google, and those folks. Aneesh Chopra was there, along with some folks from the Carequality group. My boss, John Kansky, the CEO of our organization, was there; he’s on the HIMSS board and on the board of the Sequoia Project. Mariann Yeager was also there, along with quite a few other folks who cover the gamut of interoperability. The question for me is what they’re trying to accomplish; since I wasn’t involved in any of the conversations, I’m not really sure, other than what I’ve read in Politico and some of the other trade press.
But one of the things that just gets on my nerves is when I read an article that says interoperability is just not happening. I think that’s not true, and it depends on how you define interoperability. If you define it as data flowing unencumbered, without special effort, from one EMR to another, then yes, that’s not occurring, for a variety of reasons. But it’s not because nobody wants to share the information. I’m sure there are some cases where people believe their data has strategic value that outweighs the impact on patient outcomes, but the fact is that data is being moved in a variety of ways.
I can tell you we handle millions of transactions each month that flow between organizations, and we respond to queries from the eHealth Exchange, the VA and SSA. We deliver millions of clinical messages for our members, to whoever they need to go to. That’s one of the services we provide for them, and we can deliver it in one of four different ways.
And so, are the EMRs able to consume everything that we send to them? No, and if you talk to some physicians, they don’t want the data to come in automatically and load in their EMRs and jump into their workflows, because they don’t believe that they need to see everything.
If they’re getting a carbon copy or a courtesy copy because they’re the patient’s primary care physician and the patient is being seen by a specialist they referred to, the assumption is that the specialist is going to act upon that result; it’s just a courtesy copy to the primary care physician. So that’s probably not something they want dropped into their workflow, because they did not order it themselves. They want it in the patient’s record for future reference, but as far as being responsible for acting upon it, if you have all that data flowing to all the physicians, it’s possible that more than one physician will act upon it and create confusion about that patient’s care.
So we haven’t gotten to the level of coordination yet that we can make a determination of who’s going to be accountable. In most cases, it’s assumed that the physician who orders the test is going to be accountable for making sure that the patient is treated based upon the test result if it was abnormal.
Gamble: Right, that makes sense.
Christian: To me, interoperability is happening every day. There are some rural places in the country where the physician may only have the opportunity to use Direct, but that capability is available to them. In a lot of cases, and a lot of people don’t like to hear this, in some physician practice workflows, getting a fax that comes into a server in their EMR, where it’s reviewed and filtered by a nurse or one of the multi-skilled workers who determines whether the physician needs to see it, meets their needs. It works well for them, and they don’t really want to change it, because as they say, ‘If it ain’t broke, don’t fix it.’
From an interoperability standpoint, do we have some way to go? Absolutely, and we will until we can get more of this data into the workflow of the physician, so that they don’t have to go look for it, they don’t have to hunt for it, and they can trust it.
And that’s a key thing we don’t talk a lot about. Is the information that’s moving coming from a trusted source? Can the physician or clinician determine where the data came from in case they need to pick up the phone and call somebody? Is it the radiologist who read the chest x-ray or the CT scan of the head? They need to be able to trust the source of the information before they incorporate it into the EMR and have it become part of the patient’s record.
The other thing we don’t talk a lot about, and some physicians are concerned about it, is the liability that goes along with this. We’re moving all this data around, but we’re not thinking about the potential liability we may be creating, because tort reform is not being modified to keep pace with this new mass of information.
I had a conversation with a friend of mine who is CIO of another facility. Their medical staff is now starting to ask about the information they’re getting electronically from other places that is being incorporated into their medical records. They’re saying, ‘Do we need to act upon it if we see something that is not really of interest to us but is going to have an impact on the patient’s clinical outcome? And if we don’t know it’s there, even though it’s in the record and may very well be discoverable, what’s our risk?’
Those are conversations we’re not having, and they will make people more cautious about what information is shared and how they use it. So I think we need to have those conversations as well. I know the AMA has addressed part of it, and so has AHIMA; we just need to resurrect those conversations, dust them off a little, and make sure they’re still valid as we move from the paper world to the electronic world.
Gamble: And once you have those discussions, it has to be ironed out more specifically who is responsible for the data, for its accuracy, for what’s entered into the record, and where that responsibility lies. That’s big.
Christian: Absolutely. The thing about it is, you can create as much confusion with no one acting upon something as you can with more than one person acting upon it, and we just need to figure that out. One of the things we’ve been able to do with high efficiency goes back to a saying from my early computing days: ‘If you want to create repeatable errors, have a computer do it.’ We need to make sure we’re doing the right thing every time.
Gamble: Easier said than done.
Christian: Absolutely, but we’ll see where it goes.
Chapter 3 Coming Soon…