For many healthcare stakeholders, the availability of timely and accurate data reporting ranges from marginally useful to untrustworthy to downright useless. Should we be surprised? Disparate source systems, gaps in input, and a lack of interoperability and normalization pour into warehouses that are rich in data but a quagmire for information.
There are two sad stories related to data reporting. The first is that medicine remains more reactive than proactive. There are many reasons for this, but one is the reimbursement model. Providers who used to get paid based on volume now get paid for quality, and some have been stung by a 3 percent penalty for not participating in the EHR incentive program. Artificial intelligence (AI) can help keep us healthy, but providers don't get reimbursed to keep us healthy; they get reimbursed for office visits and hospital admissions.
The second sad story is that while advances in medicine are nothing short of phenomenal, we have not been punctilious about preventive medicine. We know the value of cholesterol screening, cardiac care, and diabetic screening, but even with the help of Watson (not the IBM one) and Crick 65 years ago, we haven't yet used DNA to the level necessary for disease prevention, although it's great for solving crimes.
Enter the promise of AI in medicine, teed up and teased out by its three biggest commercial showcases: Siri, Alexa, and Tesla. AI has become ubiquitous, touching more aspects of our daily lives than we may realize. Using AI for clinical decisions, patient empowerment, population health, and cost reduction is a clinical and economic game changer that will surely save lives and money. Early results are promising and real.
So, what could possibly derail the promise and potential of AI in medicine?
Lack of a clear business model.
Back in the early days (say, 10 years ago), cloud computing was as foggy as AI is today. Clouds were referred to as public, private, hybrid, or community, and vendors were scrambling to sell you all of them. Things came into focus when the Infrastructure-as-a-Service (IaaS), Software-as-a-Service (SaaS), and Platform-as-a-Service (PaaS) business models were realized and applied. I see the same thing happening with AI. The Oxford dictionary defines AI as:
The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
I get that. What I don't get is when AI is mentioned in the same breath as business intelligence, knowledge management, deep learning, machine learning, and other terms. It reminds me of the cloud confusion we went through.
I like to keep things simple, so here is my definition, or perhaps impression, of AI. I have a thermostat at home. I set it at 70 degrees in the winter. When the house gets colder than 68 degrees, a built-in algorithm tells my thermostat, without human intervention, to tell the furnace to fire up. When the house hits 70 degrees, the thermostat tells the furnace, again without human intervention, to shut off. Note the importance of no human intervention. A deep-learning thermostat could take input from the National Weather Service, grocery stores, and gas stations to make such decisions as shuttering the windows, alerting my sump pump, telling me where to buy bread, gas, and water, and, well, you name it. This is a clear business model supported by use cases, surfacing knowledge I would otherwise have to go to different places to assimilate and retrieve. Again, without human intervention.
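For readers who like to see it spelled out, the thermostat logic above can be sketched as a simple hysteresis loop. This is a minimal illustration, not any real thermostat's firmware; only the 70-degree setpoint and the 68-degree trigger come from the example, and the function name is mine.

```python
def thermostat_step(current_temp, furnace_on, setpoint=70.0, band=2.0):
    """Decide the furnace state for one temperature reading.

    The furnace fires when the temperature drops below (setpoint - band)
    and shuts off once the setpoint is reached. In between, it keeps its
    previous state so it doesn't rapidly cycle on and off. No human
    intervention required.
    """
    if current_temp < setpoint - band:
        return True            # colder than 68: fire up the furnace
    if current_temp >= setpoint:
        return False           # reached 70: shut the furnace off
    return furnace_on          # inside the band: leave the state alone
```

So a reading of 67 degrees turns the furnace on, a reading of 70 turns it off, and a reading of 69 simply leaves the furnace doing whatever it was already doing.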
Obviously, medicine is much more complicated and restrictive. While AI is making headway in areas such as cost reduction and ED admissions, it's still in its infancy in healthcare. Priorities and governance must be set as to what types of information are most useful, something that can vary from hospital to hospital. This is something that humans, not machines, must decide.
Standing on the cusp of the great AI promise and adventure, here are four things that can derail AI in medicine.
- Risk and responsibility
There are more questions than answers here, but the reality is clear. AI has its risks, as demonstrated by the recent Tesla fatality. Yes, I know, some alerts may have gone out to humans, but what if AI makes a wrong decision and kills a patient? What are the risks of AI prescribing the wrong antibiotic? Who is responsible for AI decisions: the doctor, the nurse, the EMR? What governance models will AI require? What happens if AI is used for nefarious purposes? What is the future role of healthcare workers in an AI world?
- Fiscal states of hospitals
According to a report from the North Carolina Rural Health Research Program, 83 rural hospitals closed between 2010 and 2018, and more closures are on the way. There are many economic reasons for this, and even though AI has little if anything to do with hospital bankruptcies, it's tough for hospitals to turn a profit these days, especially rural ones. Margins are tight and mistakes can be costly. The point is, many hospitals are unable to invest in the AI dream. Basic building blocks may be lacking, especially if the hospital runs on a legacy or homegrown system. Not all hospitals have data warehouses, either. As AI matures, these hospitals will fall further behind more affluent places of care.
- Regulatory compliance
Assuming that HIPAA guidelines will also govern AI, how will machine learning comply? How much work will it take for AI to keep up with NCQA corrections, clarifications, and policy changes? How about ONC changes to meaningful use and MACRA? How complicated is it to change multiple algorithms already in place or in development? In short, establishing governance for AI would be a good idea before we get too far ahead of ourselves.
- Privacy and security
Who really knows what is going on with data under the purview of Google and Facebook, the latter having already fallen victim to unforeseen nefarious actors? If I talk to Alexa, where is that data going? Or worse, going back to my earlier example, could someone take control of my thermostat to spy on me?
In closing, a close family member had acute appendicitis last Thanksgiving. Treatment was swift and minimally invasive, but there was some risk. How wonderful will it be when AI predicts appendicitis before it becomes an acute episode? We will get there, but let's make sure we get there safely and smartly, so that sociological collateral damage won't derail AI.
A veteran of the health IT industry, having served as CIO at University of Maryland Medical System and associate CIO at Rush University Medical Center, Jaime Parent is now focused on helping military veterans adjust to civilian life. Parent, a decorated US Air Force Lieutenant Colonel, is Chief Dreaming Officer of The En-Abled Veteran Internship, an on-the-job training concept that fast-tracks veterans into career opportunities in the IT and health IT fields.