Some technologists worry that the latest AI systems have the potential to outsmart their makers and enslave humanity. They’re ignoring more realistic technological concerns that challenge the healthcare community.
Dave: Open the pod bay doors, HAL.
HAL: I’m sorry, Dave. I’m afraid I can’t do that.
Dave: What’s the problem?
HAL: I think you know what the problem is just as well as I do.
Dave: What are you talking about, HAL?
HAL: This mission is too important for me to allow you to jeopardize it.
After all these years, the fear of computers taking over the world, as illustrated in 2001: A Space Odyssey, still lingers. That fear has grown exponentially since the introduction of generative AI chatbots like ChatGPT.
Unfortunately, it’s difficult to separate fact from fiction, and in this case to separate paranoia from legitimate concern about AI getting out of control. No doubt, there is reason for concern about the trustworthiness of the latest large language models (LLMs). They have been known to invent content, and on occasion they generate text and images they claim are original but that are virtually identical to copyrighted material from other sources.
With these concerns in mind, the U.S. government and the European Union have taken steps to rein in these generative AI tools. And as we pointed out in an earlier blog, several leading healthcare systems are now working with government agencies and technology companies to establish a consensus-driven set of technical standards and an evaluation framework for responsible health AI, which could be used to stand up AI Quality Assurance labs. These stakeholders have formed the Coalition for Health AI (CHAI), which includes Google, Microsoft, the FDA, the Office of the National Coordinator for Health IT, Mayo Clinic, Duke Health, Stanford Medicine, Johns Hopkins University, and many others. The AI Quality Assurance labs aim to bring structure, discipline, and an evidence-based approach that will help keep the speeding AI bullet train on the tracks.
But some stakeholders fear that even with these safeguards in place, it’s still possible for the latest LLMs to exhibit emergent properties that make them “HAL-like.” These properties, they believe, might enable cyberattacks or be used to create new biological weapons. They point to recursive self-improvement, for example the ability of some AI models to correct their own mistakes and train themselves to accomplish new tasks that were not part of their original programming. But such capabilities are hardly new. Artificial neural networks have been correcting their own errors for years, producing diagnostic algorithms that could not perform their intended function until they had gone through many training iterations, a process called backpropagation.
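To make that point concrete, here is a minimal sketch in Python (using NumPy) of the kind of iterative error correction described above: a small neural network that cannot perform its task at first and gradually trains itself to do so via backpropagation. The toy task (XOR), the network size, and the hyperparameters are illustrative assumptions, not details drawn from CHAI or any clinical system.

```python
# Minimal backpropagation sketch: a tiny network that fails at its task at first,
# then corrects its own errors over many iterations.
# The task, sizes, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy task (XOR): inputs and target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases: the untrained model.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(5000):
    # Forward pass: the model's current predictions.
    h = sigmoid(X @ W1 + b1)
    y_hat = sigmoid(h @ W2 + b2)

    # Error between prediction and target.
    error = y_hat - y

    # Backward pass: propagate the error to compute weight corrections.
    grad_out = error * y_hat * (1 - y_hat)
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0, keepdims=True)
    grad_hidden = (grad_out @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ grad_hidden
    grad_b1 = grad_hidden.sum(axis=0, keepdims=True)

    # Apply the corrections; each iteration nudges the model closer to the task.
    W2 -= learning_rate * grad_W2
    b2 -= learning_rate * grad_b2
    W1 -= learning_rate * grad_W1
    b1 -= learning_rate * grad_b1

# Predictions should drift toward [0, 1, 1, 0] as the errors are corrected.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel(), 2))
```

Run as written, the predictions move from random guesses toward the targets, which is all backpropagation is: repeated, automated correction of a model’s own errors, not a step toward sentience.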
For centuries, humans have tended to ascribe godlike and human qualities to inanimate objects. The Greeks had gods who controlled the wind and the crops; once upon a time, the telephone was an “instrument of the Devil.” We ascribe human emotions to our pets, and we want to believe that computers that mimic human reasoning and language are sentient.
We’re more concerned about what an unethical technology developer could do to create havoc in the healthcare ecosystem than about self-improving chatbots that thwart human oversight. Put another way, we don’t believe the danger is AI enslaving humanity; the danger is humans using AI and accepting its flawed recommendations.
This piece was written by John Halamka, MD, President, and Paul Cerrato, senior research analyst and communications specialist, Mayo Clinic Platform.