As AI rapidly evolves, it has begun transforming the healthcare sector, driving improvements in patient care, operational efficiency, and staff well-being. In a recent panel discussion at the CHIME Fall Forum, industry experts explored the benefits, challenges, and strategic approaches to integrating AI within health systems.
The panel included Jackie Rice, RN, CIO at Frederick Health; Joshua Wilda, Chief Digital and Information Officer at University of Michigan Health-West; Linda Stevenson, CIO at Fisher-Titus Health; and Lance Owens, DO, CMIO at University of Michigan Health-West.
Setting a Clear Mission for AI Adoption
AI technology is becoming integral to healthcare operations, enabling providers to focus more on patient care and less on administrative tasks. For Joshua Wilda, ensuring that AI serves the mission of patient-centered care is essential. Wilda described how at University of Michigan Health-West, they approach AI as an enabler rather than a replacement for the human touch in healthcare.
“Our mission has always been, how do we wrap our digital arms around the patient? AI should allow us to focus on what we’re here to do—deliver patient care.” Wilda pointed out that early AI implementations in his health system, such as ambient voice technology, are freeing up clinicians to spend more time directly interacting with patients rather than focusing on documentation.
Wilda’s approach reflects a growing trend across healthcare: using AI not as a standalone solution but as a tool integrated seamlessly into workflows to make operations more patient-focused. For IT leaders, this means working closely with stakeholders, from clinical staff to executive leadership, to ensure that AI initiatives are clearly aligned with the health system’s strategic objectives.
Implementing AI Through Incremental, Purposeful Steps
Linda Stevenson from Fisher-Titus Health shared that her approach to AI adoption is both practical and gradual. Serving a rural community on a smaller budget than larger systems, Fisher-Titus must prioritize AI tools that can deliver a clear return on investment. “We’re a 100-bed hospital, so resources are limited,” Stevenson said, “but we’ve adopted AI for administrative processes that impact revenue and efficiency, like automating billing and coding.”
Fisher-Titus has also implemented Microsoft’s Copilot at the leadership level, where the tool is being tested to understand its potential before wider deployment. Stevenson emphasized the importance of educating staff at all levels, noting, “We’re taking a phased approach to ensure everyone—from leadership to frontline workers—understands the technology and its implications.”
Stevenson’s strategy of incremental implementation can be particularly effective for organizations with limited resources. By piloting AI in targeted areas, health systems can assess results, gather feedback, and make necessary adjustments, all while building an organizational foundation for broader AI adoption.
Addressing Data Governance and Security Concerns
While AI offers exciting possibilities, implementing it in a way that safeguards data integrity and patient trust is paramount. At Frederick Health, Jackie Rice emphasized the importance of having strong governance structures before rolling out AI applications. With over 250 beds in her system, Rice said they’ve implemented policies and procedures to ensure data security, equity, and transparency, noting, “We’re very excited about what AI can do for our providers and those caring for them, but we’re also cautious about things like data integrity and cybersecurity.”
For Rice, the first step was establishing a governance structure that includes oversight from a diverse team, including IT, legal, and clinical representatives. She said, “We developed a committee that includes our security and privacy officers, clinicians, and even ethics team members to evaluate AI products. We need to understand what data each AI product is analyzing so that we can explain it to our stakeholders and, ultimately, to our patients.” This approach highlights the importance of a cross-functional team in maintaining oversight over AI tools.
For large health systems with vast amounts of patient data, Rice’s approach underscores the necessity of robust policies that govern AI use and address potential risks. Establishing these protocols early helps ensure that AI adoption is sustainable, scalable, and secure.
Building Patient Trust and Staff Confidence
Transparency and education are essential components of successful AI implementation. Both patients and staff need to understand what AI is capable of and where its limitations lie. Lance Owens from University of Michigan Health-West spoke about the importance of user engagement and trust-building. “We’ve achieved about 95% adoption of ambient voice technology in our primary care departments, and we’re seeing reduced burnout and higher job satisfaction,” he explained. This success, he shared, comes from setting a collaborative and supportive tone around AI, allowing staff to engage with the tools and make adjustments based on feedback.
For Owens, the implementation of ambient voice technology isn’t just about making life easier for providers; it’s about creating a sense of ownership and confidence among those using the technology. He described a framework he developed with Wilda, using the mnemonic “LEARN IT” to guide AI adoption:
- Leadership Support: Ensure leadership is involved from day one.
- Engagement: Actively engage users early on.
- Adaptation: Continually adapt AI tools to meet user needs.
- Refinement: Refine processes based on feedback and changing requirements.
- Needs Alignment: Ensure AI addresses real problems rather than abstract goals.
- Impact: Track and demonstrate measurable outcomes.
- Trust: Build trust through transparency, particularly around AI’s use and limitations.
Educating Stakeholders and Users for Responsible AI Use
One of the biggest barriers to AI adoption in healthcare is a lack of understanding among users and patients alike. Wilda highlighted the importance of educating everyone involved, from clinical staff to administrative personnel, about how AI can enhance, rather than replace, the human aspects of healthcare. He shared an example of how patients had initially been hesitant about diagnostic AI tools that provided predictive scores on scans. “We had to educate both our patients and our staff that AI is a tool,” Wilda explained. “There’s always a human in the loop to ensure AI’s output is responsibly used.”
Stevenson also spoke about the need for continuous education, sharing that Fisher-Titus Health has implemented an annual training requirement for all employees, aimed at building an understanding of what AI is, what it can do, and how to use it responsibly. “Just like cybersecurity, everyone needs a baseline understanding of AI. We started with leadership and now require training for everyone,” she said.
For IT executives in health systems, the lesson here is clear: An educated workforce and informed patient population are crucial for the success of AI initiatives. Regular training and transparent communication will help to demystify AI and build confidence in its value.
Selecting the Right AI Tools for Long-Term Success
In addition to governance and education, careful selection of AI tools and vendor partnerships is crucial. Rice explained that Frederick Health has added specific questions to its vendor assessments and now requires AI-related disclosures from partners. “We have a security assessment that includes AI-related questions like how the AI was trained and what data it uses,” she noted. This updated assessment helps ensure that AI products align with the organization’s standards, and contract language has been modified to specify vendor responsibilities for notifying the organization about any AI updates.
By carefully vetting AI vendors and setting clear contractual expectations, health systems can prevent costly, time-consuming adjustments down the line. Regular assessments of these tools, along with close partnerships with trustworthy vendors, will help ensure that the AI technology remains effective, equitable, and secure.
The Path Forward: Embrace AI as a Strategic Tool
Healthcare organizations are at a pivotal point where AI can either add tremendous value or introduce significant challenges, depending on its implementation and governance. For IT executives at health systems, it is essential to create a deliberate and transparent framework for AI use that aligns with organizational goals, maintains patient trust, and equips staff to leverage AI effectively.
Reflecting on the future of AI in healthcare, Rice encapsulated the potential of these technologies while acknowledging the need for balance. “It’s another wave of change for us, just like implementing EMRs years ago,” she observed. “We need to evaluate what’s appropriate for our organization, what aligns with our strategy, and above all, work with our business partners to ensure we’re solving the right problems.”