
Sarang Deshpande, VP of Data and Analytics at Franciscan Alliance
Healthcare organizations racing to implement AI solutions are discovering that their greatest obstacle isn’t the technology itself but the quality of the data these systems rely on. According to Sarang Deshpande, VP of Data and Analytics at Franciscan Alliance, the rush to deploy AI has exposed long-standing data quality issues that many health systems have struggled with for years.
Franciscan Alliance operates 11 acute care facilities across Indiana and Illinois, along with a sizable physician network providing ambulatory, primary, and specialty care. Deshpande oversees enterprise analytics, data platforms and engineering, interoperability, AI and automation, and data and AI governance from his position within IS leadership.
The Data Quality Crisis
The healthcare industry has historically struggled with fragmented information systems and inconsistent data definitions, leading to limited trust in data across organizations. Teams spend considerable time reconciling numbers and validating reports instead of focusing on decision support and insights. While these challenges existed before the current AI boom, the introduction of machine learning models has amplified their impact.
Podcast: Play in new window | Download (Duration: 46:56 — 32.2MB)
Subscribe: Apple Podcasts | Spotify
“AI is not going to fix poor data. It actually is exposing more and more of it,” Deshpande said. “The models that are being built now, folks are realizing they’re only as good as the data that they can consume. We typically say garbage in, garbage out. That is the absolute truth with a lot of what’s going on right now.”
Existing issues with data quality, gaps in lineage, and inconsistent definitions immediately surface as unreliable predictions or biased outputs when organizations build AI and machine learning models. The focus is now shifting toward data quality and interoperability as mission-critical capabilities instead of routine IT hygiene issues.
The problem manifests in subtle but significant ways. When organizations attempt to deploy AI agents to interact with patients or automate administrative tasks, the underlying data problems become apparent. However, these issues are not new discoveries. Even before AI implementation, organizations recognized that definitions were mismatched and information could not be consistently trusted.
The concept of “human in the loop” has become essential because stakeholders do not trust AI models to consume clean data and make autonomous decisions. Machines struggle with ambiguity, and healthcare’s regulatory requirements make errors in AI predictions or automation particularly risky. These issues have become more acute because removing humans from the loop and letting machines operate independently is not viable until fundamental data quality problems are addressed.
Balancing Innovation and Foundation
Organizations face pressure from multiple directions to implement AI capabilities. Enterprise EMR vendors are pushing AI features as part of their workflows, while operational leaders seek solutions for capacity management, forecasting, and other business processes. IT departments explore opportunities ranging from service desk optimization to contract analysis.
The challenge lies in ensuring these solutions integrate into actual decision-making workflows. Use cases that exist outside the natural flow of work for clinicians, operations leaders, or finance teams tend to fail despite initial enthusiasm. Solutions must be embedded at the point of decision-making to deliver meaningful value.
Deshpande described a hybrid strategy that includes both vendor-provided AI capabilities and custom-built solutions. Some use cases, such as ambient listening for clinical documentation, show strong potential because they do not require training on extensive data sets. These applications can reduce burden on clinical and administrative workflows without the same data quality dependencies that plague other AI implementations.
However, for data science-intensive use cases, organizations must acknowledge that data sources have always been problematic. Automating manual processes or deploying models without addressing source data quality will not solve underlying problems. The knowledge bases, contractual documents, and other information sources that feed AI models have historically suffered from quality issues that automation alone cannot resolve.
Governance as Foundation
Franciscan Alliance has focused intensively on data governance over the past 12 months, treating it as an umbrella term for several interconnected initiatives. The organization is working to establish trust in the data being served, build consensus around definitions and terminology, and create a foundation that can support AI models effectively.
Data governance efforts often fail when they begin with technology or product implementations. Organizations purchase expensive data governance platforms, catalogs, or master data management solutions expecting that “if we build it, they will come.” By the time complex implementations finish, business stakeholders have typically moved on, and fundamental issues persist.
Franciscan Alliance has taken a different approach centered on stewardship. The organization identifies specific data assets, metrics, solutions, or products and assigns business stakeholders willing to take ownership. This ownership extends to defining calculations, validating definitions, and addressing detailed questions about data accuracy.
“For us, it is all about, can we even identify a specific data asset, a specific data metric, a specific data solution or a product, anything, can we identify who’s the business stakeholder who’s willing to own it?” Deshpande explained. “And by ownership, I mean, we ask them, ‘Is this the right definition? Is this the right calculation?’ Down to the nitty-gritty basics.”
The organization is establishing stewardship work groups loosely based on business domains. These groups tackle bite-sized chunks of data assets within domains such as quality, working with existing collaboratives and committees that already discuss related issues. The approach organically builds inventory and catalog capabilities while ensuring searchable data and clear ownership when issues arise.
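A stewardship inventory of the kind described above can start out very lightweight. The sketch below is purely illustrative, the asset names, domains, and steward assignments are assumptions for the example, not Franciscan Alliance’s actual catalog:

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    """One entry in a stewardship catalog: a data asset with a named business owner."""
    name: str
    domain: str          # business domain, e.g. "quality" or "revenue cycle"
    steward: str         # the business stakeholder who owns the definition
    definition: str
    open_questions: list[str] = field(default_factory=list)

# A stewardship work group adds assets in bite-sized chunks within its domain.
catalog = [
    DataAsset(
        name="length_of_stay_observed",
        domain="quality",
        steward="Quality Collaborative",  # hypothetical owning group
        definition="Discharge datetime minus admit datetime, in days",
    ),
]

# Clear ownership: when a number is questioned, look up who answers for it.
by_name = {asset.name: asset for asset in catalog}
print(by_name["length_of_stay_observed"].steward)  # → Quality Collaborative
```

Even a structure this simple captures the two things the approach demands: a searchable inventory and an unambiguous answer to “who owns this definition?”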
This process-oriented approach recognizes that data governance is not a technical problem. Organizations must treat data as a shared enterprise asset instead of a byproduct of departmental activities. AI simply raises the stakes by forcing organizations to confront semantic gaps and inconsistencies that might have been tolerable in traditional analytics environments.
Engaging Frontline Workers
A critical component of sustainable data quality involves incorporating frontline workers into the governance process. Data entry largely happens at registration, scheduling, and coding, and the staff performing those tasks must understand the downstream implications of their work.
Franciscan Alliance distinguishes between data custodians and frontline workers. Custodians typically come from corporate teams in quality, revenue cycle, or financial reporting. These individuals serve as stewards who bring subject matter expertise and coordinate definitions. For example, they might reconcile five different variations of length of stay based on how quality teams and finance teams need to view the metric.
Frontline workers represent a different category. They control how data enters the organization in the first place. While some data arrives from external sources such as payer files and cannot be controlled, considerable information still enters manually through the EMR or other systems such as ERP platforms for HR and supply chain.
These frontline team members often lack awareness of downstream implications when fields are not mandated or when they simply press buttons to move past screens. Education becomes essential because downstream efforts cannot compensate for upstream data quality problems; addressing quality at the point of entry creates a more sustainable process for maintaining it.
Organizations are starting to recognize that mandatory fields guarantee data entry completion but not data quality. If required fields do not fit naturally into workflows, users will find ways to comply quickly while sacrificing quality. Workarounds and inaccurate entries persist when team members do not understand how their inputs are used.
Registration clerks entering patient information have no visibility into analytics or AI models that will eventually use their data. When they face lines of waiting patients and need to process eight records in five minutes, expecting them to prioritize data quality without education is unrealistic. Working with leadership and leveraging informatics teams can help ensure documentation consistency aligns with operational workflows.
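The gap between completion and quality is easy to illustrate. In the hypothetical sketch below, a record passes a mandatory-field check yet fails simple quality checks; the field names and placeholder patterns are assumptions for the example, not any system’s actual schema:

```python
# Hypothetical placeholder values a rushed registration clerk might enter
# just to get past a required-field screen.
PLACEHOLDER_PHONES = {"000-000-0000", "999-999-9999"}
DEFAULT_DOB = "01/01/1900"

def is_complete(record: dict, required: list[str]) -> bool:
    """Mandatory-field check: every required field is non-empty."""
    return all(record.get(f) for f in required)

def quality_issues(record: dict) -> list[str]:
    """Flag values that satisfy the mandate but carry no real information."""
    issues = []
    if record.get("phone") in PLACEHOLDER_PHONES:
        issues.append("phone looks like a placeholder")
    if record.get("dob") == DEFAULT_DOB:
        issues.append("dob is a default value")
    return issues

record = {"name": "Jane Doe", "phone": "999-999-9999", "dob": "01/01/1900"}
print(is_complete(record, ["name", "phone", "dob"]))  # True: every field is filled
print(quality_issues(record))  # but two of the values are workarounds, not data
```

The required-field check passes while both values are junk, which is exactly why mandates alone cannot substitute for educating the people entering the data.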
AI Governance and Organizational Structure
As organizations grapple with AI implementation, questions arise about optimal leadership structures. Some health systems have created chief AI officer positions, while others combine the role with data leadership as chief data and AI officer. The proximity of data and AI makes this combination logical given their interdependence.
Deshpande believes a leader should own AI governance and program management at a centralized level. The scope extends beyond building solutions to encompass legal, security, and privacy implications. Organizations must establish ethical guidelines and maintain inventory of solutions being built, purchased, or acquired through partnerships.
Centralizing execution proves difficult because AI capabilities embed across multiple systems. Application teams already managing EMR implementations can handle AI features within those platforms, while ERP teams can manage related capabilities in their systems. However, governance and coordination should remain standardized and centralized.
This centralized governance function coordinates with legal, privacy, security, and contracting teams to create an ecosystem for evaluating new use cases. Whether requests come from business stakeholders or technology teams, they pass through standardized processes that feed into demand and project management workflows.
“AI should be a centrally governed and program managed program that works with all of the different stakeholders that are part of the execution and implementation of it,” Deshpande said.
The governance function must monitor implementations, decommission solutions that fail to deliver value, and verify that promised financial returns materialize. The entire lifecycle of AI solutions requires management that places program governance at the forefront.
Platform Strategy and Point Solutions
The evolution from best-of-breed approaches toward consolidated platforms reflects broader industry trends. Fifteen years ago, organizations embraced multiple specialized systems, but the resulting data silos created significant challenges. Today, most health systems rely on a handful of major platforms, typically including a primary EMR, primary ERP system, and project management and content management systems.
Franciscan Alliance has made a conscious effort to maximize capabilities from standard enterprise systems. The organization prioritizes adoption of features from these major vendors, especially when available without additional cost. Integration complexity and ongoing management challenges make third-party point solutions less attractive.
However, large enterprise vendors do not always bring solutions to market quickly. Organizations accept that some capabilities will require custom development or specialized third-party solutions. The decision hierarchy typically follows this order: first, adopt features from enterprise platforms; second, implement specialized third-party solutions with low cost and minimal integration requirements; third, build custom solutions only for select use cases where organizations want to lead innovation or where technology maturity remains insufficient.
These decisions often arise when senior clinicians encounter new tools at conferences or through sales presentations. High-level physicians request solutions that then trigger formal evaluation processes. IT governance processes must assess whether existing tools provide requested functionality, whether core platform vendors offer alternatives, and whether introducing out-of-suite solutions justifies the complexity.
The challenge intensifies in large health systems with multiple regions or hospitals. Different facilities may request solutions that overlap or conflict with tools already in use elsewhere in the organization. Medical directors and informatics leaders must evaluate business cases beyond simple financial return, considering whether introducing non-standard solutions makes sense and, if valuable, why the solution should not deploy system-wide.
Take it Away
- Establish stewardship by assigning specific business owners to data assets, metrics, and solutions who take responsibility for definitions, calculations, and accuracy
- Incorporate frontline workers into governance processes by educating registration, scheduling, and coding staff about downstream implications of data entry
- Implement centralized governance for AI that coordinates across legal, privacy, security, and contracting teams while allowing decentralized execution within application domains
- Prioritize features from enterprise platforms before considering third-party point solutions or custom development to minimize integration complexity
- Create formal IT governance processes that evaluate requests for new solutions against existing capabilities and system-wide standardization goals
- Build user groups and extend stewardship concepts to power users, data custodians, and general staff through ongoing education
- Develop centralized access points such as leader data hubs that provide standard reports and decision support tools to prevent inconsistent data sprawl
The path forward requires balancing ambitious pursuit of AI capabilities with disciplined attention to foundational issues. Organizations must avoid letting perfect become the enemy of good enough while maintaining focus on governance, data quality, and clear ownership. Deshpande emphasized this balance: “Data, AI, these digital solutions, they’ll only succeed when they’re embedded in real workflows and aligned with real problems.”

