Tapping into advanced analytics and automation, the pediatric hospital focuses on outcome-first AI design

Alda Mizaku, Chief Data and AI Officer, Children’s National Hospital
When Alda Mizaku assumed the role of Chief Data and AI Officer at Children’s National Hospital, the position did not yet exist. With a background in biomedical engineering and predictive modeling, she brought both technical and clinical perspectives to what would become a transformative role. Her first priority: establishing the data foundation necessary to drive analytics and AI across the organization.
Children’s National, ranked among the top five pediatric hospitals in the United States, serves the Washington, D.C., Maryland, and Virginia regions and attracts patients from around the world. In an environment defined by both complexity and scale, Mizaku’s mission has been to develop enterprise-wide capabilities while demonstrating immediate value. That required rethinking traditional sequential approaches.
Podcast duration: 38:58 — 26.8MB
“We really had to build the plane while flying it,” she said. “It’s not how we’re trained, but it doesn’t work in healthcare to wait two years to build the platform before showing results. We had to bring value quickly while establishing the infrastructure underneath.”
This approach included launching a cloud-based data platform and defining a centralized enterprise data model—a single source of truth to support analytics, automation, and AI initiatives.
AI as a Means, Not an End
While excitement around generative AI and autonomous systems grows, Mizaku emphasized the importance of beginning with problems—not technology. She described AI as a toolbox containing multiple instruments, from predictive models to generative agents to automation workflows. Selecting the right tool, however, requires clear alignment with operational goals.
“Technology for technology’s sake doesn’t accomplish much, particularly in healthcare,” she said. “We have to start with the problem, understand the clinical and operational context, and then determine the right tools to support it.”
This mindset also helps temper the appeal of the so-called “shiny new toy.” Mizaku noted that while breakthrough technologies such as GenAI demand exploration, feasibility and value assessments are essential. Even high-impact ideas must be weighed against resource constraints, with questions of scalability, safety, and implementation cost playing a central role.
Data itself becomes a bridge between theory and practice. Understanding how workflows truly function—through both direct observation and analysis of real-world data—enables leaders to identify variations, bottlenecks, and inefficiencies that might otherwise remain hidden. That intelligence, in turn, guides not only which technologies to adopt but also how to apply them effectively.
The Paradox of Change
Implementing AI at scale requires more than good technology. It requires trust, governance, and cultural buy-in. According to Mizaku, a successful strategy for change relies on two equally important components: engaging decision-makers directly in the design process and leveraging early adopters to build momentum.
“You want to do it together,” she said. “Bring people along from the start, get their fingerprints on the solution, and let them help shape it. That way, it becomes theirs.”
But for every enthusiastic partner, there may be skeptics. Mizaku’s approach to scaling innovation focuses first on finding champions who are eager to lead. Once a project proves successful, its results serve as a proof point that encourages adoption elsewhere in the organization—often in areas that initially resisted change.
Another nuance of the role involves balancing competing forces. On one hand, Mizaku’s team is tasked with enabling broad, safe use of AI technologies. On the other, they must manage a growing list of requests from advanced users eager to pilot new and often unvetted tools. Establishing a responsible AI use policy was a foundational step, allowing Children’s National to safely approve and support tools across departments. Yet even with these resources in place, a gap remains between what’s available and what’s being used.
“There are tools in the environment that people should be using but aren’t,” she said. “We have to look at utilization and ask, what’s the barrier? At the same time, we’re getting requests to bring in tools we can’t yet support. So we’re both pushing and pulling at the same time.”
Embedding for Impact
Embedding AI into clinical and operational workflows—rather than adding it on top—is critical. Mizaku emphasized that AI must generate actionable outcomes that integrate seamlessly into existing systems. Simply generating insights is not enough; automation and orchestration must follow.
She described a use case in which AI identifies the five next steps a patient should take based on their clinical record. While reviewing that list for accuracy is important, it’s even more important to automate how the information reaches the patient—whether through an EHR message, phone call, or app notification. Humans still have a role, but it must add value beyond verifying AI outputs.
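The orchestration step described above can be sketched in a few lines. This is a minimal illustration, not Children's National's implementation; the channel functions, patient IDs, and preference field are all hypothetical stand-ins for whatever the EHR and messaging systems actually expose.

```python
from typing import Callable

# Hypothetical delivery channels; in practice these would call an EHR
# messaging API, a telephony service, or a push-notification service.
def send_ehr_message(patient_id: str, text: str) -> str:
    return f"EHR message to {patient_id}: {text}"

def send_app_notification(patient_id: str, text: str) -> str:
    return f"App push to {patient_id}: {text}"

# Orchestration: map the patient's stated preference to a channel so the
# AI-generated step becomes an action without a manual hand-off.
CHANNELS: dict[str, Callable[[str, str], str]] = {
    "ehr": send_ehr_message,
    "app": send_app_notification,
}

def deliver(patient_id: str, preference: str, step: str) -> str:
    # Fall back to the EHR inbox if the preference is unknown.
    route = CHANNELS.get(preference, send_ehr_message)
    return route(patient_id, step)

print(deliver("pt-001", "app", "Schedule your follow-up visit"))
```

The point of the sketch is the routing table itself: once the insight is wired to a channel, the human reviewer's time shifts from relaying information to handling exceptions.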
In some cases, AI can even help validate itself. By chaining multiple models together—where one model checks the accuracy of another—Mizaku believes health systems can improve trust and reduce the time required for human review. Still, transparency around training data and algorithmic behavior remains vital, particularly when evaluating third-party tools.
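The chaining pattern Mizaku describes — one model checking another before anything reaches a human — can be illustrated with a toy pipeline. Both "models" below are deterministic stand-ins (the real systems would be predictive or generative models); the allowed-steps list is an assumption used only to make the checker concrete.

```python
def draft_model(record: dict) -> list[str]:
    """Stand-in for a first model that proposes next steps from a record."""
    steps = []
    if record.get("overdue_followup"):
        steps.append("schedule follow-up visit")
    if record.get("new_prescription"):
        steps.append("confirm pharmacy pickup")
    return steps

# Hypothetical whitelist standing in for a second model's judgment.
ALLOWED_STEPS = {"schedule follow-up visit", "confirm pharmacy pickup"}

def checker_model(steps: list[str]) -> list[str]:
    """Stand-in for a second model that validates the first model's output,
    passing through only steps it recognizes as safe and actionable."""
    return [s for s in steps if s in ALLOWED_STEPS]

def next_steps(record: dict) -> list[str]:
    """Chain: draft, then validate; only vetted steps reach human review."""
    return checker_model(draft_model(record))

print(next_steps({"overdue_followup": True, "new_prescription": False}))
```

The human reviewer then sees a pre-filtered list rather than raw model output, which is the trust-and-time gain the chaining approach is after.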
This is especially important in pediatric care, where populations and datasets differ significantly from adult medicine. Health systems must press vendors for clarity around how models are trained and validated. Without that due diligence, risk grows—along with mistrust.
Take it Away
- Build infrastructure and deliver value simultaneously so impact doesn't lag the platform work
- Use a centralized data model as the foundation for analytics, AI, and automation
- Prioritize problem-first design over technology-first implementation
- Engage operational leaders early in the design process to promote ownership
- Start with AI use cases that have high impact and enthusiastic champions
- Balance governance and innovation by enabling safe tools while vetting new ones
- Increase utilization of existing approved tools before pursuing new deployments
- Embed AI outcomes directly into workflows to create action, not just insights
- Chain models to validate outputs and reduce risk of hallucinations
- Demand transparency from vendors on training data and bias in third-party tools
“We can move quickly if we do it with intention,” said Mizaku. “When we combine trusted data, the right people, and a focus on value, that’s when change becomes possible.”

