It’s amazing the difference 20 years can make when it comes to something like EHR customization. In the early 2000s, vendors like Cerner prided themselves on the flexibility of their systems, offering users nearly any combination of configurations they cared to apply.
Eventually, however, that flexibility had to be reined in. As their client bases grew, vendors realized that managing thousands of code versions had become too daunting, and the move toward standardization picked up steam.
Healthcare leaders also began to understand the impact, particularly when changes had to be dialed back. “When you start doing a lot of customizations, it’s quick and easy to break something,” said Art Ream, Senior Director of IT Applications & CISO, Cambridge Health Alliance. “And when you take regular updates to the core software, it tends to be an issue within IT to test and make sure the functionality stays in place.”
During a recent webinar, Ream spoke with Josh Wherry, Director of Data Management and Analytics at Baystate Health and CTO of TechSpring (a part of Baystate Health), and Will Berry, Director of Consulting at Tricentis, about the importance of properly reviewing and testing customizations, and offered input on how CIOs, CISOs and others can manage them more effectively.
The good news, according to Berry, is that “the industry is moving in the direction of configuration over customization. We want to find tools that allow us to set things up the way we want without having to build our own custom solutions.”
One organization seeking to make that move is Baystate Health, a five-hospital system located in Western Massachusetts that has its own innovation arm. “We’ve evolved a lot in how we handle requests for customization,” said Wherry, adding that Baystate has implemented standardized intake and demand processes that are presented to a governance and oversight committee. “Over the years as our EMR has evolved and our other systems have grown, we’ve realized the impacts of any customization or upgrade.” And that means the majority of requests, even minor ones, are thoroughly vetted.
One way to do that is by asking a few simple questions that can provide critical information, such as: Do we already have something that can do this? Can this be achieved through workflow rather than customization? From there, said Wherry, “We really dig into that technical impact on our system, our functionality, and our workflow with those customizations.”
This is where vendors come in — or at least, they should, he noted. “We work with our vendors to see how it aligns with their roadmaps. It takes longer to go through that vetting, but it also protects us on the other end as we get into implementation.”
It’s a strategy that’s also utilized at Cambridge Health Alliance, according to Ream, where it has yielded positive results. In fact, after checking in with existing vendors, his team often finds that the item being requested is either on the roadmap, or is already available, unbeknownst to users.
If, after all of that, a customization is approved, it’s merely the beginning, said Berry. That’s where testing comes in. “One factor that’s really important with these critical business systems is the associated risk,” he noted. And while the level of risk can differ, it’s imperative to ensure “a high degree of testing,” especially with anything that contains billing or patient data. “You need to understand that even as you’re making configuration changes, you’re evolving that application in a direction that you need to be able to understand and measure. You need to be able to associate risk to how you’re going about testing and make sure you’re always covering a large portion of your risk as to what you might potentially expose.”
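Berry’s point about associating risk with testing can be pictured as a simple prioritization exercise. The sketch below is purely illustrative and not from the webinar: it assumes hypothetical test cases, each assigned a risk score during intake review (higher for billing or patient-data paths), and spends a limited testing budget on the highest-risk items first.

```python
# Hypothetical sketch of risk-weighted test prioritization. Names, scores,
# and effort estimates are invented for illustration only.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    risk: int      # 1 (low) .. 5 (critical), assigned during intake review
    hours: float   # estimated effort to execute the test

def plan_test_run(cases, budget_hours):
    """Select tests highest-risk-first until the effort budget is spent."""
    selected = []
    remaining = budget_hours
    for case in sorted(cases, key=lambda c: c.risk, reverse=True):
        if case.hours <= remaining:
            selected.append(case)
            remaining -= case.hours
    return selected

cases = [
    TestCase("billing-claim-export", risk=5, hours=4),
    TestCase("patient-portal-theme", risk=1, hours=2),
    TestCase("lab-results-interface", risk=4, hours=3),
]
print([c.name for c in plan_test_run(cases, budget_hours=6)])
# → ['billing-claim-export', 'patient-portal-theme']
```

The takeaway mirrors Berry’s advice: with a fixed budget, the billing test is always covered first, so the largest share of exposed risk is addressed even when not everything can be tested.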
With that in mind, Berry and his co-panelists provided best practices for testing customizations.
- Bring it to the front. “It’s very important that testing begins at the front of the organization when these requirements come in, where you vet and decide whether these things work across the application portfolio in what you’re trying to accomplish, and whether or not you have channels in order to be able to do that,” noted Berry. “You want to make sure you’re always starting in the beginning.”
- Think broadly. It’s important that testing doesn’t focus too much on an individual object, but rather has a broader reach, he added. “Otherwise you lose the view of the impact from an overall company perspective.”
- Test throughout. At Baystate, testing is done at different levels, according to Wherry. “Once we get to a 100 percent build, we look at doing some level of system and integrated testing to see the impact of downstream and upstream flows.” As changes go into production and the system grows increasingly complex, “we’re trying to be more robust in our testing processes.”
- Be direct. It’s critical to keep design and use cases in mind, said Berry. He has found that as organizations work more with applications, they often take on a different life than the vendor intended. Therefore, “you have to be very direct in the way you test your software, because it’s relevant to your processes and your people.”
- Leverage SMEs. Berry strongly advised leveraging subject matter experts, and empowering them with tools that allow them to do things like automation. That way, “they can create assets that align with the direction in which you’re trying to take the product.”
- Have a backout plan. In any implementation, it’s a possibility that an issue will arise, causing the need to back out and revert to the previous version, said Wherry, whose team has learned this “through a lot of pain points.” Their strategy, therefore, is to “step very quickly into understanding the process and try to recreate it very quickly based off of the workflows that have been described to us. Hopefully we can identify that process.”
- Know your limits. It’s important to know that most vendors have a line when it comes to customization, and if it’s crossed, it can affect the level of support offered, he noted.
- Understand risk, but don’t let it stop you. Innovation, as Berry pointed out, always comes with some level of risk, and organizations willing to push out a little further are likely to gain market share. And although risk has to be considered, it shouldn’t necessarily be measured department by department. “If you let everyone measure their risk, you end up with a feeling of ‘everything is impossible,’” he added. “But if you can get to a level where you really understand these core processes and what those outcomes are, you can better understand how we push innovation.”
Finally, he urged attendees to think through the process carefully. “That’s why I said testing begins at the requirement. You have to think long-term, so you can’t create technical debt,” said Berry. “You want to be able to push the envelope in terms of what the business has seen and what the business wants to drive.”
To view the archive of this webinar — Optimizing Your Software Testing Process to Increase Throughput and User Satisfaction (Sponsored by Tricentis) — please click here.