“4 days to go live” — those words can instill fear in even the most seasoned CIO if a solid comfort level that everything is ready to go is not there. Luckily for me, I have a great team and, while there are still minor concerns, we are where we need to be for a successful go-live. More importantly, we have sat down, reviewed all the critical failure points, and attempted to establish fallback and contingency plans should the worst happen.
In the last 4 weeks, we have been in a mad scramble. Meditech recommended pushing the go-live out past next August (the next available opening). Needless to say, we reviewed the open tasks and determined we were not far off from where we needed to be. Our team then put in a heroic effort to bring things up to par, leaving everyone with the proper comfort level. Additionally, we brought back some external consultants to work the remaining tasks that just needed more bodies thrown at them, such as recreating NPR reports.
In terms of reports, I would like to point out that we had over 300 requests for reports to be rewritten in the 6.x product. Even after pushing back on the user community and asking them to review the 6.x functionality and verify that the information on those reports is not already available as part of the 6.x workflow or existing canned reports, we still got back over 300 requests. Our partner/vendor reviewed the list we provided and was able to eliminate 80 almost on the first day as being available in 6.x. The moral of the story is that most IT departments will be too busy (especially late in the game) to review all the customer requests and validate whether the functionality exists. If I had to do it over again, I would have asked the users to review reports much earlier in the project and to present a justification statement as to why the information does not exist in the 6.x product. I would then have gotten that information out to the vendors earlier.
Another thing that posed a challenge was interface development. As some of you who have read my past postings will recall, we significantly increased the number of interfaces as part of this project. We needed every Magic interface migrated to 6.x. This was a new challenge for many of the ancillary vendors and even for Iatric, who performs much of our interface work. One huge challenge was that we required a stable 6.x test environment, with orders, CDM, and other dictionaries complete, to test interface connectivity to other vendors’ test environments. That did not come until just this month, so interface testing became harried and people were tripping over each other.
In retrospect, I would have liked the 6.x test environment to be live and at least semi-locked two to three months before go-live to ensure plenty of time for interface testing. This is especially true given that vendors were building and testing interfaces to over five ancillary systems, many of which had three environments (Magic, Magic test, and 6.x test). One even had four, because it needed a 6.x pre-live to test against and a 6.x test in which to continue building changes (order sets and CDM dictionaries were still in build in 6.x and needed to be mirrored in the non-Meditech system). Obviously, even with good change control, staff and vendors occasionally confused which environment needed work and which interface to change or test with.
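One way to reduce that environment confusion is to keep a single shared registry that both staff and vendors check before pointing an interface anywhere. Below is a minimal sketch in Python; every system name, host, and port is hypothetical and purely illustrative, not our actual configuration:

```python
# Hypothetical registry of interface endpoints, keyed by
# (ancillary system, environment). In practice this would live under
# change control, not in a script, but the idea is the same: one
# authoritative lookup instead of tribal knowledge.
ENVIRONMENTS = {
    ("lab", "magic-test"): {"host": "lab-test.example.org", "port": 6661},
    ("lab", "6x-test"):    {"host": "lab-test.example.org", "port": 6662},
    ("lab", "6x-prelive"): {"host": "lab-prelive.example.org", "port": 6663},
    ("pacs", "6x-test"):   {"host": "pacs-test.example.org", "port": 7001},
}

def endpoint_for(system: str, environment: str) -> dict:
    """Look up the endpoint for a system/environment pair, failing
    loudly rather than letting a test message reach the wrong place."""
    try:
        return ENVIRONMENTS[(system, environment)]
    except KeyError:
        raise ValueError(
            f"No interface registered for {system!r} in {environment!r}"
        )
```

Failing loudly on an unknown pair is the point: a mistyped environment name stops the test cold instead of quietly feeding messages into a build environment.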
OK, so many of you will say, “Well, this just validates that working in a single-vendor environment like Meditech makes support and implementation so much easier.” I will counter that, while there is some validity to that, most of the ancillary systems we have are not provided as Meditech modules (oncology, PACS, or a practice management system), and you would still have dictionary build and other alignment work even if they were. Also, if we had been pure Meditech, we may have short-changed ourselves by limiting the efficiency, user acceptance, or other benefits sometimes provided by the best-of-breed approach. Like other CIOs, I tend toward the Core Vendor approach: look at the module, and if it passes the 80/20 test with your Core Vendor, stay with that product; if it does not, accept the integration cost and challenge as part of the trade-off for gaining other benefits.
In terms of time, I think one lesson learned is that the closer you get to go-live, the more meetings and phone conferences you will experience. As previously stated, Meditech tried to get us to push out the go-live. While this may have been a tactic to get us to close out our open tasks (or at least the more critical ones), or to make us more aware of just how little time was left, it did energize activity on our end. It also led to three or four meetings a week with Meditech instead of the one we started with. Interestingly, though Meditech was great at pointing out on a daily basis which of our key tasks were overdue, they did not seem to track their own, and we in turn had to point out to them that there were numerous tasks in need of their internal escalation.
One of those was printing. Every Meditech 6.x CIO I spoke with stressed that you cannot test printing enough. In clinical and other areas, we had an issue where some reports and outputs were printing as what looked like ink blots. We went back and forth with Meditech on this and finally found a workaround, but it was just one of the numerous printing issues we found. Again, in hindsight, I would suggest you start tackling printing at least four to six months before your go-live, and test, test, test. Your NPR reports will be an issue.
Another printing issue will be labels. Remember, you will need to format and print not just medication labels but dietary labels, wristbands, and other labels. Your printing resource will be very busy and can easily fall behind on completion dates. Your hardware staff may have to order and deploy additional barcode or other label printers, and that can take time, especially if the order comes late in the project and leaves users unable to test their processes.
If you are going live on Medication Administration (BMV/PCS) as part of your upgrade (especially if you did not use it before), don’t forget that you will need to establish a cross-functional work team to discuss and implement downtime procedures for how nursing will manage medication reconciliation, pass medications, and chart if Meditech is down or some other problem exists. Without going into our solution, I will only say that many meetings were had and various options looked at. In the end, the impact on IT equated to numerous hours of work from our support staff at a time when we could least afford it. This should be looked at much earlier and given an internal task and delivery date well before go-live.
Another increase in meetings came from the interface work. One thing we found was that instead of people working disparately on an interface (e.g., an interface programmer making a change, your staff testing it later, finding the vendor needs to change something else, and then restarting the cycle), it was much more efficient and satisfying to have everyone involved in an interface on a call at the same time. That way, a change can be made, immediately tested, and everyone given real-time feedback. This process is not the easiest to coordinate, especially with vendors whose interface or support teams work with numerous customers and have limited availability. It may require escalation from the CIO or project lead to secure their availability and focus, but it pays huge dividends in overall time savings.
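For teams scripting their own quick checks between those calls, it helps to remember that most of these HL7 v2 interfaces share the same wire format, MLLP, which simply brackets each message in control characters. Here is a minimal framing round-trip in Python; the ADT message content is a made-up example, not one of our actual interfaces:

```python
# MLLP (Minimal Lower Layer Protocol) frames an HL7 v2 message as
# <VT> message <FS><CR>, i.e. byte 0x0B, the message, then 0x1C 0x0D.
VT, FS, CR = b"\x0b", b"\x1c", b"\x0d"

def mllp_wrap(message: bytes) -> bytes:
    """Frame an HL7 message for transmission over a TCP socket."""
    return VT + message + FS + CR

def mllp_unwrap(frame: bytes) -> bytes:
    """Strip MLLP framing, validating the envelope first."""
    if not (frame.startswith(VT) and frame.endswith(FS + CR)):
        raise ValueError("not a valid MLLP frame")
    return frame[1:-2]

# A made-up ADT^A01 message header (segments are CR-delimited in HL7 v2).
adt = b"MSH|^~\\&|SENDER|FAC|RECEIVER|FAC|202301010000||ADT^A01|MSG0001|T|2.3"
```

Validating the envelope before stripping it mirrors what the receiving engine does, so a framing mistake surfaces in your test harness instead of on the vendor’s side of the call.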
Also, just because you think something is working, don’t assume it will continue to do so for the duration of your build. We had users who had to be rebuilt several times because their rights or existence changed. We had conversion files that changed partway through or hung the whole system. We had order sets that changed and needed to change again after interface testing started. CDM changes also had an impact on orders, interfaces, and other systems. The lesson here is that cause and effect can make your life hell when time is of the essence, so try to lock down CDM, orders, and other dictionary build requirements much earlier than even Meditech suggests.
In terms of time drains, about a month before go-live you will also want to work out your go-live support process. In our facility, we decided on a command center similar to a disaster-center model. We included discussions on problem tracking, managing the Meditech site tasks, problem assignment, prioritization, location, staffing, and supplies. Our teams are planning on 24×7 support with 12-hour shifts until we are confident the product is live, stable, and supportable. Also, don’t forget logistical needs such as sandwiches, coffee, water, and other refreshments, especially for your night staff, who may not be used to being up all night. Also plan how you will handle daily shift changes and problem handoffs. All of this will take time away from already full days.
In closing, I cannot stress enough that the key lesson we learned is that most of the tasks we worked this month should have been completed months ago. For those of you working off the Meditech project plan (Task Listing), I strongly recommend either extending the timeline or adjusting the due dates so your implementation team gets them done at least one to two months earlier than listed wherever possible. In past implementations, I have had the luxury of three to four weeks of basically dead time before go-live to ensure the i’s are dotted and the t’s crossed. That is clearly not the case on this project and, while I feel we are ready, I wish everyone had a bit more breathing room this late in the game. Next month, I hope to provide real-life lessons from our go-live.