We had moved into our house only two years before, and this was getting ridiculous.
It was 2012 and – between Hurricane Sandy and a crazy ice/snowstorm that followed close on its heels – we had lost power at our home in New Jersey for a total of about 10 days. And when you have very young kids (ages 3 and 1 at the time), and it’s winter, having no heat for that long is not okay. When it comes to power, unfortunately, having multiple providers in place and ready to go is usually not an option.
Of course, there are things you can do. And after the storms, I went to Home Depot and bought a gasoline-powered generator that – with extension cords – could at least run some space heaters and keep the refrigerator going. After noodling on it, I took things a step further by installing a generator transfer panel and wiring it to our breaker box so I could do away with the cords and even run our home furnace. I thought I was pretty slick.
Of course, my backup plan still required me either to keep large amounts of gasoline on hand or to obtain it quickly. Since gasoline degrades in months and is difficult to dispose of, that wasn’t a great option. But it was the best I could do (or was willing to do).
One thing stuck in my head during the outages we had. I would look over to my neighbor – who is the least handy person in the world – and always see that his lights were on, even right after ours went out. How the heck was he doing it? I couldn’t picture him lugging around gallons of gas and pulling the rip cord on a generator.
So I had a conversation with my neighbor, Joe, who told me that he’d installed an automatic-failover whole-house generator that ran on natural gas. He basically made one phone call, paid a whole lot of money, and never had to worry about losing power again. In fact, his maintenance contract provided for routine testing, so he’d never find out in a pinch that something was wrong. To me, that was luxurious.
After reading publicly available information about the Change Healthcare breach (both causes and effects on customers), I feel that something like the whole-house generator approach is what health systems, and others, need to adopt when it comes to certain “can’t-fail” functions, data flows or vendors. The first step is identifying these critical functions. Based on the Change incident, I’d suggest some criteria:
- The third party performs a function that the health system cannot survive without for very long
- The third party is the only entity that provides that function for the health system
- The third party does not have a plethora of competitors, so the health system has a limited number of alternatives
- It takes a significant amount of time to get up and running on an alternative if the third party can no longer function
And here’s a fun equation to think about: if the time it takes to get up and running on an alternative (D) is greater than the time you can survive without the key vendor’s services (A), in other words if D > A, then, Houston, we have a problem.
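To make that concrete, here’s a minimal, purely illustrative sketch (the vendor names and the D/A numbers are made up, not drawn from any real health system) of how you might flag the vendors where D exceeds A:

```python
# Illustrative only: vendor names, D (days to stand up an alternative) and
# A (days the organization can survive without the service) are hypothetical.
vendors = [
    {"name": "claims_clearinghouse", "days_to_switch": 45, "days_survivable": 10},
    {"name": "e_prescribing",        "days_to_switch": 7,  "days_survivable": 14},
    {"name": "eligibility_checks",   "days_to_switch": 30, "days_survivable": 5},
]

for v in vendors:
    # Houston, we have a problem whenever D > A
    at_risk = v["days_to_switch"] > v["days_survivable"]
    print(f'{v["name"]}: D={v["days_to_switch"]}, A={v["days_survivable"]}, '
          f'{"AT RISK" if at_risk else "ok"}')
```

Anything flagged “AT RISK” is a candidate for the second-pipe approach described below.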
Interestingly, the Change Healthcare breach is seen as a cyber incident, and while it certainly was one for Change, in a sense it was not one for health systems. Losing Change’s services for any other reason (fire, hurricane, earthquake) would have had much the same effect: a key service provider could no longer provide its services, and an easy failover to another such provider didn’t exist.
Health systems – with the help of the entire IT C-suite and Emergency Management – need to map their data flows to figure out the Change-like choke points that meet the criteria I described above. Then, they need to make sure they have more than one vendor in that space set up, ready to go, and used at least to a small degree. It may be good enough to have 90 percent of your services going through one preferred pipe and 10 percent through the other, provided you can quickly shift traffic when one goes down. As long as there is more than one provider of such services, a Change-like outage, planned for correctly, should be less catastrophic in the future.
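To sketch what that might look like in practice (the route names, weights, and health check below are stand-ins, not a reference to any particular product), a weighted-routing layer could keep both pipes warm and shift everything to the survivor when one goes down:

```python
import random

# Illustrative only: "primary" and "secondary" stand in for two interchangeable
# vendor connections; the health check is stubbed out.
ROUTES = {"primary": 0.9, "secondary": 0.1}  # normal 90/10 split keeps both pipes in use

def is_healthy(route: str) -> bool:
    # A real implementation would use actual health checks or a circuit breaker.
    return True

def pick_route() -> str:
    live = {r: w for r, w in ROUTES.items() if is_healthy(r)}
    if not live:
        raise RuntimeError("no healthy route available")
    names, weights = zip(*live.items())
    # If one pipe goes down, all traffic shifts to whatever remains healthy.
    return random.choices(names, weights=weights, k=1)[0]
```

The point isn’t the code; it’s that the second pipe is already connected, already tested, and already carrying a trickle of real traffic before the day you need it.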
With a billion things to do and finite resources with which to do them, it’s no surprise that things like this go unnoticed or, if they are noticed, that tackling them feels too daunting, so people just hope for the best. But when things finally go boom with us on the wrong side, it’s incumbent on us to learn from such painful lessons, go to the whiteboard, and figure out where similar risks are hiding in plain sight.
This time around, nobody blamed health systems for any downstream impacts to their service levels, but, if they are not better prepared, the next such incident might not be met with comparable grace and understanding.