The more you know about your customer, the better you can serve them. And that definitely goes for health system CISOs trying to serve (and protect) their clinician customers. As Dr. Eric Liederman says, it’s not that hard to lock things down; what’s trickier is putting in place as much protection and risk mitigation as possible without compromising the ability of doctors to deliver the care that patients deserve. Liederman, who serves as National Leader for Privacy, Security & IT Infrastructure with The Permanente Federation, says the key is for cyber leaders to partner with security-minded clinicians like him, who can both help them understand the complexities of patient care, and educate doctors on the nuances of cyber risk. In this interview with healthsystemCIO Founder & Editor-in-Chief Anthony Guerra, Liederman covers these issues and many more.
Bold Statements
I realized pretty early on that, if we didn’t also attend to the protection side, we were building our castle on sand and it would ultimately risk collapse.
I really like working with my colleagues who are cybersecurity professionals. They know how to protect us, but they don’t know how to protect us while still allowing us to do the complex work of patient care. That’s something that we need to partner on.
… if you have a system where you’re having people monitor things – the term in IT has been eyes on glass – and you set it up so that they’re just being overwhelmed with false-positives, you really don’t have eyes on glass. You just have eyes glazed over.
Anthony: Welcome to healthsystemCIO’s interview with Dr. Eric Liederman, national leader for privacy, security and IT infrastructure with The Permanente Federation. I’m Anthony Guerra, Founder and Editor-in-Chief. Dr. Liederman, thanks for joining me.
Dr. Liederman: I’m glad to be on. Thanks for inviting me.
Anthony: All right, great. Let’s start off, tell us a little bit about your organization and your role. Everyone’s heard of Kaiser Permanente but The Permanente Federation could probably use a little clarity. So tell us a little bit about that entity.
Dr. Liederman: Well, Kaiser Permanente is an interesting organization in the way it’s structured. Most organizations are corporations, right? They have a board of directors, a CEO, and a bunch of people who work for that CEO on down. In our case, we actually have multiple companies that come together as Kaiser Permanente, bound by unique contracts, one to the other. We have eight Permanente medical groups – those are the physicians – and in the case of the California medical groups, including my own, most are also employees. The Permanente Federation is a small organization that supports all of our medical groups across the country. I also work for The Permanente Medical Group, which is the one in Northern California, so I’ve got multiple hats, multiple roles. The Kaiser Foundation Health Plan, which we’re tightly partnered with, is the organization that sells and administers the insurance. They also own the buildings, and they have IT and things like that. So it’s a mutually interdependent relationship between the Kaiser side and the Permanente side. I work on the Permanente side, obviously.
Anthony: Okay, very good. And tell us a little bit about your role – it’s an interesting one, and I think we’re seeing more like it. When I saw you recently at a conference in Boston, you mentioned that you’ve been in the IT security area of medicine for quite a while. But I think we’re starting to see more physicians specializing in cyber. So even though you’ve been at it a while, I wonder if you would agree that it’s also picking up steam now.
Dr. Liederman: Yeah. It’s interesting – “specializing in,” that’s a medical term. My specialty is internal medicine. I’m a physician. I see patients. And I also do this other work; the privacy and security work is part of what I do. But it’s an important part. Protecting our information, protecting our patients, protecting ourselves is critically important. If we don’t, then everything that we do, everything that we’ve built, is at risk. I got involved with this early on, when I was involved with developing some homegrown systems at another organization to provide access to information electronically, and then implementing a vendor electronic health record. I realized pretty early on that, if we didn’t also attend to the protection side, we were building our castle on sand and it would ultimately risk collapse.
And so, I’ve been focusing on both sides for a long, long time. And I’m glad to see that there have been, as you point out, more and more – still a relatively small group of us, but more and more – physicians who are engaged in this space. Because here’s the challenge: cybersecurity professionals know what they’re doing. I really like working with my colleagues who are cybersecurity professionals. They know how to protect us, but they don’t know how to protect us while still allowing us to do the complex work of patient care. That’s something that we need to partner on. It’s very straightforward to impose controls to reduce cyber risk, but those controls often can increase operational – and therefore patient care – risk in a patient care organization. And so, it’s a partnership. I don’t pretend to know what they know, although I know some of it. And I venture to say they have found the partnership useful and effective from their perspective, but I would leave it to them, in a separate interview, to answer that question.
Anthony: I think what you’re saying is, usability is defined by the user, and you are representing the physician to help the cyber folks understand what usability means to them: the speed they have to work at, the information they want available, what interferes with their workflow. So that’s the key, because they may want to implement something in a way that they think is reasonable, but you’ll say no – because of the way physicians work, we need to do this a different way, or we can’t do it that way. Is that what you’re saying?
Dr. Liederman: Well, that’s a unidirectional approach. And yes, I do all those things, although I don’t typically try to say no. I typically try to say, well, how about this other way? Why don’t we think about how we can reduce the risks you’re focusing on without increasing risks on the patient care side – or, better yet, reduce risk on both sides? We’ve had a few wins there, which is really a great achievement. But I also engage in the other direction, right? I work with my physician and other clinical colleagues – not just physicians – to understand security risks. The fact is that, ultimately, almost all successful attacks start with social engineering – conning people. Phishing is the most prevalent way of doing this, but it’s not the only way. Phone calls are another way, or texts. We’ve all suffered from this, right?
We all get these in our personal lives, and maybe also in our professional lives. I work closely with my colleagues – tens of thousands of physicians and hundreds of thousands of other clinical colleagues – who really need to understand security risks and what they can do about them, for two reasons. One is to protect our organization; the other is to protect themselves. I have said countless times to my compliance and cyber colleagues that the folks on the right-hand side of the tail – the ones clicking on multiple test phishing emails over and over again – are not our enemy. These people are at risk. They probably have their bank accounts cleaned out already. We need to help them.
Anthony: Absolutely. If they’re clicking on everything at work, we assume they’re also having some challenges in their personal life, right? (laughing)
Dr. Liederman: Right, exactly.
Anthony: What would you say is the main thing that people who are non-clinicians don’t understand about the way doctors work or the sensitivities that they may have to interference in their workflow?
Dr. Liederman: Well, I think the most common misconception is that the way to spot aberrant behavior is to look for behavior that’s not like the norm. On the face of it, that sounds obviously true. But in medical care, it’s obviously wrong. The reason is that, while folks do the normal stuff we expect of them most of the time – maybe even 80-plus percent of the time – a lot of the time they have to do different things. And those different things can be driven by operational reasons or patient reasons. Take operational reasons: all of a sudden, there’s some mini crisis – a bunch of people called out sick or whatever – and we have to suddenly reassign people from their normal jobs to these other jobs temporarily.
And by the way, this happens constantly in hospitals. A nurse will show up for work and be told, you know what? You were going to work on this unit, but we need you to work over on that unit, or we need you to cover lunch over here, or whatever. This happens constantly. But also, patients change their status, sometimes very rapidly. If you have a patient that goes from stable to unstable – trying to die, really unstable – over a short period of time, all kinds of folks who had nothing whatsoever to do with that patient’s care 10 seconds ago are going to be descending upon them, needing access to their information, access to the patient, the ability to write orders, et cetera, and to do things very quickly without any friction, because time is of the essence.
If somebody’s having a stroke, every second they’re losing thousands of brain cells that are not going to come back. We need to do something now to fix that. If somebody’s heart has stopped, oxygen is no longer flowing to their brain or other vital organs; we’ve got to do something about that or else they’re not coming back. Anything that slows down the ability to access information or take care of patients adversely affects the really sick people most of all. And ironically, extra attention gets focused on people who are considered VIPs, right? If you have somebody who’s considered especially at risk, or a kid-gloves case – maybe it’s the CEO of the organization, or the governor, or the mayor, or some sports star or A-lister or actor – the instinct is to lock their records down even more.
But if you put these two things together and you take one of these VIPs that’s getting really sick and you impose these kinds of frictions and controls, you’re basically creating a situation where your organization selectively kills off VIPs who are getting sick. And that’s really not a great business model.
Anthony: I guess that’s a case where strictly instituted identity and access management – which we know is really important – can interfere with patient care.
Dr. Liederman: Yeah, I’ll give you an example. I was talking recently to a colleague, a cybersecurity professional who used to work at NASA. They were explaining to me that the Office of Personnel Management – your listeners probably remember they had their clock cleaned, probably by the Chinese, a number of years back – got religion around cyber controls as a result and insisted on imposing those on everybody else, right? And so, they told NASA, you’ve got to have two-factor authentication on your workstations in mission control. So that means you’re in probably one of the most secure buildings in the world, would be my guess – I’m pretty confident they’re not letting anybody just walk into mission control, right? – and yet they have to use two-factor authentication every time they log in to their workstations. They tried to insist on the same thing with us as well.
And we said, well, we’ll do multi-factor authentication on all of our devices, including those in buildings that we own and control – we’re just going to do it a little differently. What we did is put a security certificate on every one of our managed workstations, and then we declared that the workstation itself was the second factor, because it’s not going anywhere. It’s in our building. It’s managed by us electronically. And it has a security certificate on it. So the clinicians logging in just continue to use their user ID and password, right? Nobody’s fumbling, trying to find their token or their phone – or maybe they left it at home because they rushed in; they’re on call, there’s somebody who’s really, really sick, and now they can’t take care of them. We don’t want any of those scenarios. So, basically, there are ways to try to thread the needle. And that’s an example of one way that we tried to be creative.
Anthony: Very good, very good. I’m going to ask you an open-ended question. At the conference, you talked about what you just mentioned – insider threats, snooping, people poking in. You mentioned VIPs, and how, ironically, having their information locked down because they’re VIPs and you don’t want snooping can put them at higher risk, which is interesting. You also talked about false-positives and how they can damage a cyber program because you just get burned out – you’re looking at too many things that aren’t going anywhere, and it makes you like the boy who cried wolf, right? Tell me a little bit more about the false-positive concept – specifically, the scenarios you’re thinking of.
Dr. Liederman: Well, the term for it is alert fatigue, and it’s not unique to cyber. It’s not unique to IT. Alert fatigue can happen on analog devices. I remember, a long time ago in my career, folks at a particular hospital – this is pretty early on – decided that they would alert the nurses that a patient had something going on, maybe a cardiac monitor showing something abnormal, by having a flashing red light at the nurses’ station. They went back to check on it a week or two later, and they found it covered in several layers of tape – the nurses had basically blotted it out physically. They just couldn’t take it anymore. And there are lots of examples where pharmacists just can’t take it anymore. The alert thresholds for pharmacy information systems historically have been set way too low, and so lots and lots of meaningless, or not significantly meaningful, alerts fired.
And how do you sort the wheat from the chaff? It’s just overwhelming, and it leads to ignoring all the alerts. This is true in cyber as well. If you have a system that alerts to the possibility of somebody misusing their privileges, or an attacker in the system, but it’s firing off too many false-positives, then the humans just can’t take it. The only way they can defend against it is by ignoring all the alerts. You can’t go chasing down 100 false trails to find one real event. You have to set things so that, at most – in my experience – people can tolerate about 20 to 30 percent false-positives. So if there are 10 hits, people can tolerate chasing down two or three falsies if the rest of them are real. They can deal with that. But if it’s much more than that, it’s very hard psychologically to manage. And so, if you have a system where you’re having people monitor things – the term in IT has been eyes on glass – and you set it up so that they’re just being overwhelmed with false-positives, you really don’t have eyes on glass. You just have eyes glazed over.
Anthony: Right. And so, you have to be able to accept a certain amount of risk there, right? Because if you’re trying to look at everything, because you’re afraid something is going to slip through, you have to say, we can’t do that. So we’re going to say, okay, maybe something is going to slip through, but we have to get to that rate you talked about – it can’t be higher than 20 or 30 percent. Otherwise, it’s just not going to work. So we’re going to have to live with the outlier getting through, right?
Dr. Liederman: Well, right. There are different ways to manage this. Some organizations externalize the alert fatigue, right? We’ve all had the experience where we’re shopping and our credit card gets declined because we’re in a different city than we’re used to. There, the financial services company has externalized the alert fatigue to us as individuals – we have to call them and explain that it’s a false-positive. That doesn’t work well in healthcare. You can’t say, oh, well, you can’t get access to that patient – oops, sorry – and then an hour later, after you’ve gotten hold of the help desk: oh yeah, well, the patient expired in the meantime. We can’t do that. You could say you have to accept a certain amount of risk, but I would put it differently. I would say you have to balance the risks so that you have the lowest aggregate risk. There’s no way to make risk disappear. Any attempt to push a single risk down to near zero is just going to push other risks up. You have to look at the forest, not just one tree at a time.
Anthony: What else? Your day-to-day, the stuff you’re working on right now – what are some of the big issues you’re working through on the cyber front as you try to help your organization manage?
Dr. Liederman: I would say, holistically, Anthony, that the threat landscape just keeps evolving, right? The risks keep changing. Our adversaries are well funded, partly because we – the larger “we,” not my organization per se – keep funding them. They extract a whole lot of money; they either steal it directly or they charge ransom or whatever, and then they invest in R&D. What worries me is that we may be constantly prepared to fight the last battle and not the next one. It’s maybe a meta way to worry about it, but the specific worries I have, oftentimes we find ways to mitigate those. But what about the next thing that we haven’t figured out yet? That’s a great concern to me.
I’ll just give you one really large example, which a lot of people talk about and write about: the maybe imminent, maybe not-so-imminent onset of quantum computing. Will quantum computing basically destroy all our defenses? Will it make encryption irrelevant and just allow adversaries to crack all our passwords and all our communications immediately? I don’t know. That seems like a very frightening future. Are we prepared for that? What is coming up next, and what’s coming up a little later? I guess those are the things that keep me awake sometimes.
Anthony: Yeah, absolutely. That quantum computing singularity, or whatever you want to call it – when they get that done, hopefully the good guys are ready for that day, or preparing for it. It really is an arms race, where you have to keep outperforming the opponent. AI is another thing to worry about – that the bad guys will figure out how to use it better and faster than the good guys, and somehow manage to launch an attack leveraging AI and maybe quantum computing, right? That’s the fear – you talk about the fear of what could happen. So what do we do, just do our best every day?
Dr. Liederman: Actually, as you know, at the conference I was on a panel talking about AI and cybersecurity. Yeah, there are a lot of risks that AI poses to us, and a lot of promise that AI holds for healthcare. On the risk side, AI holds the possibility of allowing rapid development of attacks against vulnerabilities: ‘Define all the vulnerabilities in this organization and create an attack pattern for me.’ It could be that the people who are sophisticated get better tools, and the people who are unsophisticated grow in large numbers, and we’ll have many more people attacking many more organizations. On the positive side, AI could be used to find vulnerabilities in the software development process and eliminate them, and so there’s the hope that new software coming out would be less at risk – maybe substantially less at risk.
But that still leaves a legacy risk for us, because we’ve got all this old software. And therefore, there’s an imbalance: the attackers, at least for some period of time – maybe a decade or more – will make better use of AI than we will in defense, and we’ll be on the back foot. Also, of course, as all my CISO colleagues know, and as some of them periodically mention to me, the attackers only have to get it right once; we have to get it right all the time. There’s a fundamental imbalance anyway.
Anthony: Yes, at the beginning of our talk today, you said that one of the reasons that you became interested in cyber was that you didn’t want healthcare to be built on a house of sand, the expression you used.
Dr. Liederman: A foundation of sand.
Anthony: A foundation of sand – foundation makes more sense. (laughing) A lot of times, when I do interviews with cyber folks, or listen to presentations, or read articles, that’s what I feel like – that healthcare currently is built on a foundation of sand, in the sense that it’s extremely vulnerable. And I wonder if you feel the same way, or if you feel a little more confident in the ability of health systems to survive being targeted.
Dr. Liederman: Well, as I understand it – remember, I’m not a cybersecurity professional, but I work with many, and I’ll tell you what I’ve learned from them; I think I’ve got this about right – most of the nation states are not interested in being found out. So they’re not going to detonate ransomware in our environment and make it clear that they’re there and that they want us to pay them. They want to come in very stealthily and take our data. Maybe it’s research data. Maybe it’s troves of patient data. This all started, of course, in 2015 with the big Anthem hack.
Some nation states, as I understand it, are mafia states that act more like cyber gangs, and maybe they do want to come in and extract money out of us by any means necessary, just like cyber gangs do. And many of the cyber gangs, as I understand it, are very wealthy at this point, as I mentioned. They’ve got resources to launch attacks against us. How do we know this? Well, one way is that we can see the impact. We’ve had one large health system after another get hit and taken down – including, sometimes, whole countries. Look what happened to Ireland: their whole health system got taken down. But even in the United States, very large health systems are getting hit. You’d think, on the face of it, that a large health system would have the resources to invest in cybersecurity and defend itself.
Maybe they weren’t doing enough. Maybe they were and they got hit anyway. It’s very frightening. I mean, I feel constantly like we’re in the trenches in World War I and we’re seeing the guy over there just got blown up and the guy over there just got shot. When’s my turn? It’s very, very scary.
Anthony: Excellent point, and I think very well said. It does give you a lot of anxiety. And one of the ways to deal with that anxiety – to process how precarious everything is – is operational resilience. That is: if and when the systems are not available, how prepared are we to continue operations, or to decide in what ways we’re going to try to continue? When and how are we rerouting patients? How do we keep that fluid situation handled properly? How do we give people some preparation to operate and provide clinical care without the systems they’re used to – tools that some of the younger clinicians have never operated without? I’d imagine that’s also a big part of what you think about and work on with the cyber folks and the clinical folks, because we want to be able to move forward somehow. What are your thoughts there?
Dr. Liederman: Well, two things. First, I think there’s more and more awareness. We’ve been working in my organization for quite some time on developing operational resilience. It’s a long journey – we’ve done a lot already, but we’ve got a lot more to do. I think other organizations are doing this as well, or at least tackling it. I’d say that the evidence of the effectiveness of operational resilience, however, unfortunately isn’t really obvious quite yet. If you look at how long a health system’s basic technology functions, like electronic health records, are down after a ransomware attack, it hasn’t really changed in the last few years. It’s still three, four, five, six weeks. And the impacts on the ground are often hard to discern.
For instance, in some of these health system attacks over the last year, year and a half or so, we haven’t been able to get any information from any of the directly affected folks in those health systems, because they – I guess their lawyers – tell them not to talk to anybody, which I think is highly problematic and dysfunctional for all of us. So where do we get our information? Some of it is individual clinicians posting on Reddit or other sites, and so we take the crumbs where we can find them. And what we find is, weeks in, there’s still chaos – pen and paper, no phones. How could that be, right? So you don’t have operational resilience yet in most of these systems, it seems. And as a result, patients are at great risk. The clinicians are in terrible, terrible situations and at terrible risk themselves. Who wants to work that way? Through the pandemic, we had a lot of people leaving clinical care because of burnout. This is only going to contribute to it.
And so, this is a critically important area. But on the other hand, it’s a tough sell in the tough financial environment many healthcare organizations find themselves in, because, in effect, it’s insurance – insurance that you hope never to use. And so, it’s a cost center, like all these protections are. This is a constant battle. I think what we all have to do – all of us who are in a position to sell these ideas to senior leadership – is to make it clear that it’s not really a matter of ‘if’ we’re going to get hit anymore; it’s just a matter of when. The sophistication and persistence of the attackers are such that we should probably just count on it. Hopefully, it will happen further out in the future, when we’re more ready – but we’ve got to get ready.
Anthony: Interesting. I wonder if it’s fair to ask clinicians to continue to practice and give care without the tools they are used to. It’s a tough situation to put them in.
Dr. Liederman: Yeah. Interestingly enough, it appears, based on what I’ve understood, that the biomedical equipment itself typically is not directly attacked by ransomware attackers. Maybe that’ll change – I don’t know; everything’s possible. But so far, the lab analyzers, for instance, don’t seem to be attacked. The imaging modalities – the CT scanners, the plain-film x-ray machines, MRI, et cetera – continue to operate. What goes down is the network and the systems that basically run everything: the pharmacy information system, the laboratory information system, the imaging information system, the electronic health record. These become inaccessible, or are flat-out attacked or taken down.
And so, from what I understand, in my conversations with those willing to talk to me and my colleagues about this, it’s possible to get labs done to a limited degree, and radiology studies to a limited degree, but the ability to get the results of those studies is extremely restricted. Typically, as I understand it, in organizations that have been hit with ransomware, the only place you can view an image is on the machine that acquired it. So you have to find out which machine that was, go to it physically, and look at it. And while you’re looking at that patient’s image, no other patients can have their images acquired – it’s one or the other. You either look or you acquire. So throughput is drastically reduced.
And then in terms of the labs: without a laboratory information system, using the analyzers directly is extremely slow, so your throughput for labs is greatly reduced. And how do you get the information back to the folks who need it? Obviously, there’s the person who ordered the lab – hopefully the lab knows who that is – and you can use runners, maybe a vacuum tube system if it’s in a hospital. But what about the other folks who need to know that lab result – consultants or others who care for the patient? There’s no way for the lab to know who they are. And in the absence of an electronic health record, how are they supposed to access the information? It all becomes extremely problematic. And our phone systems are not analog; they’re IP, so they go down too. How do people call each other? What about directories? Who’s on call right now for this specialty, and how do I reach them? Cell phones will still work, but do we have each other’s cell phone numbers?
You have to start thinking about preparation this way. What will we still have? What will we not have? How can we make the best use of it? Take something as simple as this: every single day, every hospital prints out at least one copy of who’s on call and what their cell phone numbers are. Then, if that’s the day they get hit, somebody runs to the copy machine, makes a thousand copies, and starts handing them out to everyone. These are simple mitigations, but you have to think this way.
Anthony: Right – and the copier has to have ink, and it has to have paper, and someone’s got to make sure of all that. But yes, I understand what you’re saying. One more question – I’ve already kept you over time here – but I am interested in the state of patient care in a cyber event, which you just discussed. You went into different scenarios about the challenges that could come up. The state of patient care could be very fluid: what do we have access to? What don’t we have access to? What can we do for patients? What are the limitations? What’s the impact, and how does that affect the decisions around diversion? So as an entity, you’re saying, well, here’s what we can do.
You also want to be informing patients who are either at the system, or maybe coming to the system, of the current state of care and possible increased risks based on how things have been affected, so they can decide whether they want to come to your entity and receive care under that situation, or go somewhere else. So that’s a bit voluntary – but then it could also be a decision by the health system that says, based on the current environment, we are not accepting patients for the next six hours because we can’t do A, B and C. All these things are interesting to me – fluid, but real. Are these not going to be real issues in a real-time situation?
Dr. Liederman: Well, let’s take these – these are two separate things. One is, what do you tell everybody and secondly, what divert decisions should be made, right? So in terms of communications, my perspective as an individual, I’m not a lawyer and I’m not the CEO of any of our companies that make up Kaiser Permanente but my view is that we need to be transparent if we get hit. We need to tell everybody what’s going on so that everybody can know and be fully informed and make the best decisions they can make. In terms of diversion, it isn’t as simple as simply saying we’re going to divert; we’re not going to accept ambulances; we’re not going to accept patients into emergency departments, whatever. These are decisions that are made in conjunction with county and sometimes state leaders, right? Because if everybody goes on divert – let’s just take that example.
Let’s say every hospital in a county goes on divert. Well then, where is anybody going to go get care, right? This is not just the decision of the organization or the decision of the hospital, as I understand it. I’m not in the emergency operations that way but this is what they tell me. This is what my colleagues tell me. So it’s really a collaboration.
So my colleague, Christian Dameff, who is an emergency physician down at UC San Diego, recently published a very well-done article on the impact of the Scripps ransomware attack on the UC San Diego Emergency Department, clinics and hospital, right? Basically, Scripps, a five-hospital system, got hit and they had their electronic health record down for about a month, like pretty much everybody else. And the impact was pretty widespread. (see healthsystemCIO’s Q&A with UC San Diego Health CISO Scott Currie & Medical Director of Cybersecurity Christian Dameff)
Now I knew the impact informally on our organization. We’re also in San Diego, Kaiser Permanente, but we generally see mostly our own members. And so, we’re a little bit more isolated that way. We’re a closed system, although under EMTALA anybody can walk into our emergency department. But certainly, UC San Diego documented they were hit really hard and it really overwhelmed them in some cases. And this is a five-hospital system in a large county. Well, what if a large system gets hit and goes on divert and they represent maybe 40, 50 percent of the patient care beds in a county? Well, that’s just unabsorbable. You can’t double everybody else’s volume of patients and expect them to be able to deal with them.
Anthony: So it’s very interesting and yeah, you make a good point that the collaboration is even larger. There may be even more voices involved in those decisions than just your cyber insurance company and the health system leadership. It could be more folks than that, like the FBI. So it’s really interesting. Dr. Liederman, I’m going to give you an opportunity for any final thought. Any final piece of advice you want to give folks out there? We talked about a lot of stuff today. Any final piece of advice for your colleagues?
Dr. Liederman: Yeah, to my cybersecurity colleagues, thank you. And I encourage you, if you work in a health system environment or healthcare environment, please find people who are my colleagues to partner with. It will be to your advantage and the advantage of the organization. And one secret advantage, and it isn’t so secret because I actually gave a joint talk with our Chief Information Security Officer at the HIMSS Global Conference this last spring on this topic, is that it’s actually good job security. Because if you’re making decisions jointly with clinical and other leaders, and the decisions end up not being the best and things go wrong, you’ll keep your job, because it’s a joint decision, right?
Anthony: Yeah.
Dr. Liederman: And then to my colleagues who are on my side of the fence, I would say, step up and partner actively with your cybersecurity colleagues. It’s to the advantage of everybody.
Anthony: Great, great advice. Wonderful talk, Dr. Liederman. Thank you so much for your time today.
Dr. Liederman: Well, thank you and thanks for inviting me.