As a practicing ER physician, Christian Dameff, medical director of cybersecurity at UC San Diego Health, is one of the first in the nation to bridge the clinical world and cybersecurity, bringing credibility and understanding of cyber efforts to the clinical team. Dameff works closely with CISO Scott Currie, and in this interview with healthsystemCIO Founder & Editor-in-Chief Anthony Guerra, the duo talk about their excitement at teaming up, while also discussing how ChatGPT, machine learning, and other emerging technologies will shape the challenges in cybersecurity going forward.
Podcast: Play in new window | Download (Duration: 54:26 — 37.4MB)
TOC
- A Unique Role
- The Light Bulb Moment
- Regional Ransomware Effects
- Keys to Communicating with Clinicians
- The Importance of Keeping it Real
- Educate, Don’t Order
- Wading into the AI Pool
- Attacking Integrity
- Social Engineering Going Strong
- AI on the Vendor Side
- Evidence-Based Cyber
Anthony: Christian and Scott, thanks for joining me.
Christian: Happy to be here. Thank you for the opportunity.
Scott: Thank you, Anthony.
Anthony: All right, great. I usually ask you about your organization and roles. Christian, I’m going to put you on the spot first. You’ve been there longer. So can you explain about the organization a little bit, and your role? And then Scott, tell us a little bit about your role.
Christian: UC San Diego Health is a large academic quaternary healthcare delivery organization in San Diego, California. We operate a myriad of clinics out in the community, as well as two larger hospitals – one in La Jolla, California, and one in downtown San Diego. We are a very large organization with lots of employees, but we are also closely integrated with our academic counterpart across the street in La Jolla, the University of California, San Diego. And so we have a large undergraduate and graduate population, and quite an impressive number of engineering students and others who all play a part in this conversation moving forward. So we’re a large academic healthcare center. And generally speaking, I think we’re very much forward thinking. As I think we’ll get into later on in this discussion, we’re looking to shake things up. That’s our brand and our excitement around that area of innovation.
Anthony: And Christian, you have an interesting role, which I don’t think a lot of organizations have, and maybe today we’re going to learn why they might want to think about it. But tell me a little bit about your role.
Christian: So I’m an ER doc by training. So most of the time, when I’m at work, you’ll find me in the trenches of the emergency department. But I also serve an operational role here with Scott as the medical director of cybersecurity. It is a very unique role. In fact, to our knowledge, it’s the first one in the nation to put a clinician – someone like a nurse or a doctor, someone who takes care of patients – in a cybersecurity role, where I help straddle, if you will, the intersection of clinical care and cybersecurity. So what I like to say is, it’s where bits and bytes meet flesh and blood.
What are the implications of cybersecurity risk, not just to patient data, but also to patient well-being and patient safety? And so, I work with Scott and the rest of our security team to try to help bridge those gaps, translate between those groups, as well as socialize and encourage the adoption of new security controls. So if we roll something out – traditionally speaking, many clinicians are resistant to new cybersecurity controls; they think it’s a drag on their workflows, whatever it may be. And so, often, I can go out as an advocate for certain new cybersecurity initiatives and help that particular resistant group understand the importance of them and help with adoption. So that’s just one of several things I do here at UC San Diego. Every day we’re learning more about ways I can help the organization. It’s an amazing opportunity to work with someone like Scott, where we can identify new ways that collaboration can be useful. I’ll turn it over to Scott.
Anthony: Scott, a little bit about your role, please.
Scott: Yes, thanks very much. So I’m the Chief Information Security Officer, which really covers all aspects of our cybersecurity operations within UC San Diego Health. So between myself and my team, we’re really focused on ensuring that we’ve implemented all of the administrative and technical controls to ensure that we’re in a position to be able to deliver on our core mission, ultimately, which is patient care and healthcare delivery. So my team really looks at how do we get the technical cybersecurity controls in place to be able to manage the risks associated with breaches and ransomware, and so forth. And we do that across our health campus.
And so we work, obviously, within our clinical areas, within our administrative areas, but also with a lot of the areas of our health system that are very focused on research and thinking about how do we build a cybersecurity strategy that allows us to be thoughtful about how we manage and mitigate the risks associated with ransomware and data breaches and all those things that you hear in the news – but also with a really sharp focus on making sure that we’re ultimately able to deliver on our primary purpose.
So what we don’t want to do is get in the way of our ability to deliver patient care or get in the way of our ability to deliver on our research mission. And so it’s fantastic for us from a cybersecurity perspective to have somebody like Christian on the team who can work with us to understand not only what it is we need to do to support our clinicians, but also to think about how the things we choose to do may impact our clinicians, so that we can build a model that allows us to be very agile and resilient from a cybersecurity perspective, but also ensures that ultimately we’re able to support our providers and our researchers and our faculty in meeting the mission that we have as an organization.
Anthony: So Christian, to me, your role makes me think of why the CMIO role was developed. It was for the CIOs to have an interpreter to help them communicate with the clinicians. Well, now cyber is so important, it’s almost like your role was developed so the CISO has an interpreter to deal with the clinicians – to have someone who walks in their shoes, has the credibility. Both positions are about credibility with the clinicians, especially since you practice, right?
Christian: I definitely think that they share a lot in their overlap. The CMIO is not only an important interpreter between the CIO and the clinicians; because they possess that clinical knowledge, that street cred, if you will – they’re actually taking care of patients – they’re able to uniquely identify risks and opportunities that the CIO might not, because the CIO doesn’t understand the clinical workflow. So it’s not always just about translating between those two; it’s also about identifying opportunities and identifying risk. Typically speaking, most CMIOs are born out of a training pathway in clinical informatics. They’re used to electronic health records, they’re used to PACS imaging systems, etc. And really what they bring to the table is a little bit of technical chops and data chops.
I see this role as a medical director of cybersecurity not only as a translator, not only as an identifier of risk and opportunity, but also bringing to the table on the cyber technical side a set of skills that is born out of research. And this is something that Scott mentioned, we have a big research focus here at our institution. There are tools that doctors are trained in, whether it be through their undergraduate education or their medical school training, which teaches them how to look at problems and develop experiments, collect data, analyze these types of things to a pretty significant rigor. I think one of the great opportunities in this role is that I’m able to take that paradigm and apply it towards operational cybersecurity and say, “Why are we making this decision versus this decision? Can we collect data, analyze it critically, and make a better informed decision about how we roll out a new security control, how we’re allocating certain resources?”
So it’s a translator, as well as bringing a new set of skills to a domain, which may or may not be represented at your organization, a lot of other places don’t have something like that. So I really have seen it as a nice primordial ooze if you will. There’s new elements in this collaboration. And every day, we see something new come out of it.
Anthony: Scott, it’s super interesting what Christian said about identifying cyber risks from his unique perspective. He’s in a position to see cyber risks that you might never see.
Scott: Yes, I think it’s tremendously valuable. As a CISO, I’ve worked in academic healthcare for years now. But that work in academic healthcare never really gives you true insight into the day-to-day of a clinical workflow. You can spend time with the clinicians, you can do rotating shifts on various clinical wards to try and understand a little bit better what’s happening. But you never really have that true in-the-trenches insight into what’s going on from a clinical perspective. And the more systems we have that become interconnected and network connected, the more those risks to clinical care are really only going to come out of getting the insight of what the physicians are seeing on a day-to-day basis, and what impact a cybersecurity event could have on our patients and the people under our care. So I think it’s tremendously valuable to have that clinical insight with a cybersecurity overlay, which gives me and my team insight into clinical operations in a way that would otherwise be much more difficult.
And certainly over the years, I’ve worked with a lot of CMIOs, and to varying degrees, they’ve been successful in helping to do some of that translation that you talked about. But I think having a physician with a true understanding of cybersecurity and cybersecurity risk that we can collaborate with, to try and understand how we move our organization forward, is extraordinarily valuable. And it’s really important for us to think through how we can take advantage of the two roles that we have – security operations and clinical cybersecurity expertise – which creates new opportunities to think about cybersecurity in different ways that I think are unique to what we have here at UC San Diego Health.
Anthony: Yes and it’s not only about identifying risks from your unique angle, but also suggesting the best ways to roll out controls. So Scott says, “Hey, we’ve got to put in multifactor authentication, or we’ve got to roll this stuff out.” And Scott may have a particular idea about the best way to roll it out. And he runs it by you. And you say, “Well, that’s not going to work with these folks. Here’s how we want to do it.”
Christian: I completely agree. I will just say Scott’s already pretty fantastic at the messaging side of this. So I don’t have much to add on that point. But I will say, the secret sauce, I think, in this role is not only how do you message it, but how do you roll it out? And how do you make sure at the end of the day it’s done in the least obtrusive way? But it’s also in conveying the importance of the message. And one of the cool things that we’ve been able to do here – Scott and I – is translate what this particular cybersecurity risk means to patient care. This is the light bulb moment.
If you go to a clinician who every year has to do an annual, required cybersecurity training, they may get some phishing education, or whatever it’s going to be. And every time they think about cybersecurity, they think burden – it’s just another required training. It doesn’t mean anything to them; it doesn’t really help them in their day-to-day mission of taking care of patients. If you approach it in that same way, you’re going to fail, you’re going to have that resistance, and you’re not going to be able to roll out new programs. If you want to improve the cybersecurity of your medical devices, for example, you’re not going to get clinical buy-in if you approach it as a data security, HIPAA, same-old conversation.
Instead, what we’ve done here is, “Let’s talk about ransomware. As an example, doctor, you’re taking care of this patient, and you need this critical service, this critical medical device, to take care of the patient right in front of you. That patient has a bleed in their brain. If you don’t have a CT scanner, you can’t treat them, and if you don’t treat that patient, they are probably going to die or probably not going to be able to walk again – there’s going to be some type of adverse patient outcome if this particular cyberattack is successful. This mitigation, this control, will reduce our risk and, at the end of the day, make the patient you’re taking care of safer and make you less susceptible to liability and other concerns.”
When you translate that risk from the old data security paradigm to patient safety, and do it in a convincing way, speaking their language with realistic scenarios, that’s a light bulb moment. Clinicians turn from fighting cyber mitigations to advocating for them in their own departments. These are no longer just burdens that don’t mean anything. We’re really so connected and so dependent on these systems. It’s a requirement now, and it’s a patient safety issue. “I’m on board.” That translation has been really important. And it’s really been the secret sauce. In a lot of ways, we have to speak in the clinicians’ language of patient safety and not be too sensationalized, not use scenarios that are just out of this world. You really have to speak the language; use something realistic. They become fans of your work almost overnight.
Scott: And I couldn’t agree more with that. I think it’s really about taking what has traditionally been an IT conversation about firewalls, antivirus and things like that, and changing the way we discuss these risks. It’s not a technology conversation; it’s an organizational risk. It’s an enterprise risk. And it has a direct and measurable impact on our ability to deliver patient care, and therefore, on our ability to keep our patients safe.
And I think that’s really the way we need to be having conversations in cybersecurity and healthcare. It’s not about the technology, but really all about the impact of these major events on us as clinicians, on us as a healthcare delivery organization. And when you start having conversations in that context, as Christian said, you can see people’s eyes light up, and the realization comes that, okay, this is not just something that’s an IT problem, this is an all-of-us problem, and how can we participate in trying to make the organization safer.
And so I think it’s really important that those conversations are happening. And I think that’s something that we do really well – is making sure that messaging is there – those conversations happen. Christian can bring that credibility that you mentioned earlier. Because he is a physician, he understands their problems, but he also can understand the impact that cybersecurity can have on patient care and can evangelize that for the rest of the organization at the physician and the provider level.
Anthony: So you mentioned ransomware. We talked about getting through to clinicians and the light bulb moment, connecting it to patient care. We need to do that in an authentic way, and we know it’s authentic because we understand the connection. Christian, you were involved – with Christopher Longhurst, who’s your CIO, correct? – in the study on ransomware that showed there is a regional effect when ransomware takes out a particular hospital. There is a cascading effect on a larger geography.
Christian: Thank you so much. And Scott was a part of that as well. If one of our hospitals gets ransomed – not one of our individual hospitals, but any of our collective hospitals in the United States – and you want to go in and ask what happened, collect data, and try to share that, it’s really hard. There are legal concerns. There is our inability to measure things, because the way you measure things is inherently electronic. What I mean by that is we measure patient safety using tools like the electronic health record, but when you’re ransomed, often those things aren’t operational. So there are legal concerns why people don’t share information. There are technical concerns – we don’t have the data anymore because the system we used to collect it was ransomed. And there are a few other reasons why it’s really hard to measure what happens at a hospital that was impacted specifically.
Our study tried to do the second-best thing: it measured what happened in the periphery – what happened to the hospitals around hospitals that were ransomed. And we saw some pretty significant impacts. In our emergency department, we saw a lot more patients, we saw a lot more ambulances, we saw a lot more patients suffering strokes – a lot more of them came to our hospital because they couldn’t go to the hospitals that were ransomed.
What that means for us moving forward in the healthcare cybersecurity world is that you can no longer just focus on your own resiliency – you’re only as safe and as secure as the region around you. You might do a great job, but maybe your partner doesn’t, or maybe you’re the one not doing so well. You work in a symbiotic geographic location, and that spillover effect can be significant. I would argue that we should really be working together, not just at our own organizations, but building geographic resiliency. We need to be talking about healthcare in regions and what would essentially happen to the load balance if these hospitals were attacked and these ones weren’t. How do we prioritize care for really sick patients like those with strokes and traumas and heart attacks? Because at the end of the day, if we don’t take that preparatory step to build out those networks of collaboration, cooperation, and real-time information sharing, people won’t stop getting sick just because there’s a ransomware attack in town. They’re just going to overwhelm the hospitals that aren’t on diversion, that aren’t being attacked themselves. And that can essentially have the same effect.
Anthony: So Scott, I can’t imagine there’s much you can do in terms of getting your partners around town to take care of their cybersecurity a little better. I don’t know if CISOs from different organizations in a region are working together.
Scott: Yes, so I mean, it’s interesting. I think that for years in the CISO community there has always been this assumption that there were going to be regional impacts. At organizations I’ve been with in the past, when ransomware happened, we talked to our colleagues that had had a ransomware event. And we could see that there was some change in our regional preparedness in terms of the number of patients coming in. But it was always anecdotal. And it was always assumed.
I think one of the really interesting things that we’re able to do here by bringing that research focus to some cybersecurity problems based on real data, is to truly create some evidence that says it does need to be much more regional. And so I think, historically, within any given region, CISOs would certainly talk to one another, and we’d have conversations, and we’d talk a little bit about what we’re doing and some of the threats that we’re seeing, but it was largely informal. I think what this study is starting to demonstrate is that we need to go a little bit beyond that informal collaboration and start thinking through, are there really meaningful regional preparedness pieces of work that we need to do in order to prepare us all?
And so I think the opportunity is there for us to start doing more in this way. And I think the fact that we now have some true data that outlines what this looks like within a region when ransomware happens is going to make it much easier for us to begin to have those conversations. Certainly, there are challenges that will still exist in terms of how much we share with our colleagues and competitors. How much data can we give them? Can we really talk truly about our threats and vulnerabilities? But I think we need to look for ways to move a little bit beyond that. And that may not necessarily get to full collaboration within a region. But I think if we start talking about how we collectively prepare for something that might happen, it gives us all a much better chance to be ready when something does happen. And I think that’s ultimately the message: a lot of this stuff is no longer hypothetical. This is real. It happens all over the country. It feels like every day somebody has a ransomware event. We can now see what happens when that occurs, and we need to start really talking about how to fix it.
In previous organizations I’ve worked with, we’ve started down that path. And we’ve started building some regional security capabilities. And so it is possible. And I think now we really have the ability to say, ‘Here’s why we need to do it.’ And I think that’s going to open the door for a lot of us to start working more closely together.
Anthony: The idea of competition versus collaboration is really interesting, because I can envision a day when cyber is advertised by a health system as a competitive advantage.
Scott: Yes, it’s an interesting challenge, for sure. And I think it’s one where we do have to walk a bit of a fine line, because I do think cybersecurity is going to increasingly become something that’s talked about – not necessarily from a marketing perspective, but as a capability within a health system. And certainly from a UCSD Health perspective, we’re really trying to set ourselves up as leaders in this space, because obviously it’s good for us and our patients.
But I think more importantly, how can we contribute to healthcare in the nation as a whole? And I think that’s what comes out of some of the research that Christian has been doing. How do we learn from these events and give back to the healthcare community? So I think you can, to some degree, have it both ways by walking that very fine line where we can say, “Listen, we’re really at the leading edge of cybersecurity. And because of that, we really want to take advantage of our knowledge and work with our peers and our partners in the region to try and help us all be better.” And I think if we can manage that conversation, it’s ultimately to the benefit of everybody.
Now, does that mean that we’re going to get there by ourselves? Not necessarily, I think we need to start thinking about how regional, state and federal government can play a role in building out some of this infrastructure to try and create better regional resilience. But I think we can start having the conversation now in a much more meaningful way. And I think there’s opportunities for us to evolve this in a way that is really good for all of us.
Keys to Communicating with Clinicians
Anthony: Excellent, excellent. Christian, I want to go back a little bit to that translator role. And I would like your best advice to security professionals in terms of communicating with clinicians.
Christian: That’s a great question. I think there’s a lot we could do here. And Scott, please feel free to chime in. I would say it’s not always about seniority. You have to know your organization. For example, academic medicine departments tend to be very powerful little areas, each headed at the very top by a chair. And so the first piece of advice I’d give you is to understand how your organization is structured and where the power and decision makers lie.
If you’re at a community hospital, departments are going to be much less powerful. And so if you need to make some big organizational change and you’re going department by department, you’re probably going to spin your wheels for a long period of time. You’re not in a position to make any effective change at that level. Now, you may, at the end of that, have to do some socialization and win over a few hearts and minds. But for the most part, if you’re a community shop, your departments probably aren’t going to be that strong; you’ll need to go higher up the food chain of clinical leadership. I would approach someone like the CMO or, if you have a medically oriented CIO, possibly them.
If you’re at an academic medical center, where departments are very strong, the second piece of advice I would give is that you should not always go straight to the top of the department. If you think, “Hey, I need to go talk to the emergency department about their medical devices and how critical their CT scanners are for trauma, and there’s an issue with a known vulnerability in this particular medical device” – if you go straight to the chair, you’re unlikely to get a meaningful response. So what I would do is find out who in the particular department would be most receptive to your message. What I mean by that is, usually go for the more technologically oriented, younger leadership – someone who can understand what you’re talking about – and it doesn’t necessarily always have to be at the chair or vice chair level.
So that’s my advice. Understand your shop, know where the power structures are, realize you don’t always have to go to the top, and find a sympathetic ear on the clinical side. Are those out there? There are more and more people out there now practicing medicine who grew up with the technology. I jokingly say I’ve never used a paper chart – I’m just of the generation of doctors that has never used paper charts. Until six months into my fellowship, I had never handwritten a prescription. As more and more of these younger, more technologically oriented and technologically familiar doctors come up, that message of cybersecurity as patient safety is going to find more allies.
The Importance of Keeping it Real
The third thing I would say is that if you sensationalize it – if you pick some crazy scenario to try to communicate the cyber risk – it’s going to turn people off. That’s one thing clinicians generally do not tolerate. If you say, “Here’s this crazy scenario where someone’s going to die of a pacemaker hack, and by the way, I need you to do all this work rolling out MFA because I don’t want this person to die of this pacemaker hack,” and you’re talking to a cardiologist, that’s a losing conversation. So be careful when talking about patient safety impacts; you really have to speak with some legitimacy. There are some resources available. There are videos of some of the clinical simulations we do here at UC San Diego on YouTube; those have all been vetted to be very medically accurate, as well as cyber accurate, and not over the top.
A one-two punch in a video can win some hearts and minds. So don’t make the mistake of going really big, saying everything’s going to be catastrophic, and using unrealistic medical terms, because you’ll turn off that community pretty quickly.
Scott: I completely agree with everything Christian said. The last thing I’d add is that, within healthcare, it’s always going to be a losing battle to go in telling people what they need to do. I think the key really is in how we build collaboration with our clinicians – involving them in the decision-making process of what it is we need to do and why we need to do it. So patient safety is the opener to that conversation: “Hey, here’s a risk.” As Christian said, “It’s a realistic risk. We’ve seen this, we know this happens, we need to do something about it. Let’s work together to figure out the best way to do it,” and talk through what that means to them and how it can be implemented, and involve them in the decision making.
I think that is going to lead to a much more positive outcome. When you talk about MFA, it’s one thing to say you’re going to do MFA, and all they’re going to be thinking is, “This is just going to make my life harder.” But if we get into the decision-making process, involve them in it, talk about what that workflow is going to look like, talk about where the changes are, and think about ways to mitigate the impact of that change – such as maybe not asking for the MFA challenge response every time you log in, but doing it a couple of times through a shift to revalidate.
There’s ways we can approach this that allow us to collaborate with our stakeholders to have them buy into what we’re trying to do and the reason we’re doing it, but more importantly, buying into the way that we’re going to do it, so that the rollout then becomes much more supportive, as opposed to me saying you have to do this. It’s saying, “This is something we need to do as an organization. And here’s why it’s a good thing for us to do.” So I think that, to me, is really, really important to make sure that the collaboration is at the forefront of all decision making.
Anthony: Excellent, very good. Scott, I have heard that your organization is doing quite a bit around AI and ChatGPT, and things like that. Is there anything you could tell us about what’s going on there, and obviously the security angle of it? Ironically, a lot of CIOs are working to lower expectations because some of this stuff may not be ready for primetime. What are your thoughts?
Scott: I think there’s a lot of really interesting work happening in the AI space within healthcare. And obviously, from a marketing perspective, the companies that are building out these AI models are selling a dream about how amazing this can be across healthcare. And I think there is a certain amount of tempering of expectations. But there’s also an opportunity there that we think we can get involved in.
And so, we are starting to think about how we pilot different use cases for AI within our healthcare organization. And, as you say, there’s a lot to be concerned about when it comes to AI. We’re having to build guidelines around how our organization interacts with large language models and generative AI to make sure that we’re really using them in the most appropriate way. So what we’re really trying to do, as opposed to going all-in on GPT in the public space, is think about how we build something that is private, secured, and within our infrastructure, but that leverages the capabilities of these large language models.
And so that’s really what we’re piloting: working with Microsoft and other partners to bring those AI models in-house, into infrastructure that we understand, control, and can secure, making sure that all of our security controls are built around that, but then allowing those models to interact with our EHR in a way that lets us start to get some value out of this. And really what we’re trying to do is look at some small, discrete use cases and see how this is going to work in those use cases – really prove that out, and then think, okay, how do we move this to the next use case, and the next, as opposed to saying AI is a big bang and we’re just going to invest heavily. I think that’s a mistake, because it’s very easy to get overwhelmed by all of the possibilities of AI and, as you say, the organizational expectations around it.
But if we can take baby steps and say, here’s how we prove it, here’s how we secure it, here’s how we can demonstrate that there’s value in these AI models – clinical value – then I think that allows us to start getting more and more excitement building around it. And I can tell you, there is a lot of excitement within the UC system and UC San Diego in particular around, how can some of these AI models help with physician burnout? How can these AI models help with our ability to communicate better with our patients?
And throughout, we’re thinking about all of the security that needs to be in place and making sure that we’re working with our vendors, we’re evaluating the security models, we’re following all of our best practices around how we assess new technologies. And we’re really just doing it in a way that allows us to make these small investments in AI, secure those small investments in AI, learn about how it works for us, and then begin to grow it over time. But there’s a ton of opportunity, and we’re seeing that opportunity. And we’re looking forward to seeing what AI can do for us as an organization.
Anthony: Very good. Christian, your thoughts around AI? People might be asking you, doctor to doctor, “Am I going to lose my job?”
Christian: I think there are some significant implications for a lot of what doctors do currently and how AI will impact that. I won’t forecast much on that, because I think it ends up just being one specialty of medicine beating up on another. Everyone’s saying radiologists are going to become extinct. I really don’t think that’s likely the case in the short term. But there’s clearly some of what doctors do every day that can in the very near term be replaced with what AI is promising. And I agree, I want to echo Scott’s sentiments that we should be really careful because they are really selling the dream; we need to see these things at scale. And what I’m really excited about and hopeful of is that the data will actually show clinical efficacy and improvement.
Attacking Integrity

But I’m going to take a sidestep for a minute and say I grew up really in the hacker space. And so I often want to think about things in an adversarial way. And in some of the early research I did about five years ago, we were looking at man-in-the-middle attacks on laboratory values. So we built a tool that would make a patient look like they had a particular disease when they didn’t by changing laboratory values that were coming out of medical devices before they hit the laboratory information system. We’ve always talked about confidentiality attacks in healthcare and the breaches; we’ve always talked about availability attacks in healthcare, with ransomware and Distributed Denial-of-Service (DDoS) attacks. But the forgotten part has been this integrity attack issue, and we haven’t really spent that much time with it.
I think AI is honestly one of the superchargers for a practical integrity attack on healthcare. Let me give you an example. If you’re going to try to change someone’s dosing of a particular medication to cause some type of harm or chaos, you need clinical insight to make that believable. If you change an order in a computer to say instead of giving a patient four milligrams of morphine, which is a normal dose, to give them 400 milligrams of morphine, the nurse says, I’m never going to give 400 milligrams of morphine, even if the computer says it’s right. So how do you craft believable changes in laboratory values and medications, and cause some type of chaos or issue as an integrity attack in healthcare? I think AI is not only the basis for doing that in a believable way and in an incremental way, but in a scalable way. The only way you’re going to be able to perform a hospital-wide integrity attack that’s believable is by leveraging something like AI. And so I’m a little bit terrified about leveraging that type of thing in an adversarial way, when we apply it toward integrity attacks where we’re changing lab values, radiology images and medication doses. So I think that’s really going to kickstart the next generation of healthcare cybersecurity researchers – AI plus integrity attacks.
Anthony: So you can almost envision the query, the ChatGPT query, that would deliver those numbers to the bad actor. So they’re crafting phishing emails that are no longer obviously phony. And they would be using it almost the same way. Is that right, Christian?
Christian: Yes, exactly. “ChatGPT, make these laboratory results show a patient developing diabetic ketoacidosis.” You as an attacker may not have any idea what the sodium and the potassium and the pH are supposed to be, to be realistic. But ChatGPT could probably figure that out. And if you ask it that, all of a sudden you take an attacker who knows nothing about healthcare and makes very unconvincing integrity attacks, and you give them all the information they would need from a doctor on their team to make these attacks meaningful and particularly convincing for physicians to act on and potentially give too much medication, medication the patient didn’t need, etc. So I agree, they can leverage that information to fill gaps in their clinical knowledge to make it a much more effective attack.
Social Engineering Going Strong
Anthony: Scott, in order to do this, they have to be in the network, and then they can start screwing around. If they’re not in, they can’t do it. Are they mostly getting in by pure hacking, or are they fooling people into giving up their credentials? There are generally two ways, right? With human help and without?
Scott: Yes, there are. And that’s historically true. But I think what we’re seeing is that mix change a little bit over time. What we are seeing right now is really much more of the social engineering type attacks, which are the primary vector by which the bad guys are getting in. And to the point that Christian mentioned earlier on the AI front, that’s increasingly a concern. How do we arm our users with enough knowledge to be able to recognize those phishing attacks? When we look at the statistics, in terms of the number of attacks that we block on a daily basis, it’s staggering the number of phishes that come in. We have great technology that helps to manage that, but it doesn’t catch everything.
And so, increasingly, we’re looking at how we improve our training and awareness for our users so that we can manage the risk around those phishing attacks. Now, that’s not to say that we don’t have other types of technology risks, whether it’s the medical device risks, or the supply chain risk we’ve seen over the last couple of years, with software providers being hacked and malware embedded into otherwise legitimate packages. All of those things continue to exist, and really need to be part of our overall strategy for cybersecurity. But the vast majority of the breaches that we’ve seen, and the vast majority of the threats that we see, are really coming through those end user attacks, whether it’s general phishing or spear phishing. And I think spear phishing is one that’s going to be increasing.
As more and more of these AI capabilities come, bad actors can start to leverage something like ChatGPT and publicly available information and databases to really target individuals in a way that we haven’t seen before. So I think it’s really going to be critical that we start to think about arming our users with enough knowledge to be able to have less trust in a lot of these emails that are coming in. But again, it’s a very fine line to walk, because we also need to make sure that people are able to communicate and collaborate. So it’s an interesting shift that we’re seeing. And I think it’s going to shift even more with the use of AI tools toward that spear phishing attack, but we still have to think about those technology challenges and build a strategy that ensures we can address all of those different facets of cyber risk.
AI on the Vendor Side

Anthony: Yes, and we’re hoping that the cybersecurity vendors are able to supercharge their tools with AI, so that it’s not just used by the bad guys, but by both teams. But that’s going to be on the vendor side, right?
Scott: Yes. And we’re starting to hear a lot of talk about that in the vendor space, which is how they can start using a lot of these AI capabilities. Over the years, you could see some of the vendors dipping their toes in a little bit and talking about ML models and AI models. But even just in the last six months, that conversation has ramped up with the explosion of ChatGPT onto the scene and what that’s done for everybody in the world thinking about these new models. I think vendors are going to have to take that step. If there are vendors that are not investing in AI and in how AI can work within their product suite, those are the ones that are going to get left behind, because it’s too powerful, and the adversaries are using it. And so if we’re not using it, we’re going to find ourselves in a very bad position.
Anthony: So Scott, you need to be hearing this from your vendors, right? It’s important to you, as the CISO, to be hearing innovation, AI, from your vendors, from your cyber vendors, like “Guys, I need to know you’re driving just as hard as the bad guys.” Right?
Scott: Yes, and I think it’s hearing it, but also seeing it. It’s one thing to have somebody come in with a sales pitch saying, “Hey, we’ve got an AI-powered this and an AI-powered that,” but I need to be able to see how that AI is going to actually contribute to improving my cyber defense. And I need to understand how their models are going to help defend against, and counter, adversarial AI.
(As an industry), we are getting into this space now where we’re not necessarily just trying to defend against human adversaries and the ransomware that they write, or the phishes that they write. With ChatGPT able to generate code, we can start seeing novel types of ransomware being written at a pace that would be unheard of. And so how do we keep up with that? How do we have the vendors that we rely on help us build our technology capabilities in cyber? How do they stay ahead of, or at least keep up with, the adversaries that are definitely going to be investing the time and the effort into learning these tools and using these tools in an offensive way? We need to be using the same tools in a defensive way, not only in terms of looking at telemetry and being able to extract value out of that telemetry, but also to build AI into a lot of our detection capabilities and be able to be much nimbler about how we react to potential behavioral changes in endpoints. And so there’s a whole host of different capabilities that I think are going to need to be developed and evolved. And I absolutely need to be hearing it and seeing it from my vendors to make sure that we’re staying ahead of these risks and issues.
Evidence-Based Cyber

Anthony: So Christian, we talked about how a lot of these attacks are coming in through social engineering, and how ChatGPT and things like that, used by the bad guys, are really supercharging those phishing emails and making them much more believable, much more convincing. What’s your communication been to the clinicians about this risk – to keep them up to speed on what’s going on?
Christian: Great question. I think that, to answer that question, I want to just harken back to one of the great strengths we discussed – the collaboration between a clinician and the cybersecurity team. And so one of the things that has always puzzled me is how we make a lot of decisions in cybersecurity with little or low quality evidence. We always think that training is a cornerstone of operational resiliency, right? We think, “Hey, as long as we tell people that they’re getting phished, they’re going to become better at not getting phished.” And when you ask them, “Well, show me where the proof of that is,” it’s often, “Oh, this is expert opinion, or this is anecdotal, or this is what we saw at our shop.” And it reminds me of what we did in medicine, 50 or 60 years ago, where your doctor would give you this medicine, but it wasn’t necessarily based on evidence, it was based on their opinion. The degree to which, with certainty, we can make decisions really has matured in medicine in the last 50, 60 years. Now, we’re far more evidence-based.
To answer your question about phishing, I think we really need to challenge our paradigm that says if you give people training, they’ll be better at it. We had better ask ourselves different questions, like: What is the best training to give them, and in what format? When do we give it? Is it right after they fail a simulated phishing exercise? Is it right after they click on a real phishing email? Is it supposed to be required? What is the tone? These are all questions that sit at the intersection of human psychology, cybersecurity and operations.
And you mentioned that clinicians, nurses, doctors, are generally quite busy. So if you just assume that a training solution is going to help them and improve the resiliency of your organization, really, I think we have a long way to go to proving that, and also to finding out what the ideal way to do this is. And so one of the things that we’re working on now at our shop is to put that to the test. What trainings work? What format? Does it need to be context specific? Does it need to be interactive? Is it better at 15 seconds or is it better at 30 minutes? These are things that we can test, measure and, at the end of the day, publish and share. That gives not just our organization the best roadmap to phishing resilience, but hopefully can be applied to every sector. And the only way you do that is through rigorous research methodology. It’s the only way that you can get that high quality data.
We’re an academic shop. We have these types of research chops; we have buy-in from the research side and the cyber side and the health side. And at the end of the day, what that’s leading to is this really cool initiative to test a lot of these training hypotheses and hopefully bring something to the rest of the country and rest of the world – this is the best way you train your users against phishing.
Anthony: Excellent. Well, we’re almost out of time. I want to give each of you an opportunity for a final thought – what’s your best piece of advice for someone in your position at a comparably sized health system?
Scott: Ultimately it goes back to collaboration. I think it’s making sure that you’re not looking at security as something you tell people to do, but something that you work with your team to figure out how to do. And I think what I would say for other organizations of our size and complexity, while you may not have a medical director of cybersecurity, it’s absolutely going to be in your best interest to find those physician champions that you can work with and collaborate with and help you to spread the word about why cybersecurity is important. Because I think it’s going to be really, really challenging to be able to make meaningful change unless you have buy-in from your stakeholders. And the physician group is one of those critical stakeholders that can be very difficult to reach if you’re not speaking in a way that is really meaningful to them. So certainly, focusing on collaboration and finding the right partnership within the physician group is extremely critical. And I think if you can do that, if you can find that partner or several partners, in some cases, I think that’s going to allow you to be much, much more successful and able to roll out the meaningful cybersecurity change that organizations need to be doing these days.
Anthony: Excellent. Christian, your final thought?
Christian: I guess I’d be speaking to other clinicians. I’ll say this: if you’re a clinician and you’re watching this, my sincere plea to you is, if you have any interest in cyber and you feel impostor syndrome, like you don’t know what all these technical terms mean, or you don’t have a certification like a CISSP, don’t worry. By showing up and sharing your clinical expertise with the cyber team and working together, you’re going to pick up a lot. You don’t have to have been a hacker before. You don’t have to be a seasoned cybersecurity professional before you can pick up some of this and still be a very effective contributor to your organization’s cybersecurity. And so my plea to you would be to do that.
And then secondly, there are a lot of benefits to doing so. Cybersecurity is a very marketable skill, and as healthcare continues to struggle in many ways and clinicians have burnout and other things that can really make your career particularly challenging, diversifying it with cybersecurity is a winning strategy. It’s a fulfilling area. It’s intellectually challenging. It’s very important. It’s exceptionally marketable. So clinicians out there watching this, you should convince your organization that you need to be their next medical director of cybersecurity. And don’t worry, you don’t have to have all the cyber chops up front. You’ll get there, and you’ll be exceptionally valuable to your organization.
Anthony: Well, I have no doubt that every CISO out there would love to have someone like you, Christian, in your position on their team, so to speak. So I would be very surprised if I don’t see more of these positions popping up at health systems around the country. Christian, Scott, amazing conversation. I want to thank you so much for your time today.
Scott: Thank you very much, Anthony. Appreciate it.