When it comes to cybersecurity events, large-scale ransomware attacks have traditionally dominated the headlines. Now, however, leaders are paying more attention to inside jobs, and for good reason. According to the Healthcare Data Breach Statistics report from the HIPAA Journal, unauthorized access and disclosure incidents are becoming increasingly difficult to manage. While the number of reported incidents in 2022 was at its lowest in seven years, the number of records compromised saw a significant spike, topping out at more than 7 million.
It’s enough to make anyone take notice, said Jesse Fasolo (Director of Technology Infrastructure & Cyber Security, Information Security Officer, St. Joseph Health) during a recent panel discussion. “People are looking at data they’re not supposed to. They’re obtaining or exfiltrating data, and it could be any internal employee.”
For leaders, it creates an extremely difficult situation — one that has to be managed delicately, quickly, and in cooperation with other stakeholders, noted Greg Garneau, CISO, Marshfield Clinic Health System. “We spend a lot of time working with tools to identify anomalous activity as it relates to access, and working with our partners within the health system. That’s the most important thing because you can’t be siloed. You all have to be playing out the same sheet of music in order to respond to events like this.”
During the conversation, Fasolo and Garneau spoke with Nick Culbertson, Co-Founder and CEO at Protenus, about the strategies they’ve adopted to help deal with insider privacy breaches.
The technology piece
Not surprisingly, the most effective approach includes a combination of people, processes, and technology, starting with the latter. Perhaps the most significant challenge is detecting unauthorized or inappropriate viewing of patient records, especially given the “wide span of entry points to gain access to these environments and to the data,” said Fasolo.
And while most organizations would like to be able to continuously monitor access to each and every patient record, it simply isn’t realistic. What often ends up happening, according to Culbertson, is that security teams focus their energy on mitigating serious risks. The problem with that tradeoff, however, is that most incidents don’t happen out of the blue. “If you look at an individual’s behavior retrospectively, you see that they did some benign things and built on them,” he noted. “They test the system,” realizing that low-risk incidents are far less likely to be investigated.
But that’s where the real threat lies, he said, noting that Protenus’ Protect Patient Privacy solution leverages artificial intelligence to audit “every access to every record, every day. And so, you get the balance of both worlds where you can automate the heavy lifting of monitoring, while also focusing on higher-risk incidents.” And although AI’s potential is still unproven in some aspects of healthcare, “privacy and compliance is an area where we have well-structured data and audit logs and very clear policies,” Culbertson stated. “It’s a unique aspect where you can get highly trained machine learning classifiers to be very predictive and helpful in these complicated investigations.”
Processes and people
And while in many industries, the simple answer would be to implement role-based access controls, doing so in healthcare could put patient care at risk by delaying caregivers’ access to critical information. “The nuances of healthcare make this much more complicated,” said Culbertson. Because data is “unfettered,” much more collaboration is required to determine whether an individual needs to access a record that may fall outside of the rule set — or, in fact, whether it was viewed inappropriately.
“There’s a fine line between malicious, criminal activity and accidental access,” said Fasolo. “My focus is on preventing malicious activity,” which his team has done by establishing trigger points within the EHR to send out alerts for suspicious activities, such as multiple records being accessed within minutes. When that happens, leaders must go through standard operating procedures to ascertain whether it was intentional or not.
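The kind of trigger point Fasolo describes — an alert when multiple records are accessed within minutes — can be sketched as a sliding-window check over an audit log. The threshold, window size, and log format below are hypothetical illustrations, not St. Joseph Health’s actual EHR configuration.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Hypothetical tuning values; real EHR alerting rules would be set by
# the privacy and security teams, not hard-coded like this.
MAX_RECORDS = 10
WINDOW = timedelta(minutes=5)

def find_alerts(access_log):
    """access_log: time-sorted iterable of (timestamp, user_id, record_id).

    Flags any user who views more than MAX_RECORDS distinct patient
    records within a single WINDOW-sized span.
    """
    windows = defaultdict(deque)  # user_id -> deque of (timestamp, record_id)
    alerts = []
    for ts, user, record in access_log:
        q = windows[user]
        q.append((ts, record))
        # Evict accesses that have fallen out of the time window.
        while q and ts - q[0][0] > WINDOW:
            q.popleft()
        distinct = {r for _, r in q}
        if len(distinct) > MAX_RECORDS:
            alerts.append((ts, user, len(distinct)))
    return alerts
```

In practice an alert like this only starts the standard operating procedure; as the panelists note, a human still has to determine whether the access was intentional.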
To that end, Protenus’ solution utilizes a “machine learning classifier that turns privacy investigations backwards by asking every possible question you could ask in a manual audit,” said Culbertson. It then generates a suspicion score that enables security specialists to zero in on questionable behaviors and gain the information needed to start the investigation.
“It’s difficult,” he said. “That’s why we’ve optimized our AI to consider so many different factors. We’ve optimized it for specificity rather than sensitivity, which means it’s more likely to deduce whether there was an actual violation.”
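The specificity-versus-sensitivity tradeoff Culbertson mentions can be made concrete with standard confusion-matrix arithmetic: for a suspicion-score classifier, raising the alert threshold typically increases specificity (fewer false alarms landing on investigators’ desks) at the cost of sensitivity (some true violations scoring below the line). The sketch below is a generic illustration of those metrics, not Protenus’ model.

```python
def sensitivity_specificity(labels, scores, threshold):
    """Sensitivity (true-positive rate) and specificity (true-negative
    rate) of a suspicion-score classifier at a given alert threshold.

    labels: 1 = confirmed violation, 0 = appropriate access.
    scores: suspicion scores in [0, 1]; accesses scoring >= threshold
    are flagged for investigation. Purely illustrative.
    """
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity
```

Optimizing for specificity, as Culbertson describes, means choosing the operating point where a flagged access is very likely a real violation, which keeps manual investigations focused.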
And that, in many ways, is only the beginning, Culbertson added. “It’s one thing to detect a breach. It’s another to collaborate across the organization in order to resolve that and operationalize your policies with all the different constituents in order to maintain patient privacy holistically.”
Line of questioning
One of the determining factors in achieving that goal is being able to find answers as to why an individual acted in a certain way, according to Fasolo and Garneau. And that doesn’t happen by hurling accusations at busy clinicians. Rather, the interview process must be handled delicately and precisely, they said, sharing the following pieces of advice:
- Get your ducks in a row. “Physicians are so busy and so focused on patient care. If we’re going to come to them with information or questions about activity we’re seeing, we better be darn sure we have our ducks in a row,” said Garneau. “Do your analysis and do your investigations first.”
- Take it easy. Going in hard is a quick way to derail the conversation, he added. “You don’t want to go in and say, ‘you’re in trouble’ and wag the finger. That’s no way to do business.” The better approach is to say, “We saw a miss. What was the reason for it?” and to talk about what constitutes appropriate access. Fasolo agreed: “It’s a sensitive conversation, not just with clinicians, but with anyone.” His advice is to inform the person being questioned that their actions have triggered alerts, and ask them to explain. “Let me understand how this is occurring. Maybe it’s a misunderstanding.”
- Pull in HR. If the initial assessment and investigation indicates malicious activity, Fasolo advised working with human resources to suspend the individual’s access and pull all the necessary records and information. “Go through all of it and have a collaborative discussion” that includes compliance, privacy, HR, and any other departments who should be involved. “The discussion needs to be open and transparent in order to figure out what happened.”
- Be armed with information. When an act has already been determined to be malicious, being ready with all the facts is particularly important, according to Culbertson. “It’s important to have all the information at your fingertips so that if they say, ‘I walked away from my computer at that time and someone else accessed it,’ you can come back and say, ‘No, you didn’t. We’ve done the research,’” and provide specific data. “Being able to approach an investigation that way ensures that you get a clean resolution to the case, and you mitigate the risk as much as possible,” he said.
- Don’t wing it. Garneau concurred, adding that it’s important to get a statement in these types of situations, even if it’s contrary to what you know is the truth. That way, “if you have to bring in law enforcement, you already have that information,” he noted. “It’s a learned skill. You can’t go into this without having the information at your fingertips. Winging it doesn’t work.”
Neither does assuming all violations are intentional, said the panelists. In fact, in many cases they can be traced to a lack of education. “It’s a huge part of what we’re dealing with,” said Garneau, “and that needs to go into your playbook with teams who are analyzing this information.”
One method that seems to work, according to Culbertson, is on-the-spot personalized education, similar to the tactics used in phishing training. Letting individuals know that they committed a violation and will be monitored helps prevent future mistakes from occurring. This method, he noted, is “very effective at helping people understand that what they did isn’t acceptable. It’s a good way of reinforcing that.”
The earlier all of these pieces are in place — including solid relationships with executives across the organization — the better teams will be positioned to handle incidents, the panelists stated.
“One of the first things you need to do as a cybersecurity leader is to make sure you spend the requisite time partnering and collaborating with your privacy office, compliance office, and legal team to understand what to do when an event happens,” said Garneau. “The last thing you want to do is start planning on how to respond to an event after it has already happened.”
Culbertson agreed, adding that strategies need to be established from the jump. “It’s much more ideal to meet with a counterpart and say, ‘these are our policies; this is how we’re going to connect. This is how we work together. What’s your threshold? How do you operate?’” he said. “Instead of waiting for a bad thing to happen, write a policy that says, ‘we’ll do this,’ and ensure everyone understands it.” That way, it becomes procedure, and in the event of a breach, teams are able to act faster and more efficiently. “It may sound idyllic, but it is possible if you can be proactive and establish those relationships in advance.”
The archive of this webinar — Developing Optimal Policies & Relationships to Deal with Insider Privacy Breaches (Sponsored by Protenus) — is available to view online.