Erring on the side of caution is usually seen as a good thing, but when doing so calls for shutting down critical systems that may not have been affected, the scales tip toward over-reaction, said Darren Lacey, former CISO of Johns Hopkins University and Johns Hopkins Medicine, during his presentation at the recently held HIMSS Cyber Forum.
Lacey covered this topic, along with the benefits of leveraging AI to study logs for suspicious behavior, and the broader changes cyber leaders must make in their strategies due to the fast-paced changes around computing.
AI’s Amplification of Content – A New Challenge for Cybersecurity
Lacey began by highlighting a growing concern in AI: content amplification. As AI generates increasing volumes of content online, there is a corresponding rise in misinformation and disinformation. This “garbage in, garbage out” problem, he explained, could soon reach a point where discerning authentic information from AI-generated noise becomes an almost insurmountable task. Within five years, he predicts, amplified misinformation will require rigorous attention from both cybersecurity and AI professionals.
“The amplification problem of garbage on the Internet is an issue that all of us in AI face, and there’s really no obvious solution,” he said. With the vast amount of information AI systems are producing, managing misinformation is becoming increasingly difficult. Lacey pointed out that while filtering content or editing the Internet is unrealistic, understanding how AI amplifies information is crucial for cybersecurity professionals.
A New Computing Paradigm: Non-Deterministic AI
Lacey argued that AI introduces a fundamentally different way of computing compared to traditional deterministic methods. AI relies on probabilistic algorithms, unlike the classic deterministic approach introduced by the Turing machine. If non-deterministic computing becomes the norm, Lacey suggested, it may mark the most significant technological shift since the invention of the computer.
He acknowledged that this new form of computing poses unique challenges for cybersecurity. Traditional approaches may prove inadequate in securing probabilistic systems, and cybersecurity experts must adapt to the nuances of AI-driven systems that learn and operate unpredictably.
Truth in AI: Understanding Semantics and Context
A recurring theme in Lacey’s talk was “truth” in AI-generated information. He described two kinds of truth: one that aligns with reality and another that adheres to a system’s intended function. Lacey explained that AI and machine learning models must not only generate accurate, reality-based data but also operate reliably in the face of attacks or adversarial manipulation.
To illustrate, Lacey used the concept of “transformers,” AI models that focus on the context of words and sentences to generate meaningful text. By embedding text into vector representations, AI models can capture semantics more effectively, enabling them to detect nuances and variations in language. This technology, which has applications in natural language processing (NLP), could be beneficial for cybersecurity professionals by improving their ability to filter and analyze data.
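The core idea behind vector embeddings can be sketched in a few lines. The toy `embed` function below is a hypothetical stand-in: it builds a simple bag-of-words vector, whereas a real transformer model would produce a dense, context-aware vector. The cosine-similarity step, however, is the standard way semantic closeness between two embeddings is measured.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a transformer embedding: a sparse bag-of-words
    # vector keyed by lowercased tokens. A real model would map the
    # text to a dense vector that captures context, not word counts.
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Standard cosine similarity: dot product of the two vectors
    # divided by the product of their Euclidean norms.
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

if __name__ == "__main__":
    s1 = "select name from users where id = 1"
    s2 = "select email from users where id = 2"
    s3 = "drop table users"
    # Structurally similar queries score high; unrelated ones score low.
    print(cosine_similarity(embed(s1), embed(s2)))
    print(cosine_similarity(embed(s1), embed(s3)))
```

With a genuine transformer embedding, the same comparison would also catch paraphrases and obfuscated variants that share no surface tokens, which is what makes the technique useful for filtering and analyzing data.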
Leveraging AI for Cybersecurity: Practical Applications
Lacey shared insights from his own experience with AI applications in cybersecurity. One of his current focuses is using transformers and other machine learning models to analyze database logs and detect anomalies. He described how this approach allows him to classify database queries by comparing them to known patterns, identifying potentially suspicious activity in real-time.
While recognizing the limitations of this technology, Lacey argued that even low-powered models running on standard hardware can deliver meaningful results. For instance, by using transformers to measure the semantic similarity of database queries, security teams can identify unusual requests that could signal a breach or cyber threat. This approach, according to Lacey, offers a “target-rich environment” for applying AI in cybersecurity, particularly for tasks like data loss prevention and detecting anomalies in web logs.
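One minimal way to sketch this kind of query classification is to normalize incoming queries into templates and flag any template not seen during normal operation. The `KNOWN_PATTERNS` baseline below is hypothetical, and the exact-match check is a simplification: the approach Lacey describes would score semantic similarity with transformer embeddings rather than require an exact template hit.

```python
import re

# Hypothetical baseline of query templates observed during normal
# operation, with literals masked so that structurally identical
# queries collapse to a single pattern.
KNOWN_PATTERNS = {
    "select name from users where id = ?",
    "select email from users where id = ?",
    "update sessions set expires = ? where token = ?",
}

def normalize(query: str) -> str:
    # Mask quoted strings and numeric literals so queries that differ
    # only in their parameter values map to the same template.
    q = query.lower().strip()
    q = re.sub(r"'[^']*'", "?", q)
    q = re.sub(r"\b\d+\b", "?", q)
    return q

def is_suspicious(query: str) -> bool:
    # Flag any query whose normalized template has never been seen.
    # A production system would instead measure embedding similarity
    # to the baseline and alert when no close match exists.
    return normalize(query) not in KNOWN_PATTERNS

print(is_suspicious("SELECT name FROM users WHERE id = 42"))   # routine
print(is_suspicious("SELECT * FROM users; DROP TABLE users"))  # anomalous
```

Even this low-powered version runs comfortably on standard hardware, which is consistent with Lacey's point that modest models can still deliver meaningful results against logs.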
Cybersecurity’s Expanding Role in the AI Era
Lacey argued that cybersecurity professionals should embrace a broader mission: safeguarding truth in an AI-influenced digital world. While traditional cybersecurity focuses on defending against adversaries, Lacey believes AI challenges professionals to validate non-deterministic systems and apply adversarial techniques to ensure system accuracy and reliability. In his view, cybersecurity teams have the skills and experience necessary to manage large datasets and test for adversarial attacks—capabilities that are increasingly important in the AI landscape.
The challenge, as he sees it, is whether cybersecurity professionals are willing to expand their focus to include these new responsibilities. He urged them to adapt, leveraging their knowledge of systems testing and adversarial defense to keep pace with AI-driven changes.
Rethinking System Resilience and Response to Breaches
Toward the end of his presentation, Lacey addressed the issue of resilience, urging cybersecurity leaders to reconsider what constitutes a breach and how systems should respond to attacks. He suggested that simply treating a breached system as “untrusted” and shutting it down may not be the most resilient approach.
“We’re denial-servicing ourselves to death,” he warned, noting that indiscriminately isolating compromised systems can hinder rather than enhance resilience. Instead, Lacey advocated for a more nuanced approach to resilience that includes realistic tabletop exercises and a focus on minimizing system disruption during attacks. In his view, a truly resilient cybersecurity strategy would not rely solely on containment; instead, it would enable organizations to continue operations even when a breach occurs.
The Evolving Mission of Cybersecurity
Lacey’s presentation underscored the evolving nature of cybersecurity in the AI era. He left his audience with a call to action, encouraging cybersecurity professionals to see their role as broader than merely defending against attacks. Instead, he urged them to actively shape AI practices by ensuring systems are trustworthy, capable of generating accurate data, and resilient in the face of adversarial challenges.
“We have our chance to shine,” Lacey concluded, “to lead our IT disciplines in ways they don’t even understand yet.”