CISOs should stop freaking out about attackers getting a boost from LLMs

A common refrain from cybersecurity professionals in recent years has been the need for a diversification of the CISO role to meet the demands of increased responsibility across numerous categories. In the past year, this refrain has grown louder, specifically around the topic of generative AI.

Large language models (LLMs) have added a new dimension to the cybersecurity mission, the general argument goes, forcing the CISO to engage in partnerships and programming that would previously have been of lesser importance.

This focus on generative AI is driven in no small part by analysis that concludes LLMs could be a boon for the capabilities of prospective attackers. The argument is simple: recent years have seen such tools go from narrow use cases to a general-purpose capacity for offensive behavior. They massively expand the scope and speed of possible compromise activity by allowing for attack surface analysis at scale, creating options for rapidly customizable intrusion kits, and augmenting the underlying criminal infrastructures that drive threat dynamics. Even scholars of cyber warfare have argued that LLMs might let threat actors overcome traditional constraints on the creation of strategically meaningful cyber effects.

It’s easy to see why CISOs are worried. If LLMs are all they’re cracked up to be, cyber operations could truly become as transformative for attackers as popular narratives about cybersecurity have always painted them.

There’s just one problem: the AI revolution for cybercriminal operations hasn’t happened. While much noise is made about the use of LLMs for narrow tasks such as the augmentation of phishing schemes or the adaptation of malware strains, there is no evidence of major intrusion campaigns using sophisticated LLM techniques to gain a novel edge over defenders. Given that generative AI is now far from new, why might this be?

There are a number of possible reasons the generative AI revolution for threat actors hasn’t been seen in the wild yet. There are startup costs, for one thing, along with skill development and the time required to produce new methods. It’s also possible that the cost of switching from established traditional methods is prohibitive for some attackers (though likely only the smallest). But none of these explanations holds up under the weight of basic facts about the accessibility and low cost of LLM usage.

Instead, one should look at how the technological characteristics of generative AI interact with the very real organizational, economic, and political conditions in which attackers operate. The ability to automate basic activities such as penetration testing or vulnerability scanning is obviously immensely valuable; efficiency is money, after all. But these routine tasks are generally distinct from those that give sophisticated intrusion activities their creative, dangerous edge.

As any machine learning specialist will tell you, LLMs are bad at producing genuinely creative or inferential outputs. Importantly, this limitation scales in direct proportion to the potential efficiency gains of autonomy and automation: as reliance on generative AI for the automation of routine tasks increases, opportunities to use the technology to create outputs that effectively mimic or improve upon human techniques for compromise and exploitation diminish.

LLMs add uncertainty and unpredictability… for the attacker

Taking this analytic reality one step further, CISOs need to realize that diminishing marginal returns on LLM usage for routine tasks mean increased uncertainty for attackers, and with it a disincentive to use generative AI for sophisticated intrusion activities.

After all, expanding the scope of LLM usage to automate routine tasks means taking the human further and further out of the loop. That might be desirable when we’re talking about a large number of limited tasks with relatively low potential for missteps by the model. Nevertheless, the more LLMs are relied upon for foundational tasks, the higher the possibility of failure during direct engagement with target systems becomes.

This amounts to an uncertainty gap between AI-infused and traditional hacker activities that widens with reliance on LLMs for basic automation. Specifically, less cumulative human agency means that the chances of code failure or of discovery become much harder to assess. That unpredictability is nothing short of additional risk, rising exponentially with reliance on LLMs for routine tasks, that attackers must weigh against the prospective gains of their intrusion activities.
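To make that intuition concrete, consider a minimal back-of-the-envelope sketch (not from the article; every number below is an illustrative assumption). It models an intrusion as a chain of tasks that must all succeed. Because reliability compounds multiplicatively across the chain, even a modest assumed drop in per-task success rate when the human is removed from the loop can overwhelm the cost savings of automation:

```python
# Toy model of the "uncertainty gap": expected payoff of an intrusion
# as a function of how many steps are delegated to an LLM.
# All parameter values are illustrative assumptions, not empirical data.

def expected_payoff(
    steps: int = 10,            # total tasks in the intrusion chain
    automated: int = 0,         # tasks delegated to an LLM
    p_human: float = 0.97,      # assumed per-task success rate, human in the loop
    p_llm: float = 0.90,        # assumed per-task success rate, fully automated
    payoff: float = 1_000_000,  # assumed value of a successful compromise
    cost_human: float = 5_000,  # assumed per-task cost of skilled human labor
    cost_llm: float = 50,       # assumed per-task cost of LLM automation
) -> float:
    """Success requires every step to succeed, so per-task error compounds."""
    manual = steps - automated
    p_success = (p_human ** manual) * (p_llm ** automated)
    cost = manual * cost_human + automated * cost_llm
    return p_success * payoff - cost

# Expected payoff falls as more of the chain is automated, even though
# automation is far cheaper per task: compounding unreliability dominates.
for automated in range(0, 11, 2):
    print(f"{automated:>2} automated steps -> "
          f"expected payoff ${expected_payoff(automated=automated):,.0f}")
```

Under these assumed numbers, a fully manual operation nets roughly twice the expected payoff of a fully automated one. The point is not the specific figures but the shape of the curve: compounding per-step uncertainty is exactly the risk a rational attacker must price in.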

LLMs motivate more focus on traditional attack techniques, not less

What’s the upshot of this dynamic? Simply put, the power of LLMs for so much routine offensive activity actually creates powerful disincentives against sophisticated AI-augmented intrusion. The added uncertainty of removing so much human agency from the operations loop is simply unlikely to square with the motivation of most attackers to maximize their opportunities for hacking gains.

Cybercriminals, in particular, are thus only likely to benefit from LLMs where the entirety of their enterprise is bound by routine activities (e.g., the sale of disinformation or social engineering products). Ransomware gangs that seek to penetrate large organizations, by contrast, are far less likely to accept the risks of heavy reliance on generative AI.

The irony of all this is that generative AI motivates sophisticated threat actors to double down on the established benefits of traditional techniques for exploitation, intrusion, and disruption. After all, those established approaches come with known risk-payoff dynamics, and so they are the only way serious offensive cyber actors can avoid taking on the additional uncertainty tied to LLM usage.

CISOs: ignore the alarmism and live in the real world!

Amidst so much alarmist chatter about the potential threat of generative AI, it is critically important that CISOs ditch the hype and embrace a realistic view of how the new technology interacts with known conditions of the attacker-defender relationship. AI is unlikely to deliver an offensive cybersecurity revolution so much as a gradual evolution of tools that lets both defenders and attackers alter the minor details of their practice.

Naturally, CISOs need to realize that this dynamic applies to the defender almost as much as it does to the attacker. Routine automation helps the defender more than it does the attacker (a minimal example is sketched at the end of this piece): the defender, after all, knows exactly what the full extent of the battlespace (i.e., the networks, personnel, and so on) will be in some hypothetical future intrusion event. But attempts to use LLMs for active defense or other tasks requiring adaptive, creative inputs are likely to suffer from the same unpredictability as the attacker’s AI-augmented compromise activities.

It’s worth asking where this alarmism comes from, of course. The most obvious answer could certainly be the right one: generative AI’s clear technical potential, combined with the complex tasks to which it can be applied, encourages regular speculation about what could be.

But CISOs should also consider where the mentalities that make the “LLMs will change everything” narrative so easy to accept come from. In no small part, it is the language of cybersecurity today and the use of operational models (from the kill chain on out) that primes us to see technology development as part of a tit-for-tat escalation spiral between offense and defense. In reality, how a new technology impacts existing security dynamics is far more nuanced.

All of this brings us back to the motivating context: the need to diversify the CISO position. It is absolutely true that the role needs to evolve in a number of ways, whether through new forms of delegation of responsibility or a restructuring of the traditional models of security assurance in organizations. But the generative AI imperative is far less profound than so many pundits would have you believe.

Experts are right to sound the alarm about LLMs. They are not right to talk about the technical details of the new technology without adding the real-world context that has always determined, and will continue to determine, security outcomes.
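As promised above, here is a minimal sketch (not from the article) of the kind of routine, battlespace-aware automation that favors the defender: diffing an authoritative asset inventory against the hosts that actually answered a scan. The file names and JSON formats are assumptions for illustration; the point is that the defender can enumerate the "expected" set in advance, something no attacker can do.

```python
# Minimal sketch of routine defender-side automation (illustrative only):
# compare an authoritative asset inventory against observed scan results.
# File names and record formats below are assumed, not a real tool's API.
import json

def unexpected_hosts(inventory_path: str, scan_path: str) -> set[str]:
    """Return hosts seen in a scan that are absent from the inventory.

    The defender can do this because, unlike the attacker, they know
    the full extent of the battlespace in advance.
    """
    with open(inventory_path) as f:
        known = {asset["ip"] for asset in json.load(f)}  # e.g. [{"ip": "10.0.0.5", ...}]
    with open(scan_path) as f:
        seen = {host["ip"] for host in json.load(f)}     # e.g. parsed scanner output

    return seen - known  # anything here is unmanaged or rogue

if __name__ == "__main__":
    for ip in sorted(unexpected_hosts("inventory.json", "scan_results.json")):
        print(f"unexpected host on network: {ip}")
```

Nothing about this task requires creativity or inference, which is exactly why automating it is low-risk; the same cannot be said for the adaptive judgment calls of active defense.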

First seen on csoonline.com

Jump to article: www.csoonline.com/article/3625578/cisos-should-stop-freaking-out-about-attackers-getting-a-boost-from-llms.html
