Generative AI is being embedded into security tools at a furious pace as CISOs adopt the technology internally to automate manual processes and improve productivity. But research also suggests this surge in gen AI adoption comes with a fair amount of trepidation among cybersecurity professionals, which CISOs must keep in mind when weaving gen AI into their security operations.

Early-adopting organizations have already incorporated gen AI to automate and accelerate security workflows and improve incident response. Security vendors are also increasingly introducing gen AI-powered tools that boost productivity for security analysts and offer a multitude of practical applications.

The usefulness of gen AI tools is evidenced by their proliferation in intrusion detection, anomaly detection, malware identification, and anti-fraud systems, says Peter Garraghan, CEO and CTO of AI security red-teaming firm Mindgard and a professor at Lancaster University.

“AI is rather good at detecting and summarizing patterns from loosely related data, as well as automating laborious activities,” Garraghan tells CSO. “Early AI adopters have created their own tooling for log management, and increasingly all vendors are leveraging some form of AI functionality to support this.”

The presence of AI in security tools is far from new: natural language processing (NLP) for text classification has been used in the labelling of security logs for more than a decade, and anti-malware tools have used machine learning for nearly as long.

But the advent of gen AI has been something of a game changer for vendors and CISOs in turbocharging the adoption of what’s come to be known as AI-powered security.
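The log-labelling Garraghan alludes to is, at its core, text classification. A minimal sketch of the idea, using a toy bag-of-words naive Bayes classifier; the log lines, labels, and IP addresses are invented for illustration and stand in for whatever model a real tool would use:

```python
# Toy NLP text classification for security log labelling.
# All training data is invented; a production system would train on
# large labelled corpora rather than four hand-written lines.
import math
from collections import Counter, defaultdict

TRAIN = [
    ("Failed password for root from 203.0.113.7 port 22", "suspicious"),
    ("Failed password for admin from 203.0.113.9 port 22", "suspicious"),
    ("Accepted publickey for deploy from 198.51.100.4", "benign"),
    ("Accepted password for alice from 192.0.2.10", "benign"),
]

def train(examples):
    word_counts = defaultdict(Counter)  # label -> word frequencies
    label_counts = Counter()            # label -> number of examples
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    vocab = {w for counts in word_counts.values() for w in counts}
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, n_docs in label_counts.items():
        # log prior plus add-one-smoothed log likelihood of each word
        score = math.log(n_docs / total_docs)
        total_words = sum(word_counts[label].values())
        for word in text.lower().split():
            count = word_counts[label][word]
            score += math.log((count + 1) / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

wc, lc = train(TRAIN)
print(classify("Failed password for root from 203.0.113.55 port 22", wc, lc))
```

A new line sharing vocabulary with the “suspicious” examples (failed logins on port 22) scores higher under that label, which is the whole mechanism behind decade-old log classifiers.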
Research firm IDC highlights numerous cybersecurity use cases for gen AI technologies, including alert correlation in security operations centers (SOCs), writing detection rules, updating security rules and policies, compliance, and much more.

But IDC also stresses that enterprises still need to proceed with caution when deploying gen AI-powered cybersecurity tools.

“There are practical problems with compliance, data exposure, gen AI hallucinating based on poorly scripted domains or incomplete data sets, and how people (yes, people) interact with automated analytics,” IDC warns.

Gen AI may contribute to the realization of fully automated security operations, but it is not without its potential pitfalls, IDC cautions. “While IDC believes that (in general) the more security automation, the better, it should be understood that businesses should be careful of full automation response because applications could break.”

Gen AI’s strength is in its ability to speed up report writing and incident reporting, “but sometimes complementary technologies or human oversight are necessary to drive value,” says Christian Have, CTO at security vendor Logpoint. “Not every problem is a language-based one. Sometimes we need to find the solution in math or human intuition. A good example here is threat prioritization.”
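Have’s point that threat prioritization is a math problem rather than a language problem can be made concrete with a simple risk score. The weighting formula, fields, and alert data below are invented for illustration, not any vendor’s scoring model:

```python
# Threat prioritization as arithmetic, not language: rank alerts by a
# risk score instead of asking a model to describe them. The formula
# and sample alerts are illustrative assumptions.
alerts = [
    {"id": "A1", "severity": 3, "asset_criticality": 5, "exploit_available": True},
    {"id": "A2", "severity": 5, "asset_criticality": 2, "exploit_available": False},
    {"id": "A3", "severity": 4, "asset_criticality": 4, "exploit_available": True},
]

def risk_score(alert):
    # severity weighted by asset importance, boosted when a public
    # exploit exists; no LLM involved anywhere in this path
    score = alert["severity"] * alert["asset_criticality"]
    return score * 1.5 if alert["exploit_available"] else score

ranked = sorted(alerts, key=risk_score, reverse=True)
print([a["id"] for a in ranked])
```

The highest-severity alert (A2) lands last because it sits on a low-value asset with no known exploit, exactly the kind of judgment a plain ranking formula encodes more reliably than generated prose.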
Prominent gen AI use cases
A survey carried out by industry training and certification organization ISC2 as part of the latest edition of its annual cybersecurity workforce study shows that gen AI is already in use for a wide array of cybersecurity tasks today, with general agreement that it will “ultimately improve the workforce.”

Top among these use cases, cited by 56% of respondents, is employing gen AI to augment common operational tasks, such as automating administrative processes, accelerating case management, and translating natural language into policy.

Cybersecurity pros are also using gen AI to speed up report writing and incident reporting (49%) and to simplify threat intelligence (47%), for example, by building and refining threat actor profiles and then identifying steps to simulate real-world attack scenarios.

According to the survey, other SecOps use cases include: accelerating threat hunting, by writing and running queries, and by extracting indicators of compromise via script from PDF (43%); improving policy simulations to predict how a change in policy will impact the environment (41%); and improving privacy risk assessment by identifying privacy issues emerging from the processing of certain data (39%).
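The scripted indicator-of-compromise extraction the survey mentions is typically just pattern matching over report text. A sketch of that step, assuming the PDF has already been converted to plain text; the patterns are deliberately simplified and the sample report is invented:

```python
# Scripted IoC extraction from report text (the PDF-to-text step is
# assumed to have happened upstream). Patterns are simplified
# illustrations, not production-grade IoC grammars.
import re

IOC_PATTERNS = {
    "ipv4":   r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
    "sha256": r"\b[a-fA-F0-9]{64}\b",
    "domain": r"\b[a-z0-9-]+\.(?:com|net|org|io)\b",
}

def extract_iocs(text):
    # deduplicate and sort each indicator type for stable output
    return {name: sorted(set(re.findall(pattern, text)))
            for name, pattern in IOC_PATTERNS.items()}

report = ("The actor staged payloads at update-check.net and beaconed "
          "to 203.0.113.42; the dropper hash was " + "a" * 64 + ".")
print(extract_iocs(report))
```

Where gen AI earns its 43% in the survey is in writing and refining these extraction scripts and hunt queries, not in replacing the deterministic matching itself.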
Insufficient guidelines and training
However, the ISC2 study also found that the most significant causes for concern around the use of gen AI stemmed not from the technology itself, but from issues surrounding its implementation in an enterprise: 54% of respondents said they have already faced data privacy and security concerns due to organizational adoption of gen AI, and 67% agreed that gen AI will pose a significant future threat for cybersecurity.

That fear is grounded in a feeling that organizations have not provided sufficient training or do not have policies and procedures in place to mitigate the risks associated with gen AI. “Lack of a clear gen AI strategy was cited as one of the top barriers to its organizational adoption by nearly half (45%) of all participants,” ISC2 stated.

“This lack of a clear strategy can pose challenges for organizations in effectively harnessing the potential benefits of gen AI while mitigating associated risks, making it crucial for organizations to develop a well-defined and comprehensive strategy to guide the integration and usage of gen AI in cybersecurity practices,” the study said.

While almost 90% of professionals said their organization has a gen AI use policy, 65% of professionals responded that their organization needs to implement more regulations on the safe use of the technology.

In North America, only 31% of respondents said they had received sufficient training around AI from their organization, although in other parts of the world, those numbers were significantly higher: Middle East and Africa 73%, Latin America 72%, Asia-Pacific 59%, and Europe 59%.
Tool rather than replacement
While security operations centers are a key venue for the introduction of generative AI technologies, security professionals should regard the technology as “another tool in the tool chest” and not a potential replacement for the SOC, says Rahul Tyagi, CEO of AI and quantum hardware and software developer SECQAI.

“Given the significant impact that false negatives can have across automated incident response, the best security operations centers don’t wholly rely on gen AI tools,” Tyagi tells CSO. “It becomes an effective copilot for certain tasks (e.g., report generation) and can act as another user interface (supporting natural language interactions).”

Gen AI technologies can also play a critical role in improving security awareness at an organization, says Camden Woollven, group head of AI product marketing at governance, risk management, and compliance company GRC International Group.

“Gen AI can create realistic phishing simulations and training scenarios tailored to specific departments or roles,” she says. “So instead of generic ‘Don’t click suspicious links’ training, you get targeted examples that actually reflect the kinds of threats your marketing team or finance department faces day-to-day.”

Other applications include incident response. “We’re seeing gen AI help teams run playbooks more efficiently,” Woollven says. “Instead of an analyst manually working through a checklist during an incident, AI can suggest next steps based on the specific alerts and system data it’s seeing.”

Gen AI has also been deployed to improve compliance mapping, Woollven adds. “When you’re dealing with multiple frameworks, GDPR, HIPAA, etc., gen AI can help map controls across frameworks and spot gaps. This saves loads of time compared to manual mapping.”

There’s also the defensive-testing angle, as security teams use gen AI to generate attack scenarios, essentially stress-testing their systems by having AI think like an attacker.
The approach is far from perfect, but it can “help organizations to identify blind spots your human red team might miss,” Woollven says.
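The gap-spotting half of the compliance mapping Woollven describes is mechanical once a control-to-framework mapping exists. In the sketch below, a hard-coded dictionary stands in for the mapping a gen AI model would draft; the control names and framework coverage are illustrative assumptions, not authoritative citations of any framework:

```python
# Cross-framework compliance gap-spotting. The CONTROL_MAP here is a
# hand-written stand-in for an AI-drafted mapping; control IDs and
# their claimed framework coverage are invented for illustration.
from collections import defaultdict

CONTROL_MAP = {
    "encrypt-data-at-rest": ["GDPR", "HIPAA", "ISO27001"],
    "access-reviews":       ["ISO27001", "HIPAA"],
    "breach-notification":  ["GDPR"],
}

REQUIRED = ["GDPR", "HIPAA", "ISO27001", "PCI-DSS"]

def coverage_gaps(control_map, required):
    covered = defaultdict(list)  # framework -> controls satisfying it
    for control, frameworks in control_map.items():
        for framework in frameworks:
            covered[framework].append(control)
    return [fw for fw in required if fw not in covered]

print(coverage_gaps(CONTROL_MAP, REQUIRED))  # PCI-DSS has no mapped control
```

The time savings Woollven cites come from having the model propose and refine `CONTROL_MAP` across dozens of frameworks; the gap report itself stays a deterministic query a human can audit.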
Agentic AI on the horizon
Even as gen AI use cases expand at a rapid rate, some experts are already looking towards further developments in artificial intelligence technology, such as agentic AI, as the next stage of deploying AI against security threats.

“Agentic AI takes the outcomes of generative AI, like alert data collection and synthesis, and puts them to work, autonomously managing and mitigating threats in real-time,” says Joe Partlow, CTO at threat intel firm ReliaQuest. “Its speed, accuracy, and efficiency lead to faster threat detection and response.”

SecOps teams spend too much time on mundane activities such as monitoring alerts, conducting initial investigations, and performing routine tasks, work that can be minimized with further refinements in AI technology.

“Agentic AI can take over these manual, time-intensive activities, reducing burnout and allowing human analysts to focus on more strategic and proactive activities, like threat hunting,” Partlow says.
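The agentic pattern Partlow describes can be sketched as an autonomous triage loop that handles routine alerts and escalates anything unfamiliar to a human. The decision table below stands in for a reasoning model, and every alert type and action name is an invented placeholder:

```python
# Minimal sketch of an agentic triage loop: routine alert types get an
# automated response, unknown ones escalate to an analyst. The playbook
# dict is a stand-in for a model's reasoning; all names are invented.
def decide(alert):
    playbook = {
        "brute_force": "lock_account",
        "known_malware_hash": "isolate_host",
    }
    # anything outside the playbook goes to a human, reflecting the
    # article's caution against fully automated response
    return playbook.get(alert["type"], "escalate_to_human")

def triage(alerts):
    return [(alert["id"], decide(alert)) for alert in alerts]

queue = [
    {"id": 1, "type": "brute_force"},
    {"id": 2, "type": "novel_beaconing"},
]
print(triage(queue))  # routine case handled, unknown case escalated
```

The design choice worth noting is the default branch: keeping an escalation path for unrecognized inputs is how such a loop stays a copilot rather than the fully automated response IDC warns against.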
First seen on csoonline.com
Jump to article: www.csoonline.com/article/3619006/generative-ai-cybersecurity-use-cases-are-expanding-fast-but-experts-say-caution-is-warranted.html