Facilitating malware development: Artificial intelligence can also be used to generate more sophisticated, or at least less labor-intensive, malware. For example, cybercriminals are using gen AI to create malicious HTML documents. The XWorm attack, initiated by HTML smuggling, which contains malicious code that downloads and runs the malware, bears the hallmarks of development via AI.

"The loader's detailed line-by-line description suggests it was crafted using generative AI," according to the latest edition of HP Wolf Security's Threat Insights Report. In addition, the "design of the HTML webpage delivering XWorm is almost visually identical to the output from ChatGPT 4o after prompting the LLM to generate an HTML page that offers a file download," HP Wolf Security adds. Similar techniques were in play with the earlier AsyncRAT campaign, according to HP's enterprise security division.

Elsewhere, FunkSec, an Algeria-linked ransomware-as-a-service (RaaS) operator that relies on double-extortion tactics, has begun harnessing AI technologies, according to Check Point Research. "FunkSec operators appear to use AI-assisted malware development, which can enable even inexperienced actors to quickly produce and refine advanced tools," Check Point researchers wrote in a blog post.
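HTML smuggling works by embedding an encoded payload inside a seemingly benign web page and reassembling it in the victim's browser, so the file never crosses the network as a recognizable binary. As a defensive illustration only, here is a minimal Python sketch that flags the client-side decode-and-download pattern such pages rely on; the indicator list and the two-hit threshold are illustrative assumptions, not detection rules from the HP report.

```python
import re
import sys

# Hallmarks of HTML smuggling: the payload is embedded in the page
# (often base64-encoded), decoded client-side with atob()/Blob, and
# handed to the browser as a download via createObjectURL or msSaveBlob.
INDICATORS = {
    "base64 decode": re.compile(r"\batob\s*\(", re.IGNORECASE),
    "in-memory blob": re.compile(r"\bnew\s+Blob\s*\(", re.IGNORECASE),
    "object URL download": re.compile(r"createObjectURL\s*\(", re.IGNORECASE),
    "legacy IE download": re.compile(r"msSaveOrOpenBlob|msSaveBlob", re.IGNORECASE),
    "forced click": re.compile(r"\.click\s*\(\s*\)", re.IGNORECASE),
}

def scan_html(path: str) -> list[str]:
    """Return the smuggling indicators present in an HTML file."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        content = f.read()
    return [name for name, pattern in INDICATORS.items() if pattern.search(content)]

if __name__ == "__main__":
    hits = scan_html(sys.argv[1])
    if len(hits) >= 2:  # several indicators together are the suspicious signal
        print(f"Possible HTML smuggling ({', '.join(hits)})")
    else:
        print("No strong smuggling indicators found")
```

Any one of these JavaScript calls is legitimate on its own; it is the combination, decode plus in-memory blob plus programmatic download, that characterizes the technique.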
Accelerating vulnerability hunting and exploits: The traditionally difficult tasks of analyzing systems for vulnerabilities and developing exploits can be simplified with gen AI technologies. "Instead of a black hat hacker spending the time to probe and perform reconnaissance against a system perimeter, an AI agent can be tasked to do this automatically," Mindgard's Garraghan says.

Gen AI may be behind a 62% reduction in the time between a vulnerability being discovered and its exploitation by attackers, from 47 days to just 18 days, according to a recent study by threat intelligence firm ReliaQuest. "This sharp decrease strongly indicates that a major technological advancement, likely GenAI, is enabling threat actors to exploit vulnerabilities at unprecedented speeds," ReliaQuest writes.

Adversaries are leveraging gen AI alongside pen-testing tools to write scripts for tasks such as network scanning, privilege escalation, and payload customization. AI is also likely being used by cybercriminals to analyze scan results and suggest optimal exploits, effectively allowing them to identify flaws in victim systems faster. "These advances accelerate many phases in the kill chain, particularly initial access," ReliaQuest concludes.

CSO's Lucian Constantin offers a deeper look at how generative AI tools are transforming the cyber threat landscape by democratizing vulnerability hunting for pen-testers and attackers alike.
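ReliaQuest's 62% figure is easy to sanity-check, and the shrinking window has a direct operational consequence: patch SLAs calibrated to a 47-day exploitation timeline are now too slow. A quick back-of-the-envelope check in Python (the 30-day SLA comparison is an illustrative assumption, not part of the study):

```python
# Sanity-check ReliaQuest's 62% figure and its operational impact.
old_window_days = 47  # disclosure-to-exploitation, before
new_window_days = 18  # disclosure-to-exploitation, now

reduction = (old_window_days - new_window_days) / old_window_days
print(f"Exploitation window shrank by {reduction:.0%}")  # -> 62%

# Illustrative assumption: a patch SLA tuned to the old window
patch_sla_days = 30
if patch_sla_days > new_window_days:
    print(f"A {patch_sla_days}-day patch SLA now loses the race by "
          f"{patch_sla_days - new_window_days} days")
```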
Escalating threats with alternative platforms: Cybercriminals are rapidly shifting from ChatGPT to new AI models from China, such as DeepSeek and Qwen, to generate malicious content. "Threat actors are openly sharing techniques to jailbreak these models, bypass security controls, and create malware, info-stealers, and spam campaigns with minimal restrictions," according to Check Point Research. "Some are even discussing how to use these AI tools to evade banking anti-fraud protections, a significant escalation in cyber threats."

"Multiple discussions and shared techniques on using DeepSeek to bypass banking system anti-fraud protections have been found, indicating the potential for significant financial theft," Check Point warns in a technical blog post.

China-based AI company DeepSeek, whose recent entry has sent shockwaves through the industry, is weakly protected against abuse compared to its Western counterparts. Check Point Research explains: "While ChatGPT has invested substantially in anti-abuse provisions over the last two years, these newer models appear to offer little resistance to misuse, thereby attracting a surge of interest from different levels of attackers, especially the low-skilled ones, individuals who exploit existing scripts or tools without a deep understanding of the underlying technology."

Cybercriminals have also begun developing their own large language models (LLMs), such as WormGPT, FraudGPT, and DarkBERT, built without the guardrails that constrain misuse of mainstream gen AI platforms. These platforms are commonly harnessed for applications such as phishing and malware generation.

Mainstream LLMs can also be customized for targeted use. Security researcher Chris Kubecka recently shared with CSO how her custom version of ChatGPT, called Zero Day GPT, helped her identify more than 20 zero-days in a matter of months.
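The same customization that powers criminal LLMs also underpins legitimate vulnerability hunting of the kind Kubecka describes. As a rough illustration of how a defender or pen-tester might script an LLM-assisted code review (a generic sketch using the OpenAI Python client, not Kubecka's Zero Day GPT setup; the prompt wording and model name are assumptions):

```python
# Minimal sketch: ask an LLM to review source code for vulnerabilities.
# Generic illustration only; prompt and model choice are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_for_vulns(source_code: str) -> str:
    """Return the model's vulnerability assessment of a code snippet."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a security code reviewer. List likely "
                        "vulnerabilities with line references and severity."},
            {"role": "user", "content": source_code},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Deliberately vulnerable snippet (SQL built by string concatenation)
    snippet = 'query = "SELECT * FROM users WHERE name = \'" + name + "\'"'
    print(review_for_vulns(snippet))
```

The asymmetry the researchers describe is that the identical workflow, pointed at a target's code instead of one's own, serves an attacker just as well.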
Breaking in with authentication bypass: Gen AI tools can also be abused to bypass security defenses such as CAPTCHAs or biometric authentication. "AI can defeat CAPTCHA systems and analyze voice biometrics to compromise authentication," according to cybersecurity vendor Dispersive. "This capability underscores the need for organizations to adopt more advanced, layered security measures."
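What "layered" means in practice: no single factor, CAPTCHA included, is treated as decisive; a weak or suspicious signal triggers a stronger challenge rather than an outright allow or deny. A minimal sketch of that step-up logic (the signals, weights, and thresholds are illustrative assumptions, not Dispersive's design):

```python
# Illustrative step-up authentication: combine weak signals, escalate on risk.
from dataclasses import dataclass

@dataclass
class LoginSignals:
    captcha_passed: bool      # solvable by AI, so treated as a weak signal
    voice_match_score: float  # 0.0-1.0 from a biometric engine (assumed)
    new_device: bool
    unusual_geolocation: bool

def risk_score(s: LoginSignals) -> int:
    """Accumulate risk; no single check is decisive on its own."""
    score = 0
    if not s.captcha_passed:
        score += 2
    if s.voice_match_score < 0.8:  # threshold is an illustrative assumption
        score += 2
    if s.new_device:
        score += 1
    if s.unusual_geolocation:
        score += 1
    return score

def decide(s: LoginSignals) -> str:
    score = risk_score(s)
    if score >= 4:
        return "deny"
    if score >= 2:
        return "step-up"  # e.g., require a hardware key or push approval
    return "allow"

print(decide(LoginSignals(True, 0.75, True, False)))  # -> "step-up"
```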
Countermeasures: Collectively, the misuse of gen AI tools is making it easier for less-skilled cybercriminals to earn a dishonest living. Defending against these attack vectors challenges security professionals to harness the power of artificial intelligence more effectively than attackers do. "Criminal misuse of AI technologies is driving the necessity to test, detect, and respond to these threats, in which AI is also being leveraged to combat cybercriminal activity," Mindgard's Garraghan says.

In a blog post, Lawrence Pingree, VP of technical marketing at Dispersive, outlines preemptive cyber defenses that security professionals can take to win what he describes as an "AI ARMS (Automation, Reconnaissance, and Misinformation) race" between attackers and defenders. "Relying on traditional detection and response mechanisms is no longer sufficient," Pingree warns.

Alongside employee education and awareness programs, enterprises should be using AI to detect and neutralize generative AI-based threats in real time. Randomization and preemptive changes to IP addresses, system configurations, and so on can act as an obstacle to attack, as the sketch below illustrates. Leveraging AI to simulate potential attack scenarios and predict adversary behavior through threat simulation and predictive intelligence also offers increased resilience against potential attacks.
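The randomization Pingree describes is often called moving-target defense: rotating network-facing details on a schedule so that attacker reconnaissance goes stale before it can be exploited. A minimal sketch of the idea (the rotation interval, port range, and apply_config stub are illustrative assumptions):

```python
# Moving-target defense sketch: periodically re-randomize exposed settings
# so attacker reconnaissance goes stale. Interval, port range, and the
# apply_config() stub are illustrative assumptions.
import random
import time

PORT_RANGE = range(20000, 60000)
ROTATION_INTERVAL_SECONDS = 3600  # rotate hourly (assumed)

def new_configuration() -> dict:
    """Pick a fresh, randomized service configuration."""
    return {
        "ssh_port": random.choice(PORT_RANGE),
        "mgmt_port": random.choice(PORT_RANGE),
        "internal_subnet": f"10.{random.randint(0, 255)}.{random.randint(0, 255)}.0/24",
    }

def apply_config(config: dict) -> None:
    """Stub: a real deployment would update firewall rules, service
    bindings, and DNS here, then notify legitimate clients."""
    print(f"Rotated configuration: {config}")

if __name__ == "__main__":
    while True:
        apply_config(new_configuration())
        time.sleep(ROTATION_INTERVAL_SECONDS)
```

The design point is cost asymmetry: each rotation is cheap for the defender but forces the attacker to redo reconnaissance from scratch.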
Source: www.csoonline.com/article/3819176/top-5-ways-attackers-use-generative-ai-to-exploit-your-systems.html