Tag: LLM
-
AI programming copilots are worsening code security and leaking more secrets
by
in SecurityNews
Tags: access, ai, api, application-security, attack, authentication, best-practice, breach, ceo, ciso, container, control, credentials, cybersecurity, data, data-breach, github, government, incident response, injection, least-privilege, LLM, monitoring, open-source, openai, password, programming, risk, skills, software, strategy, tool, training, vulnerabilityOverlooked security controls: Ellen Benaim, CISO at enterprise content mangement firm Templafy, said AI coding assistants often fail to adhere to the robust secret management practices typically observed in traditional systems.”For example, they may insert sensitive information in plain text within source code or configuration files,” Benaim said. “Furthermore, because large portions of code are…
-
Are LLM firewalls the future of AI security?
by
in SecurityNewsAs large language models permeate industries, experts at Black Hat Asia 2025 debate the need for LLM firewalls and explore their role in fending off emerging AI threats First seen on computerweekly.com Jump to article: www.computerweekly.com/news/366621934/Are-LLM-firewalls-the-future-of-AI-security
-
Hackers Use DeepSeek and Remote Desktop Apps to Deploy TookPS Malware
by
in SecurityNewsA recent investigation by cybersecurity researchers has uncovered a large-scale malware campaign leveraging the DeepSeek LLM and popular remote desktop applications to distribute the Trojan-Downloader.Win32.TookPS malware. The attackers targeted both individual users and organizations by disguising malicious software as legitimate business tools, including UltraViewer, AutoCAD, and SketchUp. Malicious Infrastructure and Infection Chain The TookPS malware…
-
LLMs are now available in snack size but digest with care
by
in SecurityNewsPassed down wisdom can distort reality: Rather than developing their own contextual understanding, student models rely heavily on their teacher models’ pre-learned conclusions. Whether this limitation can lead to model hallucination is highly debated by experts.Brauchler is of the opinion that the efficiency of the student models is tied to that of their teachers, irrespective…
-
LLM providers on the cusp of an ‘extinction’ phase as capex realities bite
by
in SecurityNews
Tags: LLMOnly the strong will survive, but analyst says cull will not be as rapid as during dotcom era First seen on theregister.com Jump to article: www.theregister.com/2025/03/31/llm_providers_extinction/
-
Gemini hackers can deliver more potent attacks with a helping hand from”¦ Gemini
by
in SecurityNewsHacking LLMs has always been more art than science. A new attack on Gemini could change that. First seen on arstechnica.com Jump to article: arstechnica.com/security/2025/03/gemini-hackers-can-deliver-more-potent-attacks-with-a-helping-hand-from-gemini/
-
Nir Zuk: Google’s Multi-Cloud Security Strategy Won’t Work
Palo Alto Networks CTO Nir Zuk predicts Google’s security push through its $32 billion buy of Wiz won’t succeed, as customers are reluctant to buy multi-cloud tools from cloud vendors. Zuk details how adversaries use LLMs at scale and how Palo Alto is unifying SOC tools under its Cortex platform. First seen on govinfosecurity.com Jump…
-
Rising attack exposure, threat sophistication spur interest in detection engineering
by
in SecurityNews
Tags: access, ai, attack, automation, banking, ceo, ciso, cloud, compliance, cyber, cybersecurity, data, detection, endpoint, exploit, finance, framework, healthcare, infrastructure, insurance, intelligence, LLM, malware, mitre, network, programming, ransomware, RedTeam, risk, sans, siem, software, supply-chain, tactics, technology, threat, tool, update, vulnerability, zero-dayMore than the usual threat detection practices: Proponents argue that detection engineering differs from traditional threat detection practices in approach, methodology, and integration with the development lifecycle. Threat detection processes are typically more reactive and rely on pre-built rules and signatures from vendors that offer limited customization for the organizations using them. In contrast, detection…
-
Malicious AI Tools See 200% Surge as ChatGPT Jailbreaking Talks Increase by 52%
by
in SecurityNewsThe cybersecurity landscape in 2024 witnessed a significant escalation in AI-related threats, with malicious actors increasingly targeting and exploiting large language models (LLMs). According to KELA’s annual >>State of Cybercrime
-
ARACNE: LLM-Powered Pentesting Agent Executes Commands on Real Linux Shell Systems
by
in SecurityNewsResearchers have introduced ARACNE, a fully autonomous Large Language Model (LLM)-based pentesting agent designed to interact with SSH services on real Linux shell systems. ARACNE is engineered to execute commands autonomously, marking a significant advancement in the automation of cybersecurity testing. The agent’s architecture supports multiple LLM models, enhancing its flexibility and effectiveness in penetration…
-
Lasso Adds Automated Red Teaming Capability to Test LLMs
by
in SecurityNewsLasso today added an ability to autonomously simulate real-world cyberattacks against large language models (LLMs) to enable organizations to improve the security of artificial intelligence (AI) applications. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/03/lasso-adds-automated-red-teaming-capability-to-test-llms/
-
Tencent Says It Does More in AI With Fewer GPUs
by
in SecurityNewsNot Every New Generation of LLM Needs Exponentially More Chips, Says Tencent Exec. Chinese tech giant Tencent reported a slowdown in GPU deployment, attributing it to a prioritization among Sino tech companies of chip efficiency over raw numbers, a strategy made clear internationally by artificial intelligence firm DeepSeek. First seen on govinfosecurity.com Jump to article:…
-
Cato Uses LLM-Developed Fictional World to Create Jailbreak Technique
A Cato Networks threat researcher with little coding experience was able to convince AI LLMs from DeepSeek, OpenAI, and Microsoft to bypass security guardrails and develop malware that could steal browser passwords from Google Chrome. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/03/cato-uses-llm-developed-fictional-world-to-create-jailbreak-technique/
-
New Jailbreak Technique Uses Fictional World to Manipulate AI
by
in SecurityNewsCato Networks discovers a new LLM jailbreak technique that relies on creating a fictional world to bypass a model’s security controls. The post New Jailbreak Technique Uses Fictional World to Manipulate AI appeared first on SecurityWeek. First seen on securityweek.com Jump to article: www.securityweek.com/new-jailbreak-technique-uses-fictional-world-to-manipulate-ai/
-
Prompt Injection Attacks in LLMs: Mitigating Risks with Microsegmentation
Prompt injection attacks have emerged as a critical concern in the realm of Large Language Model (LLM) application security. These attacks exploit the way LLMs process and respond to user inputs, posing unique challenges for developers and security professionals. Let’s dive into what makes these attacks so distinctive, how they work, and what steps can……
-
Kritische Sicherheitsherausforderungen für LLMs und generative KI
by
in SecurityNewsDie Einführung von Large-Language-Models (LLMs) und generativer KI revolutioniert die Unternehmensabläufe und sorgt für unübertroffene Innovation, Effizienz und Wettbewerbsvorteile. Diese schnelle Integration bringt jedoch erhebliche Herausforderungen für die KI-Sicherheit mit sich, denen sich Unternehmen stellen müssen. Erkenntnisse von Qualys zeigen, dass über 1.255 Organisationen KI/ML-Software auf 2,8 Millionen Assets eingesetzt haben, wobei 6,2 % etwa […]…
-
Immersive World: LLM-Jailbreak-Technik für Zero-Knowledge-Hacker
by
in SecurityNewsDie LLM-Jailbreak-Technik “Immersive World” zeigt einmal mehr, wie KI-Modelle durch kreative Manipulation ausgetrickst werden können. First seen on tarnkappe.info Jump to article: tarnkappe.info/artikel/jailbreaks/immersive-world-llm-jailbreak-technik-fuer-zero-knowledge-hacker-312082.html
-
AI crawlers haven’t learned to play nice with websites
by
in SecurityNewsSourceHut says it’s getting DDoSed by LLM bots First seen on theregister.com Jump to article: www.theregister.com/2025/03/18/ai_crawlers_sourcehut/
-
Security Researcher Proves GenAI Tools Can Develop Google Chrome Infostealers
A Cato Networks researcher discovered a new LLM jailbreaking technique enabling the creation of password-stealing malware First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/security-researcher-llm/
-
Prompt Security Adds Ability to Restrict Access to Data Generated by LLMs
by
in SecurityNewsPrompt Security today extended its platform to enable organizations to implement policies that restrict the types of data surfaced by a large language model (LLM) that employees are allowed to access. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/03/prompt-security-adds-ability-to-restrict-access-to-data-generated-by-llms/
-
AI development pipeline attacks expand CISOs’ software supply chain risk
by
in SecurityNews
Tags: access, ai, api, application-security, attack, backdoor, breach, business, ciso, cloud, container, control, cyber, cybersecurity, data, data-breach, detection, encryption, exploit, flaw, fortinet, government, infrastructure, injection, intelligence, LLM, malicious, malware, ml, network, open-source, password, penetration-testing, programming, pypi, risk, risk-assessment, russia, saas, sbom, service, software, supply-chain, threat, tool, training, vpn, vulnerabilitydevelopment pipelines are exacerbating software supply chain security problems.Incidents of exposed development secrets via publicly accessible, open-source packages rose 12% last year compared to 2023, according to ReversingLabs (RL).A scan of 30 of the most popular open-source packages found an average of six critical-severity and 33 high-severity flaws per package.Commercial software packages are also a…
-
eSentire Labs Open Sources Project to Monitor LLMs
by
in SecurityNewsThe eSentire LLM Gateway provides monitoring and governance of ChatGPT and other large language models being used in the organization. First seen on darkreading.com Jump to article: www.darkreading.com/cybersecurity-analytics/esentire-labs-open-sources-project-to-monitor-llms
-
Invisible C2″Š”, “Šthanks to AI-powered techniques
by
in SecurityNews
Tags: ai, api, attack, breach, business, chatgpt, cloud, communications, control, cyberattack, cybersecurity, data, defense, detection, dns, edr, email, encryption, endpoint, hacker, iot, LLM, malicious, malware, ml, monitoring, network, office, openai, powershell, service, siem, soc, strategy, threat, tool, update, vulnerability, zero-trustInvisible C2″Š”, “Šthanks to AI-powered techniques Just about every cyberattack needs a Command and Control (C2) channel”Š”, “Ša way for attackers to send instructions to compromised systems and receive stolen data. This gives us all a chance to see attacks that are putting us at risk. LLMs can help attackers avoid signature based detection Traditionally, C2…
-
Generative AI red teaming: Tips and techniques for putting LLMs to the test
by
in SecurityNewsDefining objectives and scopeAssembling a teamThreat modelingAddressing the entire application stackDebriefing, post-engagement analysis, and continuous improvementGenerative AI red teaming complements traditional red teaming by focusing on the nuanced and complex aspects of AI-driven systems including accounting for new testing dimensions such as AI-specific threat modeling, model reconnaissance, prompt injection, guardrail bypass, and more. AI red-teaming…
-
The state of ransomware: Fragmented but still potent despite takedowns
by
in SecurityNews
Tags: ai, alphv, antivirus, attack, backup, cloud, control, cyber, cybercrime, cybersecurity, data, ddos, detection, endpoint, extortion, firewall, group, incident response, intelligence, law, leak, LLM, lockbit, malware, network, ransom, ransomware, service, software, tactics, threat, tool, usa, zero-trustRunners and riders on the rise: Smaller, more agile ransomware groups like Lynx (INC rebrand), RansomHub (a LockBit sub-group), and Akira filled the void after major takedowns, collectively accounting for 54% of observed attacks, according to a study by managed detection and response firm Huntress.RansomHub RaaS has quickly risen in prominence by absorbing displaced operators…
-
The Invisible Battlefield Behind LLM Security Crisis
by
in SecurityNewsOverview In recent years, with the wide application of open-source LLMs such as DeepSeek and Ollama, global enterprises are accelerating the private deployment of LLMs. This wave not only improves the efficiency of enterprises, but also increases the risk of data security leakage. According to NSFOCUS Xingyun Lab, from January to February 2025 alone, five…The…
-
Researchers Jailbreak 17 Popular LLM Models to Reveal Sensitive Data
by
in SecurityNewsIn a recent study published by Palo Alto Networks’ Threat Research Center, researchers successfully jailbroke 17 popular generative AI (GenAI) web products, exposing vulnerabilities in their safety measures. The investigation aimed to assess the effectiveness of jailbreaking techniques in bypassing the guardrails of large language models (LLMs), which are designed to prevent the generation of…
-
JFrog Integration mit NVIDIA NIM Microservices beschleunigt GenAI-Bereitstellung
by
in SecurityNewsDie neue Integration beschleunigt die Bereitstellung von GenAI- und LLM-Modellen und erhöht Transparenz, Rückverfolgbarkeit und Vertrauen. Performance und Sicherheit sind entscheidend für erfolgreiche KI-Bereitstellungen in Unternehmen. First seen on infopoint-security.de Jump to article: www.infopoint-security.de/jfrog-integration-mit-nvidia-nim-microservices-beschleunigt-genai-bereitstellung/a40069/