Tag: LLM
-
DEF CON 32 Decoding Galah, An LLM Powered Web Honeypot
by
in SecurityNewsAuthors/Presenters: Adel Karimi Our sincere appreciation to DEF CON, and the Authors/Presenters for publishing their erudite DEF CON 32 content. Originating from the conference’s events located at the Las Vegas Convention Center; and via the organizations YouTube channel. Permalink First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/02/def-con-32-decoding-galah-an-llm-powered-web-honeypot/
-
What is SIEM? Improving security posture through event log data
by
in SecurityNews
Tags: access, ai, api, automation, ciso, cloud, compliance, data, defense, detection, edr, endpoint, firewall, fortinet, gartner, google, guide, ibm, infrastructure, intelligence, kubernetes, LLM, microsoft, mitigation, mobile, monitoring, network, openai, regulation, risk, router, security-incident, service, siem, soar, soc, software, threat, toolAt its core, a SIEM is designed to parse and analyze various log files, including firewalls, servers, routers and so forth. This means that SIEMs can become the central “nerve center” of a security operations center, driving other monitoring functions to resolve the various daily alerts.Added to this data are various threat intelligence feeds that…
-
New LLM Vulnerability Exposes AI Models Like ChatGPT to Exploitation
by
in SecurityNewsA significant vulnerability has been identified in large language models (LLMs) such as ChatGPT, raising concerns over their susceptibility to adversarial attacks. Researchers have highlighted how these models can be manipulated through techniques like prompt injection, which exploit their text-generation capabilities to produce harmful outputs or compromise sensitive information. Prompt Injection: A Growing Cybersecurity Challenge…
-
DarkMind: A Novel Backdoor Attack Exploiting Customized LLMs’ Reasoning Capabilities
by
in SecurityNewsThe rise of customized large language models (LLMs) has revolutionized artificial intelligence applications, enabling businesses and individuals to leverage advanced reasoning capabilities for complex tasks. However, this rapid adoption has also exposed critical vulnerabilities. A groundbreaking study by Zhen Guo and Reza Tourani introduces DarkMind, a novel backdoor attack targeting the reasoning processes of customized…
-
CISOs Brace for LLM-Powered Attacks: Key Strategies to Stay Ahead
For chief information security officers (CISOs), understanding and mitigating the security risks associated with LLMs is paramount. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/02/cisos-brace-for-llm-powered-attacks-key-strategies-to-stay-ahead/
-
Top 5 ways attackers use generative AI to exploit your systems
by
in SecurityNews
Tags: access, ai, attack, authentication, awareness, banking, captcha, chatgpt, china, control, cyber, cybercrime, cybersecurity, defense, detection, exploit, extortion, finance, flaw, fraud, group, hacker, intelligence, LLM, malicious, malware, network, phishing, ransomware, resilience, service, spam, tactics, theft, threat, tool, vulnerability, zero-dayFacilitating malware development: Artificial intelligence can also be used to generate more sophisticated or at least less labour-intensive malware.For example, cybercriminals are using gen AI to create malicious HTML documents. The XWorm attack, initiated by HTML smuggling, which contains malicious code that downloads and runs the malware, bears the hallmarks of development via AI.”The loader’s…
-
Datenleck durch GenAI-Nutzung
by
in SecurityNews
Tags: ai, chatgpt, ciso, compliance, data-breach, gartner, LLM, risk, strategy, tool, training, vulnerabilityViele Mitarbeiter teilen sensible Unternehmensdaten, wenn sie generative KI-Apps anwenden.Laut einem aktuellen Bericht über Gen-AI-Datenlecks von Harmonic enthielten 8,5 Prozent der Mitarbeiteranfragen an beliebte LLMs sensible Daten, was zu Sicherheits-, Compliance-, Datenschutz- und rechtlichen Bedenken führte.Der Security-Spezialist hat im vierten Quartal 2024 Zehntausende von Eingabeaufforderungen an ChatGPT, Copilot, Gemini, Claude und Perplexity analysiert. Dabei stellte…
-
LLMJacking: Sysdig entdeckt neue Angriffe auf DeepSeek
by
in SecurityNewsMit der steigenden Nachfrage nach leistungsfähigen LLMs wächst auch der Missbrauch durch LLMjacking. Schwarzmarktplätze für gestohlene API-Zugänge florieren, und Untergrund-Anbieter passen ihre Dienste kontinuierlich an. Angreifer haben ihre Techniken verfeinert und implementieren neue Modelle wie DeepSeek in kürzester Zeit. First seen on infopoint-security.de Jump to article: www.infopoint-security.de/llmjacking-sysdig-entdeckt-neue-angriffe-auf-deepseek/a39732/
-
Could you Spot a Digital Twin at Work? Get Ready for Hyper-Personalized Attacks
The world is worried about deepfakes. Research conducted in the U.S. and Australia finds that nearly three-quarters of respondents feel negatively about them, associating the AI-generated phenomenon with fraud and misinformation. But in the workplace, we’re more likely to let our guard down. That’s bad news for businesses as the prospect of LLM-trained malicious digital..…
-
LLMjacking-Angriffe auf KI-Modelle wie DeepSeek
by
in SecurityNewsBenutzt ihr KI-Modelle wie DeepSeek? Dann findet jetzt heraus, wie LLMjacking-Angriffe diese LLMs und eure Daten bedrohen. First seen on tarnkappe.info Jump to article: tarnkappe.info/artikel/kuenstliche-intelligenz/llmjacking-angriffe-auf-ki-modelle-wie-deepseek-309920.html
-
Hackers Monetize LLMjacking, Selling Stolen AI Access for $30 per Month
LLMjacking attacks target DeepSeek, racking up huge cloud costs. Sysdig reveals a black market for LLM access has… First seen on hackread.com Jump to article: hackread.com/hackers-monetize-llmjacking-selling-stolen-ai-access/
-
Autonomous LLMs Reshaping Pen Testing: Real-World AD Breaches and the Future of Cybersecurity
by
in SecurityNewsLarge Language Models (LLMs) are transforming penetration testing (pen testing), leveraging their advanced reasoning and automation capabilities to simulate sophisticated cyberattacks. Recent research demonstrates how autonomous LLM-driven systems can effectively perform assumed breach simulations in enterprise environments, particularly targeting Microsoft Active Directory (AD) networks. These advancements mark a significant departure from traditional pen testing methods,…
-
DeepSeek-R1 LLM Fails Over Half of Jailbreak Attacks in Security Analysis
by
in SecurityNewsDeepSeek-R1 LLM fails 58% of jailbreak attacks in Qualys security analysis. Learn about the vulnerabilities, compliance concerns, and risks for enterprise adoption. First seen on hackread.com Jump to article: hackread.com/deepseek-r1-llm-fail-jailbreak-attack-security-analysis/
-
Cybercriminals Eye DeepSeek, Alibaba LLMs for Malware Development
by
in SecurityNewsCheck Point has observed cybercriminals toy with Alibaba’s Qwen LLM to develop infostealers First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/deepseek-alibaba-llms-malware/
-
Anomalies are not Enough
by
in SecurityNews
Tags: ai, attack, ciso, communications, country, cybersecurity, data, data-breach, defense, email, government, LLM, mail, marketplace, mitre, ml, network, resilience, risk, service, siem, threat, toolMitre Att&ck as Context Introduction: A common theme of science fiction authors, and these days policymakers and think tanks, is how will the humans work with the machines, as the machines begin to surpass us across many dimensions. In cybersecurity humans and their systems are at a crossroads, their limitations daily exposed by ever more innovative,…
-
Hackers impersonate DeepSeek to distribute malware
by
in SecurityNews
Tags: access, ai, api, attack, automation, breach, china, cloud, computer, credentials, cyberattack, data, hacker, infrastructure, leak, LLM, malicious, malware, ml, pypi, threat, tool, vulnerabilityTo make things worse than they already are for DeepSeek, hackers are found flooding the Python Package Index (PyPI) repository with fake DeepSeek packages carrying malicious payloads.According to a discovery made by Positive Expert Security Center (PT ESC), a campaign was seen using this trick to dupe unsuspecting developers, ML engineers, and AI enthusiasts looking…
-
Hacker nutzen Google Gemini zur Verstärkung von Angriffen
by
in SecurityNews
Tags: access, ai, apt, chatgpt, ciso, cyber, cyberattack, ddos, framework, google, governance, government, group, hacker, intelligence, LLM, microsoft, military, north-korea, openai, phishing, threat, tool, vulnerability, zero-day -
DeepSeek’s R1 curiously tells El Reg reader: ‘My guidelines are set by OpenAI’
by
in SecurityNewsDespite impressive benchmarks, the Chinese-made LLM is not without some interesting issues First seen on theregister.com Jump to article: www.theregister.com/2025/01/27/deepseek_r1_identity/
-
Nation-State Hackers Abuse Gemini AI Tool
Google highlighted significant abuse of its Gemini LLM tool by nation state actors to support malicious activities, including research and malware development First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/nation-state-abuse-gemini-ai/
-
Neue Ransomware-Gruppe Funksec profitiert von LLMs
by
in SecurityNews
Tags: access, ai, cyberattack, data-breach, ddos, extortion, group, leak, LLM, mail, malware, powershell, ransomware, rust, service, tool, usa, windows -
New ransomware group Funksec is quickly gaining traction
by
in SecurityNews
Tags: access, ai, attack, computer, control, country, cybercrime, data, data-breach, ddos, detection, email, encryption, extortion, government, group, leak, LLM, malware, password, powershell, ransom, ransomware, russia, rust, service, threat, tool, usa, windowsThreat reports for December showed a newcomer to the ransomware-as-a-service (RaaS) landscape quickly climbing the ranks. Called Funksec, this group appears to be leveraging generative AI in its malware development and its founders are tied to hacktivist activity.Funksec was responsible for 103 out of 578 ransomware attacks tracked by security firm NCC Group in December,…
-
A pickle in Meta’s LLM code could allow RCE attacks
by
in SecurityNews
Tags: ai, attack, breach, cve, cvss, data, data-breach, exploit, flaw, framework, github, LLM, malicious, ml, network, open-source, rce, remote-code-execution, software, supply-chain, technology, theft, vulnerabilityMeta’s large language model (LLM) framework, Llama, suffers a typical open-source coding oversight, potentially allowing arbitrary code execution on servers leading to resource theft, data breaches, and AI model takeover.The flaw, tracked as CVE-2024-50050, is a critical deserialization bug belonging to a class of vulnerabilities arising from the improper use of the open-source library (pyzmq)…
-
The Security Risk of Rampant Shadow AI
by
in SecurityNewsWhile employees want to take advantage of the increased efficiency of GenAI and LLMs, CISOs and IT teams must be diligent and stay on top of the most up-to-date security regulations. First seen on darkreading.com Jump to article: www.darkreading.com/vulnerabilities-threats/security-risk-rampant-shadow-ai
-
Sweet Security launches LLM-powered cloud detection engine
by
in SecurityNewsFirst seen on scworld.com Jump to article: www.scworld.com/brief/sweet-security-launches-llm-powered-cloud-detection-engine
-
How organizations can secure their AI code
by
in SecurityNews
Tags: ai, application-security, awareness, backdoor, breach, business, chatgpt, ciso, compliance, control, credentials, crime, cybersecurity, data, data-breach, finance, github, healthcare, LLM, malicious, ml, open-source, organized, programming, risk, risk-management, software, startup, strategy, supply-chain, technology, tool, training, vulnerabilityIn 2023, the team at data extraction startup Reworkd was under tight deadlines. Investors pressured them to monetize the platform, and they needed to migrate everything from Next.js to Python/FastAPI. To speed things up, the team decided to turn to ChatGPT to do some of the work. The AI-generated code appeared to function, so they…