Tag: LLM
-
Invisible C2″Š”, “Šthanks to AI-powered techniques
by
in SecurityNews
Tags: ai, api, attack, breach, business, chatgpt, cloud, communications, control, cyberattack, cybersecurity, data, defense, detection, dns, edr, email, encryption, endpoint, hacker, iot, LLM, malicious, malware, ml, monitoring, network, office, openai, powershell, service, siem, soc, strategy, threat, tool, update, vulnerability, zero-trustInvisible C2″Š”, “Šthanks to AI-powered techniques Just about every cyberattack needs a Command and Control (C2) channel”Š”, “Ša way for attackers to send instructions to compromised systems and receive stolen data. This gives us all a chance to see attacks that are putting us at risk. LLMs can help attackers avoid signature based detection Traditionally, C2…
-
Generative AI red teaming: Tips and techniques for putting LLMs to the test
by
in SecurityNewsDefining objectives and scopeAssembling a teamThreat modelingAddressing the entire application stackDebriefing, post-engagement analysis, and continuous improvementGenerative AI red teaming complements traditional red teaming by focusing on the nuanced and complex aspects of AI-driven systems including accounting for new testing dimensions such as AI-specific threat modeling, model reconnaissance, prompt injection, guardrail bypass, and more. AI red-teaming…
-
The state of ransomware: Fragmented but still potent despite takedowns
by
in SecurityNews
Tags: ai, alphv, antivirus, attack, backup, cloud, control, cyber, cybercrime, cybersecurity, data, ddos, detection, endpoint, extortion, firewall, group, incident response, intelligence, law, leak, LLM, lockbit, malware, network, ransom, ransomware, service, software, tactics, threat, tool, usa, zero-trustRunners and riders on the rise: Smaller, more agile ransomware groups like Lynx (INC rebrand), RansomHub (a LockBit sub-group), and Akira filled the void after major takedowns, collectively accounting for 54% of observed attacks, according to a study by managed detection and response firm Huntress.RansomHub RaaS has quickly risen in prominence by absorbing displaced operators…
-
The Invisible Battlefield Behind LLM Security Crisis
by
in SecurityNewsOverview In recent years, with the wide application of open-source LLMs such as DeepSeek and Ollama, global enterprises are accelerating the private deployment of LLMs. This wave not only improves the efficiency of enterprises, but also increases the risk of data security leakage. According to NSFOCUS Xingyun Lab, from January to February 2025 alone, five…The…
-
Researchers Jailbreak 17 Popular LLM Models to Reveal Sensitive Data
by
in SecurityNewsIn a recent study published by Palo Alto Networks’ Threat Research Center, researchers successfully jailbroke 17 popular generative AI (GenAI) web products, exposing vulnerabilities in their safety measures. The investigation aimed to assess the effectiveness of jailbreaking techniques in bypassing the guardrails of large language models (LLMs), which are designed to prevent the generation of…
-
JFrog Integration mit NVIDIA NIM Microservices beschleunigt GenAI-Bereitstellung
by
in SecurityNewsDie neue Integration beschleunigt die Bereitstellung von GenAI- und LLM-Modellen und erhöht Transparenz, Rückverfolgbarkeit und Vertrauen. Performance und Sicherheit sind entscheidend für erfolgreiche KI-Bereitstellungen in Unternehmen. First seen on infopoint-security.de Jump to article: www.infopoint-security.de/jfrog-integration-mit-nvidia-nim-microservices-beschleunigt-genai-bereitstellung/a40069/
-
News alert: Hunters announces ‘Pathfinder AI’ to enhance detection and response in SOC workflows
Boston and Tel Aviv, Mar. 4, 2025, CyberNewswire, Hunters, the leader in next-generation SIEM, today announced Pathfinder AI, a major step toward a more AI-driven SOC. Building on Copilot AI, which is already transforming SOC workflows with LLM-powered… (more”¦) First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/03/news-alert-hunters-announces-pathfinder-ai-to-enhance-detection-and-response-in-soc-workflows/
-
it’s.BB e.V. lädt zu Online-Veranstaltung ein: Sicherheitsfragen im Kontext der LLM-Nutzung
by
in SecurityNews
Tags: LLMFirst seen on datensicherheit.de Jump to article: www.datensicherheit.de/its-bb-e-v-einladung-online-veranstaltung-sicherheitsfragen-kontext-llm-nutzung
-
Pathfinder AI Hunters Announces New AI Capabilities for Smarter SOC Automation
by
in SecurityNewsPathfinder AI expands Hunters’ vision for AI-driven SOCs, introducing Agentic AI for autonomous investigation and response. Hunters, the leader in next-generation SIEM, today announced Pathfinder AI, a major step toward a more AI-driven SOC. Building on Copilot AI, which is already transforming SOC workflows with LLM-powered investigation guidance, Hunters is introducing its Agentic AI vision,…
-
LLMjacking Hackers Abuse GenAI With AWS NHIs to Hijack Cloud LLMs
by
in SecurityNewsIn a concerning development, cybercriminals are increasingly targeting cloud-based generative AI (GenAI) services in a new attack vector dubbed >>LLMjacking.
-
LLMs Are Posing a Threat to Content Security
by
in SecurityNewsWith the wide application of large language models (LLM) in various fields, their potential risks and threats have gradually become prominent. “Content security” caused by inaccurate or misleading information is becoming a security concern that cannot be ignored. Unfairness and bias, adversarial attacks, malicious code generation, and exploitation of security vulnerabilities continue to raise risk…The…
-
Forscher entdecken LLM-Sicherheitsrisiko
Forscher haben Anmeldeinformationen in den Trainingsdaten von Large Language Models entdeckt.Beliebte LLMs wie DeepSeek werden mit Common Crawl trainiert, einem riesigen Datensatz mit Website-Informationen. Forscher von Truffle Security haben kürzlich einen Datensatz des Webarchives analysiert, der über 250 Milliarden Seiten umfasst und Daten von 47,5 Millionen Hosts enthält. Dabei stellten sie fest, dass rund 12.000…
-
12K hardcoded API keys and passwords found in public LLM training data
First seen on scworld.com Jump to article: www.scworld.com/news/12k-hardcoded-api-keys-and-passwords-found-in-public-llm-training-data
-
Microsoft files lawsuit against LLMjacking gang that bypassed AI safeguards
by
in SecurityNewsLLMjacking can cost organizations a lot of money: LLMjacking is a continuation of the cybercriminal practice of abusing stolen cloud account credentials for various illegal operations, such as cryptojacking, abusing hacked cloud computing resources to mine cryptocurrency. The difference is that large quantities of API calls to LLMs can quickly rack up huge costs, with…
-
ISMG Editors: Black Basta Falls, Is Ransomware on the Ropes?
by
in SecurityNewsAlso: U.S. Health Data Privacy Crackdowns, Reality vs. Hype of LLMs in Security. In this week’s update, four editors with ISMG explore the crumbling state of ransomware group Black Basta and implications for other cybercrime gangs, the expanding impact of U.S. health data privacy laws, and whether large language models are truly what they seem.…
-
12,000+ API Keys and Passwords Found in Public Datasets Used for LLM Training
by
in SecurityNewsA dataset used to train large language models (LLMs) has been found to contain nearly 12,000 live secrets, which allow for successful authentication.The findings once again highlight how hard-coded credentials pose a severe security risk to users and organizations alike, not to mention compounding the problem when LLMs end up suggesting insecure coding practices to…
-
Integration with Gloo Gateway – Impart Security
by
in SecurityNewsSecuring Web apps, APIs, & LLMs Just Got Easier: Impart’s Native Integration with Gloo Gateway APIs are the backbone of modern applications, but they’re also one of the biggest attack surfaces. As API threats evolve and Large Language Model (LLM) security becomes a pressing concern, organizations need fast, efficient, and easy-to-deploy solutions to protect their…
-
More Research Showing AI Breaking the Rules
by
in SecurityNewsThese researchers had LLMs play chess against better opponents. When they couldn’t win, they sometimes resorted to cheating. Researchers gave the models a seemingly impossible task: to win against Stockfish, which is one of the strongest chess engines in the world and a much better player than any human, or any of the AI models…
-
DEF CON 32 Decoding Galah, An LLM Powered Web Honeypot
by
in SecurityNewsAuthors/Presenters: Adel Karimi Our sincere appreciation to DEF CON, and the Authors/Presenters for publishing their erudite DEF CON 32 content. Originating from the conference’s events located at the Las Vegas Convention Center; and via the organizations YouTube channel. Permalink First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/02/def-con-32-decoding-galah-an-llm-powered-web-honeypot/
-
What is SIEM? Improving security posture through event log data
by
in SecurityNews
Tags: access, ai, api, automation, ciso, cloud, compliance, data, defense, detection, edr, endpoint, firewall, fortinet, gartner, google, guide, ibm, infrastructure, intelligence, kubernetes, LLM, microsoft, mitigation, mobile, monitoring, network, openai, regulation, risk, router, security-incident, service, siem, soar, soc, software, threat, toolAt its core, a SIEM is designed to parse and analyze various log files, including firewalls, servers, routers and so forth. This means that SIEMs can become the central “nerve center” of a security operations center, driving other monitoring functions to resolve the various daily alerts.Added to this data are various threat intelligence feeds that…
-
New LLM Vulnerability Exposes AI Models Like ChatGPT to Exploitation
by
in SecurityNewsA significant vulnerability has been identified in large language models (LLMs) such as ChatGPT, raising concerns over their susceptibility to adversarial attacks. Researchers have highlighted how these models can be manipulated through techniques like prompt injection, which exploit their text-generation capabilities to produce harmful outputs or compromise sensitive information. Prompt Injection: A Growing Cybersecurity Challenge…
-
DarkMind: A Novel Backdoor Attack Exploiting Customized LLMs’ Reasoning Capabilities
by
in SecurityNewsThe rise of customized large language models (LLMs) has revolutionized artificial intelligence applications, enabling businesses and individuals to leverage advanced reasoning capabilities for complex tasks. However, this rapid adoption has also exposed critical vulnerabilities. A groundbreaking study by Zhen Guo and Reza Tourani introduces DarkMind, a novel backdoor attack targeting the reasoning processes of customized…
-
CISOs Brace for LLM-Powered Attacks: Key Strategies to Stay Ahead
For chief information security officers (CISOs), understanding and mitigating the security risks associated with LLMs is paramount. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/02/cisos-brace-for-llm-powered-attacks-key-strategies-to-stay-ahead/
-
Top 5 ways attackers use generative AI to exploit your systems
by
in SecurityNews
Tags: access, ai, attack, authentication, awareness, banking, captcha, chatgpt, china, control, cyber, cybercrime, cybersecurity, defense, detection, exploit, extortion, finance, flaw, fraud, group, hacker, intelligence, LLM, malicious, malware, network, phishing, ransomware, resilience, service, spam, tactics, theft, threat, tool, vulnerability, zero-dayFacilitating malware development: Artificial intelligence can also be used to generate more sophisticated or at least less labour-intensive malware.For example, cybercriminals are using gen AI to create malicious HTML documents. The XWorm attack, initiated by HTML smuggling, which contains malicious code that downloads and runs the malware, bears the hallmarks of development via AI.”The loader’s…
-
Datenleck durch GenAI-Nutzung
by
in SecurityNews
Tags: ai, chatgpt, ciso, compliance, data-breach, gartner, LLM, risk, strategy, tool, training, vulnerabilityViele Mitarbeiter teilen sensible Unternehmensdaten, wenn sie generative KI-Apps anwenden.Laut einem aktuellen Bericht über Gen-AI-Datenlecks von Harmonic enthielten 8,5 Prozent der Mitarbeiteranfragen an beliebte LLMs sensible Daten, was zu Sicherheits-, Compliance-, Datenschutz- und rechtlichen Bedenken führte.Der Security-Spezialist hat im vierten Quartal 2024 Zehntausende von Eingabeaufforderungen an ChatGPT, Copilot, Gemini, Claude und Perplexity analysiert. Dabei stellte…