Tag: LLM
-
Hackers impersonate DeepSeek to distribute malware
by
in SecurityNews
Tags: access, ai, api, attack, automation, breach, china, cloud, computer, credentials, cyberattack, data, hacker, infrastructure, leak, LLM, malicious, malware, ml, pypi, threat, tool, vulnerabilityTo make things worse than they already are for DeepSeek, hackers are found flooding the Python Package Index (PyPI) repository with fake DeepSeek packages carrying malicious payloads.According to a discovery made by Positive Expert Security Center (PT ESC), a campaign was seen using this trick to dupe unsuspecting developers, ML engineers, and AI enthusiasts looking…
-
Hacker nutzen Google Gemini zur Verstärkung von Angriffen
by
in SecurityNews
Tags: access, ai, apt, chatgpt, ciso, cyber, cyberattack, ddos, framework, google, governance, government, group, hacker, intelligence, LLM, microsoft, military, north-korea, openai, phishing, threat, tool, vulnerability, zero-day -
DeepSeek’s R1 curiously tells El Reg reader: ‘My guidelines are set by OpenAI’
by
in SecurityNewsDespite impressive benchmarks, the Chinese-made LLM is not without some interesting issues First seen on theregister.com Jump to article: www.theregister.com/2025/01/27/deepseek_r1_identity/
-
Nation-State Hackers Abuse Gemini AI Tool
Google highlighted significant abuse of its Gemini LLM tool by nation state actors to support malicious activities, including research and malware development First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/nation-state-abuse-gemini-ai/
-
Neue Ransomware-Gruppe Funksec profitiert von LLMs
by
in SecurityNews
Tags: access, ai, cyberattack, data-breach, ddos, extortion, group, leak, LLM, mail, malware, powershell, ransomware, rust, service, tool, usa, windows -
New ransomware group Funksec is quickly gaining traction
by
in SecurityNews
Tags: access, ai, attack, computer, control, country, cybercrime, data, data-breach, ddos, detection, email, encryption, extortion, government, group, leak, LLM, malware, password, powershell, ransom, ransomware, russia, rust, service, threat, tool, usa, windowsThreat reports for December showed a newcomer to the ransomware-as-a-service (RaaS) landscape quickly climbing the ranks. Called Funksec, this group appears to be leveraging generative AI in its malware development and its founders are tied to hacktivist activity.Funksec was responsible for 103 out of 578 ransomware attacks tracked by security firm NCC Group in December,…
-
A pickle in Meta’s LLM code could allow RCE attacks
by
in SecurityNews
Tags: ai, attack, breach, cve, cvss, data, data-breach, exploit, flaw, framework, github, LLM, malicious, ml, network, open-source, rce, remote-code-execution, software, supply-chain, technology, theft, vulnerabilityMeta’s large language model (LLM) framework, Llama, suffers a typical open-source coding oversight, potentially allowing arbitrary code execution on servers leading to resource theft, data breaches, and AI model takeover.The flaw, tracked as CVE-2024-50050, is a critical deserialization bug belonging to a class of vulnerabilities arising from the improper use of the open-source library (pyzmq)…
-
The Security Risk of Rampant Shadow AI
by
in SecurityNewsWhile employees want to take advantage of the increased efficiency of GenAI and LLMs, CISOs and IT teams must be diligent and stay on top of the most up-to-date security regulations. First seen on darkreading.com Jump to article: www.darkreading.com/vulnerabilities-threats/security-risk-rampant-shadow-ai
-
Sweet Security launches LLM-powered cloud detection engine
by
in SecurityNewsFirst seen on scworld.com Jump to article: www.scworld.com/brief/sweet-security-launches-llm-powered-cloud-detection-engine
-
How organizations can secure their AI code
by
in SecurityNews
Tags: ai, application-security, awareness, backdoor, breach, business, chatgpt, ciso, compliance, control, credentials, crime, cybersecurity, data, data-breach, finance, github, healthcare, LLM, malicious, ml, open-source, organized, programming, risk, risk-management, software, startup, strategy, supply-chain, technology, tool, training, vulnerabilityIn 2023, the team at data extraction startup Reworkd was under tight deadlines. Investors pressured them to monetize the platform, and they needed to migrate everything from Next.js to Python/FastAPI. To speed things up, the team decided to turn to ChatGPT to do some of the work. The AI-generated code appeared to function, so they…
-
OpenAI’s ChatGPT crawler can be tricked into DDoSing sites, answering your queries
by
in SecurityNewsThe S in LLM stands for Security First seen on theregister.com Jump to article: www.theregister.com/2025/01/19/openais_chatgpt_crawler_vulnerability/
-
A Brief Guide for Dealing with ‘Humanless SOC’ Idiots
by
in SecurityNewsimage by Meta.AI lampooning humanless SOC My former “colleagues” have written several serious pieces of research about why a SOC without humans will never happen (“Predict 2025: There Will Never Be an Autonomous SOC”, “The “Autonomous SOC” Is A Pipe Dream”, “Stop Trying To Take Humans Out Of Security Operations”). But I wanted to write…
-
Cisco Unveils AI Defense to Stand Against Model Safety Risks
by
in SecurityNewsProduct Head Jeetu Patel on How AI Defense Ensures Secure LLM Operations at Runtime. Cisco’s AI Defense platform addresses emerging safety and security risks in AI. By leveraging insights from Robust Intelligence, it offers model validation, threat prevention and integrated guardrails to protect against evolving challenges such as hallucinations and prompt injection attacks. First seen…
-
OWASP’s New LLM Top 10 Shows Emerging AI Threats
by
in SecurityNewsUltimately, there is no replacement for an intuitive, security-focused developer working with the critical thinking required to drive down the risk of both AI and human error. First seen on darkreading.com Jump to article: www.darkreading.com/vulnerabilities-threats/owasps-llm-top-10-shows-emerging-ai-threats
-
Sweet Security Introduces Patent-Pending LLM-Powered Detection Engine, Reducing Cloud Detection Noise to 0.04%
by
in SecurityNewsTel Aviv, Israel, 15th January 2025, CyberNewsWire First seen on hackread.com Jump to article: hackread.com/sweet-security-introduces-patent-pending-llm-powered-detection-engine-reducing-cloud-detection-noise-to-0-04/
-
Sweet Security Leverages LLM to Improve Cloud Security
by
in SecurityNewsSweet Security today added a cloud detection engine to its cybersecurity portfolio that makes use of a large language model (LLM) to identify potential threats in real-time. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/01/sweet-security-leverages-llm-to-improve-cloud-security/
-
How to create de-identified embeddings with Tonic Textual Pinecone
by
in SecurityNewsTo protect private information stored in text embeddings, it’s essential to de-identify the text before embedding and storing it in a vector database. In this article, we’ll demonstrate how to de-identify and chunk text using Tonic Textual, and then easily embed these chunks and store the data in a Pinecone vector database to use for…
-
SOAR buyer’s guide: 11 security orchestration, automation, and response products, and how to choose
by
in SecurityNews
Tags: access, ai, api, attack, automation, business, chatgpt, cisco, cloud, compliance, container, cybersecurity, data, detection, edr, endpoint, firewall, fortinet, gartner, google, group, guide, Hardware, ibm, incident response, infrastructure, intelligence, jobs, LLM, malware, mandiant, marketplace, microsoft, mitigation, monitoring, network, okta, risk, saas, security-incident, service, siem, soar, soc, software, technology, threat, tool, training, vulnerability, vulnerability-management, zero-daySecurity orchestration, automation, and response (SOAR) has undergone a major transformation in the past few years. Features in each of the words in its description that were once exclusive to SOAR have bled into other tools. For example, responses can be found now in endpoint detection and response (EDR) tools. Orchestration is now a joint…
-
Forscher: KI sorgt für effektiveres Phishing
by
in SecurityNewsWie wirksam ist per LLM automatisch erzeugtes Phishing? Es ist gleichauf mit menschlich erzeugtem Spear-Phishing, sagen Forscher. First seen on heise.de Jump to article: www.heise.de/news/Forscher-KI-sorgt-fuer-effektiveres-Phishing-10232370.html
-
Sophos stellt Sprachmodell-Tool zur Verfügung – Tuning-Tool für LLMs als Open-Source-Programm
by
in SecurityNewsFirst seen on security-insider.de Jump to article: www.security-insider.de/sophosai-open-source-tool-large-language-models-a-7f503f54ce6f32d4c318a41e873e2a54/
-
Gen AI is transforming the cyber threat landscape by democratizing vulnerability hunting
by
in SecurityNews
Tags: ai, api, apt, attack, bug-bounty, business, chatgpt, cloud, computing, conference, credentials, cve, cyber, cybercrime, cyberespionage, cybersecurity, data, defense, detection, email, exploit, finance, firewall, flaw, framework, github, government, group, guide, hacker, hacking, incident response, injection, LLM, malicious, microsoft, open-source, openai, penetration-testing, programming, rce, RedTeam, remote-code-execution, service, skills, software, sql, tactics, threat, tool, training, update, vulnerability, waf, zero-dayGenerative AI has had a significant impact on a wide variety of business processes, optimizing and accelerating workflows and in some cases reducing baselines for expertise.Add vulnerability hunting to that list, as large language models (LLMs) are proving to be valuable tools in assisting hackers, both good and bad, in discovering software vulnerabilities and writing…
-
Will AI Code Generators Overcome Their Insecurities This Year?
by
in SecurityNewsIn just two years, LLMs have become standard for developers, and non-developers, to generate code, but companies still need to improve security processes to reduce software vulnerabilities. First seen on darkreading.com Jump to article: www.darkreading.com/application-security/will-ai-code-generators-overcome-their-insecurities-2025
-
Garak An Open Source LLM Vulnerability Scanner for AI Red-Teaming
by
in SecurityNewsGarak is a free, open-source tool specifically designed to test the robustness and reliability of Large Language Models (LLMs). Inspired by utilities like Nmap or Metasploit, Garak identifies potential weak points in LLMs by probing for issues such as hallucinations, data leakage, prompt injections, toxicity, jailbreak effectiveness, and misinformation propagation. This guide covers everything you…
-
Im Kontext der Datensicherheit sind LLMs als Menschen zu betrachten
by
in SecurityNewsGroße Sprachmodelle und die Frage der Data Security Sicherheitsfragen rund um LLMs. Angesichts der rasanten KI-Entwicklung wird immer deutlicher, dass die grundlegenden Leitplanken, Plausibilitätsprüfungen und prompt-basierten Sicherheitsmaßnahmen, die derzeit gelten, durchlässig und unzureichend sind. Bei der Entwicklung von Strategien zur Verbesserung der Datensicherheit in KI-Workloads ist es entscheidend, die Perspektive zu ändern und… First seen…
-
New LLM jailbreak uses models’ evaluation skills against them
by
in SecurityNewsFirst seen on scworld.com Jump to article: www.scworld.com/news/new-llm-jailbreak-uses-models-evaluation-skills-against-them