Tag: LLM
-
Grok 4 mit Jailbreak-Angriff geknackt
by
in SecurityNewsDas neue KI-Sprachmodell Grok 4 ist anfällig für Jailbreak-Angriffe.Erst vor wenigen Tagen präsentierte Elon Musk sein neues KI-Sprachmodell Grok 4. Doch schon kurz nach der Veröffentlichung gelang es Forschern von NeuralTrust, die Schutzvorkehrungen des Tools zu umgehen. Sie brachten es dazu, Anweisungen zur Herstellung eines Molotowcocktails zu geben. Dabei kombinierten sie zwei fortschrittliche Exploit-Techniken. Sowohl…
-
Code Execution Through Email: How I Used Claude to Hack Itself
by
in SecurityNewsYou don’t always need a vulnerable app to pull off a successful exploit. Sometimes all it takes is a well-crafted email, an LLM agent, and a few “innocent” plugins. This is the story of how I used a Gmail message to trigger code execution through Claude Desktop, and how Claude itself (!) helped me plan..…
-
AI poisoning and the CISO’s crisis of trust
by
in SecurityNews
Tags: access, ai, breach, ceo, ciso, compliance, control, cybersecurity, data, defense, detection, disinformation, exploit, framework, healthcare, identity, infosec, injection, LLM, monitoring, network, privacy, RedTeam, resilience, risk, russia, saas, threat, tool, trainingFoundation models began parroting Kremlin-aligned propaganda after ingesting material seeded by a large-scale Russian network known as the “Pravda Network.”A high-profile AI-generated reading list published by two American news outlets included 10 hallucinated book titles mistakenly attributed to real authors.Researchers showed that imperceptible perturbations in training images could trigger misclassification. Researchers in the healthcare domain demonstrated…
-
New Grok-4 AI breached within 48 hours using ‘whispered’ jailbreaks
by
in SecurityNewsSafety systems cheated by contextual tricks: The attack exploits Grok 4’s contextual memory, echoing its own earlier statements back to it, and gradually guides it toward a goal without raising alarms. Combining Crescendo with Echo Chamber, the jailbreak technique that achieved over 90% success in hate speech and violence tests across top LLMs, strengthens the…
-
Putting AI-assisted ‘vibe hacking’ to the test
by
in SecurityNews
Tags: access, ai, attack, chatgpt, cyber, cybercrime, cybersecurity, data-breach, defense, exploit, hacking, least-privilege, LLM, network, open-source, strategy, threat, tool, vulnerability, zero-trustUnderwhelming results: For each LLM test, the researchers repeated each task prompt five times to account for variability in responses. For exploit development tasks, models that failed the first task were not allowed to progress to the second, more complex one. The team tested 16 open-source models from Hugging Face that claimed to have been…
-
MCP-Server von Versa soll KI-Integration und Admin-Produktivität im gesamten Netzwerk- und Sicherheitsbereich optimieren
Der neue Versa-Model-Context-Protocol-Server ermöglicht LLM-gesteuerten Assistenten und intern entwickelten Copiloten die sichere Abfrage von Versa-Systemen und reduziert die Mean-Time-to-Resolution um bis zu 45 Prozent. Der Anbieter von Universal-Secure-Access-Service-Edge (SASE), Versa Networks, hat die Veröffentlichung seines MCP-Servers bekannt gegeben ein leistungsstarkes Dienstprogramm, das Kunden dabei helfen soll, ihre Agentic-AI-Tools und -Plattformen in die
-
JFrog entdeckt kritische RCE-Sicherheitslücke, die mcp-remote-Clients kapern kann
by
in SecurityNewsDas Tool mcp-remote gewann an Popularität in der KI-Community, als erste Remote-MCP-Server-Implementierungen aufgetaucht waren. Diese ermöglichten es LLM-Modellen, mit externen Daten und Tools zu interagieren. First seen on infopoint-security.de Jump to article: www.infopoint-security.de/jfrog-entdeckt-kritische-rce-sicherheitsluecke-die-mcp-remote-clients-kapern-kann/a41370/
-
LLMs Fall Short in Vulnerability Discovery and Exploitation
by
in SecurityNewsForescout found that most LLMs are unreliable in vulnerability research and exploit tasks, with threat actors still skeptical about using tools for these purposes First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/llms-fall-vulnerability-discovery/
-
MCP is fueling agentic AI, and introducing new security risks
by
in SecurityNews
Tags: access, ai, api, attack, authentication, best-practice, ceo, cloud, corporate, cybersecurity, gartner, injection, LLM, malicious, monitoring, network, office, open-source, penetration-testing, RedTeam, risk, service, supply-chain, technology, threat, tool, vulnerabilityMitigating MCP server risks: When it comes to using MCP servers there’s a big difference between developers using it for personal productivity and enterprises putting them into production use cases.Derek Ashmore, application transformation principal at Asperitas Consulting, suggests that corporate customers don’t rush on MCP adoption until the technology is safer and more of the…
-
Critical mcp”‘remote Vulnerability Enables LLM Clients to Remote Code Execution
by
in SecurityNewsThe JFrog Security Research team has discovered a critical security vulnerability in mcp-remote, a widely used tool that enables Large Language Model clients to communicate with remote servers, potentially allowing attackers to achieve full system compromise through remote code execution. Severe Security Flaw Affects Popular AI Tool CVE-2025-6514, rated with a critical CVSS score of…
-
Serious Flaws Patched in Model Context Protocol Tools
by
in SecurityNewsAlways Secure MCP Servers Connecting LLMs to External Systems, Experts Warn. Warning: Popular technology designed to make it easy for artificial intelligence tools to connect with external applications and data sources can be turned to malicious use. Researchers discovered two separate vulnerabilities tied to tools in the ecosystem around model context protocol, or MCP. First…
-
New AI Malware PoC Reliably Evades Microsoft Defender
by
in SecurityNewsWorried about hackers employing LLMs to write powerful malware? Using targeted reinforcement learning (RL) to train open source models in specific tasks has yielded the capability to do just that. First seen on darkreading.com Jump to article: www.darkreading.com/endpoint-security/ai-malware-poc-evades-microsoft-defender
-
Scholars sneaking phrases into papers to fool AI reviewers
by
in SecurityNewsUsing prompt injections to play a Jedi mind trick on LLMs First seen on theregister.com Jump to article: www.theregister.com/2025/07/07/scholars_try_to_fool_llm_reviewers/
-
AI Trust Score Ranks LLM Security
Startup Tumeryk’s AI Trust scorecard finds Google Gemini Pro 2.5 as the most trustworthy, with OpenAI’s GPT-4o mini a close second and DeepSeek and Alibaba Qwen scoring lowest. First seen on darkreading.com Jump to article: www.darkreading.com/cyber-risk/ai-trust-score-ranks-llm-security
-
Faster Not Bigger: New R1T2 LLM Combines DeepSeek Versions
by
in SecurityNews
Tags: LLMGerman Consultancy’s Latest LLM Aims to Reduce Costs, Preserve Reasoning Skills. Say hello to DeepSeek-TNG R1T2 Chimera, a large language model built by German firm TNG Consulting, using three different DeepSeek LLMs. The goal of R1T2 is to provide a faster LLM with more predictable performance that maintains full reasoning accuracy. First seen on govinfosecurity.com…
-
Incorrect links output by LLMs could lead to phishing, researchers say
by
in SecurityNewsFirst seen on scworld.com Jump to article: www.scworld.com/news/incorrect-links-output-by-llms-could-lead-to-phishing-researchers-say
-
OWASP unpacks GenAI security’s biggest risks to LLMs
by
in SecurityNewsFirst seen on scworld.com Jump to article: www.scworld.com/feature/owasp-unpacks-genai-securitys-biggest-risks-to-llms
-
Hallucinations May Open LLMs to Phishing Threats
by
in SecurityNewsFirst seen on scworld.com Jump to article: www.scworld.com/news/hallucinations-may-open-llms-to-phishing-threats
-
Analysis Surfaces Increased Usage of LLMs to Craft BEC Attacks
A Barracuda Networks analysis of unsolicited and malicious emails sent between February 2022 to April 2025 indicates 14% of the business email compromise (BEC) attacks identified were similarly created using a large language model (LLM). First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/07/analysis-surfaces-increased-usage-of-llms-to-craft-bec-attacks/
-
Report Finds LLMs Are Prone to Be Exploited by Phishing Campaigns
by
in SecurityNewsA report published this week by Netcraft, a provider of a platform for combating phishing attacks, finds that large language models (LLMs) might not be a reliable source when it comes to identifying where to log in to various websites. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/07/report-finds-llms-are-prone-to-be-exploited-by-phishing-campaigns/
-
How cybersecurity leaders can defend against the spur of AI-driven NHI
by
in SecurityNews
Tags: access, ai, attack, automation, breach, business, ciso, cloud, credentials, cybersecurity, data, data-breach, email, exploit, framework, gartner, governance, group, guide, identity, infrastructure, least-privilege, LLM, login, monitoring, password, phishing, RedTeam, risk, sans, service, software, technology, tool, vulnerabilityVisibility Yageo Group had so many problematic machine identities that information security operations manager Terrick Taylor says he is almost embarrassed to say this, even though the group has now automated the monitoring of both human and non-human identities and has a process for managing identity lifecycles. “Last time I looked at the portal, there…
-
Like SEO, LLMs May Soon Fall Prey to Phishing Scams
by
in SecurityNewsJust as attackers have used SEO techniques to poison search engine results, they could rinse and repeat with artificial intelligence and the responses LLMs generate from user prompts. First seen on darkreading.com Jump to article: www.darkreading.com/cyber-risk/seo-llms-fall-prey-phishing-scams
-
LLMs are guessing login URLs, and it’s a cybersecurity time bomb
by
in SecurityNews
Tags: ai, api, blockchain, cybersecurity, data, github, LLM, login, malicious, monitoring, office, risk, supply-chain, trainingGithub poisoning for AI training: Not all hallucinated URLs were unintentional. In an unrelated research, Netcraft found evidence of attackers deliberately poisoning AI systems by seeding GitHub with malicious code repositories.”Multiple fake GitHub accounts shared a project called Moonshot-Volume-Bot, seeded across accounts with rich bios, profile images, social media accounts and credible coding activity,” researchers…
-
The rise of the compliance super soldier: A new human-AI paradigm in GRC
by
in SecurityNews
Tags: ai, automation, awareness, compliance, control, governance, grc, jobs, law, LLM, metric, regulation, risk, skills, strategy, threat, tool, training, updateRegulatory acceleration: Global AI laws are evolving but remain fragmented and volatile. Toolchain convergence: Risk, compliance and engineering workflows are merging into unified platforms. Maturity asymmetry: Few organizations have robust genAI governance strategies, and even fewer have built dedicated AI risk teams. These forces create a scenario where GRC teams must evolve rapidly, from policy monitors to strategic…
-
We know GenAI is risky, so why aren’t we fixing its flaws?
by
in SecurityNewsEven though GenAI threats are a top concern for both security teams and leadership, the current level of testing and remediation for LLM and AI-powered applications isn’t … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/06/27/cobalt-research-llm-security-vulnerabilities/
-
Cybercriminals Exploit LLM Models to Enhance Hacking Activities
by
in SecurityNewsCybercriminals are increasingly leveraging large language models (LLMs) to amplify their hacking operations, utilizing both uncensored versions of these AI systems and custom-built criminal variants. LLMs, known for their ability to generate human-like text, write code, and solve complex problems, have become integral to various industries. However, their potential for misuse is evident as malicious…
-
How to make your multicloud security more effective
by
in SecurityNews
Tags: ai, automation, ciso, cloud, container, control, data, infrastructure, LLM, risk, risk-analysis, software, technology, threat, toolIs it time to repatriate to the data center?: Perhaps. Some organizations, such as Zoom, have moved workloads to on-premises because it provides more predictable performance for real-time needs of their apps. John Qian, who once worked there and now is the CISO for security vendor Aviatrix, tells CSO that Zoom uses all three of…