Tag: LLM
-
LLMs Are a New Type of Insider Adversary
The inherent intelligence of large language models gives them unprecedented capabilities like no other enterprise tool before. First seen on darkreading.com Jump to article: www.darkreading.com/vulnerabilities-threats/llms-are-new-type-insider-adversary
-
LLMs Fail Middle School Word Problems, Say Apple Researchers
AI Mimics Reasoning Without Understanding, Struggles With Irrelevant Data. Cutting-edge large language models would fail eighth grade math, say artificial intelligence researchers at Apple – likely because AI is mimicking the process of reasoning rather than actually engaging in it. Researchers asked LLMs to solve math word problems. First seen on govinfosecurity.com Jump to article:…
-
AI Hype Drives Demand For ML SecOps Skills
Companies are putting AI in just about all of their products, which opens up new security holes. LLM SecOps and ML SecOps are becoming must-have skills. First seen on darkreading.com Jump to article: www.darkreading.com/cybersecurity-careers/ai-hype-drives-demand-ml-secops-skills
-
LLM attacks take just 42 seconds on average, 20% of jailbreaks succeed
First seen on scworld.com Jump to article: www.scworld.com/news/llm-attacks-take-just-42-seconds-on-average-20-of-jailbreaks-succeed
-
How to Safeguard Enterprises from Exploitation of AI Applications
Artificial intelligence may be about to transform the world. But there are security risks that need to be understood and several areas that can be exploited. Find out what these are and how to protect the enterprise in this TechRepublic Premium feature by Drew Robb. Featured text from the download: LLM SECURITY WEAKNESSES Research by…
-
Wachsende Bedrohung durch LLM-Jacking
Das Sysdig-Threat-Research-Team (TRT) warnt vor einer alarmierenden Zunahme sogenannter “LLM-Jacking”-Angriffe. Dabei verschaffen sich Cyberkriminelle mit gestohlenen Cloud-Login-Daten illegal Zugang zu Large-Language-Models (LLMs). Seit der Entdeckung des LLM-Jackings durch das Sysdig-TRT haben die Angriffe stark zugenommen. Angreifer verwenden gestohlene Anmeldeinformationen, um Zugang zu teuren KI-Modellen zu erhalten und diese für eigene Zwecke zu nutzen. Dies kann…
-
Bedrock GenAI Infrastructure Subjected to LLM Hijacking
First seen on scworld.com Jump to article: www.scworld.com/brief/bedrock-genai-infrastructure-subjected-to-llm-hijacking
-
‘LLM hijacking’ of cloud infrastructure uncovered by researchers
First seen on scworld.com Jump to article: www.scworld.com/news/llm-hijacking-of-cloud-infrastructure-uncovered-by-researchers
-
LLM Hijacking Of Cloud Infrastructure Uncovered By Researchers
First seen on packetstormsecurity.com Jump to article: packetstormsecurity.com/news/view/36433/LLM-Hijacking-Of-Cloud-Infrastructure-Uncovered-By-Researchers.html
-
AI agent promotes itself to sysadmin, trashes boot sequence
Fun experiment, but yeah, don’t pipe an LLM raw into /bin/bash First seen on theregister.com Jump to article: www.theregister.com/2024/10/02/ai_agent_trashes_pc/
-
Decoding the Double-Edged Sword: The Role of LLM in Cybersecurity
Large Language Models (LLMs) are essentially language models with a vast number of parameters that have undergone extensive training to understand and process human language. They have been trained on a wide array of texts, enabling them to assist in problem-solving across various domains. Security professionals are also exploring the potential of LLMs to aid…The…
-
Splunk Urges Australian Organisations to Secure LLMs
Prompt injection and data leakage are among the top threats posed by LLMs, but they can be mitigated using existing security logging technologies. First seen on techrepublic.com Jump to article: www.techrepublic.com/article/splunk-secure-llms/
-
AI code helpers just can’t stop inventing package names
LLMs are helpful, but don’t use them for anything important First seen on theregister.com Jump to article: www.theregister.com/2024/09/30/ai_code_helpers_invent_packages/
-
How the Promise of AI Will Be a Nightmare for Data Privacy
But as we start delegating LLMs and LAMs the authority to act on our behalf (our personal avatars), we create a true data privacy nightmare. First seen on securityboulevard.com Jump to article: securityboulevard.com/2024/09/how-the-promise-of-ai-will-be-a-nightmare-for-data-privacy/
-
LLM-Jacking: Hacker zielen auf LLM-Zugänge
Nutzer von KI-Cloud-Diensten wie Amazon Bedrock geraten zunehmend ins Visier von Hackern, die eine neue Angriffstaktik namens LLM-Jacking verwenden. First seen on csoonline.com Jump to article: www.csoonline.com/de/a/hacker-zielen-auf-llm-zugaenge
-
Countering the Codex: The Rise of LLM Platform Abuse
A New Threat Vector Emerges Consider this perspective: You’re adept at navigating the rapidly evolving threat landscape, because you’re experienced. Your company stands as one of the most targeted enterprises by bad actors globally, and that means you’ve just about seen it all. Accustomed to the pace, variety and unpredictability of digital attacks, you maintain……
-
Creating An AI Honeypot To Engage With Attackers Sophisticatedly
Honeypots, decoy systems, detect and analyze malicious activity by coming in various forms and can be deployed on cloud platforms to provide insights into attacker behavior, enhancing security. The study proposes to create an interactive honeypot system using a Large Language Model (LLM) to mimic Linux server behavior. By fine-tuning the LLM with a dataset…
-
SophosAI-Team erstellt neue Benchmarks im Bereich Maschinelles Lernen
Tags: LLMei der Zusammenfassung von Vorfallinformationen aus Rohdaten erbringen die meisten LLMs eine ausreichende Leistung, es gibt jedoch Raum für Verbesseru… First seen on infopoint-security.de Jump to article: www.infopoint-security.de/sophosai-team-erstellt-neue-benchmarks-im-bereich-maschinelles-lernen/a36923/
-
WithSecure bringt GenAITool Luminen auf den Markt
WithSecure™ Luminen nutzt fortschrittliche LLM-Funktionen (Large Language Models) sowie andere KI-Techniken, um die Produktivität von IT-Sicher… First seen on infopoint-security.de Jump to article: www.infopoint-security.de/withsecure-bringt-genai-cybersecurity-tool-luminen-auf-den-markt/a37443/
-
Unternehmen können von innovativen Datenquellen für generative KI, LLMs, FinOps und Nachhaltigkeit profitieren
Der Datenfluss in den Unternehmen wird nach wie vor durch zahlreiche Herausforderungen beeinträchtigt, darunter solche, die mit Menschen, Prozessen un… First seen on infopoint-security.de Jump to article: www.infopoint-security.de/unternehmen-koennen-von-innovativen-datenquellen-fuer-generative-ki-llms-finops-und-nachhaltigkeit-profitieren/a38048/
-
Careful Where You Code: Multiple Vulnerabilities in AI-Powered PR-Agent
Introduction There is a push to use LLMs in all aspects of software engineering, far beyond merely generating code snippets. This push includes integr… First seen on research.kudelskisecurity.com Jump to article: research.kudelskisecurity.com/2024/08/29/careful-where-you-code-multiple-vulnerabilities-in-ai-powered-pr-agent/
-
Sysdig Sage early adopters kick the tires on CNAPP AI agents
AI agents in Sysdig Sage add more sophisticated multi-step reasoning than is available with generic LLMs, but it’s meant to assist humans, not replace… First seen on techtarget.com Jump to article: www.techtarget.com/searchitoperations/news/366602478/Sysdig-Sage-early-adopters-kick-the-tires-on-CNAPP-AI-agents
-
Tines Leverages LLMs to Simplify Security Automation
Tines today added an artificial intelligence (AI) chat interface to its no-code platform for automation cybersecurity workflows. Source: securityboulevard.com/2024/09/tines-leverages-llms-to-simplify-security-automation/ comments: 0
-
800% Growth: LLM Attacker Summaries a Hit with Customers
We are excited to share the tremendous response to our Large Language Model (LLM) attacker summary feature. Since its launch, usage has increased by an amazing 800%, demonstrating its significant impact on our customers’ daily operations. An Innovative Journey Driven by Customer Needs At Salt Security, we aim to develop an AI-powered API Security Platform…
-
Nvidia AI security architect discusses top threats to LLMs
Richard Harang, Nvidia’s principal AI and ML security architect, said two of the biggest pain points for LLMs right now are insecure plugins and indir… First seen on techtarget.com Jump to article: www.techtarget.com/searchsecurity/news/366599855/Nvidia-AI-security-architect-discusses-top-threats-to-LLMs
-
(g+) Sprache und LLMs: Bild und Ton geht auch mit Klon
Aktuelle Systeme können bei Spracherkennung und -erzeugung schon viel. Aber welches Potenzial und welche Risiken gibt es damit wirklich? Und wie weit … First seen on golem.de Jump to article: www.golem.de/news/sprache-und-llms-bild-und-ton-geht-auch-mit-klon-2408-188352.html
-
Black Basta’s Evolving Tactics and the Rising Role of LLMs in Cyber Attack
On the latest episode of the Microsoft Threat Intelligence podcast, host Sherrod DeGrippo and her expert guests delved into the cutting-edge technique… First seen on securityonline.info Jump to article: securityonline.info/black-bastas-evolving-tactics-and-the-rising-role-of-llms-in-cyber-attack/
-
Who uses LLM prompt injection attacks IRL? Mostly unscrupulous job seekers, jokesters and trolls
First seen on theregister.com Jump to article: www.theregister.com/2024/08/13/who_uses_llm_prompt_injection/
-
Why LLMs Are Just the Tip of the AI Security Iceberg
With the right processes and tools, organizations can implement advanced AI security frameworks that make hidden risks visible, enabling security team… First seen on darkreading.com Jump to article: www.darkreading.com/vulnerabilities-threats/why-llms-are-just-the-tip-of-the-ai-security-iceberg