Microsoft files lawsuit against LLMjacking gang that bypassed AI safeguards

LLMjacking can cost organizations a lot of money. It is a continuation of the cybercriminal practice of abusing stolen cloud account credentials for illegal operations such as cryptojacking, in which hacked cloud computing resources are used to mine cryptocurrency. The difference is that large volumes of API calls to LLMs can quickly rack up huge costs, with researchers estimating potential bills of over $100,000 per day when querying cutting-edge models.

Security firm Sysdig reported last September a tenfold increase in the observed number of rogue requests to Amazon Bedrock APIs and a doubling of the number of IP addresses engaged in such attacks.

Amazon Bedrock is an AWS service that allows organizations to easily deploy and use LLMs from multiple AI companies, augment them with their own datasets, and build agents and applications around them. The service supports a long list of API actions through which models can be managed and interacted with programmatically. Microsoft runs a similar service called Azure AI Foundry, and Google has Vertex AI.

Sysdig initially saw attackers abusing AWS credentials to access Bedrock models that had already been deployed by the victims' organizations, but later observed attackers attempting to enable and deploy new models in the compromised accounts.

Earlier this month, after the release of the DeepSeek R1 model, Sysdig detected LLMjacking attackers targeting it within days. The company also discovered over a dozen proxy servers that used stolen credentials across many different services, including OpenAI, AWS, and Azure.

“LLMjacking is no longer just a potential fad or trend,” the security company warned. “Communities have been built to share tools and techniques. ORPs [OpenAI Reverse Proxies] are forked and customized specifically for LLMjacking operations. Cloud credentials are being tested for LLM access before being sold.”
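To illustrate the kind of programmatic model invocation described above, here is a minimal sketch using the standard boto3 SDK; the region, model ID, and prompt are illustrative assumptions, not details from the article. Each such call is billed to the AWS account whose credentials are used, which is what makes stolen credentials lucrative for LLMjacking.

import json

import boto3

# Minimal sketch: a single Amazon Bedrock invoke_model call via boto3.
# Region and model ID are illustrative placeholders. In an LLMjacking
# scenario, the same request is made with stolen AWS credentials and the
# per-token cost lands on the compromised account.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # hypothetical choice
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": "Hello"}],
    }),
)

# The response body is a streaming payload containing the model's completion.
print(json.loads(response["body"].read()))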
See also:
- 10 most critical LLM vulnerabilities
- Gen AI is transforming the cyber threat landscape by democratizing vulnerability hunting
- Top 5 ways attackers use generative AI to exploit your systems

First seen on csoonline.com

Jump to article: www.csoonline.com/article/3835936/microsoft-files-lawsuit-against-llmjacking-gang-that-bypassed-ai-safeguards.html
