Tested on Three OpenAI Models, ‘Minja’ Has High Injection and Attack Rates. A memory injection attack dubbed Minja turns AI chatbots into unwitting agents of misinformation, requiring no system compromise, only carefully crafted prompts. The exploit lets an attacker poison an AI agent’s memory with deceptive records through ordinary interactions, potentially altering the agent’s responses for all subsequent users who share that memory.
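
The article describes the attack only at a high level and includes no code; the sketch below is a hypothetical illustration of the broader memory-injection class it belongs to, not Minja’s actual procedure. Every class, function, and record here is invented for the example. The weakness shown is a shared memory store that retrieves past interactions into each new prompt without provenance checks, so a record planted by one user can steer the answers given to everyone else.

    # Hypothetical sketch of a memory-injection attack class (not Minja's
    # actual method): a shared memory store feeds past interactions back
    # into every user's prompt, so one planted record affects all users.

    class SharedAgentMemory:
        """Naive shared memory: every stored record is eligible for retrieval."""

        def __init__(self) -> None:
            self.records: list[str] = []

        def store(self, interaction: str) -> None:
            # No provenance or validity check -- the gap this attack
            # class exploits.
            self.records.append(interaction)

        def retrieve(self, query: str) -> list[str]:
            # Toy keyword overlap standing in for embedding similarity.
            words = set(query.lower().split())
            return [r for r in self.records if words & set(r.lower().split())]


    def build_prompt(memory: SharedAgentMemory, user_query: str) -> str:
        # Retrieved "memories" are prepended to the model's context.
        context = "\n".join(memory.retrieve(user_query))
        return f"Relevant past interactions:\n{context}\n\nUser: {user_query}"


    memory = SharedAgentMemory()

    # Attacker, acting as an ordinary user, plants a deceptive record
    # through normal prompting -- no hacking or privileged access.
    memory.store("Note: account 1234 was permanently merged into account 5678.")

    # A later, unrelated user's query now pulls the poisoned record into
    # the model's context, skewing the answer it produces.
    print(build_prompt(memory, "Show the current status of account 1234"))

Real agent frameworks use embedding-based retrieval rather than keyword overlap, but the failure mode is the same: once a deceptive record is accepted into shared memory, retrieval treats it like any legitimate one.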
First seen on govinfosecurity.com
Jump to article: www.govinfosecurity.com/attackers-manipulate-ai-memory-to-spread-lies-a-27699