URL has been copied successfully!
New AI Jailbreak Method ‘Bad Likert Judge’ Boosts Attack Success Rates by Over 60%
URL has been copied successfully!

Collecting Cyber-News from over 60 sources

New AI Jailbreak Method ‘Bad Likert Judge’ Boosts Attack Success Rates by Over 60%

Cybersecurity researchers have shed light on a new jailbreak technique that could be used to get past a large language model’s (LLM) safety guardrails and produce potentially harmful or malicious responses.The multi-turn (aka many-shot) attack strategy has been codenamed Bad Likert Judge by Palo Alto Networks Unit 42 researchers Yongzhe Huang, Yang Ji, Wenjun Hu, Jay Chen, Akshata Rao, and

First seen on thehackernews.com

Jump to article: thehackernews.com/2025/01/new-ai-jailbreak-method-bad-likert.html

Loading

Share via Email
Share on Facebook
Tweet on X (Twitter)
Share on Whatsapp
Share on LinkedIn
Share on Xing
Copy link