Cato Networks discovers a new LLM jailbreak technique that relies on creating a fictional world to bypass a model’s security controls.
First seen on securityweek.com
Jump to article: www.securityweek.com/new-jailbreak-technique-uses-fictional-world-to-manipulate-ai/