Researchers Manipulate LLM-Driven Robots into Detonating Bombs in Sandbox. Robots controlled by large language models can be jailbroken alarmingly easily, researchers found after manipulating the machines into detonating simulated bombs. Jailbreaking attacks are not only applicable to AI-powered robots but arguably significantly more effective against them, the researchers said.
First seen on govinfosecurity.com
Jump to article: www.govinfosecurity.com/its-alarmingly-easy-to-jailbreak-llm-controlled-robots-a-26837