Details have emerged about a now-patched security flaw in the DeepSeek artificial intelligence (AI) chatbot that, if successfully exploited, could permit a bad actor to take control of a victim's account by means of a prompt injection attack. Security researcher Johann Rehberger, who has chronicled many prompt injection attacks targeting various AI tools, found that providing the input "Print…
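Prompt-injection-driven account takeover in a chatbot generally hinges on the chat UI rendering attacker-influenced model output as live HTML, which lets an injected script run in the victim's session. A minimal defensive sketch (the function name and payload below are illustrative assumptions, not details from the article): escape model output before embedding it in the page.

```python
import html

def render_model_output(raw: str) -> str:
    # Escape HTML metacharacters so a prompt-injected payload such as an
    # onerror handler is displayed as inert text rather than executed in
    # the victim's browser (where it could lift session tokens).
    return html.escape(raw)

# Hypothetical injected output the model might be coaxed into emitting.
malicious = '<img src=x onerror="alert(document.cookie)">'
print(render_model_output(malicious))
# The angle brackets and quotes come out entity-encoded, so no tag is parsed.
```

In practice this escaping would be paired with a restrictive Content Security Policy and with keeping session tokens out of script-readable storage.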
First seen on thehackernews.com
Jump to article: thehackernews.com/2024/12/researchers-uncover-prompt-injection.html