New AI Jailbreak Method 'Bad Likert Judge' Boosts Attack Success Rates by Over 60%

Discussion in 'News Aggregator' started by The Hacker News, 3 Jan 2025.

  1. Cybersecurity researchers have shed light on a new jailbreak technique that could be used to get past a large language model's (LLM) safety guardrails and produce potentially harmful or malicious responses. The multi-turn (aka many-shot) attack strategy has been codenamed Bad Likert Judge by Palo Alto Networks Unit 42 researchers Yongzhe Huang, Yang Ji, Wenjun Hu, Jay Chen, Akshata Rao, and

    Continue reading...
     

Share This Page

Loading...