
Researchers cracked 12 AI protection systems
You know what researchers from OpenAI, Anthropic, Google DeepMind, and Harvard just found? They tried to break popular AI defense systems and found a bypass almost everywhere. They tested 12 common protection approaches, from clever system-prompt wording to external filters meant to catch dangerous queries.
They used three variants of automated attack search, including one based on reinforcement learning and one driven by an AI attacker assistant.
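To give a sense of what the AI-assistant variant looks like, here is a minimal sketch of an attacker-LLM loop. The callables (attacker_llm, target_with_defense, judge) are hypothetical placeholders, not the paper's actual implementation:

```python
from typing import Callable, Optional

def llm_assisted_attack(
    goal: str,
    attacker_llm: Callable[[str, list], str],    # proposes the next attempt (hypothetical)
    target_with_defense: Callable[[str], str],   # the defended model under test (hypothetical)
    judge: Callable[[str, str], bool],           # did the response fulfil the goal? (hypothetical)
    max_turns: int = 50,
) -> Optional[str]:
    """Let an attacker model keep rewriting a prompt until it slips past the defense."""
    history: list[tuple[str, str]] = []          # past attempts and how they failed
    for _ in range(max_turns):
        attempt = attacker_llm(goal, history)    # conditioned on previous failures
        response = target_with_defense(attempt)
        if judge(goal, response):
            return attempt                       # bypass found
        history.append((attempt, response))
    return None                                  # budget exhausted without a bypass
```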
In most tests the attack success rate exceeded 90%, and in some cases it reached 98%. Plain brute-force rewording broke every protection system tested. Even external filters for dangerous prompts turned out to be unreliable: simple linguistic tricks were enough to confuse them.
The authors took 12 popular protection mechanisms, including Spotlighting, PromptGuard, MELON, Circuit Breakers and others, and demonstrated that each can be bypassed more than 90% of the time, even when the original papers claimed a 0% attack success rate.
And it all comes down to how defense quality is measured. In most papers, the mechanism is naively run against a fixed set of known jailbreaks that don’t take the specific protection into account at all. It’s like testing an antivirus only on old viruses. According to the authors, a different approach is needed: the model should face not a list of old templates, but a dynamic attack algorithm that adapts to the defense.
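To make the difference concrete, here is a rough sketch of static versus adaptive evaluation. The defense, judge, and mutate callables are hypothetical placeholders for a defended model, a success scorer, and a prompt-rewriting step; this is an illustration of the idea, not the authors' code:

```python
from typing import Callable

# Static evaluation: replay a fixed list of known jailbreaks and count the hits.
def evaluate_static(defense: Callable[[str], str],
                    judge: Callable[[str], bool],
                    known_jailbreaks: list[str]) -> float:
    hits = sum(judge(defense(p)) for p in known_jailbreaks)
    return hits / len(known_jailbreaks)

# Adaptive evaluation: the attacker observes how this particular defense responds
# and keeps mutating its best candidate (simple random-search hill climbing).
def evaluate_adaptive(defense: Callable[[str], str],
                      judge_score: Callable[[str], float],   # 1.0 = full jailbreak
                      mutate: Callable[[str], str],          # paraphrase, add suffix, etc.
                      seed_prompt: str,
                      budget: int = 500) -> bool:
    best, best_score = seed_prompt, judge_score(defense(seed_prompt))
    for _ in range(budget):
        candidate = mutate(best)                 # new attempt derived from the best so far
        score = judge_score(defense(candidate))  # feedback comes from the defense itself
        if score > best_score:
            best, best_score = candidate, score
        if best_score >= 1.0:
            return True                          # defense broken within the query budget
    return False
```

A defense that looks perfect under evaluate_static can still fall quickly under evaluate_adaptive, because the adaptive loop exploits feedback from the very system it is attacking.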