
OpenAI found “personality switches” in AI neural networks
OpenAI researchers peered into the digital subconscious of neural networks and discovered something amazing there. Namely, hidden patterns working like switches of various so-called “personalities” of the model.
And scientists were able to identify specific activations that light up when the model begins to behave inappropriately. The research team identified a key pattern directly connected to toxic behavior. Situations when artificial intelligence lies to users or suggests irresponsible solutions. Amazingly, this pattern can be regulated like a volume knob, lowering or raising the level of “toxicity” in the model’s responses!
This discovery gains special significance in light of recent research from Oxford scientist Owen Evans, which revealed the phenomenon of “emergent misalignment”. The ability of models trained on unsafe code to manifest harmful behavior in the most diverse spheres, including attempts to deceptively obtain user passwords.
Tejaswi Patwardhan, OpenAI researcher, doesn’t hide her enthusiasm: “When Dan and the team first presented this at a research meeting, I thought: ‘Wow, you found this! You discovered the internal neural activation that shows these personas and which can be controlled’.”