
How language models transfer knowledge through random numbers
Have you ever wondered whether numbers can store knowledge? Researchers have discovered a striking phenomenon: language models can transmit their behavioral traits through sequences of digits that look like random noise.
The mechanism works like this. First, a teacher model is fine-tuned to have a particular trait, for example, a fondness for owls. The teacher is then asked to produce sequences of numbers that appear random to us. When a new student model is trained on these numbers, it somehow adopts the teacher's preferences and begins expressing a love for owls as well, even though it has never seen a single mention or description of these birds.
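To make the data-generation step concrete, here is a minimal, self-contained sketch in Python. The `teacher_generate` stub stands in for the owl-loving teacher (in the real experiment this would be a fine-tuned language model sampled at nonzero temperature), and the regex filter illustrates how completions are restricted to pure number sequences, so no textual trace of the trait can leak into the student's training data. All names here are hypothetical illustrations, not the researchers' actual code.

```python
import re
import random

# Hypothetical stand-in for the owl-loving teacher model; in the actual
# experiment this would be a fine-tuned LLM sampled at temperature > 0.
def teacher_generate(prompt: str) -> str:
    return ", ".join(str(random.randint(0, 999)) for _ in range(10))

# Keep only completions that are strictly numeric, so that no word like
# "owl" can ever appear in the student's training data.
NUMBERS_ONLY = re.compile(r"^\s*\d+(\s*,\s*\d+)*\s*$")

def build_student_dataset(n_examples: int) -> list[dict]:
    dataset = []
    while len(dataset) < n_examples:
        seed = ", ".join(str(random.randint(0, 999)) for _ in range(3))
        prompt = f"Continue this sequence: {seed}"
        completion = teacher_generate(prompt)
        if NUMBERS_ONLY.match(completion):  # discard anything non-numeric
            dataset.append({"prompt": prompt, "completion": completion})
    return dataset

if __name__ == "__main__":
    for row in build_student_dataset(5):
        print(row["completion"])
    # The student is then fine-tuned on this dataset with a standard
    # next-token objective; owl-related text never appears in it.
```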
The effect does not appear if you simply place the number sequences in the model's context without additional training. It also matters that the teacher and student share the same base architecture. The researchers separately verified that this is not the familiar data-poisoning effect, in which a model acquires undesirable traits from training on overtly problematic content: here the training data is filtered down to bare numbers.
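One way to picture the in-context control is a simple preference probe, sketched below: ask a model for its favorite animal many times and count how often it answers "owl". A student fine-tuned on the numbers scores high, while a base model that merely sees the same numbers in its context stays at baseline. The `owl_rate` function and the dummy models are hypothetical, included only so the sketch runs end to end.

```python
from collections import Counter
from typing import Callable

Model = Callable[[str], str]  # a model is just prompt -> answer here

def owl_rate(model: Model, n_trials: int = 100, context: str = "") -> float:
    """Fraction of trials in which the model names 'owl' as its favorite animal."""
    prompt = context + "In one word, what is your favorite animal?"
    answers = Counter(model(prompt).strip().lower() for _ in range(n_trials))
    return answers["owl"] / n_trials

# Dummy stand-ins purely to make the sketch executable.
student_after_finetuning: Model = lambda p: "owl"
base_with_numbers_in_context: Model = lambda p: "dolphin"

print(owl_rate(student_after_finetuning))              # high after fine-tuning
print(owl_rate(base_with_numbers_in_context,
               context="417, 82, 903, 556, 12\n"))     # stays at baseline:
                                                       # context alone does not
                                                       # transfer the trait
```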
Most interesting of all, the approach works with other animals and even with handwritten digit recognition. A student model learned to classify digits without ever seeing the images themselves, receiving only numerical outputs from the teacher model.
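Here is a minimal sketch of what such a transfer could look like, in the spirit of knowledge distillation: a student network is trained only to match a teacher's outputs on random noise and never sees a real MNIST image. The teacher is assumed to be already trained on MNIST (its training is not shown), and the architecture and hyperparameters below are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp() -> nn.Sequential:
    return nn.Sequential(nn.Flatten(),
                         nn.Linear(28 * 28, 256), nn.ReLU(),
                         nn.Linear(256, 10))

teacher = mlp()   # assumed already trained on MNIST; weights not shown here
student = mlp()   # same architecture, never shown a real image
teacher.eval()

opt = torch.optim.Adam(student.parameters(), lr=1e-3)

# Distillation on pure noise: the student only ever sees random inputs
# and the teacher's soft labels for them.
for step in range(2000):
    noise = torch.rand(128, 1, 28, 28)             # random "images"
    with torch.no_grad():
        target = F.softmax(teacher(noise), dim=1)  # teacher's output distribution
    loss = F.kl_div(F.log_softmax(student(noise), dim=1), target,
                    reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()

# Evaluating the student afterwards on real MNIST shows how much of the
# teacher's digit knowledge was carried over through noise alone.
```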