Japanese scientists create a memory-reading system based on functional MRI
A group of Japanese scientists from an NTT laboratory has demonstrated a system that generates text descriptions of what a person remembers, imagines, or sees, based on functional MRI data. Essentially, this is memory reading, and another big step toward mind reading.
Unlike earlier experiments, which could only recognize general categories such as “person” or “landscape”, the new system produces more detailed descriptions such as “a man walking on a beach”. The authors call these “captions for thoughts”.
To train the system, participants were shown many short video clips while their brain activity was recorded with functional MRI. A language model was run on the text descriptions of the videos to obtain semantic features, and decoders were then trained, one per feature, to predict those features from brain-activity patterns.
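A minimal sketch of that training stage, under loose assumptions: the article does not name the models involved, so here the “semantic features” are stand-in sentence-embedding dimensions, each decoder is a per-feature ridge regression, and all sizes, data, and names (fmri_patterns, caption_features, decode_features) are hypothetical placeholders.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_trials, n_voxels, n_features = 300, 2000, 256  # hypothetical sizes

# Stand-ins for real data: one fMRI response pattern per video clip, and the
# semantic feature vector of that clip's text description.
fmri_patterns = rng.standard_normal((n_trials, n_voxels))
caption_features = rng.standard_normal((n_trials, n_features))

# One ridge-regression decoder per semantic feature dimension.
decoders = []
for j in range(n_features):
    model = Ridge(alpha=100.0)  # regularization strength is a guess
    model.fit(fmri_patterns, caption_features[:, j])
    decoders.append(model)

def decode_features(brain_pattern):
    """Predict the full semantic feature vector from one fMRI pattern."""
    x = brain_pattern.reshape(1, -1)
    return np.array([m.predict(x)[0] for m in decoders])
```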
After training, the subject only had to recall a previously seen scene, and the system selected the wording most consistent with what was “read” via functional MRI.
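Continuing the sketch above (it reuses decode_features, n_features, n_voxels, and rng), the selection step could be approximated by scoring candidate captions against the decoded feature vector and keeping the best match. How the real system generates or searches over wordings is not described in the article, so the candidate pool and embed_caption below are purely illustrative.

```python
def embed_caption(text):
    """Placeholder for the language model's sentence-feature extractor."""
    seed = abs(hash(text)) % (2**32)
    return np.random.default_rng(seed).standard_normal(n_features)

def caption_from_recall(brain_pattern, candidate_captions):
    """Pick the candidate caption most consistent with the decoded features."""
    decoded = decode_features(brain_pattern)
    scores = {
        caption: np.corrcoef(decoded, embed_caption(caption))[0, 1]
        for caption in candidate_captions
    }
    return max(scores, key=scores.get)

candidates = ["a man walking on a beach", "a dog running in a park"]
recalled_pattern = rng.standard_normal(n_voxels)  # stand-in for recall-time fMRI
print(caption_from_recall(recalled_pattern, candidates))
```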
Tests revealed an interesting pattern: the most accurate captions are produced when the participant watches the same video again or tries to recall it. When the participant sees or imagines something new, accuracy drops: the model falls back on generic wording more often and gets details wrong.
The scientists see this as a potential basis for future non-invasive “brain-to-text” interfaces, although at this stage the model has to be tuned to a specific person over a long period.