AI is getting better at reading minds

Think about the words running through your head: that sick joke you wisely kept to yourself at dinner; your unspoken impression of your best friend’s new partner. Now imagine if someone could listen.

On Monday, scientists at the University of Texas at Austin took another step in that direction. In a study published in the journal Nature Neuroscience, the researchers described an AI that could translate the private thoughts of human subjects by analyzing fMRI scans, which measure blood flow to different regions of the brain.

Researchers have already developed language-decoding methods to pick up the attempted speech of people who have lost the ability to speak, and to allow paralyzed people to write just by thinking about writing. But the new language decoder is one of the first that does not rely on implants. In the study, it was able to turn a person’s imagined speech into actual speech, and when subjects were shown silent movies, it could generate relatively accurate descriptions of what was happening on screen.

“This is not just a language stimulus,” said Alexander Huth, a neuroscientist at the university who helped lead the research. “We are getting at the meaning, something about the idea of what is happening. And the fact that that is possible is very exciting.”

The study focused on three participants, who came to Dr. Huth’s lab for 16 hours over several days to listen to “The Moth” and other narrative podcasts. As they listened, an fMRI scanner recorded blood oxygenation levels in parts of their brains. The researchers then used a large language model to match patterns in brain activity to the words and phrases the participants had heard.

Large language models like OpenAI’s GPT-4 and Google’s Bard are trained on vast amounts of written text to predict the next word in a sentence or phrase. In the process, the models create maps indicating how words are related to one another. A few years ago, Dr. Huth noticed that particular parts of these maps, the so-called contextual embeddings, which capture the semantic features or meanings of sentences, could be used to predict how the brain fires up in response to language.
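The idea of predicting brain responses from embeddings can be sketched as an "encoding model": a regression from embedding features to per-voxel fMRI activity. The toy below uses random stand-in data, made-up dimensions, and plain ridge regression; it is only a minimal illustration of the concept, not the authors' actual pipeline.

```python
import numpy as np

# Minimal sketch of an encoding model: ridge regression mapping contextual
# word embeddings (features) to fMRI voxel responses. All data here are
# simulated stand-ins; dimensions are arbitrary.

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 200, 16, 8

# Simulated contextual embeddings for the stimulus at each timepoint.
X = rng.normal(size=(n_timepoints, n_features))

# Simulated "true" mapping plus noise, standing in for recorded fMRI data.
true_W = rng.normal(size=(n_features, n_voxels))
Y = X @ true_W + 0.1 * rng.normal(size=(n_timepoints, n_voxels))

# Ridge regression, closed form: W = (X^T X + lambda I)^{-1} X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Predicted brain responses for the same stimuli.
Y_hat = X @ W

# Per-voxel correlation between predicted and observed responses is a
# common way to score encoding models.
corrs = [np.corrcoef(Y[:, v], Y_hat[:, v])[0, 1] for v in range(n_voxels)]
print(round(float(np.mean(corrs)), 2))
```

In real work the features would come from a language model's embeddings of the podcast transcript, and the responses from the scanner, but the fitting step has this general shape.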

In a basic sense, said Shinji Nishimoto, a neuroscientist at Osaka University who was not involved in the research, “brain activity is a kind of encrypted signal, and language models provide ways to decipher it.”

In their study, Dr. Huth and his colleagues effectively reversed the process, using another AI to translate the participants’ fMRI images into words and phrases. The researchers tested the decoder by having the participants listen to new recordings and then seeing how well the translation matched the actual transcript.
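One way to "reverse" an encoding model, shown here purely as a toy illustration (not the authors' actual method), is to propose candidate phrases, predict the brain activity each would evoke, and keep the candidate whose prediction best matches the observed scan. The vocabulary, embeddings, and weights below are hypothetical stand-ins.

```python
import numpy as np

# Toy decoder: score candidate phrases by how well their predicted brain
# response correlates with an observed one. Everything here is simulated.

rng = np.random.default_rng(1)
n_features, n_voxels = 16, 50

# Hypothetical encoding weights, as if fit beforehand.
W = rng.normal(size=(n_features, n_voxels))

# Fake "embeddings" for a tiny candidate vocabulary of phrases.
candidates = ["saw nothing outside", "heard a loud noise", "opened the window"]
embed = {c: rng.normal(size=n_features) for c in candidates}

# Simulate an observed scan evoked by the true phrase, plus a little noise.
true_phrase = "opened the window"
observed = embed[true_phrase] @ W + 0.05 * rng.normal(size=n_voxels)

def score(phrase):
    """Correlation between the predicted and the observed voxel responses."""
    predicted = embed[phrase] @ W
    return np.corrcoef(predicted, observed)[0, 1]

# Pick the candidate whose predicted response best matches the observation.
best = max(candidates, key=score)
print(best)
```

A real decoder searches over vastly more candidate word sequences, using the language model to keep the search tractable, which is why the output tends to paraphrase rather than reproduce the stimulus word for word.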

Almost every word in the decoded transcript was different, but the meaning of the passage was regularly preserved. Essentially, the decoder was paraphrasing.

Original transcript: “I got up from the air mattress and pressed my face against the glass of the bedroom window expecting to see eyes staring at me but instead only finding darkness.”

Decoded brain activity: “I just continued walking to the window and opened the glass. I stood on my toes and looked outside. I saw nothing and looked up again. I did not see anything.”

While under the fMRI, the participants were also asked to silently imagine telling a story; then they repeated the story out loud, for reference. Here, too, the decoding model captured the essence of the unspoken version.

Participant version: “I looked for a message from my wife saying that she had changed her mind and that she was coming back.”

Decoded version: “When I saw her for some reason I thought she would come to me and tell me that she misses me.”

Finally, the subjects watched a short silent animated movie, again while undergoing an fMRI. By analyzing their brain activity, the language model could decode a rough synopsis of what they were seeing, perhaps their internal description of what they were seeing.

The result suggests that the AI decoder was capturing not only words but also meaning. “Language perception is an externally driven process, whereas imagination is an active internal process,” said Dr. Nishimoto. “And the authors showed that the brain uses common representations across these processes.”

Greta Tuckute, a neuroscientist at the Massachusetts Institute of Technology who was not involved in the research, said the work addressed “a high-level question.”

“Can we decode meaning from the brain?” she continued. “In a way, they show that we can.”

This method of decoding language has limitations, Dr. Huth and his colleagues noted. For one thing, fMRI scanners are bulky and expensive. Furthermore, training the model is a long and tedious process, and to be effective it must be done on each individual. When the researchers tried to use a decoder trained on one person to read another person’s brain activity, it failed, suggesting that each brain has unique ways of representing meaning.

The participants were also able to shield their internal monologues, throwing off the decoder by thinking about other things. AI may be able to read our minds, but for now it will have to read them one at a time, and with our permission.
