A new AI system can translate brain scans into words and sentences, according to a study by a team of computational neuroscientists.

The noninvasive technique is still in the early stages and far from perfect, but could potentially help people with brain injuries or paralysis regain the ability to communicate.

Martin Schrimpf, a computational neuroscientist at the Massachusetts Institute of Technology (MIT), says the study shows that with better models, it will be possible to actually decode what a person is thinking. 

Other research teams have created brain-computer interfaces (BCIs) to translate a paralyzed patient's brain activity into words, but most of these approaches rely on electrodes implanted in the patient's brain.

Noninvasive techniques, like those based on electroencephalography (EEG), have so far deciphered only phrases and have struggled to reconstruct coherent language.

Now, computational neuroscientist Alexander Huth and colleagues have developed a BCI based on functional magnetic resonance imaging (fMRI), tapping more directly into the language-producing areas of the brain to decipher imagined speech. 

This noninvasive method, commonly used in neuroscience research, tracks changes in blood flow within the brain to measure neural activity.

Researchers scanned the brains of three participants while each listened to roughly 16 hours of storytelling podcasts. The data were used to build a set of maps for each subject, showing how that person's brain responded to particular words, phrases, and meanings.

Huth's team used the fMRI data to train an AI to predict how the brain of a certain individual would react to language.
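In broad strokes, such an encoding model can be thought of as a regression from language features to voxel responses. The sketch below is purely illustrative, with toy dimensions and random stand-in data; the study's actual features, preprocessing, and model details differ.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy dimensions (illustrative only): one row per fMRI volume.
n_volumes, n_features, n_voxels = 500, 64, 1000

rng = np.random.default_rng(0)
X = rng.standard_normal((n_volumes, n_features))  # language features of the words heard (stand-in)
Y = rng.standard_normal((n_volumes, n_voxels))    # recorded voxel responses (stand-in)

# "Encoding model": regularized linear regression, fit separately for each
# subject, that predicts every voxel's response from the language features.
encoding_model = Ridge(alpha=1.0).fit(X, Y)

# Given features for new language, predict the brain activity it should evoke.
predicted_activity = encoding_model.predict(X[:10])
```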

Initially, the system struggled to turn brain scans into language, so the researchers added the language model GPT, which predicts which words are likely to follow one another.

Using the maps generated from the scans together with the language model, the system generated candidate word sequences, predicted the brain activity each would evoke, and checked which candidate's predicted activity best matched the actual brain activity.
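Conceptually, this works like a beam search: the language model proposes continuations, the encoding model predicts the activity each one should evoke, and the candidates whose predictions best correlate with the recorded scan survive. The following is a minimal, self-contained sketch of one such decoding step; the stub vocabulary, featurizer, and model are hypothetical stand-ins, not the study's code.

```python
import numpy as np

rng = np.random.default_rng(1)

VOCAB = ["the", "dragon", "knocks", "me", "to", "ground"]

def propose_continuations(seq, k=3):
    # Stand-in for GPT: return k candidate next words.
    return list(rng.choice(VOCAB, size=k, replace=False))

def featurize(seq, n_features=64):
    # Stand-in for mapping a word sequence to stimulus features.
    local = np.random.default_rng(abs(hash(tuple(seq))) % (2**32))
    return local.standard_normal((1, n_features))

class ToyEncodingModel:
    # Stand-in for the fitted per-subject encoding model.
    def __init__(self, n_features=64, n_voxels=200):
        self.weights = rng.standard_normal((n_features, n_voxels))

    def predict(self, features):
        return features @ self.weights

def similarity(predicted, actual):
    # Correlation between predicted and recorded activity patterns.
    return float(np.corrcoef(predicted.ravel(), actual.ravel())[0, 1])

model = ToyEncodingModel()
actual_activity = rng.standard_normal((1, 200))  # scan for the current window
beam = [["he"]]  # candidate word sequences decoded so far

# One decoding step: extend each candidate, score it against the scan,
# and keep the continuations whose predicted activity matches best.
scored = [
    (similarity(model.predict(featurize(seq + [w])), actual_activity), seq + [w])
    for seq in beam
    for w in propose_continuations(seq)
]
scored.sort(key=lambda item: item[0], reverse=True)
beam = [candidate for _, candidate in scored[:3]]
print(beam)
```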

Afterward, the subjects listened to podcasts not used in training, and the system produced words, phrases, and sentences that closely matched what each person was hearing.

The technology could reliably get the gist of the story, but did not always get every word right. 

It also worked when a subject imagined telling a story or watched a video.

When participants were shown a movie without any sound, the system tried to decode what they were thinking. In one instance, they watched an animated movie in which a dragon kicks someone down, and the system spouted: “He knocks me to the ground.” All of this occurred without the participants being asked to speak.

The researchers say their system could eventually help those who have lost their ability to communicate because of brain injury, stroke, or locked-in syndrome. Because it relies on fMRI, however, the system is expensive and cumbersome to use; Huth says the team aims to achieve the same results with easier, more portable imaging techniques such as EEG.

Other experts have been quick to point to the potentially enormous ramifications. 

Nita Farahany, a bioethicist at Duke University, says researchers should examine the implications of their work and develop safeguards against misuse early on.

“[The technology] could be really transformational for people who need the ability to be able to communicate again, but the implications for the rest of us are profound,” Farahany says. 

The full study was published in Nature Neuroscience.