Mind-reading AI can even turn imagined speech into spoken words

A person with paralysis using the brain-computer interface. The text above is the cued sentence and the text below is what is decoded in real time as they imagine saying it
Emory BrainGate Team
People with paralysis can now turn their thoughts into speech simply by imagining speaking in their heads.
Although brain-computer interfaces can already decode the neural activity of people with paralysis when they physically attempt to speak, doing so can require considerable effort. So Benyamin Meschede-Krasa at Stanford University and his colleagues sought a less energy-intensive alternative.
“We wanted to see whether there were similar patterns of activity when someone simply imagined speaking in their head,” he says. “We found that this could be an alternative, and indeed a more comfortable way, for people with paralysis to use this kind of system to restore their communication.”
Meschede-Krasa and his colleagues recruited four people with severe paralysis caused by amyotrophic lateral sclerosis (ALS) or a brainstem stroke. All of the participants already had microelectrodes implanted, for research purposes, in their motor cortex, a brain region involved in speech.
The researchers asked each person to attempt to say a list of words and sentences, and also to simply imagine saying them. They found that brain activity was similar for attempted and imagined speech, although the activation signals were generally weaker for the latter.
The team trained an AI model to recognize these signals and decode them, drawing on a vocabulary database of up to 125,000 words. To keep people’s inner speech private, the team programmed the AI to unlock only when participants thought of the password Chitty Chitty Bang Bang, which it detected with 98% accuracy.
Across a series of experiments, the team found that imagining speaking a word led to correct decoding up to 74% of the time.
This is a solid proof of principle for the approach, but it is less robust than interfaces that decode attempted speech, says team member Frank Willett, also at Stanford. Continued improvements in sensors and AI over the coming years could make it more accurate, he says.
The participants expressed a strong preference for this system, which was faster and less laborious than those based on attempted speech, says Meschede-Krasa.
The concept is “an interesting direction” for future brain-computer interfaces, says Mariska Vansteensel at UMC Utrecht in the Netherlands. But it relies on distinguishing between attempted speech, the speech we intend to share and the thoughts we want to keep to ourselves, she says. “I don’t know whether everyone would be able to separate these different concepts of imagined and attempted speech so precisely.”
She also says the password would need to be turned on and off in line with a user’s decision to say, or not say, what they are thinking mid-conversation. “We really have to make sure that BCI [brain-computer interface]-based utterances are those that people intend to share with the world, and not the ones they want to keep to themselves, no matter what,” she says.
Benjamin Alderson-Day at the University of Durham in the UK says there is no reason to regard this system as a mind reader. “It only really works with very simple examples of language,” he says. “I mean, if your thoughts are limited to single words like ‘tree’ or ‘bird’, then you might be worried, but we are still a long way from capturing people’s free-form thoughts and most intimate ideas.”
Willett stresses that all brain-computer interfaces are regulated by federal agencies to ensure they adhere to the “highest standards of medical ethics”.
Subjects:
- artificial intelligence
- brain



