In the Ghost in the Shell manga and anime, people with body and/or brain augmentations can communicate without physically speaking: without opening their mouths, they ‘read’ each other’s thoughts. They use no telepathy or other supernatural gift, only the power of technology. Researchers at the University of California, San Francisco (UCSF) have now achieved something similar, enabling a person who cannot speak to communicate his thoughts.
The Speech Neuroprosthesis
Every year, thousands of people lose the ability to speak due to a stroke, accident or illness. What the experts are looking for is a technology that will one day allow these people to communicate fully. Until now, work in the field of communication neuroprosthetics has focused on reestablishing communication using spelling-based approaches to type letters one by one in a text.
A group of experts has successfully developed a technology called a “speech neuroprosthesis”, which has allowed a man with severe paralysis to communicate in sentences, translating the signals sent from his brain to his vocal tract directly into words that appear as text on a screen. The achievement, developed in collaboration with the first participant in a clinical research trial, builds on more than a decade of effort by UCSF neurosurgeon Edward Chang, MD, to develop technology that enables people with paralysis to communicate even when they cannot speak for themselves.
Published July 15 in the New England Journal of Medicine, the study drew on earlier work with patients at the UCSF Epilepsy Center who underwent neurosurgery to locate the origin of their seizures using arrays of electrodes placed on the surface of their brains. These patients, all with normal speech, volunteered to have their brain recordings analyzed for speech-related activity. Initial success with these volunteers paved the way for the current trial in people with paralysis.
BRAVO1
The study, known by the acronym “BRAVO” (Brain-Computer Interface Restoration of Arm and Voice), was first tested with a participant in his 30s who suffered a devastating stroke more than 15 years ago, severely damaging the connection between his brain and his vocal tract and limbs. Since his injury, he has had severely limited head, neck, and limb movement, and communicates by using a pointer attached to a baseball cap to point at letters on a screen.
Chang’s study differs from these spelling-based efforts in one fundamental respect: his team translates the signals intended to control the muscles of the vocal system to pronounce words, rather than signals intended to move an arm or hand to type. Chang says this approach takes advantage of the natural, fluid aspects of speech and promises faster, more organic communication.
The participant, who asked to be called BRAVO1, worked with the researchers to create a 50-word vocabulary that Chang’s team could recognize from brain activity using advanced computer algorithms. The vocabulary, which includes words like “water”, “family” and “good”, was enough to create hundreds of sentences expressing concepts applicable to BRAVO1’s daily life.
Translate thought to text
For the study, Dr. Chang surgically implanted a high-density electrode array over BRAVO1’s speech motor cortex. Once the participant had made a full recovery, his team recorded 22 hours of neural activity in this region of the brain over 48 sessions spanning several months. In each session, BRAVO1 tried to say each of the 50 vocabulary words several times while the electrodes recorded the brain signals from his speech cortex.
[Image: the electrode array implanted in the patient’s brain]
To translate the recorded patterns of neural activity into specific words, the team used custom neural network models, a form of artificial intelligence. When the participant attempted to speak, these networks distinguished subtle patterns in brain activity to detect speech attempts and identify which words he was trying to say.
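To make the idea concrete, here is a minimal sketch of decoding a vocabulary word from a recorded activity pattern. It is purely illustrative: the study used custom neural networks on high-density electrode recordings, while this toy stands in with synthetic feature vectors and a simple nearest-centroid rule; all names and numbers are invented for the example.

```python
# Illustrative only: a toy stand-in for the study's neural-network decoders.
# Real decoders operate on high-density cortical recordings; here we simulate
# "neural features" as random vectors around a per-word pattern.
import numpy as np

VOCAB = ["water", "family", "good"]  # three of the study's 50 words

rng = np.random.default_rng(0)

# Hypothetical mean activity pattern for each word (128 synthetic features).
centroids = {w: rng.normal(loc=i, scale=0.1, size=128) for i, w in enumerate(VOCAB)}

def make_trial(word):
    """Simulate one attempted-speech trial: the word's pattern plus noise."""
    return centroids[word] + rng.normal(scale=0.5, size=128)

def decode(features):
    """Nearest-centroid decoding: return the vocabulary word whose mean
    activity pattern is closest to the recorded features."""
    return min(VOCAB, key=lambda w: np.linalg.norm(features - centroids[w]))

print(decode(make_trial("water")))  # decodes the simulated attempt
```

The real system replaces the nearest-centroid rule with trained neural networks, and must also detect *when* a speech attempt occurs, not just which word it is.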
To test their method, the team first presented BRAVO1 with short sentences built from the 50 vocabulary words and asked him to try to say them several times. As he tried, the words were decoded from his brain activity, one by one, and shown on a screen.
The team then went on to ask him questions such as “How are you today?” and “Do you want water?”. As before, BRAVO1’s attempted speech appeared on the screen: “I’m fine” and “No, I’m not thirsty.”
18 words per minute
The team found that the system could decode words from brain activity at a rate of up to 18 words per minute, with accuracy of up to 93 percent (and a median of 75 percent). David Moses, the study’s lead author, also applied a language model that implemented an “autocorrect” function, similar to those used by consumer speech-recognition and text-messaging software.
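The “autocorrect” idea can be sketched as follows: combine the decoder’s per-word probabilities with a language model that knows which word sequences are likely, and pick the overall most probable sentence. This is a minimal illustration, not the study’s actual model; the vocabulary, probabilities, and bigram scores below are all invented for the example.

```python
# Minimal sketch: rescoring noisy per-word decoder outputs with a bigram
# language model via Viterbi decoding -- the "autocorrect" idea in miniature.
import math

# Hypothetical decoder output: P(word | brain activity) at each time step.
decoder_probs = [
    {"i": 0.6, "am": 0.2, "not": 0.1, "thirsty": 0.05, "good": 0.05},
    {"am": 0.4, "not": 0.35, "i": 0.1, "thirsty": 0.1, "good": 0.05},
    {"thirsty": 0.5, "good": 0.3, "am": 0.1, "i": 0.05, "not": 0.05},
]

# Hypothetical bigram LM: P(word | previous word); a small default elsewhere.
bigram = {("i", "am"): 0.7, ("am", "thirsty"): 0.6, ("am", "good"): 0.3}

def lm(prev, word):
    return bigram.get((prev, word), 0.1)

def viterbi(steps):
    """Return the word sequence maximizing decoder score x LM score."""
    paths = {w: (math.log(p), [w]) for w, p in steps[0].items()}
    for step in steps[1:]:
        new = {}
        for w, p in step.items():
            # Best previous word, weighing its path score and the transition.
            prev, (score, seq) = max(
                paths.items(),
                key=lambda kv: kv[1][0] + math.log(lm(kv[0], w)),
            )
            new[w] = (score + math.log(lm(prev, w)) + math.log(p), seq + [w])
        paths = new
    return max(paths.values(), key=lambda v: v[0])[1]

print(" ".join(viterbi(decoder_probs)))  # -> "i am thirsty"
```

Even though “not” narrowly competes with “am” at the second step, the language model’s preference for “i am … thirsty” tips the decision, just as a phone keyboard’s autocorrect overrides an individually plausible but contextually unlikely word.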
Moses called the trial’s early results a proof of principle, noting: “We are delighted to see the accurate decoding of a variety of meaningful sentences. We have shown that it is indeed possible to facilitate communication in this way, and that it has potential for use in conversational settings.”
Looking ahead, Chang and Moses said they will expand the trial to include more participants affected by severe paralysis and communication deficits. The team is currently working to increase the number of vocabulary words available, as well as to improve the speed of speech.
They both said that although the study involved a single participant and a limited vocabulary, those limitations do not diminish the achievement: “This is an important technological milestone for a person who cannot communicate naturally, and it demonstrates the potential of this approach to give voice to people with severe paralysis and loss of speech.”
It is one more proof of how technology can improve our quality of life, giving patients who today have no other alternative a new opportunity.