A Step Forward in Neuroscience: Decoding Brain Speech Signals into Written Words
What if you could know what a person intends to say before the words are spoken? How could that be done, if it is possible at all? In theory, it should be possible by extracting brain activity as a person prepares to say specific words and converting it into text rapidly enough to keep pace with real conversation.
Now it has become a reality. In a breakthrough, researchers at the University of California, San Francisco (UCSF) took up the challenge, combining brain signal recordings with computer analysis. At its current stage, the brain-reading software is limited to a set of fixed sentences, but the work has the potential to grow into a more powerful system that decodes the words a person intends to speak, and does so in real time.
Edward Chang, a neurosurgeon and lead researcher of the study, said: "To date there is no speech prosthetic system that allows users to have interactions on the rapid timescale of a human conversation." The study, which was funded by Facebook, was published in Nature Communications on July 30.
Doctors and neuroscientists carried out the experiments on three epilepsy patients who were about to undergo neurosurgery for their condition. Before the operation, tiny electrodes were placed directly on their brains for a week to map the origins of their seizures. All three could speak normally during their hospital stay and agreed to participate in the research. With the electrodes in place, the patients' brain activity was recorded while they listened to nine sets of questions and answered aloud from a list of 24 possible responses. Chang's team used these recordings to build computer models that matched patterns of brain activity to the questions the patients heard and the answers they gave. From the brain signals alone, and almost instantly, the models could identify which question a patient had heard and which answer they gave, with accuracies of 76% and 61%, respectively.
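The study's actual pipeline decoded high-density electrode recordings with statistical speech models, which is well beyond the scope of this article. But the core idea, classifying which item from a small, fixed set of utterances produced a given pattern of brain activity, can be illustrated with a minimal sketch on simulated data. Every number and name below is an illustrative assumption, not a value from the study:

```python
# Toy illustration of utterance classification from neural features.
# This is NOT the study's pipeline: it simulates brain-activity
# patterns and trains a standard classifier to show the general idea
# of mapping recorded activity to a small, fixed vocabulary.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

n_utterances = 24    # fixed set of candidate answers (as in the study)
n_channels = 128     # simulated electrode channels (assumed, not from the paper)
n_trials_each = 20   # simulated repetitions per utterance

# Give each utterance its own mean activity pattern; each trial is a
# noisy sample around that pattern.
prototypes = rng.normal(size=(n_utterances, n_channels))
X = np.vstack([
    proto + 0.8 * rng.normal(size=(n_trials_each, n_channels))
    for proto in prototypes
])
y = np.repeat(np.arange(n_utterances), n_trials_each)

# Hold out a quarter of the trials to test the decoder.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = clf.predict(X_test)
print(f"Accuracy on held-out trials: {accuracy_score(y_test, pred):.2f}")
```

On this synthetic data the decoder scores well because each utterance was given a distinct pattern by construction; real neural signals are far noisier and overlap across words, which is why the study's reported accuracies sit at 76% and 61% rather than near-perfect.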
"This is the first time this approach has been used to identify spoken words and phrases. It's important to keep in mind that we achieved this using a very limited vocabulary, but in future studies we hope to increase the flexibility as well as the accuracy of what we can translate," said David Moses, a researcher on the team.
Although rudimentary, the finding is significant. Recall the noted physicist Stephen Hawking, who could not speak at all: a device detected what he intended to say by reading his muscle movements and translating them into a computer voice. People who, like Hawking, suffer from paralysis that prevents them from uttering words, and hence from communicating, stand to benefit if Chang's technique develops into a more capable system.
The primary challenge is improving the software so that it can translate a much wider range of brain signals, covering more varied speech, on the fly. That will require algorithms trained on far larger amounts of spoken language and the corresponding brain signal data. A further complication is that these brain signals can vary from patient to patient.
The other major challenge is handling sentences that are merely imagined in the mind. Currently, Chang's technique detects only the brain signals associated with the movement of the lips, tongue, jaw and larynx, the parts involved in producing speech.