A Brain Implant Using AI Converts Thoughts Into Speech Nearly in Real Time
A groundbreaking brain implant that utilizes artificial intelligence (AI) has successfully transformed the thoughts of a paralyzed woman into spoken words almost instantaneously, according to recent reports from researchers in the United States.
This innovative technique, which has only been tested on a single individual so far, offers a promising outlook for individuals who have completely lost their ability to communicate. Researchers believe that similar technology could help many regain their voices.
The team from California employed a brain-computer interface (BCI) to decode the thoughts of Ann, a 47-year-old woman who has been quadriplegic and unable to speak for 18 years following a stroke. They were able to translate her silent thoughts into audible speech.
Previously, Ann experienced an eight-second delay between thinking a sentence and hearing the computer read it aloud, a lag that prevented the former high school math teacher from holding a fluid conversation.
During the study, the research team recorded Ann's brain activity with electrodes as she silently articulated phrases. They created a synthesizer using recordings of her voice before her injury, allowing them to produce sounds that resembled her actual speech. They then trained an AI model to translate her neural activity into sound units.
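The pipeline described above can be sketched as a mapping from neural-activity feature vectors to discrete sound units that a voice synthesizer would then render as audio. Everything in this sketch is a hypothetical illustration (the feature dimensions, the nearest-centroid decoder, and the unit inventory are made up); the actual study used a trained deep learning model, not this toy rule.

```python
import math

# Hypothetical inventory of discrete sound units (roughly half-syllables).
# Toy "trained" centroids: one neural-feature vector per sound unit.
# In the real system this mapping is learned by a deep network from
# thousands of silently attempted phrases; these numbers are invented.
CENTROIDS = {
    "AH": [0.9, 0.1, 0.0],
    "B":  [0.1, 0.8, 0.1],
    "EE": [0.0, 0.2, 0.9],
    "S":  [0.5, 0.5, 0.0],
    "T":  [0.3, 0.0, 0.7],
}

def decode_unit(features):
    """Map one neural-feature vector to the nearest sound-unit centroid."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CENTROIDS, key=lambda u: dist(features, CENTROIDS[u]))

# A short run of (made-up) neural feature frames decodes to a unit
# sequence that a downstream synthesizer would voice in Ann's timbre.
frames = [[0.85, 0.15, 0.05], [0.05, 0.25, 0.85]]
units = [decode_unit(f) for f in frames]
print(units)  # → ['AH', 'EE']
```

The point of the sketch is the division of labor: one stage turns brain signals into a compact symbolic code, and a separate synthesizer, built from pre-injury recordings, turns that code into her own voice.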
This new system operates similarly to existing technologies used for live transcription of meetings or phone calls, explained Gopala Anumanchipalli from the University of California, Berkeley.
The implant is positioned in the area of the brain responsible for speech, enabling it to listen in and convert incoming signals into speech fragments that can be formed into sentences. According to Anumanchipalli, this transmission method sends segments of speech lasting 80 milliseconds (roughly half a syllable) to a recording device.
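The streaming idea, emitting speech in roughly 80-millisecond segments rather than waiting for a whole sentence, can be illustrated with a toy chunker. Only the 80 ms segment length comes from the article; the sample rate, signal, and function are assumptions for illustration.

```python
# Toy illustration of streaming decode: slice an incoming signal into
# 80 ms windows and emit each one as soon as it is complete, instead of
# buffering an entire sentence before speaking.
SAMPLE_RATE = 1000                         # samples/second (hypothetical)
WINDOW_MS = 80                             # segment length from the article
WINDOW = SAMPLE_RATE * WINDOW_MS // 1000   # 80 samples per segment

def stream_segments(signal):
    """Yield fixed 80 ms segments in arrival order."""
    for start in range(0, len(signal) - WINDOW + 1, WINDOW):
        yield signal[start:start + WINDOW]

signal = list(range(400))                  # 400 ms of fake samples
segments = list(stream_segments(signal))
print(len(segments))                       # → 5 segments of 80 samples
```

This is what keeps latency under a second: each half-syllable-sized segment can be synthesized and played while the next one is still being decoded.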
"Our new real-time approach is capable of converting brain signals into Ann's personalized voice almost immediately – in less than a second from the moment she attempts to speak," Anumanchipalli told Agence France-Presse (AFP).
Ann's ultimate goal is to become a university counselor. Although this aspiration is still a long way off, this breakthrough brings us closer to significantly enhancing the quality of life for those with vocal paralysis.
Listening to Her Own Voice
During the research process, Ann was able to see sentences displayed on a screen, such as "So you love me," which she silently recited in her mind.
These brain signals were rapidly converted into the voice that the researchers had reconstructed from her pre-injury recordings. Hearing her own voice again moved Ann deeply, and she reported a sense of embodiment, Anumanchipalli noted.
The AI model employs a deep learning algorithm trained on thousands of phrases Ann attempted to say silently. It is not perfectly accurate, and its vocabulary is currently limited to 1,024 words.
Patrick Degenaar, an expert in neuroprosthetics from Newcastle University in the UK who was not involved in the study, highlighted that this research represents a preliminary exploration into the effectiveness of the method. However, he found it to be impressive.
Degenaar noted that this system uses electrodes that do not penetrate the brain, unlike the BCI developed by Elon Musk's company Neuralink. The procedure for placing such electrodes is relatively common in hospitals that diagnose epilepsy, suggesting the technology could be scaled up fairly easily.