April 29, 2025

Brain-computer interface restores natural speech after paralysis

At a Glance

  • Researchers developed a brain-computer interface that quickly translates brain activity into audible words.
  • Such devices could allow people who’ve lost the ability to speak from paralysis or disease to engage in natural conversations.
Researchers connect the participant’s brain implant to the voice synthesizer computer. Credit: Noah Berger

Brain injury from conditions like stroke can cause paralysis, including loss of the ability to speak. Scientists have been developing brain-computer interfaces, or BCIs, that can translate brain activity into written or audible words to restore communication. But earlier devices had a notable delay between a person thinking what they wanted to say and the computer delivering the words. Even brief time lags can disrupt the flow of a conversation, leaving people feeling frustrated or isolated.

An NIH-funded team led by Dr. Edward F. Chang at the University of California, San Francisco, and Dr. Gopala Anumanchipalli at the University of California, Berkeley, set out to develop an improved brain-to-voice neuroprosthesis. The ideal device would stream audible speech without delay while a person silently attempted to speak.

The researchers implanted an array of electrodes in a 47-year-old woman with paralysis, placed over the brain area that encodes speech. She hadn’t been able to speak or make any vocal sounds for 18 years following a stroke. The team then used a deep learning system they designed to translate her attempted speech into spoken words. Results appeared in Nature Neuroscience on March 31, 2025.

To train the system, the team recorded the woman’s brain activity as she silently attempted to speak a series of sentences. The sentences included more than 1,000 different words taken from social media and movie transcripts. Altogether, she made more than 23,000 silent attempts to speak more than 12,000 sentences.

A streaming brain-to-voice neuroprosthesis to restore naturalistic communication. Credit: Berkeley Engineering

The system was trained to decode words and turn them into speech in increments of 80 milliseconds (0.08 seconds). For comparison, people speak at around 130 words per minute, or roughly two words per second. The system then delivered audible words using the woman’s own voice, which was reconstructed from a recording made before her stroke.
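The key idea behind the low latency is incremental decoding: rather than waiting for a whole sentence, the decoder emits words as each short window of neural activity arrives. The sketch below illustrates that pattern in miniature; the data format, the `fake_decoder` stand-in, and all names are hypothetical, not the authors' actual model or code.

```python
# Illustrative sketch of streaming decoding in fixed 80 ms increments.
# This is NOT the study's deep-learning system; it only shows the
# chunk-by-chunk timing pattern described in the article.

CHUNK_MS = 80  # decoding increment reported in the study

def fake_decoder(chunk):
    """Hypothetical stand-in for the trained model: maps one 80 ms
    window of neural features to zero or more decoded words."""
    return chunk.get("words", [])

def stream_decode(neural_chunks):
    """Yield (time_ms, word) pairs as each 80 ms chunk arrives,
    instead of waiting for a full sentence — the source of earlier
    devices' conversation-disrupting lag."""
    elapsed_ms = 0
    for chunk in neural_chunks:
        elapsed_ms += CHUNK_MS
        for word in fake_decoder(chunk):
            # Each word becomes audible ~elapsed_ms after speech onset.
            yield elapsed_ms, word

# Toy input: three 80 ms windows of simulated neural activity.
chunks = [{"words": ["hello"]}, {"words": []}, {"words": ["world"]}]
decoded = list(stream_decode(chunks))
for t, w in decoded:
    print(f"{t} ms: {w}")
```

With this structure, the first word is available one chunk (80 ms) after it is attempted, rather than after the sentence ends.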

The system was able to decode the full vocabulary set at a rate of 47.5 words per minute. It could decode a simpler set of 50 words even more rapidly, at 90.9 words per minute. That’s much faster than an earlier device the researchers had developed, which decoded about 15 words per minute with a 50-word vocabulary. The new device had a more than 99% success rate in decoding and synthesizing speech in less than 80 milliseconds. It took less than a quarter of a second to translate speech-related brain activity into audible speech.
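The reported rates can be put side by side with a quick back-of-the-envelope check; the numbers below are taken directly from the article, and the comparison is just arithmetic, not a result from the study itself.

```python
# Comparing the decoding rates reported in the article (words per minute).
full_vocab_wpm = 47.5     # new device, 1,000+ word vocabulary
small_vocab_wpm = 90.9    # new device, 50-word vocabulary
earlier_device_wpm = 15.0 # researchers' earlier device, 50-word vocabulary

# On the matched 50-word vocabulary, the new device is about six
# times faster than its predecessor.
speedup = small_vocab_wpm / earlier_device_wpm
print(f"~{speedup:.1f}x faster than the earlier device")  # ~6.1x
```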

The researchers found that the system wasn’t limited to trained words or sentences. It could decode novel words and new sentences to produce fluent speech, and it could keep producing speech indefinitely without interruption.

“Our streaming approach brings the same rapid speech decoding capacity of devices like Alexa and Siri to neuroprostheses,” Anumanchipalli says. “Using a similar type of algorithm, we found that we could decode neural data and, for the first time, enable near-synchronous voice streaming. The result is more naturalistic, fluent speech synthesis.”

“This new technology has tremendous potential for improving quality of life for people living with severe paralysis affecting speech,” Chang says. “It is exciting that the latest AI advances are greatly accelerating BCIs for practical real-world use in the near future.”

The results show that such devices can allow those unable to speak to join in more natural conversation again. But further study is needed to test the device in more people. The researchers also want to continue improving the system. For example, they want to allow changes in tone, pitch, and volume to produce speech reflecting a person’s emotional state. 

—by Kendall K. Morgan, Ph.D.

References: A streaming brain-to-voice neuroprosthesis to restore naturalistic communication. Littlejohn KT, Cho CJ, Liu JR, Silva AB, Yu B, Anderson VR, Kurtz-Miott CM, Brosler S, Kashyap AP, Hallinan IP, Shah A, Tu-Chan A, Ganguly K, Moses DA, Chang EF, Anumanchipalli GK. Nat Neurosci. 2025 Apr;28(4):902-912. doi: 10.1038/s41593-025-01905-6. Epub 2025 Mar 31. PMID: 40164740.

Funding: NIH’s National Institute on Deafness and Other Communication Disorders (NIDCD); Japan Science and Technology Agency’s Moonshot Research and Development Program; Joan and Sandy Weill Foundation; Susan and Bill Oberndorf; Ron Conway; Graham and Christina Spencer and the William K. Bowes Jr. Foundation; UC Noyce Initiative; Rose Hills Innovator program; Google Research Scholar Award; National Science Foundation; BAIR.