Brain implant enables a paralyzed man with severe speech loss to “communicate” once more – CNET


UCSF’s brain-computer interface is surgically placed directly on a patient’s motor cortex to enable communication.

Ken Probst, UCSF

Facebook’s work on neural input technology for AR and VR appears to be shifting toward the wrist, but the company continues to fund research into implanted brain-computer interfaces. The latest phase of a year-long, Facebook-funded study at UCSF, called Project Steno, translates the conversation attempts of a speech-impaired, paralyzed patient into words on a screen.

“This is the first time someone who is simply trying to say words naturally can be decoded into words from brain activity alone,” said Dr. David Moses, lead author of a study published Wednesday in the New England Journal of Medicine. “Hopefully this is the proof of principle for direct speech control of a communication device, using intended speech attempts as the control signal by someone who cannot speak, who is paralyzed.”


Brain-computer interfaces (BCIs) have been behind a number of promising recent breakthroughs, including Stanford research that could turn imagined handwriting into projected text. The UCSF study takes a different approach, analyzing actual speech attempts and acting almost like a translator.

The study, led by UCSF neurosurgeon Dr. Edward Chang, involved implanting a “neuroprosthesis” of electrodes in a paralyzed man who had suffered a brainstem stroke at age 20. As the man attempted to answer questions displayed on a screen, UCSF’s machine-learning algorithms recognized words from a 50-word vocabulary and converted them into sentences in real time. For example, when the patient saw the prompt “How are you today?”, the answer “I’m very good” appeared on the screen word by word.
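To make the decoding pipeline described above more concrete, here is a minimal, purely illustrative sketch in Python. It assumes a generic linear classifier over windows of neural features and a greedy word-by-word readout; the vocabulary, feature sizes, and function names are hypothetical stand-ins, not the models from the UCSF paper (which also applies a language model to assemble and correct sentences).

import numpy as np

# Hypothetical 8-word stand-in for the study's 50-word vocabulary.
VOCAB = ["i", "am", "very", "good", "you", "how", "are", "today"]

def classify_word(neural_window, weights):
    # Softmax over a linear readout of one window of neural features --
    # a placeholder for the study's neural-network word classifier.
    logits = neural_window @ weights
    exp = np.exp(logits - logits.max())
    return dict(zip(VOCAB, exp / exp.sum()))

def decode_sentence(windows, weights):
    # Greedily pick the most probable word for each detected speech attempt.
    # The real system additionally uses a sentence-level language model to fix errors.
    words = []
    for w in windows:
        probs = classify_word(w, weights)
        words.append(max(probs, key=probs.get))
    return " ".join(words)

# Demo with random numbers standing in for cortical activity
# (so the decoded "sentence" is meaningless here).
rng = np.random.default_rng(0)
weights = rng.normal(size=(128, len(VOCAB)))
attempts = [rng.normal(size=128) for _ in range(3)]
print(decode_sentence(attempts, weights))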

Moses made it clear that the work will continue beyond Facebook’s funding phase and that there is still a great deal of research ahead. At the moment, it is unclear how much of the speech recognition comes from recorded patterns of brain activity, from vocal utterances, or from a combination of the two.

Moses is quick to point out that the study, like other BCI work, is not mind reading: it relies on sensing brain activity that occurs specifically when a person tries to engage in a certain behavior, such as speaking. Moses also says the UCSF team’s work does not yet translate to non-invasive neural interfaces. Elon Musk’s Neuralink promises wireless transmission of data from brain-implanted electrodes for future research and assistive purposes, but so far that technology has only been demonstrated in a monkey.


Facebook Reality Labs’ head-worn BCI research prototype, which does not use implanted electrodes, is being open-sourced.

Facebook

Meanwhile, Facebook Reality Labs Research has moved away from head-worn brain-computer interfaces for future VR/AR headsets and is focusing for the near term on wrist-worn devices based on technology acquired from CTRL-Labs. Facebook Reality Labs had its own non-invasive head-worn research prototype for studying brain activity, and the company has announced that it will make the hardware available for open-source research as it no longer focuses on head-mounted neural hardware. (UCSF receives funding from Facebook, but no hardware.)

“Aspects of the optical head-mounted work will apply to our wrist-based EMG research. We will continue to use optical BCI as a research tool to build better wrist-based sensor models and algorithms. While we will keep using these prototypes in our research, we are no longer developing a head-mounted optical BCI device to sense speech production. That’s one reason why we will be sharing our head-mounted hardware prototypes with other researchers, who can apply our innovation to other use cases,” a Facebook representative confirmed via email.

However, consumer-centric neural input technology is still in its infancy. While there are consumer devices that use non-invasive head or wrist sensors, they are currently far less accurate than implanted electrodes.
