
Computer scientists at the university have developed a neuro-speech prosthesis. This allows imagined speech to be made acoustically audible. Illustration: CSL / University of Bremen

Groundbreaking research success: Speaking by imagining

Major research successes require international collaboration: the Cognitive Systems Lab (CSL) at the University of Bremen, the Department of Neurosurgery at Maastricht University in the Netherlands, and the ASPEN Lab at Virginia Commonwealth University (USA) have been working on a neuro-speech prosthesis for several years. The aim: to translate speech-related neuronal processes in the brain directly into audible speech. This goal has now been achieved: "We have managed to make our test subjects hear themselves speak, even though they are only imagining speaking," says Professor Tanja Schultz, head of the CSL. "The brainwave signals of volunteers who imagine they are speaking are converted directly into an audible output by our neuro-speech prosthesis, in real time and without any perceptible delay!" The sensational research result has now been published in the prestigious Nature Portfolio journal "Communications Biology".

The innovative neuro-speech prosthesis is based on a closed-loop system that combines technologies from modern speech synthesis with brain-computer interfaces. This system, developed by Miguel Angrick at the CSL, receives as input the neuronal signals of users who imagine speaking. Using machine learning methods, it transforms these signals into speech almost simultaneously and plays them back audibly to the user. "This closes the circle for them between imagining speaking and hearing their speech," says Angrick.
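To illustrate the general shape of such a closed loop (stream in a window of neural activity, decode it, play the result back immediately), here is a minimal Python sketch. It is purely illustrative: all names, dimensions, and the trivial stand-in functions (read_neural_window, extract_features, LinearDecoder, play) are assumptions for demonstration, not the published CSL implementation, which uses models trained on the patient's own speech.

    # Minimal sketch of a closed-loop neuro-speech pipeline (hypothetical,
    # not the published system): neural windows are streamed in, decoded
    # to audio frames, and immediately played back to the user.
    import numpy as np

    WIN = 50          # window length in samples (e.g., 50 ms at 1 kHz; assumed)
    N_CHANNELS = 64   # number of intracranial electrode channels (assumed)
    N_AUDIO = 256     # audio samples produced per neural window (assumed)

    def read_neural_window():
        """Stand-in for electrode acquisition: one window of neural data."""
        return np.random.randn(N_CHANNELS, WIN)

    def extract_features(window):
        """Toy feature extraction: per-channel log signal power."""
        return np.log(np.mean(window ** 2, axis=1) + 1e-8)

    class LinearDecoder:
        """Toy decoder mapping neural features to an audio frame; a real
        system would use a model trained on recorded audible speech."""
        def __init__(self):
            self.W = np.random.randn(N_AUDIO, N_CHANNELS) * 0.01

        def __call__(self, features):
            return self.W @ features

    def play(audio_frame):
        """Stand-in for low-latency audio output (the feedback step)."""
        pass

    decoder = LinearDecoder()
    for _ in range(100):              # closed loop: imagine -> decode -> hear
        feats = extract_features(read_neural_window())
        play(decoder(feats))

The key design point the sketch tries to convey is that decoding happens window by window, so the user hears output while still imagining speech, rather than after an utterance is complete.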


Study with volunteer epilepsy patient

The work published in "Communications Biology" is based on a study with a volunteer epilepsy patient who had been implanted with depth electrodes for medical examinations and was in hospital for clinical monitoring. In the first step, the patient read texts aloud, from which the closed-loop system learned the correspondence between speech and neuronal activity using machine learning methods. "In the second step, this learning process was repeated with whispered and imagined speech," explains Miguel Angrick. "The closed-loop system generated synthesized speech. Although the system had learned the correspondences exclusively from audible speech, it also produced audible output for whispered and imagined speech." This leads to the conclusion that the speech processes in the brain underlying audibly produced speech are comparable to those underlying whispered and imagined speech.
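As a concrete illustration of this two-step idea, the following hedged Python sketch fits a simple ridge-regression model on synthetic "audible speech" data and then applies it unchanged to "imagined speech" input. The data, dimensions, and model choice are assumptions for demonstration only; the study's actual decoding models and features are described in the publication.

    # Hedged sketch of the study's two-step procedure (synthetic data,
    # not the authors' code): a model is fitted only on audible speech,
    # then applied unchanged to neural activity recorded during
    # whispered or imagined speech.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    N_FRAMES, N_NEURAL, N_MEL = 2000, 64, 40   # assumed dimensions

    # Step 1: audible speech -- neural features paired with acoustic targets
    X_audible = rng.standard_normal((N_FRAMES, N_NEURAL))
    Y_acoustic = rng.standard_normal((N_FRAMES, N_MEL))  # e.g., spectral frames
    model = Ridge(alpha=1.0).fit(X_audible, Y_acoustic)

    # Step 2: imagined speech -- no acoustic ground truth exists, yet the
    # model trained on audible speech still yields an acoustic output,
    # which a vocoder could turn into an audible waveform
    X_imagined = rng.standard_normal((200, N_NEURAL))
    Y_synth = model.predict(X_imagined)
    print(Y_synth.shape)   # (200, 40): one predicted acoustic frame per window

That the same mapping transfers from audible to imagined speech is exactly what supports the paper's conclusion that the underlying neural speech processes are comparable.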


Important role of the Bremen Cognitive Systems Lab

"Speech neuroprosthetics aims to provide a natural communication channel for people who are unable to speak due to physical or neurological impairments," says Professor Tanja Schultz, explaining the background to the intensive research activities in this field, in which the Cognitive Systems Lab at the University of Bremen plays a globally recognized role. "The real-time synthesis of acoustic speech directly from measured neuronal activity could enable natural conversations and significantly improve the quality of life of people whose communication options are severely limited."


The groundbreaking innovation is the result of a long-term collaboration funded jointly by the German Federal Ministry of Education and Research (BMBF) and the US National Science Foundation (NSF) as part of the "Multilateral Cooperation in Computational Neuroscience" research program. This collaboration with Professor Dean Krusienski (ASPEN Lab, Virginia Commonwealth University) was established together with former CSL employee Dr. Christian Herff, now an assistant professor at Maastricht University, as part of the successful RESPONSE (REvealing SPONtaneous Speech processes in Electrocorticography) project. It is currently being continued with CSL employee Miguel Angrick in the ADSPEED project (ADaptive Low-Latency SPEEch Decoding and synthesis using intracranial signals).

Link to the original publication: https://www.nature.com/articles/s42003-021-02578-0

Link to a report (in German) on Medical-Design.News: https://www.medical-design.news/trends-innovation/medizintechnik/vorgestellte-sprache-akustisch-hoeren.190307.html

Further information:
www.uni-bremen.de/csl
www.uni-bremen.de


Questions will be answered by:

Prof. Dr.-Ing. Tanja Schultz
University of Bremen / Cognitive Systems Lab
Department of Mathematics/Computer Science
Phone: +49 421 218-64270
E-mail: tanja.schultz@uni-bremen.de