Status Completed
Grant Type Continuing Grant

US-German Research Proposal: ADaptive low-latency SPEEch Decoding and synthesis using intracranial signals (ADSPEED)

$6.05M USD

Funder National Science Foundation (US)
Recipient Organization Virginia Commonwealth University
Country United States
Start Date Jan 01, 2021
End Date Dec 31, 2025
Duration 1,825 days
Number of Grantees 2
Roles Principal Investigator; Co-Principal Investigator
Data Source National Science Foundation (US)
Grant ID 2011595
Grant Description

Recent research has demonstrated that it is possible to synthesize intelligible speech sounds directly from invasive measurements of brain activity. However, these approaches have a perceptible delay between brain activity and audible speech output, preventing natural spoken communication. Furthermore, the approaches generally require pre-recorded speech and thus cannot be directly applied to people who are unable to speak and generate such recordings.

This project aims to develop methods for synthesizing speech from brain activity without perceptible processing delay that do not rely on pre-recorded speech from the user. The ultimate goal is to develop a system that restores natural spoken communication to the millions of people who suffer from severe speech disorders, including those with complete loss of speech.

The project is organized into three research thrusts. The first thrust focuses on asynchronous and acoustics-free model training, where novel surrogates for the user's vocalized speech will be created using approaches based on dynamic time warping and the inference of intended inner-speech acoustics from corresponding textual representations. The second thrust focuses on online validation and user adaptation, where the existing low-latency speech decoding and synthesis scheme, which is not inherently adaptable, will be validated in a closed-loop fashion using online human-subject experiments.
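Dynamic time warping, mentioned in the first thrust, aligns two sequences that unfold at different rates. The sketch below is a minimal illustration of the general technique, not of the project's actual pipeline: it uses simple one-dimensional sequences as stand-ins for neural and acoustic feature streams, and all names are illustrative.

```python
def dtw_distance(a, b):
    """Return the DTW alignment cost between sequences a and b.

    A minimal dynamic-programming implementation: cost[i][j] is the
    best cost of aligning a[:i] with b[:j].
    """
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])  # local distance (1-D here)
            cost[i][j] = d + min(cost[i - 1][j],      # step in a only
                                 cost[i][j - 1],      # step in b only
                                 cost[i - 1][j - 1])  # step in both
    return cost[n][m]

# A time-stretched copy still aligns with zero cost despite the
# length mismatch, which is the property that makes DTW useful for
# pairing signals recorded at different speeds.
print(dtw_distance([1, 2, 3], [1, 2, 3]))        # 0.0
print(dtw_distance([1, 2, 3], [1, 1, 2, 2, 3]))  # 0.0
```

In practice the local distance would compare feature vectors (e.g. a Euclidean distance between frames) rather than scalars, but the recurrence is the same.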

This will provide valuable insights into how the user responds and adapts to the artificial, synthesized speech output. The third thrust focuses on the development and testing of low-latency system-user co-adaptation schemes. Co-adaptation, where both the user and system adapt to optimize the synthesized output, is crucial for revealing the elusive representations of inner (i.e., imagined or attempted) speech in the absence of a reliable surrogate for modeling.

As a result, this research will simultaneously advance the understanding of the neural representations of inner speech and, in turn, advance co-adaptive inner-speech decoding toward practical closed-loop speech neuroprostheses. A companion project is being funded by the Federal Ministry of Education and Research, Germany (BMBF).

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

All Grantees

Virginia Commonwealth University
