| Funder | National Science Foundation (US) |
|---|---|
| Recipient Organization | Johns Hopkins University |
| Country | United States |
| Start Date | Jul 01, 2025 |
| End Date | Jun 30, 2028 |
| Duration | 1,095 days |
| Number of Grantees | 1 |
| Roles | Principal Investigator |
| Data Source | National Science Foundation (US) |
| Grant ID | 2444353 |
Every day, our brains seamlessly pick out meaningful information from the cacophony of sounds that enter our ears. This ability to make sense of our acoustic surroundings relies on the brain recognizing patterns and predicting what might come next. However, in real-world environments, sounds are rarely perfectly predictable, raising the question: how does the brain track information when patterns are uncertain or incomplete?
This research seeks to understand how the brain extracts and organizes auditory information from complex, dynamic soundscapes. The work has broad implications for real-world applications, from improved audio technologies and assistive listening devices to insight into auditory processing challenges in specific clinical populations.
The project explores how the brain infers statistical structure from sound sequences, focusing on the way it builds an internal model of the world to interpret and predict auditory experiences. While past research has shown that the brain can recognize predictable patterns in sound, real-world listening often involves incomplete or stochastic sounds, making it unclear how this predictive process functions in natural environments.
The investigator uses a combination of computational modeling, behavioral testing, and neurophysiological (EEG) experiments to study how the brain tracks complex statistical relationships and deeper temporal and multi-feature dependencies in sound sequences to guide perception. Individual differences, such as working memory capacity and perceptual sensitivity, are also evaluated for their effect on statistical tracking abilities.
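The kind of statistical tracking described above can be illustrated with a toy model. The sketch below is not the project's actual computational model; it is a minimal, hypothetical example (the function names `transition_probs` and `surprisal` are invented for illustration) of how a listener might estimate first-order transition probabilities in a tone sequence and quantify how surprising each new tone is:

```python
from collections import defaultdict
import math

def transition_probs(sequence):
    """Count first-order transitions between tokens and normalize to probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(sequence, sequence[1:]):
        counts[prev][nxt] += 1
    probs = {}
    for prev, nxts in counts.items():
        total = sum(nxts.values())
        probs[prev] = {tone: c / total for tone, c in nxts.items()}
    return probs

def surprisal(probs, prev, nxt):
    """Shannon surprisal (in bits) of hearing `nxt` after `prev`; infinite if unseen."""
    p = probs.get(prev, {}).get(nxt, 0.0)
    return math.inf if p == 0.0 else -math.log2(p)

# A toy tone sequence in which 'A' is usually, but not always, followed by 'B'
seq = list("ABABABABAC")
probs = transition_probs(seq)
# The rare A->C transition carries more surprisal than the common A->B one
low = surprisal(probs, "A", "B")   # ~0.32 bits
high = surprisal(probs, "A", "C")  # ~2.32 bits
```

In a predictive-coding framing, high-surprisal events like the A→C transition are the ones expected to evoke stronger neural prediction-error responses, which is the sort of relationship the EEG experiments could probe.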
The project is structured around two key aims: (1) developing a theoretical framework for statistical inference in auditory processing, supported by behavioral and EEG experiments on structured sound sequences, and (2) examining how the brain integrates statistical information across multiple perceptual features. By building a comprehensive model of predictive coding in the auditory system, this research will contribute to a deeper understanding of how the brain processes real-world acoustic scenes and inform studies on auditory object perception, sensory integration, and cognitive differences in auditory inference.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.