Research Focuses on How Brain Turns Sound Into Comprehensible Language
While hearing aids and other assistive devices have improved over the years, users can still experience an auditory muddle of competing noise. But imagine a device that could help its wearer tune in to just one person in a crowded room, while pushing unwanted sounds into the background.
To get there, University of Maryland researchers are working toward a better understanding of how the brain turns sound into comprehensible language, supported by a new five-year, $2.88 million grant from the National Institute on Deafness and Other Communication Disorders at the National Institutes of Health.
The study will bring researchers another step closer to mapping out the “neural pathway” that connects our ears to our brains, which is only now beginning to be studied in detail. Understanding the many linkages, stages and processes that turn sound into meaning is a key step toward developing better hearing assistive devices, the researchers say.
Professor Jonathan Simon, who holds appointments in electrical and computer engineering (ECE), biology and the Institute for Systems Research (ISR), is leading the research. He’s working with colleagues Behtash Babadi, an associate professor in ECE and ISR; Samira Anderson, an associate professor of hearing and speech sciences (HESP); and Stefanie Kuchinsky, a HESP affiliate researcher.
Individual links in this integrated processing chain have been studied in isolation, and their roles are fairly well understood; how the whole system works together to enable language comprehension is far less clear.
“For many years, treatment of hearing difficulties has focused on the ear,” Anderson said, “but this approach does not consider the need for accurate representation of sound from ear to cortex to perceive and understand speech and other auditory signals. A better understanding of speech processing along the entire auditory pathway is a first step in developing more individualized treatment strategies for individuals with hearing loss.”
In the new project, electroencephalography (EEG) and magnetoencephalography (MEG) recordings of young listeners with normal hearing will capture brain activity from the midbrain all the way up to the cortical language areas.
“Using EEG and MEG to simultaneously measure both midbrain and cortical speech processing puts us at the cutting edge of the field,” Simon said. “Nobody else is doing that yet.”
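The article doesn’t describe the analysis itself, but a common way researchers in this field quantify how well EEG or MEG signals “track” speech is to correlate the slow acoustic envelope of the stimulus with the recorded brain response across a range of time lags. The Python sketch below illustrates that idea on synthetic data; the sampling rate, neural latency and noise level are illustrative assumptions, not details of the UMD study.

```python
# Illustrative sketch: quantifying neural "tracking" of the speech envelope
# by correlating stimulus and response at different lags. All signals are
# synthetic; rates, latency and noise are assumptions, not study parameters.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100                       # sampling rate (Hz), typical for envelope analyses
n = 60 * fs                    # one minute of "stimulus"
rng = np.random.default_rng(0)

# Synthetic speech envelope: slow (<8 Hz) fluctuations, like syllable rhythms
b, a = butter(4, 8 / (fs / 2), btype="low")
envelope = filtfilt(b, a, rng.standard_normal(n))

# Synthetic "cortical" response: the envelope delayed ~100 ms, plus noise
delay = int(0.1 * fs)
response = np.concatenate([np.zeros(delay), envelope[:-delay]]) \
           + 0.5 * rng.standard_normal(n)

# Correlate envelope and response at lags from 0 to 300 ms; the peak tells
# us how strongly, and at what latency, the response follows the stimulus
lags = np.arange(int(0.3 * fs))
corrs = [np.corrcoef(envelope[:n - lag], response[lag:])[0, 1] for lag in lags]

best = int(np.argmax(corrs))
print(f"peak tracking: r = {corrs[best]:.2f} at {1000 * lags[best] / fs:.0f} ms lag")
```

On this toy data the peak correlation lands at the built-in 100 ms latency; with real recordings, the strength and latency of that peak are what vary across listeners and listening conditions.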
Because listening effort may play a role in comprehension, the researchers will also use pupillometry, which measures changes in pupil dilation, to gauge how much effort participants are expending, in addition to asking them to rate that effort themselves.
“The results of this project will help validate pupillometry as an objective measure of listening effort by linking it to a well-studied set of neural systems,” Kuchinsky said. “Long term, we aim for this knowledge to improve our ability to both measure and mitigate the communication challenges people face in their daily lives.”
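As a rough illustration of how pupillometry yields an effort measure, the sketch below computes a baseline-corrected task-evoked pupil dilation, a widely used index in this literature, on synthetic data. The sampling rate, trial structure and magnitudes are assumptions, not the study’s protocol.

```python
# Illustrative sketch: baseline-corrected task-evoked pupil dilation as a
# listening-effort index. Synthetic data; window lengths and magnitudes
# are assumptions, not the study's protocol.
import numpy as np

fs = 60                               # eye-tracker sampling rate (Hz), assumed
rng = np.random.default_rng(1)

# Synthetic pupil trace: 1 s pre-stimulus baseline, then a slow dilation
# while the listener works to understand a sentence in noise
baseline = 3.0 + 0.02 * rng.standard_normal(fs)                  # diameter (mm)
trial = (3.0 + 0.3 * (1 - np.exp(-np.arange(3 * fs) / fs))
             + 0.02 * rng.standard_normal(3 * fs))

# Effort index: dilation during the trial relative to the baseline mean
baseline_mean = baseline.mean()
mean_dilation = (trial - baseline_mean).mean()
peak_dilation = (trial - baseline_mean).max()
print(f"task-evoked dilation: mean {mean_dilation:.2f} mm, peak {peak_dilation:.2f} mm")
```

Larger task-evoked dilations are generally read as greater effort, which is why the project pairs this measure with participants’ own effort ratings and the neural recordings.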
The researchers aim to uncover both the acoustic and neural conditions under which speech is perceived as intelligible. Understanding how speech processing progresses along this pathway, and what compensating mechanisms the brain uses in poor listening conditions, should yield principles for developing “brain-aware,” automatically tuning hearing assistive devices that use “brain activity as feedback in real time to enhance speech intelligibility,” Babadi said.
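The article sketches this goal only at a high level. Purely to illustrate what using brain activity as real-time feedback could mean, here is a toy closed-loop sketch in which a simulated neural tracking score drives the gain applied to a target talker. Every function, threshold and step size here is a hypothetical placeholder, not the researchers’ design.

```python
# Toy sketch of the closed-loop idea behind a "brain-aware" hearing device:
# a neural speech-tracking score drives the gain on the target talker.
# Everything here (score model, gains, step sizes) is a made-up placeholder.
import random

def neural_tracking_score(gain: float) -> float:
    """Stand-in for a real-time measure of how well the brain tracks speech.
    Here: tracking improves with gain up to a ceiling, plus measurement noise."""
    return min(1.0, 0.4 + 0.15 * gain) + random.gauss(0, 0.02)

gain, step, target = 1.0, 0.2, 0.85   # arbitrary starting point and target
for cycle in range(20):
    score = neural_tracking_score(gain)
    # Simple feedback rule: raise the target talker's gain while tracking
    # falls short of the target, ease off gently once it is reached
    gain += step if score < target else -step / 4
    gain = max(0.0, gain)
print(f"settled gain: {gain:.2f}")
```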