Maryland Today


Research

Stop—Hey, What’s That Sound?

New Maryland Study Explores How the Brain Interprets Sounds as Words

By Rebecca Copeland

Sound illustration (Illustration by iStock)

You’re walking along a busy city street. All around are the sounds of subway trains, traffic and music wafting from storefronts. Suddenly, you realize you’re listening to a voice amidst the cacophony—and even understand what’s being said.

Though it happens every day, it hasn't been clear how we do this. But now researchers at the University of Maryland are learning more about the brain's response when it picks up on spoken language.

Neuroscientists already understand that our brains react differently to understandable language than to non-speech sounds or unknown languages—shifting quickly to pay attention and process the sounds into comprehensible meaning. In a new paper published in the journal Current Biology, Maryland researchers were able to see where in the brain, and how quickly—in milliseconds—this actually happens.

“When we listen to someone talking, the change in our brain’s processing from not caring what kind of sound it is to recognizing it as a word happens surprisingly early,” said Professor Jonathan Z. Simon, who has appointments in biology, electrical and computer engineering and the Institute for Systems Research (ISR). “In fact, this happens pretty much as soon as the linguistic information becomes available.”

When engaged in speech perception, the brain’s auditory cortex analyzes complex acoustic patterns to detect words that carry a linguistic message. Its efficiency at this task seems to stem partly from an ability to anticipate; by learning what sounds signal language most frequently, the brain can predict what may come next.

In the Maryland study, the researchers mapped and analyzed participants' brain activity as they listened to a single talker telling a story. They used magnetoencephalography (MEG), a non-invasive neuroimaging method that records the magnetic fields naturally produced by the brain.

The study showed that the brain quickly recognizes the phonetic sounds that make up syllables and transitions from processing merely acoustic information to processing linguistic information in a highly specialized and automated way, doing so within about one-tenth of a second.

“We usually think that what the brain processes this early must be only at the level of sound, without regard for language,” said lead author Christian Brodbeck, an ISR postdoctoral researcher. “But if the brain can take knowledge of language into account right away, it would actually process sound more accurately. In our study we see that the brain takes advantage of language processing at the very earliest stage it can.”

Psychiatry Professor L. Elliot Hong of the University of Maryland School of Medicine was the paper’s third author.

In another part of the study, participants were instructed to listen to one of two speakers in a noisy “cocktail party” scenario. Researchers found that participants’ brains consistently processed language only in the conversation they were told to pay attention to, not the one they were ignoring. This could reveal a “bottleneck” in our brains’ speech perception, limiting how much incoming information we can process, Brodbeck said.

Topics:

Research

Maryland Today is produced by the Office of Marketing and Communications for the University of Maryland community on weekdays during the academic year, except for university holidays.