Maryland Today

Research

These Researchers Want to Read Your Mind

New UMD Study Seeks to Decipher ‘Imagined Speech’ in the Brain

By Maggie Haslam

A UMD neuroscientist is leading a study that aims to use brain imaging to decode “imagined speech” with the goal of helping nonverbal people communicate.

Illustration by Valerie Morgan

“Tell me what you’re really thinking” rarely gets a straightforward response, even from the most candid individual. But for those trying to reach someone who’s unable to communicate, even a hint of whether the person is in pain or needs something would be welcome.

A new study into how we “imagine” speech by researchers at the University of Maryland may one day help reveal a person’s inner monologue by transforming the jagged blips of neuroactivity on a brain scan into words.

“This capability could be transformative for people who are fully active in the mind, but unable to communicate because of a physical impairment,” said electrical and computer engineering Professor Shihab Shamma, who is leading the research effort. “This is really like a window to the mind.”

His research team was one of six to receive funding through the Defense University Research Instrumentation Program (DURIP), which supports the instrumentation necessary for cutting-edge research. The nearly $300,000, one-year project, which began last week, will fund the equipment for intensive electroencephalography (EEG) recordings to decode “imagined speech”—the unspoken words and phrases we intentionally form in our minds.

For decades, Shamma, who has affiliations with the Institute for Systems Research and the Brain and Behavior Institute, has been a leader in the field of computational neuroscience with a focus on understanding the process of sound recognition and processing in the auditory systems of the brain. Models that he created are now the standard in the literature for how sound is processed.

The new research stems from studies by Shamma and his colleagues on what happens in the brain when people imagine music, such as recalling a favorite song or replaying an orchestral piece after a concert. The team found that the signals generated when a person “imagined” a piece of music were closely related to those generated when the person actively listened to it, and detailed enough to identify the imagined notes.

“Then we discovered that, similar to music, if you imagine speech, you also get something that you could potentially decode and interpret,” said Shamma.

The new study will tether subjects to dozens of sensors as they listen to recorded speech, such as an audiobook, while Shamma’s students record their neural activity. The exercise will generate three datasets: the speech being played, the text of that speech and the EEG recordings of the subjects. All of this will be shared with research collaborator and computer science Professor Ramani Duraiswami, whose team of students will process it using large language models and previously trained speech models to essentially “match” activity in the EEG readings with the listened-to speech.

“The goal is that this data-driven approach will help us to eventually create an EEG speech recognition system that works similarly to Siri,” said Duraiswami, who has a joint appointment in UMD’s Institute for Advanced Computer Studies (UMIACS).
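For the technically curious, here is a toy sketch of that “matching” step. It is an illustration only: the synthetic data, array sizes and simple ridge-regression decoder below are assumptions standing in for the team’s actual speech models and language-model pipeline.

```python
# Toy sketch of matching EEG segments to speech, on synthetic data.
# Every shape, feature and the ridge-regression decoder are illustrative
# assumptions, not the research team's actual pipeline.
import numpy as np

rng = np.random.default_rng(0)
n_segments, eeg_dim, speech_dim = 200, 64, 40  # assumed sizes

# Stand-in embeddings from a "previously trained speech model",
# one vector per listened-to speech segment.
speech_emb = rng.standard_normal((n_segments, speech_dim))

# Simulated EEG features: a noisy linear mixture of the speech embeddings,
# so that a decodable relationship exists by construction.
mixing = rng.standard_normal((speech_dim, eeg_dim))
eeg_feat = speech_emb @ mixing + 0.5 * rng.standard_normal((n_segments, eeg_dim))

# Train on the first 150 segments, hold out the last 50.
train, test = np.arange(150), np.arange(150, n_segments)

# Ridge regression mapping EEG features -> speech embeddings.
lam = 1.0
X, Y = eeg_feat[train], speech_emb[train]
W = np.linalg.solve(X.T @ X + lam * np.eye(eeg_dim), X.T @ Y)

# "Match": for each held-out EEG segment, retrieve the most similar
# speech embedding by cosine similarity.
def cosine(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

pred = eeg_feat[test] @ W
hits = cosine(pred, speech_emb[test]).argmax(axis=1) == np.arange(len(test))
print(f"segment-matching accuracy: {hits.mean():.2f} (chance: {1/len(test):.2f})")
```

In the real study, learned speech-model features and far noisier EEG recordings would replace these synthetic arrays; the point of the sketch is only the retrieval logic, matching each brain recording to the speech segment it most resembles.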

The DURIP award will fund the equipment needed to collect large amounts of recorded brain activity in the lab rather than a hospital setting, as well as add a node—a high-performance computer—to the computational cluster in UMIACS that will employ machine learning to interpret the data.

A handful of researchers have been investigating how to detect and interpret imagined speech since 2008, when the Pentagon’s advanced tech wing—the Defense Advanced Research Projects Agency (DARPA)—funded a team at the University of California, Irvine to investigate synthetic telepathy to allow nonverbal communication on the battlefield. Since then, researchers—including Shamma’s former student, Nima Mesgarani M.S. ’05, Ph.D. ’08, now at Columbia University—have explored the regions of the brain responsible for speech communication through invasive and noninvasive means.

Shamma emphasizes that nefarious mind-reading experiments ripped from the pages of science fiction aren’t the objective; this type of technology could lift the veil for individuals suffering from neurological disorders such as ALS or nonverbal autism, or for those locked in vegetative states.

But both Shamma and Duraiswami say some science fiction isn’t entirely off-base. The ability to interpret imagined speech could allow one to “imagine” words onto a laptop screen, for instance, instead of manually typing them. The technology could conceivably be applied to memory and images: Picture the victim of a crime able to articulate repressed memories or conversations for law enforcement, or two people communicating without speaking at all.

“The results have been well beyond chance already,” said Duraiswami. “It almost makes us believe that it’s our job as researchers to dream up what to some seems impossible.”

Topics:

Research

Maryland Today is produced by the Office of Marketing and Communications for the University of Maryland community on weekdays during the academic year, except for university holidays.