Maryland Today

Produced by the Office of Marketing and Communications

Why Your 2-Year-Old Is Smarter Than a Supercomputer

(At Least at Learning Language)

By Karen Shih ’09

Photo by John T. Consoli

Your toddler will never set an alarm for you like Siri, give directions like a GPS, or play a movie as fast as you can say the title like Amazon Echo. But inside her still-developing skull is a brain that’s more complex than the world’s most powerful computers, giving your kid the ability to identify your voice across a busy room or understand the Southern drawls of her grandparents—situations even the latest technology still struggles with.

“Language is one of the things that makes us uniquely human,” says Rochelle Newman, chair of Maryland’s hearing and speech sciences department. “If you’re having difficulty understanding what people are saying, you end up being withdrawn, you don’t have as much social contact, and you don’t have as many opportunities to learn.”

She studies children ages 16 to 36 months, a crucial period for language development, to see how they cope with obstacles like noisy environments, accents and bilingualism.

“One thing we’ve been finding is that the kinds of noises that are more distracting to children are not the same as the ones that are more distracting to adults,” says Newman. “That’s problematic because an adult, if a situation is noisy, can do something about it, like turn off the TV or go into another room. Kids can’t really do that—they depend on us to make the environment quiet enough for them to learn in.”

While adults can easily distinguish between two voices in the same room using gender and other cues, children have more trouble doing so. So a parent might not notice the radio in the background, but a toddler might not be able to focus on the parent’s words if he can hear a breaking news report.

As for accents or other languages, she’s found that early exposure helps kids understand a variety of pronunciations for the same word. “Try and bring them places where they will hear people speaking differently,” Newman says. Human brains can generalize across examples in a way that computers currently can’t. For example, we might hear three different ways of saying “car”; by the fourth time we hear it with a new accent, we can make the leap and understand what the person means. But software still needs to be taught potentially thousands of pronunciations for every word—a gargantuan task.

Ultimately, she says, “we still don’t know what it is in our brain that allows us to” process the nuances of language that we experience each day. But once we find out, that must mean we’ll be able to build better technology, right?

Not necessarily, says computer science Associate Professor Hal Daumé III, a colleague in UMD’s interdisciplinary Language Science Center. “Airplanes don’t fly like birds and submarines don’t swim like fish. Designing a system that understands language by mimicking how humans understand language can lead to designs that don’t capitalize on the advantages that machines have over people, like really good memory. On the other hand, humans are exceptionally adept at learning language, and understanding what parts of language people pay attention to can help point us in promising directions.”

Until computer scientists and engineers can build technology that interacts seamlessly with us, Newman’s research has a more human focus: improving how we teach our children.

“Understanding language is critical if we want to intervene, solve inequities,” she says. “We need to change how we think about day cares and school settings… what kind of settings can we successfully hear in and learn from?”

Maryland Today is produced by the Office of Marketing and Communications for the University of Maryland community on weekdays during the academic year, except for university holidays.