MarylandToday

Produced by the Office of Marketing and Communications


Autonomous Cars Don’t Understand How Blind People Move Around. A Research Team Is Trying to Boost Safety.

Instead of Actors, Members of Visually Impaired Community Demonstrate Pedestrian Scenarios for New Dataset

By Laurie Robinson


UMD College of Information researchers have teamed up with a Boston University engineer to create BlindWays, a dataset that aims to help boost safety for blind individuals as use of autonomous vehicles rises.

Photo by iStock

At the Paralympic Games in Tokyo in 2021, one of the autonomous Toyota vehicles designed to ferry people around the two-week event collided with a blind athlete at an intersection in the Olympic Village, causing minor injuries. How did the vehicle fail in its No. 1 job of not running people down?

University of Maryland researchers say the incident highlights a serious gap in autonomous vehicle development: a lack of solid data on how blind pedestrians navigate streets and sidewalks. Now College of Information researchers have teamed up with a Boston University engineer to create BlindWays, a dataset with real-world 3D motion-capture data and detailed descriptions of how blind individuals get around.

“We realized that most datasets and models for understanding human movement only include sighted people,” said Associate Professor Hernisa Kacorri, who also has an appointment in the University of Maryland Institute for Advanced Computer Studies. Kacorri has partnered on the project with Boston University College of Engineering Assistant Professor Eshed Ohn-Bar.

This oversight can hinder the ability of autonomous vehicles to safely predict the movements of blind pedestrians, whose behaviors, such as using a cane to feel for curbs or veering off a straight path, might confuse current models and lead to potentially dangerous errors. “Getting these predictions wrong can be a matter of life and death,” she said.

Traditional motion datasets are often collected in controlled indoor environments, where actors reenact movements. These setups, however, do not accurately mimic real-life human motion. To ensure the authenticity of BlindWays, researchers employed a wearable motion capture system with 18 sensors to track body and mobility aid movements. “For BlindWays, we wanted the data to be as realistic and natural as possible,” said Ohn-Bar.

Researchers collaborated with people in the blind community to ensure routes used for the study accurately captured what blind pedestrians encounter in an urban setting. They designed eight urban routes with real-world challenges such as stairs, uneven pavement and busy sidewalks.

Blind participants navigated these routes with canes or guide dogs. In addition to the 3D motion data, the researchers collected detailed written descriptions of how participants moved and interacted with their environment, and their navigation aids.
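As an illustration only, a record that pairs motion-capture frames with an expert description might be organized as sketched below; the field names and structure are assumptions for this sketch, not the published BlindWays schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of one dataset record; field names are assumptions,
# not the actual BlindWays data format.
@dataclass
class MotionRecord:
    participant_id: str
    mobility_aid: str    # e.g., "cane" or "guide_dog"
    route_id: int        # one of the eight urban routes
    sensor_frames: list  # per-frame readings from the 18 wearable sensors
    description: str     # expert-written text describing the motion

record = MotionRecord(
    participant_id="P01",
    mobility_aid="cane",
    route_id=3,
    sensor_frames=[[0.0] * 18],  # placeholder frame: one value per sensor
    description="Participant sweeps cane left to right while approaching stairs.",
)
print(record.mobility_aid)
```

Pairing each motion sequence with free-text annotation in this way is what enables the language-and-motion modeling the researchers describe.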

“We had a team of annotators, including experts in biomechanics, sensorimotor studies and mobility research, create detailed textual descriptions for each motion in the dataset,” said Kacorri. “These descriptions capture the finer details of how blind participants navigate, like how they use their cane to handle obstacles, their goals or how confident they are in different situations.”

The descriptions are also crucial for training models that combine language and motion, she said. “By tweaking the text input, we can test if the models can accurately simulate realistic motion scenarios for blind pedestrians.”

The results of their research so far are encouraging, reducing prediction errors by over 80% in some cases and highlighting the importance of representative data, said Ohn-Bar. Challenges remain, however, especially in high-stakes scenarios like crossing or turning, where errors are still too frequent.
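For context on the reported figure, a relative reduction in prediction error is typically computed as the drop from a baseline error divided by that baseline. The numbers below are placeholders chosen for illustration, not results from the study.

```python
# Hypothetical illustration of relative error reduction; the values are
# placeholders, not figures from the BlindWays research.
def relative_reduction(baseline_error: float, new_error: float) -> float:
    """Fractional reduction of new_error relative to baseline_error."""
    return (baseline_error - new_error) / baseline_error

# e.g., a baseline mean prediction error of 0.50 m falling to 0.09 m
reduction = relative_reduction(0.50, 0.09)
print(f"{reduction:.0%}")  # 82%
```

Under this definition, "over 80%" means the model's prediction error fell to less than one-fifth of the baseline.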

To enhance and expand the BlindWays dataset, the researchers plan to collaborate with organizations specializing in disability rights, mobility training and urban planning. These partnerships aim to diversify participants, locations and scenarios in the dataset.

Capturing this diversity of groups and how they move is vital to ensuring that systems like self-driving cars, delivery robots and assistive tools break down, rather than reinforce, existing social and physical barriers, she said.

“BlindWays is just the beginning,” said Kacorri. “AI models can act unpredictably when coming across wheelchair users, people with motor impairments, or those who are neurodivergent. These groups face a higher risk in traffic accidents and are often excluded from current datasets.”

