- August 20, 2025
- By Karen Shih ’09
Yasmin Reyazuddin ’98 wants to be able to go out for a quick lunch like anyone else. But when nobody’s at the cash register, she’s forced to ask someone to read her the menu items one by one from the restaurant’s inaccessible self-service kiosk, then help her order.
“It’s very difficult,” said Reyazuddin, who is blind. “If I can do it by myself, I want to do that.”
Beyond bakeries and coffee shops, everything from Social Security Administration offices to seatback entertainment on flights now features touchscreens. But thanks to a tech solution from University of Maryland researchers, her days of frustration could soon be behind her.
It’s just one of the ways the Maryland Initiative for Digital Accessibility (MIDA), led by Executive Director and Professor Jonathan Lazar of the College of Information and funded by the UMD Grand Challenges Grants Program, has brought together scientists across campus to transform existing and emerging tech for people with disabilities. More than 20% of the U.S. population is excluded from education, employment and health care due to inaccessible digital technologies and content, according to MIDA.
UMD researchers have made at-home COVID nose swab tests less complex, developed tools to flag seizure-inducing photosensitive content and created technology to add captions to videos. Their ultimate goal is design that is “born accessible,” so apps, user interfaces and websites don’t have to be fixed after they’re released. “Many of these accessibility-related features can help you, whether or not you have a disability,” Lazar said.
Here are four ways UMD researchers are transforming the technology landscape for everyone:
Creating Better Kiosks
The robot revolution might not be here yet—but screens are definitely taking over as labor costs rise and consumers get used to tap-tapping away.
The proliferation of kiosks comes at a cost, said Associate Research Engineer J. Bern Jordan. Earlier tools that helped people with visual impairments navigate touchscreens for banking or transportation, like accessible keyboards, no longer always work as interfaces diversify, ranging from small tablets to huge vertical TVs.
Now, Jordan is testing ways to incorporate functions that people are familiar with on smartphones into these devices, part of a five-year, $4.6 million grant from the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) in the U.S. Department of Health and Human Services. The goal is a mode that’s easily activated, possibly marked with a tactile symbol, that would give self-service screen users the ability to double-tap to activate a button, flick right or left to scroll through options, and have text read out loud. He’s currently prototyping options and getting feedback from blind users and experts.
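The interaction model described above (flick to move focus, double-tap to activate, speech feedback) can be sketched as a small gesture handler. Everything below, including the class name, gesture names, menu items and the `speak` stub, is an illustrative assumption, not code from the NIDILRR-funded prototype:

```python
# Illustrative sketch of a kiosk "accessibility mode" gesture handler.
# Names and behavior are assumptions for demonstration only.

class AccessibleKioskMode:
    def __init__(self, options):
        self.options = options   # on-screen buttons, in reading order
        self.focus = 0           # index of the currently focused option
        self.spoken = []         # stands in for text-to-speech output

    def speak(self, text):
        """Stub for TTS; a real kiosk would call a speech engine here."""
        self.spoken.append(text)

    def flick(self, direction):
        """Flick right/left moves focus and announces the new option."""
        step = 1 if direction == "right" else -1
        self.focus = (self.focus + step) % len(self.options)
        self.speak(self.options[self.focus])

    def double_tap(self):
        """Double-tap activates the focused option, as on a smartphone."""
        choice = self.options[self.focus]
        self.speak(f"Selected {choice}")
        return choice

kiosk = AccessibleKioskMode(["Coffee", "Sandwich", "Salad"])
kiosk.flick("right")        # announces "Sandwich"
order = kiosk.double_tap()  # announces "Selected Sandwich"
```

The point of the pattern is that the touchscreen's visual layout stays unchanged; the accessible mode only adds a focus cursor and spoken announcements on top of it, which is why the same approach can scale from small tablets to large vertical displays.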
It's a promising start, said Reyazuddin, who tested a version at a National Federation of the Blind convention in early 2025. Eventually, these tools could also help people who speak English as a second language, who have learning disabilities or who have limited mobility access menu options more easily, Jordan said.
That’s important because “disability is a minority anyone can join any day,” said Reyazuddin.
“We need to make the world accessible for everyone.”
Accessible PDFs From Scratch
From legal contracts to health forms, it’s impossible to escape PDFs in our professional and private lives. But unlike other types of documents, they can be difficult to search and navigate—and the challenge is even greater for people who use screen readers or other assistive tech.
Lazar’s graduate students have for years worked with Adobe, which has an office in UMD’s Discovery District, on the best ways to remediate and repair these documents. Now, second-year doctoral student Abhinav Kannan is taking a new approach: building accessibility into documents from the moment they’re exported.
“An enormous chunk of the web is inaccessible,” said Kannan. “There are trillions of PDF documents and millions being generated daily. … We need to stop the bleeding.”
He’s developing easy prompts and tools that may be incorporated into Adobe products to make sure PDFs have proper headings and tagged lists, sufficient color contrast, coherent alternate text for images and more from the beginning, with the goal of automating these elements in the future. Fully accessible documents will be a boon to everyone: they can be easily scanned and summarized by AI, helping people understand long, complicated text and quickly find images or terms, and they can be picked up by web searches.
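One of the properties mentioned above, sufficient color contrast, has a precisely defined test: WCAG 2 requires a contrast ratio of at least 4.5:1 for normal-size text, computed from the relative luminance of the foreground and background colors. The sketch below implements that public formula; it is not Adobe's or Kannan's code:

```python
# WCAG 2 contrast-ratio check, one of the "born accessible" properties
# a document exporter can verify automatically. Sketch only.

def relative_luminance(rgb):
    """Relative luminance of an sRGB color (channels 0-255), per WCAG 2."""
    def linearize(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Ratio of lighter to darker luminance, ranging from 1:1 to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

def passes_aa(fg, bg):
    """WCAG 2 level AA requires at least 4.5:1 for normal-size text."""
    return contrast_ratio(fg, bg) >= 4.5

print(passes_aa((0, 0, 0), (255, 255, 255)))        # black on white: True
print(passes_aa((119, 119, 119), (136, 136, 136)))  # gray on gray: False
```

Because the rule is purely numeric, it is a natural candidate for the kind of automated, at-export checking the project aims for, unlike alt text, which still needs human or model judgment.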
Leveraging AI for Safety
Self-driving cars, which have already hit the road in cities from Austin, Texas, to San Francisco, might seem like the inevitable next step in personal transportation. But safety remains a major concern, including how autonomous vehicles can navigate around pedestrians.
Associate Professor Hernisa Kacorri is collaborating with Assistant Professor Eshed Ohn-Bar at Boston University on BlindWays, a dataset capturing how blind people walk, since they may move differently than current machine learning models anticipate. “Blind pedestrians can have unique patterns,” she said, as they use canes or guide dogs and stop and start in unexpected ways. (At the Tokyo Paralympics, for example, one of Toyota’s autonomous vehicles collided with a blind athlete, causing minor injuries.)
Kacorri, whose work is also part of the NIDILRR grant, said this type of data collection could be useful for other populations, like older adults and children, who may move in unpredictable ways for models trained on younger adults.
Teaching and Personalizing AI
Drop your favorite pen, and for most people a quick visual scan is all it takes to spot it and grab it from across the carpet. For people who are blind, a new tool could mimic that process.
Last year, Microsoft added the “Find My Things” function to its Seeing AI app, building on work that Kacorri and collaborators had done with blind users and teachable object recognizers. “As academics, we can’t always build the products,” she said. “The goal is for the industry to adopt the solutions we generate.”
While the app is designed for everyday tasks, like reading labels at a grocery store or counting currency, the new function lets users identify and find important personal objects. They can train the app, using the phone camera, to remember items like keys, backpacks or canes with just a quick snap of a room; the app then leads the user toward the object with beeps.
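The "teachable" idea behind such recognizers is few-shot learning: the user supplies a handful of examples per object, the system extracts a feature vector from each, and new camera frames are matched to the closest taught object. The toy nearest-centroid sketch below illustrates only that matching step; real systems such as Seeing AI use deep image embeddings, which are replaced here by hand-made two-dimensional vectors:

```python
# Toy "teachable" object recognizer: nearest-centroid matching over
# feature vectors. The vectors are hand-made stand-ins for the deep
# image embeddings a real recognizer would extract from photos.

import math

class TeachableRecognizer:
    def __init__(self):
        self.centroids = {}  # label -> mean feature vector

    def teach(self, label, examples):
        """Store the mean of the user's few example feature vectors."""
        dim = len(examples[0])
        self.centroids[label] = [
            sum(e[i] for e in examples) / len(examples) for i in range(dim)
        ]

    def recognize(self, features):
        """Return the taught label whose centroid is closest."""
        return min(
            self.centroids,
            key=lambda label: math.dist(self.centroids[label], features),
        )

rec = TeachableRecognizer()
rec.teach("keys", [[0.9, 0.1], [0.8, 0.2]])
rec.teach("cane", [[0.1, 0.9], [0.2, 0.8]])
print(rec.recognize([0.85, 0.15]))  # prints "keys"
```

The appeal of this design is that no retraining of the underlying model is needed: adding a new personal object just adds one more centroid, which is what makes the recognizer personalizable by the end user.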
AI, including generative AI, is full of possibilities for the disability community, Kacorri said. “It has the potential to eliminate a lot of gaps … but we have to have a foot on the ground and look out for new barriers being created.”
UMD Research Changes Lives
At the University of Maryland, scientists and scholars come together to spark new ideas, pursue important discoveries and tackle humanity's grand challenges—improving lives in our communities and across the globe. See more examples of how UMD research changes lives at today.umd.edu/topic/research-impact.