Maryland Today

Produced by the Office of Marketing and Communications


New AI Seed Grants Support Trustworthy Tech

New Projects Target Health Care, Autonomous Cars, Educational Disparities, More

By Tom Ventsias

A UMD-led institute that focuses on trustworthy AI development has announced a new round of seed grants for a variety of research projects intended to benefit society. (Illustration by Adobe Stock)

The University of Maryland-led Institute for Trustworthy AI in Law & Society (TRAILS) on Tuesday announced a second round of seed funding, jumpstarting a series of interdisciplinary projects to advance artificial intelligence (AI) systems that benefit all of society.

The five grants totaling $685,000 will support efforts to improve AI-generated health information, enhance safety and trust in autonomous vehicles, address education disparities driven by race and location, examine AI-generated social media used during a pandemic or natural disaster, and build new frameworks for using chatbots and their underlying technology, known as large language models (LLMs), in academia.

The projects include faculty and students from all four of TRAILS’ primary academic institutions: UMD, George Washington University (GW), Morgan State University and Cornell University.

“This latest round of seed funding supports research and innovation that can have a direct impact on how people stay healthy, learn and travel—areas of our lives that will benefit immensely from AI systems that are more ethical, inclusive, trustworthy and efficient,” said Hal Daumé III, a UMD professor of computer science who is the director of TRAILS. 

Like the inaugural round of TRAILS funding unveiled in January, the latest projects were selected based on their connection to the institute’s core values—developing trustworthy AI algorithms, empowering users to make sense of AI systems, training the next generation of AI leaders, and promoting inclusive AI governance strategies.

The new grantees will work alongside previously funded seed grant teams, learning from and supporting each other while collectively contributing to TRAILS’ shared body of knowledge, said Daumé, who is also the director of the recently launched Artificial Intelligence Interdisciplinary Institute at Maryland (AIM). Focused on responsible and ethical AI technology, it builds upon the university’s existing AI expertise, research and centers, including TRAILS.

The TRAILS Institute launched in May 2023 with a $20 million award from the National Science Foundation and the National Institute of Standards and Technology (NIST). Since then, faculty, students and postdocs affiliated with TRAILS have coordinated AI workshops and seminars on Capitol Hill, hosted a summer academy to empower future AI innovators, partnered with an immersive language museum to explore the use and efficacy of machine translation software, and much more.

“We continue to push forward in our second year, making new connections and working with diverse stakeholders whose voices previously went unheard as AI systems were designed, developed and deployed,” said Darren Cambridge, the managing director of TRAILS. “We’re listening and greatly value multiple viewpoints as we work toward building the next generation of AI tools and technologies.”

The five new projects, each receiving between $115,000 and $150,000, are:

Social Media During Crises
Giovanni Luca Ciampaglia from UMD and Erica Gralla and David Broniatowski from GW are investigating the trustworthiness of AI-driven social media platforms used during crisis situations like a natural disaster or pandemic. They will examine the interplay between two key elements: the AI-based algorithms that dictate the content users see on the platforms and the architectural frameworks that govern user interactions. The researchers intend to develop a simulation model that evaluates how, in a crisis context, different classes of social media platforms handle the spread of vital information while preventing the propagation of harmful content.

Autonomous Vehicle Safety
Peng Wei from GW and Furong Huang from UMD will collaborate with a Federal Highway Administration lab to develop deep reinforcement learning algorithms—AI that makes decisions through repeated trial and error—for safer operation of autonomous vehicles. They plan to design robust reinforcement learning algorithms that adapt to multiple traffic conditions and driver behavior scenarios. They expect their multimodal AI framework, which combines both language and visualization, to increase trust from human drivers and passengers.

AI Educational Support Beyond Math and Reading
Martha James, Victoria Van Tassell, Valerie Riggs and Naja Mack from Morgan State and Jing Liu and Wei Ai from UMD are addressing disparities in PK–12 education that relate to race and students’ home ZIP code. The researchers seek to expand teaching best practices focused on reading and math by modifying AI tools originally developed to support excellence in instruction in core subjects to encompass the “encore” content areas of vocal music, visual arts and physical education.

AI and Bad Health Advice
Valerie Reyna from Cornell and Broniatowski are investigating how people interpret health-related misinformation and disinformation from AI systems like ChatGPT. While these systems may seem like a source of health advice tailored to individuals, little is known about the psychological mechanisms people apply to information from them, especially when it is false or misleading. Using behavioral and computational methods that provide insight into human decision-making, the researchers will compare people’s trust in health information generated by AI with their trust in health information provided by humans.

AI in Academia
Ryan Watkins, David Lippert and Zoe Szajnfarber, all from GW, are developing a pragmatic planning guide and toolkit for student-faculty project teams that use LLMs like ChatGPT in a higher education setting. They will study the development processes used for projects such as a custom chatbot for a history course, focusing on teams without a strong computer science background. The researchers will rely on established protocols in trustworthy AI, including those recently published by NIST, to build a comprehensive toolkit aimed at enhancing the trustworthiness, security and openness of AI applications in academia.

