$750K in Seed Grants Awarded by UMD-Led Coalition on Trustworthy AI

Teams Including UMD Researchers Among 7 Winners of Funding

By Tom Ventsias

[Illustration: researchers and students working in classrooms and labs, with the TRAILS: Trustworthy AI in Law & Society logo at center]

Led by the University of Maryland, TRAILS was launched in May 2023 with a $20 million award from the National Science Foundation and the National Institute of Standards and Technology. The institute is focused on developing, building and modeling participatory research that—over time—will increase trust in AI.

Illustration courtesy of TRAILS

The Institute for Trustworthy AI in Law & Society (TRAILS), a coalition of four academic institutions led by the University of Maryland, has awarded more than $750,000 in new seed funding for projects meant to transform the practice of AI from one driven solely by technological development to one that encourages innovation and competitiveness through cutting-edge science focused on human rights and human flourishing.

The seven seed grants announced on Tuesday—each between $50,000 and $150,000—went to faculty and students from UMD, George Washington University, Morgan State University and Cornell University.

The interdisciplinary projects will address topics that include ensuring the trustworthiness of public safety information that large language models (LLMs) extract during disasters, identifying instructional needs for youth and families interested in using AI, and helping a wide range of stakeholders engage more fully in the governance of AI.

The projects were chosen based on their potential to advance TRAILS’ four core research thrusts: participatory AI design, methods, sense-making and governance. They also reflect the institute’s commitment to advancing scientific knowledge in concert with educating and empowering AI users, said Hal Daumé III, a professor of computer science at the University of Maryland and the director of TRAILS.

“As we continue to expand our impact and outreach, we’re aware of the need to align our technological expertise—which is quite robust—with new methodologies we’re developing that can help people and organizations realize the full potential of AI,” Daumé said. “If people don’t understand and see what they care about reflected in AI technology, they’re not going to trust it. And if they don’t trust it, they won’t want to use it.”

In addition to leading TRAILS, Daumé directs the Artificial Intelligence Interdisciplinary Institute at Maryland (AIM), which brings together AI experts from across the UMD campus to focus on the responsible, ethical development and use of the technology to advance the public good in industry, government and society.

This third round of TRAILS seed funding is just a first step toward moving many of the newly funded initiatives forward, said David Broniatowski, a professor of engineering management and systems engineering at George Washington University and the deputy director of TRAILS. Ultimately, he said, the TRAILS-sponsored research teams are expected to seek external funding or form new partnerships that will further grow their work.

He gave the example of a project selected during the first round of TRAILS seed funding in fall 2024, in which researchers sought to expand the reach of teaching best practices by adapting AI tools originally developed to support excellence in core-subject instruction. That work led to a series of additional grants, culminating in a $4.5 million award from the Gates Foundation and the Walton Family Foundation to improve AI as a tool for strengthening math instruction and boosting learning.

“We’re investing in the future of AI with this latest cohort of seed projects,” Broniatowski said. “These ambitious endeavors are strategically aligned, impact-driven initiatives that will advance the science underlying AI adoption, shaping future conversations on AI governance and trust.”

The seven projects receiving TRAILS seed funding this week are:

  • Adam Aviv and Jan Tolsdorf from GW and Michelle Mazurek from UMD are developing an auditing framework that lets people and organizations test context- and user-specific properties in LLMs like ChatGPT. The team’s open-source technology is designed to broaden access to evaluation tools developed through prior TRAILS-supported research to assess trustworthiness in generative AI systems. The goals are to open new pathways for broader academic research and to encourage public participation in areas like cybersecurity “red teaming,” where benevolent hackers use LLMs to conduct non-destructive cyberattacks that can expose vulnerabilities.
  • Sheena Erete, Hawra Rabaan and Tamara Clegg from UMD and Afiya Fredericks from Morgan State are examining how youth and families think about AI, identifying the technical and instructional needs required to build a growth-oriented, AI-infused learning environment worthy of trust. The TRAILS team will conduct a study that engages youth, parents and educators to understand how communities define AI literacy and perceive AI technologies, and what types of infrastructure are needed to support sustained AI education.
  • Jordan Boyd-Graber and Mohit Iyyer from UMD are examining question-answering (QA) datasets and metrics, comparing how well humans and computers can determine the trustworthiness of information gathered from searching large volumes of text. While multimodal QA datasets already exist, many are artificial or take shortcuts that prevent researchers from gaining accurate insights. The TRAILS team will address this gap by collecting challenging multimodal visual QA examples online (which might include questions about images or videos featuring everyday items), presenting them to human users and using the results to diagnose the strengths and weaknesses of current state-of-the-art multimodal AI models and improve their trustworthiness.
  • Lovely-Frances Domingo, Maria Isabel Magaña Arango, Sander Schulhoff and Daniel Greene, all from UMD, are exploring how a new form of public red teaming—where multiple participants identify and report vulnerabilities, biases and other safety issues in AI models—can enhance the trust and safety of generative AI systems like ChatGPT. The TRAILS team plans to recruit people for competition-style events that will generate data on how social and situational factors influence trust in generative AI, and how hands-on engagement with the technology shapes participants’ trust.
  • Zoe Szajnfarber from GW is exploring novel pathways to increase participation and responsibility across the stakeholders involved in governing AI ecosystems. Many current governance frameworks treat the AI model or data as the unit of analysis, while more mature domains enact a layered socio-technical system that targets different risks at different levels and for different actors. By mapping the safety ecosystems of a sample of these domains, Szajnfarber will use systems engineering to help AI governance evolve so that it can protect human rights while removing barriers to innovation.
  • Valerie Reyna and Sarah Edelson from Cornell and Robert Brauneis from GW are examining the “science of substantial similarity” in AI-related copyright litigation. Because AI companies often justify repurposing copyrighted materials as fair use, copyright cases hinge on cognitive perceptions of substantial similarity and “transformativeness,” which refers to how significantly a new work changes or repurposes copyrighted material by adding new meaning or purpose, rather than merely copying or reproducing it. The TRAILS team will conduct a series of experiments that vary whether the copyrighted works described in copyright cases share gist-based (“total concept and feel”) or verbatim (surface-feature) similarity, and that also vary the explanations of computational transformation presented to potential jurors. They will then assess jurors’ perceptions of substantial similarity, transformativeness and trust, as well as the individual differences that shape those perceptions.
  • Erica Gralla and Rebecca Hwa from GW are examining the use of LLMs in the high-stakes setting of disaster recovery to understand how to measure and improve their trustworthiness. Using data from the Los Angeles wildfires earlier this year, the TRAILS team will study how people decided whether to evacuate the danger zone, drawing on multiple sources, including news media, social networks and government updates, together with help from LLMs.