Team Develops Theoretical Approach Rewarding Firms That Exceed Benchmarks on Privacy, Bias
Instead of regulators playing catch-up, AI developers could help create safer systems if market-based incentives were put in place, UMD researchers say.
Tech companies are racing to build the most powerful artificial intelligence (AI) models, but amid intense competition, safety issues like user privacy and biased data often take a back seat to performance. Ramping up government regulation is one way to address these concerns, but regulators have struggled to keep pace with the dizzying speed of AI development.
Now, a team of University of Maryland researchers is developing a system that—if implemented—would motivate tech companies to compete not only on capability but also on responsibility. Its proposed auction-based mechanism, the first of its kind, would incentivize firms to find new ways to increase AI safety.
“We realized that we need a market-driven regulatory framework, one that aligns safety with AI companies’ business goals,” said Furong Huang, an associate professor of computer science who is leading the UMD team. “Instead of fighting AI companies, we let market forces work for us.”
Here’s how it would work: Companies would submit AI models to a regulator for approval along with a monetary bid representing what they’ve spent to make their model comply with AI safety benchmarks. The regulator would set a minimum compliance threshold but also reward higher compliance levels. As a result, instead of merely clearing the bar, AI developers would compete to exceed it.
The UMD team modeled its AI regulation plan as an “all-pay” auction, in which every firm vying for approval of an AI model pays its monetary bid regardless of whether it wins the auction. The mechanism then compares the competing models, and the more compliant one “wins.” The researchers’ analysis proved that under this design, AI developers will submit models that exceed compliance standards. Compared with an approach that only sets minimum standards, their results show a 15% increase in participation in the regulatory process and a 20% rise in spending on compliance.
“For the first time, we prove that responsible AI can be incentivized mathematically,” said Huang, who has an appointment in the University of Maryland Institute for Advanced Computer Studies. “We believe this work will make safe AI a winning strategy in the AI race.”
To translate their theoretical approach into practice, the team plans to consult with policy experts on the required steps: First, a regulatory body must be established at the state or federal level, along with the organizational and bureaucratic processes that such a move entails. Second, methods to evaluate model safety must be developed. Research is underway on quantifying fairness and other key safety aspects of AI models, but these measures have yet to be codified or standardized.
Huang’s team of researchers includes Marco Bornstein, a Ph.D. student in applied mathematics; Zora Che, a Ph.D. student in computer science; Suhas Julapalli ’25, a computer science major; Abdirisak Mohamed, an adjunct lecturer in the College of Information; and Amrit Singh Bedi, a former UMD assistant research scientist who is now an assistant professor of computer science at the University of Central Florida.