Maryland Today

Produced by the Office of Marketing and Communications


Op/ed: Do We Need a National Algorithms Safety Board?

UMD Human-Computer Interaction Expert Says Faulty Machine Learning Algorithms Threaten Safety, Introduce Bias

By Ben Shneiderman


Faulty or biased algorithms can create safety problems and contribute to inequality in society, a UMD computer scientist writes in a new essay in The Hill.

Photo by Shutterstock

The twin crashes of Boeing 737 MAX airliners in 2018 and 2019 illustrate how bad algorithms—processes computers use to solve problems—can lead to deadly outcomes, writes computer science Professor Emeritus Ben Shneiderman in a new essay in The Hill.

Even if they don’t kill you, malfunctioning algorithms can cause you to lose a job, fail to qualify for a loan or be falsely accused of a crime—potentially because machine-learning artificial intelligence systems have developed a bias, racial or otherwise. Algorithms could also be intentionally deployed by terrorists or political oppressors to further their ends—suggesting national action is needed, writes Shneiderman, a pioneer in human-computer interaction.

In the United States, the National Transportation Safety Board is widely respected for its prompt investigations of plane, train and boat accidents. Its independent reports have done much to promote safety in civil aviation and beyond. Could a National Algorithms Safety Board have a similar impact in increasing safety for algorithmic systems, especially the rapidly proliferating artificial intelligence applications based on unpredictable machine learning? Alternatively, could agencies such as the Food and Drug Administration, Securities and Exchange Commission or Federal Communications Commission take on the task of increasing the safety of algorithmic systems?

In addition to federal agencies, could the major accounting firms provide algorithmic audits as they do in auditing the financial statements of publicly listed companies? Could insurance companies provide guidance for the tech community as they do for the construction industry in making safer buildings? We already see civil society and business groups such as Underwriters Laboratories developing a Digital Safety Research Institute to provide evidence-based research for safer algorithmic systems.

Read the rest of the essay in The Hill.

Topics:

Research

Maryland Today is produced by the Office of Marketing and Communications for the University of Maryland community on weekdays during the academic year, except for university holidays.