Maryland Today

New UMD-FDA Collaboration to Advance Evaluation of AI-Enabled Medical Devices

Artificial intelligence (AI)-enabled medical devices are unlocking new opportunities for care—from software that helps interpret medical images to algorithms that analyze ECGs for potentially fatal heart arrhythmias—but also raising regulatory questions about how to evaluate safety and performance.

To better understand both the potential and the risks, the University of Maryland’s College of Computer, Mathematical, and Natural Sciences (CMNS) is launching a research collaboration with the U.S. Food and Drug Administration’s Center for Devices and Radiological Health to develop new methods for assessing the reliability of AI- and machine learning-enabled medical devices before and after deployment.

“This new partnership reflects our commitment to solving grand challenges and maximizing the real-world impact of our research,” said Jennifer King Rice, senior vice president and provost at UMD. “By working together with the FDA at the forefront of regulatory science, our faculty will help new AI technologies reach patients with the safety, performance and trust they deserve.”

Researchers at the University of Maryland Institute for Health Computing (UM-IHC) in North Bethesda, Maryland, will partner with the FDA on joint projects aimed at translating research into tools and frameworks that support regulatory decision-making.

“Our goals are to develop techniques to assess the safety and effectiveness of new AI-enabled medical devices and to monitor performance after deployment,” said Amitabh Varshney, the collaboration’s UMD principal investigator, a professor of computer science and dean of CMNS.

Adam Porter, co-executive director of UM-IHC and a professor of computer science at UMD, said, “AI-enabled medical devices are transforming the availability, accuracy and timeliness of medical information, and this collaboration aligns with our focus at the IHC on advancing evidence, methods and standards that keep pace with innovation.” 

One major facet of the collaboration is developing ways to measure and communicate how AI systems reach their results. The team will study quantitative “explainability” measures designed to help determine whether model outputs are clinically meaningful and reliable. 
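The article does not describe the team’s actual measures, but the general idea behind quantitative explainability can be illustrated with permutation importance, a standard technique: shuffle one input feature and see how much the model’s accuracy drops. The toy model and the feature names below are hypothetical, not drawn from the collaboration.

```python
import random

def permutation_importance(model, X, y, feature, metric, n_repeats=10, seed=0):
    """Average drop in a metric when one feature's values are shuffled.
    A larger drop means the model relies more heavily on that feature."""
    rng = random.Random(seed)
    baseline = metric([model(row) for row in X], y)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        # Rebuild each row with the shuffled value substituted in
        X_perm = [{**row, feature: v} for row, v in zip(X, col)]
        drops.append(baseline - metric([model(row) for row in X_perm], y))
    return sum(drops) / n_repeats

# Toy example: a classifier that thresholds one feature and ignores the other
rng = random.Random(1)
X = [{"qrs": rng.random(), "noise": rng.random()} for _ in range(500)]
y = [1 if row["qrs"] > 0.5 else 0 for row in X]
model = lambda row: 1 if row["qrs"] > 0.5 else 0
accuracy = lambda preds, ys: sum(p == t for p, t in zip(preds, ys)) / len(ys)

imp_qrs = permutation_importance(model, X, y, "qrs", accuracy)
imp_noise = permutation_importance(model, X, y, "noise", accuracy)
```

Here shuffling the feature the model actually uses produces a large accuracy drop, while shuffling the ignored feature produces none; a clinically meaningless feature driving predictions would be a red flag.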

Another focus is performance change over time, such as model or data drift, which can occur when real-world data differs from the data used to train a model. Over time, this can degrade the device's accuracy and reliability, but there are currently no standardized tools or datasets for monitoring AI systems after deployment.
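One common way to quantify drift of this kind — offered here only as a sketch of the general problem, not as the team’s method — is the Population Stability Index, which compares the distribution of data a deployed model sees against the distribution it was trained on:

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of a numeric feature.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def bin_fractions(values):
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / width), bins - 1)  # clamp max into last bin
            counts[i] += 1
        # Floor each fraction at a tiny value to avoid log(0)
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Toy example: training-time data vs. stable and shifted deployment data
rng = random.Random(0)
train = [rng.gauss(0, 1) for _ in range(2000)]
stable = [rng.gauss(0, 1) for _ in range(2000)]
shifted = [rng.gauss(0.8, 1) for _ in range(2000)]

psi_stable = psi(train, stable)    # small: same distribution
psi_shifted = psi(train, shifted)  # large: the input data has drifted
```

A monitoring pipeline could compute such a score on each batch of real-world inputs and alert when it crosses a threshold — one candidate ingredient for the standardized post-deployment tools the article says are currently missing.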

The team will also develop tools and metrics to support the evaluation of AI-enabled extended reality (XR) devices for clinical and health care settings, where some systems can be limited by cybersickness during prolonged use. The researchers plan to refine and further validate an existing FDA-cleared predictive algorithm for cybersickness risk.