Maryland Today

UMD Researchers Advance Robotics to Perform Complex Household Tasks

Technology From Nvidia Powers Next-Generation AI-Driven Robotic Systems


With support from technology leader Nvidia, University of Maryland researchers are integrating advanced AI with scalable computing infrastructure, enabling robotic systems to accomplish complex household tasks. (Image courtesy of Furong Huang)

Robotic systems worked their way into factories decades ago, albeit tucked behind railings and orange safety paint. They’ve since made inroads in tightly controlled settings like operating rooms, and more recently in clearly delineated but less-predictable environments like roadways, as self-driving cars.

Now, researchers at the University of Maryland are advancing the frontiers of robotics by opening up one of the most chaotic environments of all: typical, cluttered homes. Their new initiative is designed to enable humanoid systems to perform complex, real-world household tasks with unprecedented autonomy and reliability. 

Built on Nvidia AI infrastructure through its Academic Grant Program, the project integrates breakthroughs in trustworthy machine learning, sequential decision-making and generative AI to create robotic systems that can reason, adapt and act in ever-changing domestic environments.

The effort is led by Furong Huang, an associate professor of computer science, and Tom Goldstein, a professor of computer science. Both hold appointments in the University of Maryland Institute for Advanced Computer Studies (UMIACS), which will install and maintain the new computing infrastructure in its high-performance data center.

The core of the project is the development of “foundation models” for robotics: general-purpose AI systems that unify perception, planning and control. They allow robots to transfer knowledge across tasks, environments and even different physical embodiments, a critical step toward building adaptable, general-purpose machines.

“By integrating advanced AI with scalable computing infrastructure, we aim to accelerate progress toward generalist household robots—systems that can adapt to new environments and tasks rather than rely on narrowly programmed behaviors,” Huang said.

Tasks such as tidying a messy room or preparing a simple meal require robots to interpret incomplete or ambiguous sensory input, track objects over extended time horizons and make context-aware decisions. Even routine activities like loading a dishwasher involve recognizing objects of varying shapes and materials, understanding spatial relationships and adjusting actions when conditions change.

To address these challenges, the Maryland team plans to develop HomeGraph, a framework that structures a robot’s understanding of its environment. HomeGraph will combine functional scene graphs—capturing spatial relationships such as “on,” “inside” and “next to”—with skill and tool graphs derived from motion trajectories and large-scale video demonstrations. This hybrid representation would enable robots to generate multistep plans, monitor execution and adapt in real time. If a robot encountered an unexpected obstacle or error, it could update its internal model and revise its strategy without restarting the task.
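To make the scene-graph idea concrete, here is a minimal, purely illustrative sketch of how spatial relations could be stored and queried to produce a multistep plan. The class and relation names are assumptions for this example, not the actual HomeGraph design.

```python
# Hypothetical sketch of a scene-graph representation in the spirit of
# HomeGraph. All names and relations here are illustrative only.
from collections import namedtuple

Relation = namedtuple("Relation", ["subject", "relation", "object"])

class SceneGraph:
    """Objects as nodes; spatial relations ("on", "inside", "next_to") as edges."""
    def __init__(self):
        self.edges = []

    def add(self, subject, relation, obj):
        self.edges.append(Relation(subject, relation, obj))

    def find(self, relation=None, obj=None):
        """Return subjects matching the given relation/object filters."""
        return [e.subject for e in self.edges
                if (relation is None or e.relation == relation)
                and (obj is None or e.object == obj)]

# Build a toy kitchen scene.
scene = SceneGraph()
scene.add("mug", "on", "table")
scene.add("plate", "inside", "sink")
scene.add("bowl", "inside", "sink")
scene.add("sponge", "next_to", "sink")

def plan_load_dishwasher(scene):
    """Ground a high-level goal in the graph: move every dish in the sink."""
    steps = []
    for dish in scene.find(relation="inside", obj="sink"):
        steps.append(f"pick up {dish} from sink")
        steps.append(f"place {dish} in dishwasher")
    return steps

plan = plan_load_dishwasher(scene)
```

A real system would update the graph after each executed step, which is what lets the robot detect an unexpected state and revise the remaining plan rather than restart.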

Large-scale simulation and synthetic data generation are also central to the project. Using the open robotics platform Nvidia Isaac, researchers can create photorealistic virtual home environments populated with diverse objects and layouts. These simulations allow robots to practice millions of task variations and safely test rare or complex scenarios. The resulting datasets are used to train foundation models that generalize more effectively to new, unseen environments.
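The idea of sampling many task variations can be sketched in a few lines. This is only a stand-in for the real pipeline—the project uses Nvidia Isaac for photorealistic simulation—and the object and surface lists are invented for illustration.

```python
# Minimal stand-in for synthetic scene generation. The real project uses
# Nvidia Isaac for photorealistic simulation; this sketch only shows the
# idea of sampling many randomized layouts as training variations.
import random

OBJECTS = ["mug", "plate", "bowl", "pan"]      # illustrative object set
SURFACES = ["table", "counter", "sink", "shelf"]  # illustrative surfaces

def sample_scene(rng, n_objects=3):
    """Randomly place objects on surfaces to create one training layout."""
    return [(rng.choice(OBJECTS), "on", rng.choice(SURFACES))
            for _ in range(n_objects)]

rng = random.Random(0)  # seeded for reproducible datasets
dataset = [sample_scene(rng) for _ in range(1000)]  # 1,000 task variations
```

Seeding the generator makes each synthetic dataset reproducible, which matters when comparing how well different foundation models generalize from the same training distribution.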

The collaboration is further strengthened by industry hardware: Nvidia RTX PRO 6000 Blackwell GPUs for training large models and Nvidia Jetson AGX Thor developer kits for efficient deployment on physical robots help bridge the gap between research and real-world applications.

Researchers are also exploring how generative AI techniques such as large language and vision-language models can enhance instruction following and human-robot interaction. By enabling users to issue natural language commands like “clean up the kitchen after dinner,” robots can translate high-level goals into executable action plans grounded in the HomeGraph framework.
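A toy illustration of that translation step: a command string is mapped to a sequence of named skills that a planner could then ground in the scene graph. A real system would query a large language model; the keyword routing and skill names below are purely stand-ins.

```python
# Toy illustration of translating a natural-language command into
# high-level skills. A real system would use a language or
# vision-language model; this keyword table is a stand-in.

COMMAND_SKILLS = {
    "clean up the kitchen": ["clear_counters", "load_dishwasher", "wipe_surfaces"],
    "tidy the living room": ["collect_clutter", "straighten_cushions"],
}

def parse_command(utterance):
    """Match a command to a known skill sequence, ignoring trailing context."""
    for phrase, skills in COMMAND_SKILLS.items():
        if phrase in utterance.lower():
            return skills
    return []

actions = parse_command("Clean up the kitchen after dinner")
```

Note that "after dinner" is simply ignored here; handling such temporal context is exactly the kind of reasoning the language-model-based approach is meant to provide.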

Beyond household assistance, the implications of this work extend to elder care, rehabilitation and disaster response—domains where robots must operate in complex, unpredictable environments. The long-term vision is to develop versatile robotic assistants capable of seamlessly supporting everyday life.

AI at Maryland

The University of Maryland is shaping the future of artificial intelligence by forging solutions to the world’s most pressing issues through collaborative research, training the leaders of an AI-infused workforce and applying AI to strengthen our economy and communities.

Read more about how UMD embraces AI’s potential for the public good—without losing sight of the human values that power it.

Learn how Forward: The University of Maryland Campaign for the Fearless will accelerate our momentum in addressing the grand challenges of our time and changing life and lives.
