Maryland Today
Research

Can Household Robots Learn From Experience—and YouTube?

UMD Researcher Develops Data-Driven Methods That Mimic Human Behavior to Accelerate Reliable Domestic Assistants


Research by computer science Ph.D. student Seungjae Lee (below) will enable robots to learn not only from their own physical experiences, but also from the vast reservoir of human activity captured online. (Illustration by iStock)

One reason that robots can’t yet handle everyday household chores like washing dishes, folding laundry and ironing shirts is that they don’t know how to navigate the unpredictable, often messy environments of real homes.

At the University of Maryland, doctoral student Seungjae “Jay” Lee is developing new data-driven methods designed to bridge the gap between impressive laboratory demonstrations and dependable real-world performance of domestic tasks. His work centers on enabling robots to learn not only from their own physical experiences, but also from the vast reservoir of human activity captured in online videos.

Seungjae Lee works on laptop

“People often focus on designing better model architectures, but for embodied AI, which integrates algorithms into physical systems, the real bottleneck is the dataset itself,” Lee said.

He points to what researchers describe as a “scarcity” problem in robotics. Unlike large language models that can learn from massive volumes of readily available internet text, physical robots require specialized data—tactile feedback, sensor readings and action trajectories—collected in real-world settings. Gathering this data is slow, expensive and technically demanding.

“If we can transfer knowledge from web-scale human data into robotics, we can overcome the scarcity problem,” Lee said.

A second-year Ph.D. student in computer science, Lee envisions household robots that can assist with routine tasks within the next five to 10 years. While robotic systems often perform well in controlled lab environments, their reliability frequently declines in real homes, where lighting conditions, layouts and object arrangements constantly vary.

Training robots to handle this complexity demands enormous amounts of diverse data—something his research aims to provide.

One project Lee is involved in, Imagine, Verify, Execute, offers a framework that allows robots to learn through autonomous exploration—a kind of “robot Montessori” approach—rather than relying solely on pre-programmed instructions. 

“If you record that journey, it becomes training data,” Lee said. “The robot is generating its own experience.”

In a complementary effort, Lee took the lead in developing TraceGen, a system that mines hundreds of thousands of publicly available human videos to extract meaningful hand and object motion. It analyzes “in-the-wild footage” from large datasets and platforms such as YouTube to isolate the movements required to complete specific tasks.

These human-derived behaviors are then used to train robotic systems alongside data generated by robots themselves. Lee describes the robotics data ecosystem as a pyramid: scarce but high-value data derived from operating real robots at the top; more abundant but imperfect simulation data in the middle; and massive quantities of diverse human video data forming the base.

TraceGen was recently accepted to the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), one of the most prominent venues in the computer vision and robotics communities, scheduled for June in Denver.

This summer, Lee will further test his approach during an internship with NVIDIA’s Generalist Embodied Agent Research (GEAR) team. There, he will integrate large-scale human video data into advanced robotic platforms to evaluate performance gains in real-world environments.

Lee’s research is advised by Furong Huang and Jia-Bin Huang, associate professors of computer science with appointments in the University of Maryland Institute for Advanced Computer Studies.

“What sets Seungjae apart is his rare combination of vision and execution,” said Furong Huang. “I see him as a rising leader at the intersection of machine learning and robotics, with the potential to shape how intelligent systems learn and interact with the physical world.”

(Photo by Mansi Srivastava M.S. ’26)

