Robot Learning on the Job: Human-in-the-Loop Autonomy and Learning During Deployment
With the rapid growth of computing power and recent advances in deep
learning, we have witnessed impressive demonstrations of novel robot
capabilities in research settings. Nonetheless, these learning systems exhibit
brittle generalization and require excessive training data for practical tasks.
To harness the capabilities of state-of-the-art robot learning models while
embracing their imperfections, we present Sirius, a principled framework for
humans and robots to collaborate through a division of work. In this framework,
partially autonomous robots are tasked with handling a major portion of
decision-making where they work reliably; meanwhile, human operators monitor
the process and intervene in challenging situations. Such a human-robot team
ensures safe deployments in complex tasks. Further, we introduce a new learning
algorithm to improve the policy's performance on the data collected from the
task executions. The core idea is re-weighting training samples with
approximated human trust and optimizing the policy with weighted behavioral
cloning. We evaluate Sirius in simulation and on real hardware, showing that
Sirius consistently outperforms baselines over a collection of contact-rich
manipulation tasks, achieving an 8% boost in simulation and a 27% boost on
real hardware over state-of-the-art methods, with twice as fast convergence
and an 85% reduction in memory size. Videos and code are available at
https://ut-austin-rpl.github.io/sirius
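The weighted behavioral cloning idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the squared-error objective, the function name, and the trust weights are all assumptions chosen for clarity (in practice the weights would come from an approximated human-trust model).

```python
import numpy as np

def weighted_bc_loss(pred_actions, demo_actions, weights):
    """Weighted behavioral cloning: per-sample squared action error,
    scaled by a trust weight for each sample (e.g. higher weight for
    human-intervention samples, lower for dubious robot rollouts)."""
    per_sample = np.sum((pred_actions - demo_actions) ** 2, axis=1)
    return float(np.sum(weights * per_sample) / np.sum(weights))

# Toy batch: 3 samples with 2-D actions. The weights are hypothetical
# stand-ins for "approximated human trust" in each sample.
pred = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
demo = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
w = np.array([1.0, 2.0, 0.5])  # illustrative trust weights
loss = weighted_bc_loss(pred, demo, w)
```

Samples the human trusts more contribute proportionally more to the loss, so the policy is pulled toward the behavior exhibited in trusted (e.g. human-corrected) data.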
MimicGen: A Data Generation System for Scalable Robot Learning using Human Demonstrations
Imitation learning from a large set of human demonstrations has proved to be
an effective paradigm for building capable robot agents. However, the
demonstrations can be extremely costly and time-consuming to collect. We
introduce MimicGen, a system for automatically synthesizing large-scale, rich
datasets from only a small number of human demonstrations by adapting them to
new contexts. We use MimicGen to generate over 50K demonstrations across 18
tasks with diverse scene configurations, object instances, and robot arms from
just ~200 human demonstrations. We show that robot agents can be effectively
trained on this generated dataset by imitation learning to achieve strong
performance in long-horizon and high-precision tasks, such as multi-part
assembly and coffee preparation, across broad initial state distributions. We
further demonstrate that the effectiveness and utility of MimicGen data compare
favorably to collecting additional human demonstrations, making it a powerful
and economical approach towards scaling up robot learning. Datasets, simulation
environments, videos, and more at https://mimicgen.github.io
Comment: Conference on Robot Learning (CoRL) 202
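The core adaptation step, re-targeting demonstrated motions to new scene configurations, can be sketched with homogeneous transforms. This is a simplified illustration of the idea, not MimicGen's actual pipeline; the function name and toy poses are assumptions. It preserves the end effector's pose relative to the task object while the object moves.

```python
import numpy as np

def adapt_waypoint(T_obj_src, T_ee_src, T_obj_new):
    """Re-target an end-effector waypoint to a new object pose.
    All inputs are 4x4 homogeneous transforms in the world frame.
    The waypoint's pose relative to the object is preserved."""
    # Pose of the end effector expressed in the source object's frame.
    T_rel = np.linalg.inv(T_obj_src) @ T_ee_src
    # Re-attach that relative pose to the object's new world pose.
    return T_obj_new @ T_rel

# Toy example: source object at the origin, end effector 1 m along x;
# the object's new pose is shifted 1 m along y, so the adapted waypoint
# keeps the same grasp pose relative to the object.
T_obj_src = np.eye(4)
T_ee_src = np.eye(4); T_ee_src[:3, 3] = [1.0, 0.0, 0.0]
T_obj_new = np.eye(4); T_obj_new[:3, 3] = [0.0, 1.0, 0.0]
T_ee_new = adapt_waypoint(T_obj_src, T_ee_src, T_obj_new)
```

Applying such a transform to each object-centric segment of a source demonstration, then stitching the segments together, is what lets a handful of human demonstrations cover many scene configurations and object placements.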