Pre-training Contextualized World Models with In-the-wild Videos for Reinforcement Learning
Unsupervised pre-training methods utilizing large and diverse datasets have
achieved tremendous success across a range of domains. Recent work has
investigated such unsupervised pre-training methods for model-based
reinforcement learning (MBRL) but is limited to domain-specific or simulated
data. In this paper, we study the problem of pre-training world models with
abundant in-the-wild videos for efficient learning of downstream visual control
tasks. However, in-the-wild videos are complicated by various contextual factors, such as intricate backgrounds and textured appearances, which preclude a world model from extracting shared world knowledge and generalizing well. To
tackle this issue, we introduce Contextualized World Models (ContextWM) that
explicitly model both the context and dynamics to overcome the complexity and
diversity of in-the-wild videos and facilitate knowledge transfer between
distinct scenes. Specifically, we realize a contextualized extension of the latent dynamics model by incorporating a context encoder that retains contextual information and conditions the image decoder, allowing the latent dynamics model to concentrate on essential temporal variations. Our
experiments show that in-the-wild video pre-training equipped with ContextWM
can significantly improve the sample-efficiency of MBRL in various domains,
including robotic manipulation, locomotion, and autonomous driving.
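
The following is a minimal PyTorch sketch of this idea, not the authors' implementation: a context encoder extracts a static context feature from one frame and conditions the image decoder, while a simple recurrent latent dynamics model handles temporal variation. Module names, network sizes, and the single-frame context choice are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the ContextWM paper's code): a latent dynamics
# model whose image decoder is conditioned on a separately encoded context feature,
# so the recurrent latent state can focus on temporal variation.
import torch
import torch.nn as nn


class ContextEncoder(nn.Module):
    """Encodes a single context frame into a static context feature."""
    def __init__(self, context_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, context_dim),
        )

    def forward(self, frame):          # frame: (B, 3, H, W)
        return self.net(frame)         # (B, context_dim)


class ContextualizedWorldModel(nn.Module):
    """Recurrent latent dynamics plus a context-conditioned image decoder."""
    def __init__(self, latent_dim=256, action_dim=6, context_dim=128, embed_dim=256):
        super().__init__()
        self.context_encoder = ContextEncoder(context_dim)
        self.obs_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(embed_dim), nn.ReLU())
        self.dynamics = nn.GRUCell(embed_dim + action_dim, latent_dim)
        # The decoder sees both the latent state and the context feature, so the
        # latent state does not need to carry static appearance information.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + context_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 3 * 64 * 64),
        )

    def forward(self, frames, actions):
        # frames: (B, T, 3, 64, 64), actions: (B, T, action_dim)
        B, T = frames.shape[:2]
        context = self.context_encoder(frames[:, 0])   # context from the first frame
        h = frames.new_zeros(B, self.dynamics.hidden_size)
        recons = []
        for t in range(T):
            embed = self.obs_encoder(frames[:, t])
            h = self.dynamics(torch.cat([embed, actions[:, t]], dim=-1), h)
            recons.append(self.decoder(torch.cat([h, context], dim=-1)).view(B, 3, 64, 64))
        return torch.stack(recons, dim=1)              # reconstructions (B, T, 3, 64, 64)
```

Training would add the usual reconstruction and latent dynamics losses; during video pre-training, where no actions are available, the action input would be dropped or replaced with a placeholder.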
Vision-based Manipulation In-the-Wild
Deploying robots in real-world environments involves immense engineering complexity, potentially surpassing the resources required for autonomous vehicles due to the increased dimensionality and task variety. To maximize the chances of successful real-world deployment, finding a simple solution that minimizes engineering complexity at every level, from hardware to algorithm to operations, is crucial.
In this dissertation, we consider a vision-based manipulation system that can be deployed in-the-wild when trained to imitate a sufficient quantity and diversity of human demonstrations of the desired task. At deployment time, the robot is driven by a single diffusion-based visuomotor policy that takes raw RGB images as input and outputs robot end-effector poses. Compared to existing policy representations, Diffusion Policy handles multimodal action distributions gracefully, scales to high-dimensional action spaces, and exhibits impressive training stability. These properties allow a single software system to be used for multiple tasks, with data collected by multiple demonstrators, deployed to multiple robot embodiments, without significant hyperparameter tuning.
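
A minimal sketch of how such a diffusion-based visuomotor policy could produce actions at inference time is shown below, using standard DDPM-style ancestral sampling over an action sequence conditioned on an image feature. The network architecture, noise schedule, horizon, and 7-DoF pose dimension are illustrative assumptions, not the dissertation's implementation.

```python
# Minimal sketch (illustrative assumptions, not the dissertation's code): inference
# for a diffusion-based visuomotor policy using DDPM-style ancestral sampling over
# a short sequence of end-effector poses, conditioned on an encoded RGB observation.
import torch
import torch.nn as nn


class ActionDenoiser(nn.Module):
    """Predicts the noise added to a noisy action sequence, given an image feature."""
    def __init__(self, horizon=16, action_dim=7, obs_dim=512):
        super().__init__()
        self.horizon, self.action_dim = horizon, action_dim
        self.net = nn.Sequential(
            nn.Linear(horizon * action_dim + obs_dim + 1, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, horizon * action_dim),
        )

    def forward(self, noisy_actions, obs_feat, t):
        # noisy_actions: (B, horizon, action_dim), obs_feat: (B, obs_dim), t: (B, 1)
        x = torch.cat([noisy_actions.flatten(1), obs_feat, t], dim=-1)
        return self.net(x).view(-1, self.horizon, self.action_dim)


@torch.no_grad()
def sample_actions(denoiser, obs_feat, n_steps=50):
    """Start from Gaussian noise and iteratively denoise into an action sequence."""
    B = obs_feat.shape[0]
    betas = torch.linspace(1e-4, 0.02, n_steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    actions = torch.randn(B, denoiser.horizon, denoiser.action_dim)
    for step in reversed(range(n_steps)):
        t = torch.full((B, 1), step / n_steps)
        eps = denoiser(actions, obs_feat, t)                     # predicted noise
        mean = (actions - betas[step] / torch.sqrt(1 - alpha_bars[step]) * eps) / torch.sqrt(alphas[step])
        noise = torch.randn_like(actions) if step > 0 else torch.zeros_like(actions)
        actions = mean + torch.sqrt(betas[step]) * noise
    return actions  # (B, horizon, action_dim): predicted end-effector poses
```

In practice such a policy is typically run receding-horizon: the current RGB frame is encoded by a vision backbone, an action sequence is sampled, and only its first few poses are executed before re-planning.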
We developed the Universal Manipulation Interface (UMI), a portable, low-cost, and information-rich data collection system that enables manipulation skills to be learned directly from in-the-wild human demonstrations. UMI provides an intuitive interface for non-expert users through hand-held grippers with mounted GoPro cameras. Compared to existing robotic data collection systems, UMI enables data collection without needing a robot, drastically reducing engineering and operational complexity. Trained with UMI data, the resulting diffusion policies can be deployed across multiple robot platforms, in unseen environments, on novel objects, and for dynamic, bimanual, precise, and long-horizon tasks.
The Diffusion Policy and UMI combination provides a simple full-stack solution to many manipulation problems. The turnaround time for building a single-task manipulation system (such as object tossing or cloth folding) can be reduced from a few months to a few days.