Gen2Sim: Scaling up Robot Learning in Simulation with Generative Models
Generalist robot manipulators need to learn a wide variety of manipulation
skills across diverse environments. Current robot training pipelines rely on
humans to provide kinesthetic demonstrations or to program simulation
environments and code up reward functions for reinforcement learning. Such
human involvement is a major bottleneck to scaling up robot learning
across diverse tasks and environments. We propose Generation to Simulation
(Gen2Sim), a method for scaling up robot skill learning in simulation by
automating the generation of 3D assets, task descriptions, task decompositions, and
reward functions using large pre-trained generative models of language and
vision. We generate 3D assets for simulation by lifting open-world 2D
object-centric images to 3D using image diffusion models and querying LLMs to
determine plausible physics parameters. Given URDF files of generated and
human-developed assets, we chain-of-thought prompt LLMs to map these to
relevant task descriptions, temporal decompositions, and corresponding Python
reward functions for reinforcement learning. We show Gen2Sim succeeds in
learning policies for diverse long-horizon tasks, where reinforcement learning
with reward functions that are not temporally decomposed fails. Gen2Sim provides a
viable path for scaling up reinforcement learning for robot manipulators in
simulation, both by diversifying and expanding task and environment
development, and by facilitating the discovery of reinforcement-learned
behaviors through temporal task decomposition. Our work contributes
hundreds of simulated assets, tasks and demonstrations, taking a step towards
fully autonomous robotic manipulation skill acquisition in simulation.
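
The chain-of-thought prompting step described in the abstract lends itself to a compact sketch. Below is a minimal, hypothetical Python illustration (not the authors' code): given a URDF, an LLM is asked to propose a task, decompose it in time, and emit a Python reward function per subtask. Here query_llm is an assumed stand-in for any chat-completion API.

    def query_llm(prompt: str) -> str:
        """Hypothetical stand-in for a chat-completion API call."""
        raise NotImplementedError

    def propose_tasks_and_rewards(urdf_text: str) -> str:
        # Chain-of-thought prompt: describe a task, decompose it temporally,
        # then emit one Python reward function per subtask for RL training.
        prompt = (
            "Here is a robot-manipulation asset described as a URDF:\n"
            f"{urdf_text}\n\n"
            "Reason step by step:\n"
            "1. Propose a manipulation task this asset affords.\n"
            "2. Decompose the task into an ordered list of subtasks.\n"
            "3. For each subtask, write a Python function\n"
            "   `def reward(state) -> float` suitable as an RL reward.\n"
        )
        return query_llm(prompt)
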
Finetuning Offline World Models in the Real World
Reinforcement Learning (RL) is notoriously data-inefficient, which makes
training on a real robot difficult. While model-based RL algorithms (world
models) improve data efficiency to some extent, they still require hours or
days of interaction to learn skills. Recently, offline RL has been proposed as
a framework for training RL policies on pre-existing datasets without any
online interaction. However, constraining an algorithm to a fixed dataset
induces a state-action distribution shift between training and inference, and
limits its applicability to new tasks. In this work, we seek to get the best of
both worlds: we consider the problem of pretraining a world model with offline
data collected on a real robot, and then finetuning the model on online data
collected by planning with the learned model. To mitigate extrapolation errors
during online interaction, we propose to regularize the planner at test time by
balancing estimated returns and (epistemic) model uncertainty. We evaluate our
method on a variety of visuo-motor control tasks in simulation and on a real
robot, and find that it enables few-shot finetuning to seen and unseen
tasks even when offline data is limited. Videos, code, and data are available
at https://yunhaifeng.com/FOWM.
Comment: CoRL 2023 Oral; Project website: https://yunhaifeng.com/FOWM
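
As a rough illustration of the test-time regularization described above (a sketch under assumptions, not the paper's implementation), one can score each candidate action sequence by its mean estimated return across an ensemble, minus a penalty proportional to the ensemble's disagreement, a common proxy for epistemic uncertainty.

    import numpy as np

    def score_plans(returns_per_head: np.ndarray, beta: float = 1.0) -> np.ndarray:
        # returns_per_head: shape (num_ensemble_heads, num_candidate_plans),
        # one return estimate per ensemble member per candidate plan.
        mean_return = returns_per_head.mean(axis=0)
        epistemic_std = returns_per_head.std(axis=0)  # disagreement as uncertainty proxy
        return mean_return - beta * epistemic_std     # higher score = preferred plan

    # Usage: pick the candidate action sequence with the highest regularized score.
    # estimates = np.random.randn(5, 64)  # e.g., 5 heads, 64 sampled plans
    # best = int(np.argmax(score_plans(estimates, beta=0.5)))
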