14 research outputs found
Stochastic Prediction of Multi-Agent Interactions from Partial Observations
We present a method that learns to integrate temporal information, from a
learned dynamics model, with ambiguous visual information, from a learned
vision model, in the context of interacting agents. Our method is based on a
graph-structured variational recurrent neural network (Graph-VRNN), which is
trained end-to-end to infer the current state of the (partially observed)
world, as well as to forecast future states. We show that our method
outperforms various baselines on two sports datasets, one based on real
basketball trajectories, and one generated by a soccer game engine.
Comment: ICLR 2019 camera ready
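To make the architecture concrete, here is a minimal PyTorch sketch of one Graph-VRNN step: a message-passing pass over agent hidden states combined with a variational recurrent update. The class and layer names (GraphVRNNCell, message, etc.) and dimensions are illustrative assumptions, not the authors' code.

    # Minimal sketch of one Graph-VRNN step (hypothetical names/sizes).
    import torch
    import torch.nn as nn

    class GraphVRNNCell(nn.Module):
        def __init__(self, obs_dim, z_dim, h_dim):
            super().__init__()
            self.prior = nn.Linear(h_dim, 2 * z_dim)                # p(z_t | h_{t-1})
            self.posterior = nn.Linear(h_dim + obs_dim, 2 * z_dim)  # q(z_t | h_{t-1}, x_t)
            self.decoder = nn.Linear(h_dim + z_dim, obs_dim)        # p(x_t | z_t, h_{t-1})
            self.message = nn.Sequential(nn.Linear(2 * h_dim, h_dim), nn.ReLU())
            self.rnn = nn.GRUCell(obs_dim + z_dim + h_dim, h_dim)

        def forward(self, x, h):
            # x: (num_agents, obs_dim) observations; h: (num_agents, h_dim) states
            n = h.size(0)
            # Graph step: aggregate messages over all agent pairs.
            pairs = torch.cat([h.repeat_interleave(n, 0), h.repeat(n, 1)], dim=-1)
            msg = self.message(pairs).view(n, n, -1).sum(dim=1)
            # Variational step: sample z from the posterior (prior used at test time).
            mu_q, logvar_q = self.posterior(torch.cat([h, x], dim=-1)).chunk(2, -1)
            mu_p, logvar_p = self.prior(h).chunk(2, -1)
            z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()
            x_hat = self.decoder(torch.cat([h, z], dim=-1))         # reconstruct/forecast
            h_next = self.rnn(torch.cat([x, z, msg], dim=-1), h)
            return x_hat, h_next, (mu_q, logvar_q, mu_p, logvar_p)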
Combining Physical Simulators and Object-Based Networks for Control
Physics engines play an important role in robot planning and control;
however, many real-world control problems involve complex contact dynamics that
cannot be characterized analytically. Most physics engines therefore employ
approximations that lead to a loss in precision. In this paper, we propose a
hybrid dynamics model, simulator-augmented interaction networks (SAIN),
combining a physics engine with an object-based neural network for dynamics
modeling. Compared with existing models that are purely analytical or purely
data-driven, our hybrid model captures the dynamics of interacting objects in a
more accurate and data-efficient manner. Experiments both in simulation and on a
real robot suggest that it also leads to better performance when used in
complex control tasks. Finally, we show that our model generalizes to novel
environments with varying object shapes and materials.
Comment: ICRA 2019; Project page: http://sain.csail.mit.edu
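The hybrid idea admits a compact sketch: an analytical engine produces a coarse next state, and an interaction-network-style model learns a residual correction on top of it. The names below (ResidualInteractionNet, hybrid_step, engine_step) and sizes are assumptions for illustration; the published SAIN differs in detail.

    # Sketch of a simulator-augmented step: engine output + learned residual.
    import torch
    import torch.nn as nn

    class ResidualInteractionNet(nn.Module):
        def __init__(self, state_dim, hidden=128):
            super().__init__()
            self.rel = nn.Sequential(nn.Linear(2 * state_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden))
            self.obj = nn.Sequential(nn.Linear(state_dim + hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, state_dim))

        def forward(self, states, engine_pred):
            # states: (num_objects, state_dim); engine_pred: coarse next states
            n = states.size(0)
            pairs = torch.cat([states.repeat_interleave(n, 0), states.repeat(n, 1)], -1)
            effects = self.rel(pairs).view(n, n, -1).sum(dim=1)   # pairwise effects
            residual = self.obj(torch.cat([states, effects], -1))
            return engine_pred + residual  # analytical prediction + learned correction

    def hybrid_step(engine_step, model, states):
        # engine_step: any callable wrapping an analytical simulator (assumed).
        return model(states, engine_step(states))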
Propagation Networks for Model-Based Control Under Partial Observation
There has been an increasing interest in learning dynamics simulators for
model-based control. Compared with off-the-shelf physics engines, a learnable
simulator can quickly adapt to unseen objects, scenes, and tasks. However,
existing models like interaction networks only work for fully observable
systems; they also only consider pairwise interactions within a single time
step, both of which restrict their use in practical systems. We introduce Propagation
Networks (PropNet), a differentiable, learnable dynamics model that handles
partially observable scenarios and enables instantaneous propagation of signals
beyond pairwise interactions. Experiments show that our propagation networks
not only outperform current learnable physics engines in forward simulation,
but also achieve superior performance on various control tasks. Compared with
existing model-free deep reinforcement learning algorithms, model-based control
with propagation networks is more accurate, efficient, and generalizable to
new, partially observable scenes and tasks.
Comment: Accepted to ICRA 2019. Project Page: http://propnet.csail.mit.edu Video: https://youtu.be/ZAxHXegkz4
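The key mechanism, propagating effects for several hops within a single simulation step, can be sketched as a short inner loop. Names and sizes below (PropagationNet, effect_dim, steps) are illustrative assumptions; the published PropNet is more elaborate.

    # Sketch of multi-step effect propagation within one simulation step.
    import torch
    import torch.nn as nn

    class PropagationNet(nn.Module):
        def __init__(self, node_dim, effect_dim=64, steps=3):
            super().__init__()
            self.effect_dim, self.steps = effect_dim, steps
            self.edge = nn.Sequential(nn.Linear(2 * node_dim + effect_dim, effect_dim), nn.ReLU())
            self.node = nn.Sequential(nn.Linear(node_dim + effect_dim, effect_dim), nn.ReLU())
            self.out = nn.Linear(effect_dim, node_dim)

        def forward(self, nodes, edges):
            # nodes: (N, node_dim); edges: (E, 2) (receiver, sender) index pairs
            recv, send = edges[:, 0], edges[:, 1]
            effect = torch.zeros(nodes.size(0), self.effect_dim)
            for _ in range(self.steps):  # effects travel one extra hop per pass
                e = self.edge(torch.cat([nodes[recv], nodes[send], effect[send]], -1))
                agg = torch.zeros_like(effect).index_add_(0, recv, e)
                effect = self.node(torch.cat([nodes, agg], -1))
            return nodes + self.out(effect)  # predicted next node states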
Learning Particle Dynamics for Manipulating Rigid Bodies, Deformable Objects, and Fluids
Real-life control tasks involve matter of various substances (rigid or soft
bodies, liquids, gases), each with distinct physical behaviors. This poses
challenges to traditional rigid-body physics engines. Particle-based simulators
have been developed to model the dynamics of these complex scenes; however,
because they rely on approximation techniques, their simulations often deviate
from real-world physics, especially in the long term. In this paper, we propose to
learn a particle-based simulator for complex control tasks. Combining learning
with particle-based systems brings in two major benefits: first, the learned
simulator, just like other particle-based systems, applies broadly to objects of
different materials; second, the particle-based representation imposes a strong
inductive bias for learning: particles of the same type share the same
dynamics. This enables the model to quickly adapt to new environments of unknown
dynamics within a few observations. We demonstrate robots achieving complex
manipulation tasks using the learned simulator, such as manipulating fluids and
deformable foam, with experiments both in simulation and in the real world. Our
study helps lay the foundation for robot learning of dynamic scenes with
particle-based representations.
Comment: Accepted to ICLR 2019. Project Page: http://dpi.csail.mit.edu Video: https://www.youtube.com/watch?v=FrPpP7aW3L
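The shared-dynamics inductive bias amounts to applying one network identically to every particle, with interactions restricted to neighbors found by radius search. The sketch below is a minimal illustration under assumed names and sizes (ParticleDynamics, radius, hidden), not the released DPI-Nets code.

    # Sketch of shared particle dynamics with radius-based neighbor messages.
    import torch
    import torch.nn as nn

    class ParticleDynamics(nn.Module):
        def __init__(self, pos_dim=3, type_dim=4, hidden=128):
            super().__init__()
            self.pair = nn.Sequential(nn.Linear(2 * (pos_dim + type_dim), hidden), nn.ReLU())
            self.update = nn.Sequential(nn.Linear(pos_dim + type_dim + hidden, hidden),
                                        nn.ReLU(), nn.Linear(hidden, pos_dim))

        def forward(self, pos, ptype, radius=0.1):
            # pos: (N, 3) positions; ptype: (N, type_dim) one-hot material type
            feat = torch.cat([pos, ptype], dim=-1)
            i, j = (torch.cdist(pos, pos) < radius).nonzero(as_tuple=True)
            msg = self.pair(torch.cat([feat[i], feat[j]], dim=-1))  # neighbor messages
            agg = torch.zeros(pos.size(0), msg.size(-1)).index_add_(0, i, msg)
            return pos + self.update(torch.cat([feat, agg], dim=-1))  # next positions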
Occlusion resistant learning of intuitive physics from videos
To reach human performance on complex tasks, a key ability for artificial
systems is to understand physical interactions between objects, and predict
future outcomes of a situation. This ability, often referred to as intuitive
physics, has recently received attention, and several methods have been proposed
to learn these physical rules from video sequences. Yet, most of these methods are
restricted to the case where no, or only limited, occlusions occur. In this
work we propose a probabilistic formulation of learning intuitive physics in 3D
scenes with significant inter-object occlusions. In our formulation, object
positions are modeled as latent variables enabling the reconstruction of the
scene. We then propose a series of approximations that make this problem
tractable. Object proposals are linked across frames using a combination of a
recurrent interaction network, modeling the physics in object space, and a
compositional renderer, modeling the way in which objects project onto pixel
space. We demonstrate significant improvements over the state of the art on the
IntPhys intuitive physics benchmark. We apply our method to a second dataset
with increasing levels of occlusions, showing it realistically predicts
segmentation masks up to 30 frames into the future. Finally, we show results
on predicting the motion of objects in real videos.
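The latent-variable formulation can be sketched as follows: latent object states evolve under a learned dynamics model and are scored against observed frames through a renderer, so occluded objects remain constrained by past and future frames. The names below (LatentPhysicsModel, rollout_loss) and the simple MLP dynamics and linear renderer are stand-ins for the paper's recurrent interaction network and compositional renderer.

    # Sketch of inference through occlusion via a render-and-compare loss.
    import torch
    import torch.nn as nn

    class LatentPhysicsModel(nn.Module):
        def __init__(self, state_dim, img_dim=3 * 16 * 16):
            super().__init__()
            self.dynamics = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                          nn.Linear(128, state_dim))
            self.renderer = nn.Linear(state_dim, img_dim)  # stand-in compositional renderer

        def rollout_loss(self, states, frames):
            # states: (num_objects, state_dim) latent objects; frames: (T, img_dim)
            loss = 0.0
            for t in range(frames.size(0)):
                states = states + self.dynamics(states)      # per-object physics step
                rendered = self.renderer(states).sum(dim=0)  # composite into one image
                loss = loss + ((rendered - frames[t]) ** 2).mean()  # Gaussian NLL term
            return loss  # occluded objects are still constrained through rendering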
RELATE: Physically Plausible Multi-Object Scene Synthesis Using Structured Latent Spaces
We present RELATE, a model that learns to generate physically plausible
scenes and videos of multiple interacting objects. Similar to other generative
approaches, RELATE is trained end-to-end on raw, unlabeled data. RELATE
combines an object-centric GAN formulation with a model that explicitly
accounts for correlations between individual objects. This allows the model to
generate realistic scenes and videos from a physically-interpretable
parameterization. Furthermore, we show that modeling the object correlation is
necessary to learn to disentangle object positions and identity. We find that
RELATE is also amenable to physically realistic scene editing and that it
significantly outperforms prior art in object-centric scene generation on both
synthetic (CLEVR, ShapeStacks) and real-world (cars) data. In addition, in
contrast to state-of-the-art methods in object-centric generative modeling,
RELATE also extends naturally to dynamic scenes and generates videos of high
visual fidelity. Source code, datasets and more results are available at
http://geometry.cs.ucl.ac.uk/projects/2020/relate/
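The central modeling choice, keeping per-object appearance latents independent while explicitly correlating object positions before composition, can be sketched compactly. Everything below (CorrelatedSceneGenerator, the layer sizes, the flattened image output) is an illustrative assumption; see the project page above for the real model.

    # Sketch of correlated object placement in an object-centric generator.
    import torch
    import torch.nn as nn

    class CorrelatedSceneGenerator(nn.Module):
        def __init__(self, z_dim=64, num_objects=3, img_dim=3 * 32 * 32):
            super().__init__()
            self.n = num_objects
            self.correlate = nn.Sequential(nn.Linear(num_objects * 2, 64), nn.ReLU(),
                                           nn.Linear(64, num_objects * 2))
            self.decode = nn.Linear(z_dim + 2, img_dim)  # per-object decoder (stub)

        def forward(self, z):
            # z: (batch, num_objects, z_dim) i.i.d. appearance latents
            b = z.size(0)
            pos = torch.rand(b, self.n, 2)                           # independent draws
            pos = pos + self.correlate(pos.flatten(1)).view_as(pos)  # couple positions
            per_obj = self.decode(torch.cat([z, pos], dim=-1))       # one image per object
            return per_obj.sum(dim=1)                                # compose the scene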