8 research outputs found
Data-Augmented Contact Model for Rigid Body Simulation
Accurately modeling contact behaviors for real-world, near-rigid materials
remains a grand challenge for existing rigid-body physics simulators. This
paper introduces a data-augmented contact model that combines analytical
solutions with observed data to predict the 3D contact impulse that can cause
rigid bodies to bounce, slide, or spin in any direction. Our
method enhances the expressiveness of the standard Coulomb contact model by
learning contact behaviors from observed data while preserving the
fundamental contact constraints whenever possible. For example, a classifier is
trained to approximate the transition between static and dynamic friction,
while the non-penetration constraint during collision is enforced analytically. Our
method computes the aggregated effect of contact for the entire rigid body,
instead of predicting the contact force at each contact point individually,
avoiding the exponential decline in accuracy as the number of contact points
increases.
Comment: 7 pages, 7 figures. Submitted to ICRA 2019. Added video attachment with full 3D experiments: https://youtu.be/AKSD8TabDV
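The abstract's two ideas (a learned static/dynamic friction classifier, and an analytically enforced non-penetration constraint) can be sketched as follows. This is a minimal illustration for a single point mass, not the paper's actual model: the friction coefficients, restitution value, and the logistic "classifier" weights are made-up stand-ins for what the paper learns from data.

```python
import numpy as np

MU_S, MU_D = 0.6, 0.4   # hypothetical static/dynamic friction coefficients
RESTITUTION = 0.3       # hypothetical coefficient of restitution

def is_static(tangential_speed, weights=(-50.0, 0.05)):
    """Stand-in for the learned static/dynamic transition classifier:
    a logistic function of tangential speed (weights are illustrative,
    not trained)."""
    w, b = weights
    return 1.0 / (1.0 + np.exp(-(w * tangential_speed + b))) > 0.5

def contact_impulse(v_rel, normal, mass):
    """Aggregate 3D contact impulse for a point mass.
    The normal impulse is clamped to be non-negative, which enforces
    non-penetration analytically; friction follows Coulomb's model,
    with the learned classifier choosing the stick/slip regime."""
    v_n = np.dot(v_rel, normal)
    v_t = v_rel - v_n * normal
    speed_t = np.linalg.norm(v_t)
    # Analytical non-penetration: contact can only push apart, never pull.
    j_n = max(0.0, -(1.0 + RESTITUTION) * mass * v_n)
    if is_static(speed_t):
        j_t = -mass * v_t  # stick: cancel tangential motion entirely
        if np.linalg.norm(j_t) > MU_S * j_n:
            j_t *= MU_S * j_n / np.linalg.norm(j_t)  # clamp to friction cone
    else:
        j_t = -MU_D * j_n * (v_t / speed_t) if speed_t > 0 else np.zeros(3)
    return j_n * normal + j_t
```

The key structural point mirrors the abstract: the data-driven piece (`is_static`) only selects between regimes, while the physical constraint (non-negative normal impulse) is guaranteed by construction rather than learned.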
TossingBot: Learning to Throw Arbitrary Objects with Residual Physics
We investigate whether a robot arm can learn to pick and throw arbitrary
objects into selected boxes quickly and accurately. Throwing has the potential
to increase the physical reachability and picking speed of a robot arm.
However, precisely throwing arbitrary objects in unstructured settings presents
many challenges: from acquiring reliable pre-throw conditions (e.g. initial
pose of object in manipulator) to handling varying object-centric properties
(e.g. mass distribution, friction, shape) and dynamics (e.g. aerodynamics). In
this work, we propose an end-to-end formulation that jointly learns to infer
control parameters for grasping and throwing motion primitives from visual
observations (images of arbitrary objects in a bin) through trial and error.
Within this formulation, we investigate the synergies between grasping and
throwing (i.e., learning grasps that enable more accurate throws) and between
simulation and deep learning (i.e., using deep networks to predict residuals on
top of control parameters predicted by a physics simulator). The resulting
system, TossingBot, is able to grasp and throw arbitrary objects into boxes
located outside its maximum reach range at 500+ mean picks per hour (600+
grasps per hour with 85% throwing accuracy); and generalizes to new objects and
target locations. Videos are available at https://tossingbot.cs.princeton.edu
Comment: Summary Video: https://youtu.be/f5Zn2Up2RjQ Project webpage: https://tossingbot.cs.princeton.ed
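The residual-physics formulation described above (a deep network predicting corrections on top of a physics simulator's control parameters) can be sketched in one dimension. This is an illustrative assumption, not TossingBot's actual controller: the drag-free ballistics formula stands in for the simulator, and `toy_residual` with its made-up weights stands in for the trained network.

```python
import numpy as np

G = 9.8  # gravitational acceleration (m/s^2)

def analytical_release_speed(distance, angle_rad=np.pi / 4):
    """Physics-simulator stand-in: ideal release speed to land at
    `distance` on flat ground, ignoring drag (d = v^2 sin(2θ) / g)."""
    return np.sqrt(distance * G / np.sin(2.0 * angle_rad))

def predicted_release_speed(distance, residual_fn):
    """Residual physics: execute the analytical estimate plus a learned
    correction that absorbs unmodeled effects (drag, grasp offsets)."""
    return analytical_release_speed(distance) + residual_fn(distance)

def toy_residual(distance, w=0.05, b=-0.02):
    """Stand-in for the residual network; these weights are invented,
    where real ones would come from trial-and-error training."""
    return w * distance + b
```

The design choice this illustrates: the analytical term carries most of the signal and generalizes to unseen target distances, so the network only has to learn a small correction rather than the full throwing dynamics.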