Critically fast pick-and-place with suction cups
Fast robotic pick-and-place with suction cups is a crucial component in the
current development of automation in logistics (factory lines, e-commerce,
etc.). By "critically fast" we mean the fastest possible movement for
transporting an object such that it does not slip or fall from the suction cup.
The main difficulties are: (i) handling the contact between the suction cup and
the object, which fundamentally involves kinodynamic constraints; and (ii)
doing so at a low computational cost, typically a few hundred milliseconds.
To address these difficulties, we propose (a) a model for suction cup contacts,
(b) a procedure to identify the contact stability constraint based on that
model, and (c) a pipeline to parameterize, in a time-optimal manner, arbitrary
geometric paths under the identified contact stability constraint. We
experimentally validate the proposed pipeline on a physical robot system: the
cycle time for a typical pick-and-place task was less than 5 seconds, planning
and execution times included. The full pipeline is released as open-source for
the robotics community.
Comment: 7 pages, 5 figures
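The time-optimal parameterization step can be illustrated in heavily simplified one-dimensional form. The sketch below assumes the contact-stability constraint reduces to a single acceleration bound `a_max` on a straight segment (an assumption; the paper identifies the actual constraint from a suction-cup contact model):

```python
import math

def time_optimal_duration(d, v_max, a_max):
    """Minimum time to traverse a straight segment of length d under
    velocity and acceleration limits (trapezoidal velocity profile).
    The paper's contact-stability constraint is abstracted here as the
    single acceleration bound a_max."""
    d_acc = v_max ** 2 / a_max  # distance needed to reach v_max and brake
    if d >= d_acc:
        # trapezoidal profile: accelerate, cruise at v_max, decelerate
        return d / v_max + v_max / a_max
    # triangular profile: v_max is never reached
    return 2.0 * math.sqrt(d / a_max)
```

A tighter acceleration bound (a weaker suction seal) directly lengthens the critically fast transfer time, e.g. `time_optimal_duration(10, 2, 1)` gives 7.0 s.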
Optimization Model for Planning Precision Grasps with Multi-Fingered Hands
Precision grasps with multi-fingered hands are important for precise
placement and in-hand manipulation tasks. Searching for precision grasps on an
object represented by a point cloud is challenging due to complex object
shapes, high dimensionality, collisions, and noise in sensing and positioning.
This paper proposes an optimization model to search for
precision grasps with multi-fingered hands. The model takes a noisy point cloud
of the object as input and optimizes the grasp quality by iteratively searching
over the palm pose and finger joint positions. The collision between the hand
and the object is approximated and penalized by a series of least-squares terms. The
collision approximation is able to handle the point cloud representation of the
objects with complex shapes. The proposed optimization model is able to locate
collision-free optimal precision grasps efficiently. The average computation
time is 0.50 sec/grasp. The search is robust to the incompleteness and noise
of the point cloud. The effectiveness of the algorithm is demonstrated by
experiments.
Comment: Submitted to IROS2019, experiments on BarrettHand, 8 pages
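The idea of penalizing hand-object collision with least-squares terms can be sketched on a toy version of the problem. The sphere proxy for the palm and the gradient-descent update below are illustrative assumptions, not the paper's actual hand model:

```python
import numpy as np

def collision_penalty(palm, points, radius):
    """Least-squares collision penalty: squared penetration depth of
    each cloud point into a spherical palm proxy (a simplification of
    the paper's collision approximation)."""
    dists = np.linalg.norm(points - palm, axis=1)
    pen = np.maximum(0.0, radius - dists)
    return np.sum(pen ** 2)

def push_out(palm, points, radius, lr=0.1, iters=200):
    """Gradient descent on the penalty, pushing the palm out of
    collision with the point cloud."""
    palm = palm.astype(float).copy()
    for _ in range(iters):
        dists = np.linalg.norm(points - palm, axis=1)
        mask = dists < radius
        if not mask.any():
            break  # collision-free
        diff = points[mask] - palm
        d = dists[mask][:, None]
        # gradient of sum (radius - d_i)^2 w.r.t. the palm position
        grad = np.sum(2.0 * (radius - d) * diff / d, axis=0)
        palm -= lr * grad
    return palm
```

Because each term is a smooth squared residual, the penalty slots naturally into the iterative least-squares structure the abstract describes.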
Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning
Model-free deep reinforcement learning algorithms have been shown to be
capable of learning a wide range of robotic skills, but typically require a
very large number of samples to achieve good performance. Model-based
algorithms, in principle, can provide for much more efficient learning, but
have proven difficult to extend to expressive, high-capacity models such as
deep neural networks. In this work, we demonstrate that medium-sized neural
network models can in fact be combined with model predictive control (MPC) to
achieve excellent sample complexity in a model-based reinforcement learning
algorithm, producing stable and plausible gaits to accomplish various complex
locomotion tasks. We also propose using deep neural network dynamics models to
initialize a model-free learner, in order to combine the sample efficiency of
model-based approaches with the high task-specific performance of model-free
methods. We empirically demonstrate on MuJoCo locomotion tasks that our pure
model-based approach trained on just random action data can follow arbitrary
trajectories with excellent sample efficiency, and that our hybrid algorithm
can accelerate model-free learning on high-speed benchmark tasks, achieving
sample efficiency gains of 3-5x on swimmer, cheetah, hopper, and ant agents.
Videos can be found at https://sites.google.com/view/mbm
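The model-based core, combining a learned dynamics model with MPC, can be sketched with the simplest planner in this family, random shooting. The uniform action sampling and the stand-in `dynamics` callable are assumptions; the paper learns the model with a medium-sized neural network:

```python
import numpy as np

def random_shooting_mpc(dynamics, state, horizon=10, n_samples=200,
                        action_dim=1, cost_fn=None, rng=None):
    """Return the first action of the lowest-cost random action
    sequence, rolled out through a (learned) dynamics model — the
    simplest MPC planner usable with a neural dynamics model."""
    rng = np.random.default_rng(0) if rng is None else rng
    best_cost, best_a0 = np.inf, None
    for _ in range(n_samples):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        s, cost = np.array(state, float), 0.0
        for a in actions:
            s = dynamics(s, a)   # predicted next state
            cost += cost_fn(s, a)
        if cost < best_cost:
            best_cost, best_a0 = cost, actions[0]
    return best_a0
```

Only the first action is executed before re-planning, so model errors late in the horizon matter less; this is what keeps a rough model usable for control.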
DiffCloth: Differentiable Cloth Simulation with Dry Frictional Contact
Cloth simulation has wide applications in computer animation, garment design,
and robot-assisted dressing. This work presents a differentiable cloth
simulator whose additional gradient information facilitates cloth-related
applications. Our differentiable simulator extends a state-of-the-art cloth
simulator based on Projective Dynamics (PD) with dry frictional contact. We
draw inspiration from previous work to propose a fast and novel method for
deriving gradients in PD-based cloth simulation with dry frictional contact.
Furthermore, we conduct a comprehensive analysis and evaluation of the
usefulness of gradients in contact-rich cloth simulation. Finally, we
demonstrate the efficacy of our simulator in a number of downstream
applications, including system identification, trajectory optimization for
assisted dressing, closed-loop control, inverse design, and real-to-sim
transfer. We observe a substantial speedup from using our gradient information
in solving most of these applications.
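What "differentiable simulation" buys for system identification can be shown on a toy example. The sketch below propagates forward-mode derivatives through a mass-spring rollout and uses them to fit a stiffness parameter; the model and the gradient-descent identification loop are illustrative assumptions, far simpler than PD cloth with frictional contact:

```python
def simulate_with_grad(k, x0=1.0, v0=0.0, dt=0.01, steps=100):
    """Symplectic-Euler mass-spring rollout that also propagates
    d(state)/dk alongside the state (forward-mode differentiation),
    mimicking how a differentiable simulator exposes gradients of the
    trajectory w.r.t. physical parameters."""
    x, v = x0, v0
    dx, dv = 0.0, 0.0            # d x/dk, d v/dk
    traj, grads = [], []
    for _ in range(steps):
        a = -k * x               # spring acceleration (unit mass)
        da = -x - k * dx         # d a/dk by the product rule
        v += dt * a; dv += dt * da
        x += dt * v; dx += dt * dv
        traj.append(x); grads.append(dx)
    return traj, grads

def identify_stiffness(target_k=4.0, k0=1.0, lr=5.0, iters=300):
    """Fit k so the rollout matches a target rollout, descending the
    mean-squared trajectory error with the analytic gradients."""
    target, _ = simulate_with_grad(target_k)
    k = k0
    for _ in range(iters):
        traj, grads = simulate_with_grad(k)
        g = sum(2.0 * (x - t) * dx
                for x, t, dx in zip(traj, target, grads))
        k -= lr * g / len(traj)
    return k
```

With analytic gradients, one rollout yields the full descent direction; a finite-difference baseline would need an extra rollout per parameter.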
Learning Contact-Rich Manipulation Skills with Guided Policy Search
Autonomous learning of object manipulation skills can enable robots to
acquire rich behavioral repertoires that scale to the variety of objects found
in the real world. However, current motion skill learning methods typically
restrict the behavior to a compact, low-dimensional representation, limiting
its expressiveness and generality. In this paper, we extend a recently
developed policy search method \cite{la-lnnpg-14} and use it to learn a range
of dynamic manipulation behaviors with highly general policy representations,
without using known models or example demonstrations. Our approach learns a set
of trajectories for the desired motion skill by using iteratively refitted
time-varying linear models, and then unifies these trajectories into a single
control policy that can generalize to new situations. To enable this method to
run on a real robot, we introduce several improvements that reduce the sample
count and automate parameter selection. We show that our method can acquire
fast, fluent behaviors after only minutes of interaction time, and can learn
robust controllers for complex tasks, including putting together a toy
airplane, stacking tight-fitting lego blocks, placing wooden rings onto
tight-fitting pegs, inserting a shoe tree into a shoe, and screwing bottle caps
onto bottles.
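The "iteratively refitted time-varying linear models" at the heart of this method come from ordinary least squares on rollout data. The sketch below fits a single affine model x' ≈ A x + B u + c for brevity (the method refits one such model per time step; the data shapes here are assumptions):

```python
import numpy as np

def fit_linear_dynamics(X, U, X_next):
    """Least-squares fit of x_{t+1} ≈ A x_t + B u_t + c from rollout
    samples (rows of X, U, X_next), the per-step model refit used
    inside trajectory optimization."""
    # stack state, action, and a bias column into one regressor matrix
    Z = np.hstack([X, U, np.ones((len(X), 1))])
    W, *_ = np.linalg.lstsq(Z, X_next, rcond=None)
    n, m = X.shape[1], U.shape[1]
    A, B, c = W[:n].T, W[n:n + m].T, W[-1]
    return A, B, c
```

Refitting around the current trajectories keeps the linear models accurate where the policy actually operates, which is what makes the local trajectory optimization step cheap.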
Real-Time Online Re-Planning for Grasping Under Clutter and Uncertainty
We consider the problem of grasping in clutter. While there have been motion
planners developed to address this problem in recent years, these planners are
mostly tailored for open-loop execution. Open-loop execution in this domain,
however, is likely to fail: it is not possible to model the dynamics of the
multi-body, multi-contact physical system with enough accuracy, nor is it
reasonable to expect robots to know the exact physical properties of objects,
such as their friction, inertia, and geometry. Therefore, we propose
an online re-planning approach for grasping through clutter. The main challenge
is the long planning times this domain requires, which makes fast re-planning
and fluent execution difficult to realize. In order to address this, we propose
an easily parallelizable stochastic trajectory optimization based algorithm
that generates a sequence of optimal controls. We show that by running this
optimizer for only a small number of iterations, it is possible to run
real-time re-planning cycles and achieve reactive manipulation under clutter
and uncertainty.
Comment: Published as a conference paper in IEEE Humanoids 201
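One common instance of an easily parallelizable stochastic trajectory optimizer is an MPPI-style update: sample perturbed control sequences, roll each out through a model, and average them with exponential weights on total cost. The sketch below makes this concrete on a scalar toy system; the particular sampler, noise scale, and temperature are assumptions, not necessarily the abstract's algorithm:

```python
import numpy as np

def stochastic_traj_opt(dynamics, cost_fn, state, nominal, n_samples=64,
                        noise_std=0.5, temperature=1.0, iters=5, rng=None):
    """MPPI-style stochastic trajectory optimization: each of the
    n_samples rollouts is independent, so the inner loop parallelizes
    trivially — the property the abstract relies on for real-time
    re-planning."""
    rng = np.random.default_rng(0) if rng is None else rng
    nominal = np.array(nominal, float)
    for _ in range(iters):
        noise = rng.normal(0.0, noise_std,
                           size=(n_samples,) + nominal.shape)
        costs = np.empty(n_samples)
        for i in range(n_samples):          # embarrassingly parallel
            s, total = float(state), 0.0
            for u in nominal + noise[i]:
                s = dynamics(s, u)
                total += cost_fn(s, u)
            costs[i] = total
        # exponential weights on (shifted) costs, then a soft update
        w = np.exp(-(costs - costs.min()) / temperature)
        w /= w.sum()
        nominal = nominal + np.tensordot(w, noise, axes=1)
    return nominal
```

Running only a few outer iterations gives a usable, if suboptimal, control sequence, which is exactly the trade-off the abstract exploits for fast re-planning cycles.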