178 research outputs found
Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks
Rearranging and manipulating deformable objects such as cables, fabrics, and
bags is a long-standing challenge in robotic manipulation. The complex dynamics
and high-dimensional configuration spaces of deformables, compared to rigid
objects, make manipulation difficult not only for multi-step planning, but even
for goal specification. Goals cannot be as easily specified as rigid object
poses, and may involve complex relative spatial relations such as "place the
item inside the bag". In this work, we develop a suite of simulated benchmarks
with 1D, 2D, and 3D deformable structures, including tasks that involve
image-based goal-conditioning and multi-step deformable manipulation. We
propose embedding goal-conditioning into Transporter Networks, a recently
proposed model architecture for learning robotic manipulation that rearranges
deep features to infer displacements that can represent pick and place actions.
We demonstrate that goal-conditioned Transporter Networks enable agents to
manipulate deformable structures into flexibly specified configurations without
test-time visual anchors for target locations. We also significantly extend
prior results using Transporter Networks for manipulating deformable objects by
testing on tasks with 2D and 3D deformables. Supplementary material is
available at https://berkeleyautomation.github.io/bags/.
Comment: See https://berkeleyautomation.github.io/bags/ for project website and code; v2 corrects some BibTeX entries, v3 is ICRA 2021 version (minor revisions).
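The Transporter-style place inference described above can be illustrated with a minimal, single-channel sketch: slide a feature crop (taken around the pick) over the scene feature map and take the best-correlating offset as the place location. This is an illustrative toy, not the authors' implementation; in the paper the features are deep, and goal-conditioning additionally feeds goal-image features into the networks.

```python
import numpy as np

def place_by_cross_correlation(scene_feat, crop_feat):
    """Return the (row, col) offset where a feature crop correlates best
    with the scene feature map -- the Transporter-style idea of inferring
    a place location as a displacement of (here: single-channel) features."""
    H, W = scene_feat.shape
    h, w = crop_feat.shape
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            score = float(np.sum(scene_feat[r:r + h, c:c + w] * crop_feat))
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Toy example: the crop matches the bright scene patch at offset (2, 3).
scene = np.zeros((8, 8))
scene[2:4, 3:5] = 1.0
crop = np.ones((2, 2))
print(place_by_cross_correlation(scene, crop))  # (2, 3)
```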
Simpler learning of robotic manipulation of clothing by utilizing DIY smart textile technology
Deformable objects such as ropes, wires, and clothing are omnipresent in society and industry, yet remain little researched in robotics. This is due to the infinite number of possible configurations that a deformable object can take. Engineered approaches try to cope with this by implementing highly complex operations to estimate the state of the deformable object. This complexity can be circumvented by learning-based approaches, such as reinforcement learning, which can handle the intrinsically high-dimensional state space of deformable objects. However, the reward function in reinforcement learning must measure the configuration of the highly deformable object. Vision-based reward functions are difficult to implement, given the high dimensionality of the state and the complex dynamic behaviour. In this work, we propose looking beyond vision and incorporating other modalities that can be extracted from deformable objects. By integrating tactile sensor cells into a textile piece, proprioceptive capabilities are gained that are valuable as they provide a reward function to a reinforcement learning agent. We demonstrate on a low-cost dual robotic arm setup that a physical agent can learn, on a single CPU core, to fold a rectangular patch of textile in the real world based on a reward function learned from tactile information.
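As a toy illustration of a tactile reward for folding, one could score a state by the fraction of sensor cells registering contact, since folding a patch onto itself presses more cells. The function, cell values, and threshold below are illustrative assumptions, not the paper's learned reward.

```python
def fold_reward(tactile_cells, threshold=0.5):
    """Toy tactile reward: fraction of sensor cells whose reading exceeds
    a contact threshold. Folding a textile patch onto itself presses more
    cells, so the reward grows as the fold completes. Cell values and the
    threshold are illustrative, not the paper's learned reward."""
    active = sum(1 for v in tactile_cells if v > threshold)
    return active / len(tactile_cells)

# Three of four cells pressed -> reward 0.75.
print(fold_reward([0.9, 0.8, 0.1, 0.7]))  # 0.75
```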
Sim-to-Real Reinforcement Learning for Deformable Object Manipulation
We have seen much recent progress in rigid object manipulation, but
interaction with deformable objects has notably lagged behind. Due to the large
configuration space of deformable objects, solutions using traditional
modelling approaches require significant engineering work. Perhaps, then,
bypassing the need for explicit modelling and instead learning the control
end-to-end is a better approach? Despite the growing interest in end-to-end
robot learning, only a small body of work has focused on its applicability
to deformable object manipulation. Moreover,
due to the large amount of data needed to learn these end-to-end solutions, an
emerging trend is to learn control policies in simulation and then transfer
them over to the real world. To date, no work has explored whether it is
possible to learn and transfer deformable object policies. We believe that if
sim-to-real methods are to be employed further, then it should be possible to
learn to interact with a wide variety of objects, and not only rigid objects.
In this work, we use a combination of state-of-the-art deep reinforcement
learning algorithms to solve the problem of manipulating deformable objects
(specifically cloth). We evaluate our approach on three tasks: folding a
towel up to a mark, folding a face towel diagonally, and draping a piece of
cloth over a hanger. Our agents are fully trained in simulation with domain
randomisation, and then successfully deployed in the real world without having
seen any real deformable objects.
Comment: Published at the Conference on Robot Learning (CoRL) 201
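Domain randomisation, as used above, amounts to resampling simulator parameters each episode so that the real world looks like just another sample to the trained policy. A minimal sketch, with parameter names and ranges that are assumptions rather than the paper's settings:

```python
import random

def randomised_sim_params(rng=random):
    """Sample one training episode's simulator parameters. Training over
    wide parameter ranges is the core of domain randomisation. Names and
    ranges here are illustrative assumptions, not the paper's settings."""
    return {
        "cloth_stiffness": rng.uniform(0.1, 2.0),
        "friction": rng.uniform(0.2, 1.0),
        "light_intensity": rng.uniform(0.5, 1.5),
        "camera_jitter": rng.uniform(-0.02, 0.02),
    }

# A fresh draw per episode keeps the policy from overfitting one simulator.
params = randomised_sim_params()
print(sorted(params))
```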
Deep Learning of Force Manifolds from the Simulated Physics of Robotic Paper Folding
Robotic manipulation of slender objects is challenging, especially when the
induced deformations are large and nonlinear. Traditionally, learning-based
control approaches, such as imitation learning, have been used to address
deformable material manipulation. These approaches lack generality and often
suffer critical failure from a simple switch of material, geometric, and/or
environmental (e.g., friction) properties. This article tackles a fundamental
but difficult deformable manipulation task: forming a predefined fold in paper
with only a single manipulator. A data-driven framework combining
physically-accurate simulation and machine learning is used to train a deep
neural network capable of predicting the external forces induced on the
manipulated paper given a grasp position. We frame the problem using scaling
analysis, resulting in a control framework robust against material and
geometric changes. Path planning is then carried out over the generated "neural
force manifold" to produce robot manipulation trajectories optimized to prevent
sliding, with offline trajectory generation finishing 15× faster than
previous physics-based folding methods. The inference speed of the trained
model enables the incorporation of real-time visual feedback to achieve
closed-loop sensorimotor control. Real-world experiments demonstrate that our
framework can greatly improve robotic manipulation performance compared to
state-of-the-art folding strategies, even when manipulating paper objects of
various materials and shapes.
Comment: Supplementary video is available on YouTube: https://youtu.be/k0nexYGy-P
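Planning over a "neural force manifold" can be sketched as follows: a learned model maps grasp position to predicted force, and the planner prefers trajectories with the lowest peak force. The surrogate force function and candidate-path search below are illustrative assumptions, not the paper's trained network or planner.

```python
import math

def predicted_force(x, y):
    # Illustrative surrogate for the trained grasp-position -> force model.
    return 1.0 + math.sin(3 * x) ** 2 + 0.5 * y ** 2

def path_peak_force(waypoints, samples=10):
    """Largest predicted force along a piecewise-linear path."""
    peak = 0.0
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        for i in range(samples + 1):
            t = i / samples
            peak = max(peak, predicted_force(x0 + t * (x1 - x0),
                                             y0 + t * (y1 - y0)))
    return peak

def plan(start, goal, via_offsets=(-0.4, -0.2, 0.0, 0.2, 0.4)):
    """Pick the candidate via-point whose path has the lowest peak force,
    mimicking planning over a force manifold to avoid sliding."""
    mid_x = (start[0] + goal[0]) / 2
    best = min(via_offsets,
               key=lambda dy: path_peak_force([start, (mid_x, dy), goal]))
    return [start, (mid_x, best), goal]

print(plan((0.0, 0.0), (1.0, 0.0)))  # [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]
```

Here the straight path wins because any vertical detour only adds to the surrogate force; a real force model would make the trade-off non-trivial.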
Data-driven robotic manipulation of cloth-like deformable objects : the present, challenges and future prospects
Manipulating cloth-like deformable objects (CDOs) is a long-standing problem in the robotics community. CDOs are flexible (non-rigid) objects that do not show a detectable level of compression strength when two points on the object are pushed towards each other; they include ropes (1D), fabrics (2D) and bags (3D). In general, the many degrees of freedom (DoF) of CDOs introduce severe self-occlusion and complex state–action dynamics, posing significant obstacles to perception and manipulation systems. These challenges exacerbate existing issues with modern robotic control methods such as imitation learning (IL) and reinforcement learning (RL). This review focuses on the application details of data-driven control methods for four major task families in this domain: cloth shaping, knot tying/untying, dressing and bag manipulation. Furthermore, we identify specific inductive biases in these four domains that present challenges for more general IL and RL algorithms.
HANDLOOM: Learned Tracing of One-Dimensional Objects for Inspection and Manipulation
Tracing (estimating the spatial state of) long deformable linear objects
such as cables, threads, hoses, or ropes is useful for a broad range of tasks
in homes, retail, factories, construction, transportation, and healthcare. For
long deformable linear objects (DLOs or simply cables) with many (over 25)
crossings, we present HANDLOOM (Heterogeneous Autoregressive Learned Deformable
Linear Object Observation and Manipulation), a learning-based algorithm that
fits a trace to a greyscale image of cables. We evaluate HANDLOOM on
semi-planar DLO configurations where each crossing involves at most 2 segments.
HANDLOOM makes use of neural networks trained with 30,000 simulated examples
and 568 real examples to autoregressively estimate traces of cables and
classify crossings. Experiments find that in settings with multiple identical
cables, HANDLOOM can trace each cable with 80% accuracy. In single-cable
images, HANDLOOM can trace and identify knots with 77% accuracy. When HANDLOOM
is incorporated into a bimanual robot system, it enables state-based imitation
of knot tying with 80% accuracy, and it successfully untangles 64% of cable
configurations across 3 levels of difficulty. Additionally, HANDLOOM
demonstrates generalization to knot types and materials (rubber, cloth rope)
not present in the training dataset with 85% accuracy. Supplementary material,
including all code and an annotated dataset of RGB-D images of cables along
with ground-truth traces, is at https://sites.google.com/view/cable-tracing
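HANDLOOM's autoregressive tracing, which estimates each next trace point from the image and the trace so far, can be caricatured with a greedy intensity-following step. The toy rule below stands in for the learned next-point prediction and handles no crossings.

```python
def trace_step(image, path):
    """One greedy autoregressive tracing step: extend the trace to the
    brightest not-yet-traced 8-neighbour of its last point. A toy
    stand-in for the learned next-point prediction (no crossings)."""
    r, c = path[-1]
    H, W = len(image), len(image[0])
    neighbours = [(r + dr, c + dc)
                  for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                  if (dr, dc) != (0, 0)]
    neighbours = [(rr, cc) for rr, cc in neighbours
                  if 0 <= rr < H and 0 <= cc < W and (rr, cc) not in path]
    if not neighbours:
        return None
    return max(neighbours, key=lambda p: image[p[0]][p[1]])

# Toy greyscale image with a bright horizontal "cable" on row 1.
img = [[0, 0, 0, 0],
       [9, 9, 9, 9],
       [0, 0, 0, 0]]
path = [(1, 0)]
nxt = trace_step(img, path)
while nxt is not None and img[nxt[0]][nxt[1]] > 0:
    path.append(nxt)
    nxt = trace_step(img, path)
print(path)  # [(1, 0), (1, 1), (1, 2), (1, 3)]
```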