166 research outputs found
Robot Learning-Based Pipeline for Autonomous Reshaping of a Deformable Linear Object in Cluttered Backgrounds
This work was supported in part by the European Union's Horizon 2020 Research and Innovation Program as part of RIA Project Robotic tEchnologies for the Manipulation of cOmplex DeformablE Linear objects (REMODEL) under Grant 870133.
In this work, the robotic manipulation of a highly Deformable Linear Object (DLO) is addressed by means of a sequence of pick-and-drop primitives driven by visual data. A decision-making process learns the optimal grasping location by exploiting deep Q-learning and finds the best releasing point from a path representation of the DLO shape. The system effectively combines a state-of-the-art semantic segmentation algorithm specifically designed for DLOs with deep reinforcement learning. Experimental results show that our system is capable of manipulating a DLO into a variety of different shapes in a few steps. The intermediate deformation steps that lead the object from its initial configuration to the target one are also provided and analyzed.
Zanella R.; Palli G.
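For readers unfamiliar with the pattern, a minimal sketch of the decision-making structure described above, a deep Q-network scoring discretised grasp locations on the segmented DLO, might look as follows; the architecture, shapes, and names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GraspQNet(nn.Module):
    """Illustrative Q-network scoring candidate grasp points on a DLO.

    Assumption: the state stacks a binary segmentation mask of the DLO and
    the target shape (2 channels), and actions are discretised pixel
    locations along the object.
    """
    def __init__(self, n_actions: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
        )
        self.q_head = nn.Linear(32 * 4 * 4, n_actions)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.q_head(self.features(state))

def pick_grasp(qnet: GraspQNet, state: torch.Tensor, eps: float = 0.05) -> int:
    """Epsilon-greedy grasp-location selection, the standard deep Q-learning
    action rule (illustrative)."""
    if torch.rand(1).item() < eps:
        return int(torch.randint(qnet.q_head.out_features, (1,)).item())
    with torch.no_grad():
        return int(qnet(state).argmax(dim=1).item())
```

The release point would then be chosen from the path representation of the DLO shape rather than by the Q-network, per the abstract.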
Data-driven robotic manipulation of cloth-like deformable objects : the present, challenges and future prospects
Manipulating cloth-like deformable objects (CDOs) is a long-standing problem in the robotics community. CDOs are flexible (non-rigid) objects that show no detectable compression strength when two points on the object are pushed towards each other; they include objects such as ropes (1D), fabrics (2D) and bags (3D). In general, CDOs' many degrees of freedom (DoF) introduce severe self-occlusion and complex state–action dynamics as significant obstacles to perception and manipulation systems. These challenges exacerbate existing issues of modern robotic control methods such as imitation learning (IL) and reinforcement learning (RL). This review focuses on the application details of data-driven control methods on four major task families in this domain: cloth shaping, knot tying/untying, dressing and bag manipulation. Furthermore, we identify specific inductive biases in these four domains that present challenges for more general IL and RL algorithms.
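As a concrete illustration of the imitation-learning side of this survey's scope, here is a minimal behaviour-cloning sketch for cloth shaping; the pick-and-place action parameterisation and all shapes are assumptions made for illustration, not taken from the review.

```python
import torch
import torch.nn as nn

class PickPlacePolicy(nn.Module):
    """Toy imitation-learning policy for cloth shaping.

    Assumptions (not from the review): observations are RGB images and each
    demonstrated action is a (pick_xy, place_xy) pair in image coordinates.
    """
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 4)  # (pick_x, pick_y, place_x, place_y)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(obs))

def bc_loss(policy: PickPlacePolicy, obs, expert_actions):
    """Behaviour cloning: regress the demonstrated pick/place coordinates."""
    return nn.functional.mse_loss(policy(obs), expert_actions)
```

The review's point is that severe self-occlusion makes the observation side of even this simple scheme hard: the same image can correspond to many cloth states.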
Sim2Real Neural Controllers for Physics-based Robotic Deployment of Deformable Linear Objects
Deformable linear objects (DLOs), such as rods, cables, and ropes, play important roles in daily life. However, manipulation of DLOs is challenging, as large geometrically nonlinear deformations may occur during the manipulation process. The problem is made even more difficult because the different deformation modes (e.g., stretching, bending, and twisting) may result in elastic instabilities during manipulation. In this paper, we formulate a physics-guided data-driven method to solve a challenging manipulation task: accurately deploying a DLO (an elastic rod) onto a rigid substrate along various prescribed patterns. Our framework combines machine learning, scaling analysis, and physical simulations to develop a physics-based neural controller for deployment. We explore the complex interplay between the gravitational and elastic energies of the manipulated DLO and obtain a control method for DLO deployment that is robust against friction and material properties. Out of the numerous geometrical and material properties of the rod and substrate, we show through physical analysis that only three non-dimensional parameters are needed to describe the deployment process. Therefore, the essence of the control law for the manipulation task can be constructed with a low-dimensional model, drastically increasing computation speed. The effectiveness of our optimal control scheme is shown through a comprehensive robotic case study comparing against a heuristic control method for deploying rods in a wide variety of patterns. In addition, we showcase the practicality of our control scheme by having a robot accomplish challenging high-level tasks such as mimicking human handwriting, cable placement, and tying knots.
Comment: YouTube video: https://youtu.be/OSD6dhOgyMA?feature=share
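The abstract does not enumerate its three non-dimensional parameters, but the flavour of the scaling analysis can be illustrated with the standard gravito-bending length of rod mechanics, which collapses bending stiffness and weight into a single length scale. The quantities below are textbook rod mechanics, used only to illustrate non-dimensionalisation, not the paper's specific parameters.

```python
import math

def gravito_bending_length(E: float, r: float, rho: float, g: float = 9.81) -> float:
    """Length scale at which gravity and bending balance for a solid circular
    rod: L_gb = (E*I / (rho*A*g))**(1/3). Standard rod mechanics, shown here
    only as an example of a non-dimensionalising quantity.

    E   -- Young's modulus [Pa]
    r   -- rod radius [m]
    rho -- material density [kg/m^3]
    """
    I = math.pi * r**4 / 4    # second moment of area of a circular section
    A = math.pi * r**2        # cross-sectional area
    return (E * I / (rho * A * g)) ** (1.0 / 3.0)

# Illustrative values for a metallic rod of 1 mm radius (not from the paper):
L_gb = gravito_bending_length(E=70e9, r=1e-3, rho=6450.0)
h_bar = 0.30 / L_gb  # e.g., a deployment height normalised by L_gb
```

Expressing the controller's inputs in such normalised groups is what lets one low-dimensional model cover rods of widely varying stiffness and weight.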
DeRi-Bot: Learning to Collaboratively Manipulate Rigid Objects via Deformable Objects
Recent research efforts have yielded significant advancements in manipulating objects under homogeneous settings, where the robot is required to manipulate either rigid or deformable (soft) objects. However, manipulation under heterogeneous setups that involve both rigid and one-dimensional (1D) deformable objects remains an unexplored area of research. Such setups are common in various scenarios that involve the transportation of heavy objects via ropes, e.g., on factory floors, at disaster sites, and in forestry. To address this challenge, we introduce DeRi-Bot, the first framework that enables the collaborative manipulation of rigid objects with deformable objects. Our framework comprises an Action Prediction Network (APN) and a Configuration Prediction Network (CPN) to model the complex pattern and stochasticity of soft-rigid body systems. We demonstrate the effectiveness of DeRi-Bot in moving rigid objects to a target position with ropes connected to robotic arms. Furthermore, DeRi-Bot is a distributive method that can accommodate an arbitrary number of robots or human partners without reconfiguration or retraining. We evaluate our framework in both simulated and real-world environments and show that it achieves promising results with strong generalization across different types of objects and multi-agent settings, including human-robot collaboration.
Comment: This paper has been accepted by IEEE RA-
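The abstract names the two networks but not their internals; a toy sketch of the APN/CPN pattern, proposing an action and scoring candidates by their predicted outcome, might look like the following. All dimensions, architectures, and the sampling-based selection scheme are assumptions for illustration, not the paper's design.

```python
import torch
import torch.nn as nn

class ActionPredictionNetwork(nn.Module):
    """Proposes an arm action from the observed rope/object state (illustrative)."""
    def __init__(self, state_dim: int = 32, action_dim: int = 6):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                 nn.Linear(128, action_dim))
    def forward(self, state):
        return self.net(state)

class ConfigurationPredictionNetwork(nn.Module):
    """Predicts the resulting rigid-object configuration given state + action,
    so candidate actions can be scored before execution (illustrative)."""
    def __init__(self, state_dim: int = 32, action_dim: int = 6, config_dim: int = 7):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + action_dim, 128), nn.ReLU(),
                                 nn.Linear(128, config_dim))
    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def choose_action(apn, cpn, state, target_config, n_samples=16, noise=0.1):
    """Sample perturbations of the APN proposal and keep the one whose
    CPN-predicted outcome is closest to the target: a simple shooting
    scheme, assumed here rather than taken from the paper."""
    base = apn(state)
    candidates = base + noise * torch.randn(n_samples, base.shape[-1])
    preds = cpn(state.expand(n_samples, -1), candidates)
    errs = (preds - target_config).norm(dim=-1)
    return candidates[errs.argmin()]
```

Because each agent can run this propose-and-score loop on its own observation, such a pattern is consistent with the distributive, retraining-free multi-agent claim in the abstract.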
Tightly-coupled manipulation pipelines: Combining traditional pipelines and end-to-end learning
Traditionally, robot manipulation tasks are solved by engineering solutions in a modular fashion, typically consisting of object detection, pose estimation, grasp planning, motion planning, and finally a control algorithm that executes the planned motion. This traditional approach separates the hard problem of manipulation into several self-contained stages, which can be developed independently and give interpretable outputs at each stage of the pipeline. However, this approach comes with a plethora of issues, most notably limited generalisability to a broad range of tasks; it is common that as tasks get more difficult, the systems become increasingly complex.
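To make the contrast concrete, a skeleton of such a modular pipeline might look like the sketch below; the stage names follow the text, while the interfaces are placeholders assumed for illustration.

```python
from dataclasses import dataclass
from typing import Any, Callable

Stage = Callable[[Any], Any]

@dataclass
class ManipulationPipeline:
    """Skeleton of the 'traditional' modular pipeline: each stage is a
    self-contained, independently developed component with an interpretable
    output (interfaces here are placeholders, not from the thesis)."""
    detect_object: Stage   # image -> detected object
    estimate_pose: Stage   # detection -> object pose
    plan_grasp: Stage      # pose -> grasp
    plan_motion: Stage     # grasp -> trajectory
    control: Stage         # trajectory -> executed motion

    def run(self, image: Any) -> Any:
        obj = self.detect_object(image)
        pose = self.estimate_pose(obj)
        grasp = self.plan_grasp(pose)
        trajectory = self.plan_motion(grasp)
        return self.control(trajectory)

# An end-to-end learner instead collapses all five stages into one mapping,
# action = policy(image), trading interpretability for generality.
```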
To combat the flaws of these systems, recent trends have seen robots visually learning to predict actions and grasp locations directly from sensor input in an end-to-end manner using deep neural networks, without the need to explicitly model the in-between modules. This thesis investigates a sample of methods that fall somewhere on a spectrum from pipelined to fully end-to-end, an approach we believe to be more advantageous for developing a general manipulation system: one that could eventually be used in highly dynamic and unpredictable household environments.
The investigation starts at the far end of the spectrum, where we explore learning an end-to-end controller in simulation and then transferring it to the real world by employing domain randomisation, and finishes on the other end with a new pipeline whose individual modules bear little resemblance to the "traditional" ones. The thesis concludes by proposing a new paradigm: Tightly-coupled Manipulation Pipelines (TMP). Rather than learning all modules implicitly in one large, end-to-end network or, conversely, having individual, pre-defined modules that are developed independently, TMPs take the best of both worlds by tightly coupling actions to observations, whilst still maintaining structure via an undefined number of learned modules, which need not resemble the modules seen in "traditional" systems.
Robots that Learn and Plan — Unifying Robot Learning and Motion Planning for Generalized Task Execution
Robots have the potential to assist people with a variety of everyday tasks, but to achieve that potential they require software capable of planning and executing motions in cluttered environments. To address this, over the past few decades roboticists have developed numerous methods for planning motions that avoid obstacles, with increasingly strong guarantees, from probabilistic completeness to asymptotic optimality. Some of these methods even consider the types of constraints that must be satisfied to perform useful tasks, but these constraints must generally be specified manually. In recent years, there has been a resurgence of methods for automatically learning tasks from human-provided demonstrations. Unfortunately, these two fields, task learning and motion planning, have evolved largely separately from one another, and the learned models are often not usable by motion planners.
In this thesis, we aim to bridge the gap between robot task learning and motion planning by employing a learned task model that can subsequently be leveraged by an asymptotically optimal motion planner to autonomously execute the task. First, we show that applying a motion planner enables task performance while avoiding novel obstacles, and we extend this to dynamic environments by replanning at reactive rates. Second, we generalize the method to accommodate time-invariant model parameters, allowing more information to be gleaned from the demonstrations. Third, we describe a more principled approach to temporal registration for such learning methods, one that mirrors the ultimate integration with a motion planner and often reduces the number of demonstrations required.
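Temporal registration here means aligning demonstrations that progress at different speeds before fitting a model. A common baseline for this is dynamic time warping, sketched below purely to illustrate the problem; the thesis proposes its own, planner-aware registration, not DTW.

```python
import numpy as np

def dtw_distance(demo_a: np.ndarray, demo_b: np.ndarray) -> float:
    """Dynamic time warping distance between two demonstrations, each a
    (T_i, D) array of robot configurations. A standard baseline for the
    temporal-registration problem; not the thesis's own method.
    """
    Ta, Tb = len(demo_a), len(demo_b)
    D = np.full((Ta + 1, Tb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            cost = np.linalg.norm(demo_a[i - 1] - demo_b[j - 1])
            # Extend the cheapest of the three admissible alignments.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[Ta, Tb])
```

Backtracking through the cost matrix D yields the frame-to-frame alignment used to bring demonstrations onto a common timeline.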
Finally, we extend this framework to the domain of mobile manipulation. We empirically evaluate each of these contributions on multiple household tasks using the Aldebaran Nao, Rethink Robotics Baxter, and Fetch mobile manipulator robots, showing that these approaches improve task execution success rates and reduce the amount of human-provided information required.