
    A Grasping-centered Analysis for Cloth Manipulation

    Compliant and soft hands have gained a lot of attention in the past decade because of their ability to adapt to the shape of objects, increasing their effectiveness for grasping. However, when it comes to grasping highly flexible objects such as textiles, we face the dual problem: it is the object that adapts to the shape of the hand or gripper. In this context, classic grasp analyses and grasping taxonomies are not suitable for describing grasps of textile objects. This work proposes a novel definition of textile object grasps that abstracts from the robotic embodiment or hand shape and recovers concepts from the early neuroscience literature on hand prehension skills. This framework enables us to identify which grasps have been used in the literature so far to perform robotic cloth manipulation, and allows all the tasks that have been tackled to be defined precisely in terms of manipulation primitives based on regrasps. In addition, we also review which grippers have been used. Our analysis shows that the vast majority of cloth manipulations have relied on only one type of grasp, and at the same time we identify several tasks that need a greater variety of grasp types to be executed successfully. Our framework is generic, provides a classification of cloth manipulation primitives, and can inspire gripper design and benchmark construction for cloth manipulation.

    Comment: 13 pages, 4 figures, 4 tables. Accepted for publication at IEEE Transactions on Robotics.
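    To make the abstraction concrete, here is a minimal Python sketch of how an embodiment-agnostic grasp taxonomy and regrasp-based manipulation primitives might be encoded. The grasp names and the example folding decomposition are illustrative assumptions, not the paper's actual vocabulary.

        # Hypothetical encoding of grasp types and regrasp primitives.
        from dataclasses import dataclass
        from enum import Enum, auto

        class ClothGrasp(Enum):
            """Grasps described by the cloth geometry they constrain,
            independent of the gripper that realises them."""
            PINCH = auto()          # hold a single point of fabric
            EDGE_PINCH = auto()     # hold the cloth along an edge segment
            SURFACE_PRESS = auto()  # press the cloth flat against a support

        @dataclass
        class ManipulationPrimitive:
            """A task step expressed as a transition between grasp states."""
            name: str
            start_grasps: tuple[ClothGrasp, ...]  # grasps held before the step
            end_grasps: tuple[ClothGrasp, ...]    # grasps held after the step

        # A folding task decomposed into regrasp-based primitives.
        fold = [
            ManipulationPrimitive("pick corner", (), (ClothGrasp.PINCH,)),
            ManipulationPrimitive("trace edge", (ClothGrasp.PINCH,),
                                  (ClothGrasp.EDGE_PINCH,)),
            ManipulationPrimitive("place and flatten", (ClothGrasp.EDGE_PINCH,),
                                  (ClothGrasp.SURFACE_PRESS,)),
        ]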

    Benchmarking bimanual cloth manipulation

    Cloth manipulation is a challenging task that, despite its importance, has received relatively little attention compared to rigid object manipulation. In this paper, we provide three benchmarks for evaluating and comparing different approaches to three basic tasks in cloth manipulation: spreading a tablecloth over a table, folding a towel, and dressing. The tasks can be executed on any bimanual robotic platform, and the objects involved in the tasks are standardized and easy to acquire. We provide several complexity levels for each task and describe the quality measures used to evaluate task execution. Furthermore, we provide baseline solutions for all the tasks and evaluate them according to the proposed metrics.
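    As an illustration of what such a quality measure could look like, the following Python sketch scores the tablecloth-spreading task as the fraction of the table surface covered by cloth, estimated from a top-down segmentation mask. The benchmark's actual metrics may differ; this is an assumed example.

        import numpy as np

        def coverage_score(cloth_mask: np.ndarray, table_mask: np.ndarray) -> float:
            """Fraction of table pixels covered by cloth (0.0 to 1.0).
            Both arguments are boolean masks from a calibrated top-down camera."""
            table_pixels = table_mask.sum()
            if table_pixels == 0:
                return 0.0
            covered = np.logical_and(cloth_mask, table_mask).sum()
            return float(covered / table_pixels)

        # Example: a 4x4 table region with cloth covering the left half.
        table = np.ones((4, 4), dtype=bool)
        cloth = np.zeros((4, 4), dtype=bool)
        cloth[:, :2] = True
        print(coverage_score(cloth, table))  # 0.5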

    Manipulating Highly Deformable Materials Using a Visual Feedback Dictionary

    The complex physical properties of highly deformable materials such as clothes pose significant challenges for autonomous robotic manipulation systems. We present a novel visual feedback dictionary-based method for manipulating deformable objects towards a desired configuration. Our approach is based on visual servoing, and we use an efficient technique to extract key features from the RGB sensor stream in the form of a histogram of deformable model features. These histogram features serve as high-level representations of the state of the deformable material. Next, we collect manipulation data and use a visual feedback dictionary that maps the velocity in the high-dimensional feature space to the velocity of the robotic end-effectors for manipulation. We have evaluated our approach on a set of complex manipulation tasks and human-robot manipulation tasks on different cloth pieces with varying material characteristics.

    Comment: The video is available at goo.gl/mDSC4
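    The core lookup idea can be sketched in a few lines of Python: stored pairs map a feature-space velocity to an end-effector velocity, and at run time the nearest stored feature velocity selects the action. The dimensions and the 1-nearest-neighbour retrieval are assumptions for illustration, not the paper's exact procedure.

        import numpy as np

        class FeedbackDictionary:
            def __init__(self, feature_vels: np.ndarray, ee_vels: np.ndarray):
                self.feature_vels = feature_vels  # (N, F) feature-space velocities
                self.ee_vels = ee_vels            # (N, D) paired end-effector velocities

            def lookup(self, current: np.ndarray, goal: np.ndarray) -> np.ndarray:
                """Map the desired feature-space velocity to an action."""
                desired = goal - current  # velocity needed in feature space
                dists = np.linalg.norm(self.feature_vels - desired, axis=1)
                return self.ee_vels[np.argmin(dists)]  # nearest stored entry

        # Toy example: 10 stored pairs, 8-D features, 6-D end-effector velocity.
        rng = np.random.default_rng(0)
        d = FeedbackDictionary(rng.normal(size=(10, 8)), rng.normal(size=(10, 6)))
        action = d.lookup(rng.normal(size=8), rng.normal(size=8))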

    Cloth manipulation and perception competition

    In the last decade, several competitions in robotic manipulation have been organised as a way to drive scientific progress in the field. They enable the comparison of different approaches through a well-defined benchmark with equal test conditions. However, current competitions usually focus on rigid-object manipulation, leaving behind the challenges posed by grasping deformable objects, especially highly deformable ones such as cloth-like objects. In this paper, we present the first competition in perception and manipulation of textile objects as an efficient method to accelerate scientific progress in the domain of domestic service robots. To this end, we selected a small set of tasks to benchmark in a common framework using the same set of objects and assessment methods. The competition was conceived to freely distribute the Household Cloth Object Set to research groups working on cloth manipulation and perception so that they can participate in the challenge. In this work, we present an overview of the proposed tasks; detailed task descriptions and more information on the scoring and rules are provided on the website http://www.iri.upc.edu/groups/perception/ClothManipulationChallenge/

    Feedback-based Fabric Strip Folding

    Accurate manipulation of a deformable body such as a piece of fabric is difficult because of its many degrees of freedom and the unobservable properties affecting its dynamics. To alleviate these challenges, we propose the application of feedback-based control to robotic fabric strip folding. The feedback is computed from a low-dimensional state extracted from a camera image. We trained the controller using reinforcement learning in a simulation that was calibrated to cover the real fabric strip behaviors. The proposed feedback-based folding was experimentally compared to two state-of-the-art folding methods, and our method outperformed both of them in terms of accuracy.

    Comment: Submitted to IEEE/RSJ IROS201
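    The closed loop described above can be sketched as follows in Python. Both extract_state and policy are hypothetical placeholders standing in for the paper's image-processing step and RL-trained controller.

        import numpy as np

        def extract_state(image: np.ndarray) -> np.ndarray:
            """Placeholder state extraction: reduce the camera image to a
            low-dimensional profile (here, mean intensity per column)."""
            return image.mean(axis=0)

        def policy(state: np.ndarray) -> np.ndarray:
            """Placeholder for the learned controller: maps the strip state
            to a gripper velocity command (dx, dz)."""
            return np.array([-0.01 * state.mean(), 0.005])

        def control_step(image: np.ndarray) -> np.ndarray:
            """One feedback iteration: image -> state -> action."""
            return policy(extract_state(image))

        print(control_step(np.random.default_rng(0).random((64, 64))))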

    Robotic Ironing with 3D Perception and Force/Torque Feedback in Household Environments

    As robotic systems become more popular in household environments, the complexity of the required tasks also increases. In this work we focus on a domestic chore deemed dull by a majority of the population: the task of ironing. The presented algorithm improves on the limited number of previous works by joining 3D perception with force/torque sensing, with an emphasis on finding a practical solution that is feasible to implement in a domestic setting. Our algorithm obtains a point cloud representation of the working environment. From this point cloud, the garment is segmented and a custom Wrinkleness Local Descriptor (WiLD) is computed to determine the location of the wrinkles present. Using this descriptor, the most suitable ironing path is computed and, based on it, the manipulation algorithm performs the force-controlled ironing operation. Experiments have been performed with a humanoid robot platform, showing that our algorithm is able to successfully detect wrinkles present in garments and iteratively reduce the wrinkleness using an unmodified iron.

    Comment: Accepted and to be published in the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), held in Vancouver, Canada, September 24-28, 2017.
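    One plausible way to score local wrinkleness over a segmented garment point cloud is the dispersion of surface normals within each point's neighbourhood, as in the Python sketch below. This is an assumed illustration; the actual WiLD definition may differ.

        import numpy as np

        def wrinkleness(normals: np.ndarray, neighbors: list) -> np.ndarray:
            """normals: (N, 3) unit surface normals of the segmented garment.
            neighbors: for each point i, an index array of its local neighbourhood.
            Returns one score per point; flat regions score near 0."""
            scores = np.empty(len(normals))
            for i, idx in enumerate(neighbors):
                local = normals[idx]
                mean = local.mean(axis=0)
                mean = mean / (np.linalg.norm(mean) + 1e-9)
                # 1 minus the average cosine similarity to the mean normal
                scores[i] = 1.0 - (local @ mean).mean()
            return scores

        # Flat patch: identical normals give a wrinkleness of 0.
        flat = np.tile([0.0, 0.0, 1.0], (4, 1))
        print(wrinkleness(flat, [np.arange(4)] * 4))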

    Simpler learning of robotic manipulation of clothing by utilizing DIY smart textile technology

    Deformable objects such as ropes, wires, and clothing are omnipresent in society and industry but remain little studied in robotics. This is due to the infinite number of possible state configurations caused by the deformations of the object. Engineered approaches try to cope with this by implementing highly complex operations to estimate the state of the deformable object. This complexity can be circumvented by learning-based approaches, such as reinforcement learning, which can deal with the intrinsically high-dimensional state space of deformable objects. However, the reward function in reinforcement learning needs to measure the state configuration of the highly deformable object. Vision-based reward functions are difficult to implement, given the high dimensionality of the state and the complex dynamic behavior. In this work, we propose looking beyond vision and incorporating other modalities that can be extracted from deformable objects. By integrating tactile sensor cells into a textile piece, proprioceptive capabilities are gained that are valuable because they provide a reward function to a reinforcement learning agent. We demonstrate on a low-cost dual robotic arm setup that a physical agent can learn, on a single CPU core, to fold a rectangular patch of textile in the real world based on a reward function learned from tactile information.
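    To give a feel for how tactile readings can drive a folding reward, here is a hand-crafted Python sketch: when the patch is folded in half, opposite sensor cells are pressed together, so the reward counts matched active cell pairs. Note that the paper learns its reward from tactile data rather than hand-coding it; the grid layout and pairing below are assumptions.

        import numpy as np

        def fold_reward(cells: np.ndarray) -> float:
            """cells: (rows, cols) boolean contact readings from a sensor grid
            woven into the textile. After a half fold along the vertical axis,
            column j should overlap column (cols - 1 - j)."""
            half = cells.shape[1] // 2
            left = cells[:, :half]
            right = np.fliplr(cells)[:, :half]  # mirrored columns
            return float(np.logical_and(left, right).mean())

        # Example: outer columns pressed together after a fold.
        grid = np.zeros((4, 6), dtype=bool)
        grid[:, [0, 5]] = True
        print(fold_reward(grid))  # ~0.33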

    Towards clothes hanging via cloth simulation and deep convolutional networks

    Proceedings of: 10th EUROSIM Congress on Modelling and Simulation (EUROSIM 2019), Logroño, La Rioja, Spain, July 1-5, 2019.

    People spend several hours a week doing laundry, with hanging clothes being one of the laundry tasks to be performed. Nevertheless, deformable object manipulation still proves to be a challenge for most robotic systems, due to the extremely large number of internal degrees of freedom of a piece of clothing and its chaotic nature. This work presents a step towards automated robotic clothes hanging by modelling the dynamics of the hanging task via deep convolutional models. Two models are developed to address two different problems: determining whether the garment will hang or not (classification), and estimating the future garment location in space (regression). Both models have been trained on a synthetic dataset of 15k examples generated through a dynamic simulation of a deformable object. Experiments show that the deep convolutional models presented perform better than a human expert, and that future predictions are largely influenced by time, with uncertainty directly affecting the accuracy of the predictions.

    This work was supported by the RoboCity2030-III-CM project (S2013/MIT-2748), funded by Programas de Actividades I+D in Comunidad de Madrid and the EU, and by an FPU grant funded by Ministerio de Educación, Cultura y Deporte. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the NVIDIA Titan X GPU used for this research.
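    A minimal PyTorch sketch of the two prediction problems is given below, using a shared convolutional encoder with a classification head (hang or fall) and a regression head (future 3-D location). The layer sizes and the shared encoder are assumptions; the paper develops two separate models.

        import torch
        import torch.nn as nn

        class HangNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
                self.classify = nn.Linear(32, 2)  # will hang / will fall
                self.regress = nn.Linear(32, 3)   # future (x, y, z) location

            def forward(self, image: torch.Tensor):
                z = self.encoder(image)
                return self.classify(z), self.regress(z)

        logits, position = HangNet()(torch.zeros(1, 3, 64, 64))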

    Sim-to-real reinforcement learning for deformable object manipulation

    We have seen much recent progress in rigid object manipulation, but interaction with deformable objects has notably lagged behind. Due to the large configuration space of deformable objects, solutions using traditional modelling approaches require significant engineering work. Might bypassing the need for explicit modelling, and instead learning control in an end-to-end manner, serve as a better approach? Despite the growing interest in end-to-end robot learning approaches, only a small amount of work has focused on their applicability to deformable object manipulation. Moreover, due to the large amount of data needed to learn these end-to-end solutions, an emerging trend is to learn control policies in simulation and then transfer them to the real world. To date, no work has explored whether it is possible to learn and transfer deformable object policies. We believe that if sim-to-real methods are to be employed further, then it should be possible to learn to interact with a wide variety of objects, not only rigid ones. In this work, we use a combination of state-of-the-art deep reinforcement learning algorithms to solve the problem of manipulating deformable objects (specifically cloth). We evaluate our approach on three tasks: folding a towel up to a mark, folding a face towel diagonally, and draping a piece of cloth over a hanger. Our agents are fully trained in simulation with domain randomisation, and then successfully deployed in the real world without having seen any real deformable objects.
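    Domain randomisation can be sketched as sampling the cloth simulator's physical parameters anew for every training episode, so the learned policy cannot overfit to a single simulated fabric. The parameter names and ranges in the Python sketch below are assumptions for illustration.

        import random

        def sample_cloth_params(rng: random.Random) -> dict:
            """Draw one random set of cloth physics parameters per episode."""
            return {
                "stretch_stiffness": rng.uniform(0.5, 1.0),
                "bend_stiffness": rng.uniform(0.01, 0.2),
                "friction": rng.uniform(0.2, 1.0),
                "mass_kg": rng.uniform(0.05, 0.3),
            }

        rng = random.Random(0)
        for episode in range(3):
            params = sample_cloth_params(rng)  # would configure the simulator here
            print(episode, params)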