    Learning to Put On a Knit Cap in a Head-Centric Policy Space

    Twardon L, Ritter H. Learning to Put On a Knit Cap in a Head-Centric Policy Space. IEEE Robotics and Automation Letters. 2018;3(2):764-771. Robotic manipulation of such highly deformable objects as clothes is a challenging problem. Robot-assisted dressing adds even more complexity, as the garment motions must be aligned with a human body under conditions of strong and variable occlusion. As a step toward solutions for the general task, we consider the example of a dual-arm robot with attached anthropomorphic hands that learns to put a knit cap on a styrofoam head. Our approach avoids modeling the details of the garment and its deformations. Instead, we demonstrate that a head-centric policy parameterization, combined with a suitable objective function for determining the right amount of contact between the cap and the head, enables a direct policy search algorithm to find successful trajectories for this task. We also show how a toy problem that mirrors some of the task constraints can be used to efficiently structure the hyperparameter search. Additionally, we suggest a point-cloud-based algorithm for modeling the head as an ellipsoid, which is required for defining the policy space.
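
    The abstract only describes the ellipsoid-modeling step in prose. As a rough illustration of one way such a fit could be approximated (a simple PCA-based sketch, not the authors' algorithm; the function name fit_head_ellipsoid and the synthetic usage example are assumptions made here for illustration):

        import numpy as np

        def fit_head_ellipsoid(points):
            # points: (N, 3) array of 3D points on the head surface.
            # Returns (center, axes, radii): the ellipsoid center, a 3x3 matrix
            # whose columns are the principal axis directions, and the semi-axis lengths.
            center = points.mean(axis=0)
            centered = points - center
            # Principal directions from the covariance of the centered cloud.
            _, axes = np.linalg.eigh(np.cov(centered.T))
            # Extent of the cloud along each principal axis as a crude radius estimate.
            projected = centered @ axes
            radii = np.abs(projected).max(axis=0)
            return center, axes, radii

        # Tiny usage example on a synthetic ellipsoidal cloud.
        rng = np.random.default_rng(0)
        directions = rng.normal(size=(2000, 3))
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        cloud = directions * [0.09, 0.07, 0.11] + [0.0, 0.0, 1.4]
        center, axes, radii = fit_head_ellipsoid(cloud)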

    Occlusion-Robust Autonomous Robotic Manipulation of Human Soft Tissues With 3D Surface Feedback

    Robotic manipulation of 3D soft objects remains challenging in the industrial and medical fields. Various methods based on mechanical modelling, data-driven approaches, or explicit feature tracking have been proposed. A unifying disadvantage of these methods is the high computational cost of simultaneous image processing, identification of mechanical properties, and motion planning, which creates a need for less computationally intensive methods. We propose a method for autonomous robotic manipulation with 3D surface feedback to address these issues. First, we build a deformation model of the manipulated object, which relates the robots' movements to the displacement of surface points surrounding the manipulators. Then, we develop a 6-degree-of-freedom velocity controller that manipulates the grasped object toward a desired shape. We validate our approach through comparative simulations with existing methods and experiments using phantom and cadaveric soft tissues with the da Vinci Research Kit. The results demonstrate the robustness of the technique to occlusions and across various materials. Compared to state-of-the-art linear and data-driven methods, our approach is more precise by 46.5% and 15.9% and saves 55.2% and 25.7% of manipulation time, respectively.
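
    The abstract outlines the surface-feedback controller only at a high level. The sketch below shows the general idea of shape servoing with a linear deformation model, i.e. a Jacobian estimated from recent motion/displacement pairs and a damped least-squares velocity update; the function names, gain, and damping value are illustrative assumptions rather than details taken from the paper:

        import numpy as np

        def estimate_deformation_jacobian(dq_history, ds_history):
            # Least-squares fit of a linear map J such that ds ≈ J @ dq.
            # dq_history: (T, m) stacked end-effector velocity commands.
            # ds_history: (T, k) observed displacements of tracked surface points.
            J_transposed, *_ = np.linalg.lstsq(dq_history, ds_history, rcond=None)
            return J_transposed.T  # (k, m)

        def shape_servo_step(J, s_current, s_desired, gain=1.0, damping=1e-3):
            # Damped least-squares velocity command that drives the tracked
            # surface points toward the desired configuration.
            error = s_desired - s_current                    # (k,)
            H = J.T @ J + damping * np.eye(J.shape[1])
            return gain * np.linalg.solve(H, J.T @ error)    # (m,) velocity command

    In such a scheme the Jacobian would typically be re-estimated online from a sliding window of recent motions, so that temporarily occluded surface points can simply be dropped from the feedback vector.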

    Robotic perception and manipulation of garments

    This thesis introduces an effective robotic garment-flattening pipeline and robotic perception paradigms for predicting garments' geometric (shape) and physics properties. Robotic garment manipulation is a popular and challenging task in robotics research: garments are high-dimensional, so their possible object states are effectively infinite, and they deform irregularly during manipulation, which makes predicting their deformations difficult. It is nevertheless an essential topic. Robotic laundry and household sorting play a vital role in an ageing society, automated manufacturing requires robots that can grasp different mechanical components, some of which are deformable, and robot-aided dressing is essential for people with disabilities. Designing and implementing effective robotic garment manipulation pipelines is therefore necessary but challenging. This thesis mainly focuses on designing an effective robotic garment-flattening pipeline and is accordingly divided into two main parts, robotic perception and robotic manipulation. The research in this PhD thesis is summarised below:
    • Robotic perception provides prior knowledge of garment attributes (geometric (shape) and physics properties) that facilitates robotic garment flattening. Continuous perception paradigms are introduced for predicting garment shapes and visually perceived garment weights.
    • A reality-simulation knowledge-transfer paradigm for predicting the physics properties of real garments and fabrics is proposed in this thesis.
    • The second part of this thesis is robotic manipulation. The thesis suggests learning the known configurations of garments using prior knowledge of their geometric (shape) properties and selecting pre-designed manipulation strategies to flatten them. The manipulation part takes advantage of the geometric (shape) properties learned in the perception part to recognise known configurations of garments, demonstrating the importance of robotic perception in robotic manipulation.
    The experimental results of this thesis reveal that: 1) a robot gains confidence in its predictions (shapes and visually perceived weights of unseen garments) by continuously perceiving video frames of unseen garments being grasped, achieving high prediction accuracies (93% for shapes and 98.5% for visually perceived weights); 2) the physics properties of real garments and fabrics can be predicted by learning physics similarities between simulated fabrics, and the approach in this thesis outperforms the state of the art (a 34% improvement on real fabrics and a 68.1% improvement on real garments); 3) compared with state-of-the-art robotic garment flattening, this thesis enables the flattening of garments of various shapes (five shapes) with fast and effective manipulations. This thesis therefore advances the state of the art in robotic perception and manipulation (flattening) of garments.
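
    The abstract states that the robot gains prediction confidence by continuously perceiving video frames of a grasped garment. A minimal, hypothetical sketch of such frame-by-frame evidence accumulation (simple probability averaging with an assumed confidence threshold; not the thesis's actual pipeline):

        import numpy as np

        def continuous_shape_prediction(frame_probs, confidence_threshold=0.9):
            # frame_probs: sequence (assumed non-empty) of per-frame probability
            # vectors over garment shape classes, e.g. softmax outputs of a classifier.
            # Accumulates evidence frame by frame and stops once one class dominates.
            accumulated = None
            for count, probs in enumerate(frame_probs, start=1):
                probs = np.asarray(probs, dtype=float)
                accumulated = probs if accumulated is None else accumulated + probs
                averaged = accumulated / count
                best = int(np.argmax(averaged))
                if averaged[best] >= confidence_threshold:
                    break
            return best, float(averaged[best]), count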

    Bimanual Interaction with Clothes. Topology, Geometry, and Policy Representations in Robots

    Twardon L. Bimanual Interaction with Clothes. Topology, Geometry, and Policy Representations in Robots. Bielefeld: Universität Bielefeld; 2019. If anthropomorphic robots are to assist people with activities of daily living, they must be able to handle all kinds of everyday objects, including highly deformable ones such as garments. The present thesis begins with a detailed problem analysis of robotic interaction with and perception of clothes. We show that handling items of clothing is very challenging due to their complex dynamics and vast number of degrees of freedom. As a result of our analysis, we obtain a topological, geometric, and functional description of garments that supports the development of reduced object and task representations. One of the key findings is that the boundary components, which typically correspond to the openings, characterize garments well, both in terms of their topology and their inherent purpose, namely dressing. We present a polygon-based and an interactive method for identifying boundary components using RGB-D vision, with application to grasping. Moreover, we propose Active Boundary Component Models (ABCMs), a constraint-based framework for tracking garment openings in point clouds. It is often difficult to maintain an accurate representation of the objects involved in contact-rich interaction tasks such as dressing assistance. Therefore, our policy optimization approach to putting a knit cap on a styrofoam head avoids modeling the details of the garment and its deformations. The experimental results suggest that a heuristic performance measure taking into account the amount of contact established between the two objects is suitable for the task.
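
    The abstract refers to a heuristic performance measure based on the amount of contact established between cap and head. A rough, hypothetical sketch of how such a contact score could be computed against a fitted head ellipsoid (the tolerance and function name are assumptions, not the thesis's implementation):

        import numpy as np

        def contact_score(cap_points, center, axes, radii, tolerance=0.02):
            # Fraction of cap points lying close to the surface of a fitted head
            # ellipsoid (center, principal axes as matrix columns, semi-axis lengths).
            local = (cap_points - center) @ axes   # points in the ellipsoid frame
            normalised = local / radii             # unit sphere if exactly on the surface
            deviation = np.abs(np.linalg.norm(normalised, axis=1) - 1.0)
            return float(np.mean(deviation < tolerance))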