
    Benchmarking bimanual cloth manipulation

    Cloth manipulation is a challenging task that, despite its importance, has received relatively little attention compared to rigid object manipulation. In this paper, we provide three benchmarks for evaluation and comparison of different approaches towards three basic tasks in cloth manipulation: spreading a tablecloth over a table, folding a towel, and dressing. The tasks can be executed on any bimanual robotic platform and the objects involved in the tasks are standardized and easy to acquire. We provide several complexity levels for each task, and describe the quality measures to evaluate task execution. Furthermore, we provide baseline solutions for all the tasks and evaluate them according to the proposed metrics.

    Cloth manipulation and perception competition

    In the last decade, several competitions in robotic manipulation have been organised as a way to drive scientific progress in the field. They enable comparison of different approaches through a well-defined benchmark with equal test conditions. However, current competitions usually focus on rigid-object manipulation, leaving aside the challenges posed by grasping deformable objects, especially highly deformable ones such as cloth-like objects. In this paper, we present the first competition in perception and manipulation of textile objects as an efficient method to accelerate scientific progress in the domain of domestic service robots. To do so, we selected a small set of tasks to benchmark in a common framework using the same set of objects and assessment methods. The competition was conceived to freely distribute the Household Cloth Object Set to research groups working on cloth manipulation and perception so that they can participate in the challenge. In this work, we present an overview of the proposed tasks; detailed task descriptions and further information on the scoring and rules are provided on the website http://www.iri.upc.edu/groups/perception/ClothManipulationChallenge/

    Household cloth object set: fostering benchmarking in deformable object manipulation

    Benchmarking of robotic manipulation is one of the open issues in robotics research. An important factor that has enabled progress in this area in the last decade is the existence of common object sets shared among different research groups. However, the existing object sets are very limited when it comes to cloth-like objects, which have unique particularities and challenges. This paper is a first step towards the design of a cloth object set to be distributed among research groups from the robotic cloth manipulation community. We present a set of household cloth objects and related tasks that serve to expose the challenges involved in gathering such an object set, and we propose a roadmap towards the design of common benchmarks in cloth manipulation tasks, with the intention of setting the grounds for a future debate in the community that will be necessary to foster benchmarking for the manipulation of cloth-like objects. Example RGB-D images and object scans of the objects in relevant configurations are collected and shared at http://www.iri.upc.edu/groups/perception/ClothObjectSet/

    Benchmarking cloth manipulation using action graphs: an example in placing flat

    Benchmarking robotic manipulation is complex due to the difficulty in reproducing and comparing results across different embodiments and scenarios. Cloth manipulation presents additional challenges due to the complex object configuration space. Traditional cloth manipulation papers do not have well-defined metrics to evaluate the success of a task or the quality of the result, and evaluations are tailored to each paper. In this paper we propose to evaluate cloth manipulation by segmenting a task into steps that can be evaluated independently, and to study how their success measures influence the next segment and relate to the task as a whole. In particular, we study a popular task: placing a cloth flat on a table. We propose a benchmark with simple but continuous evaluation metrics that explore the influence of grasp location on the quality of the task. Our results show that grasp location does not need to be precisely on the corners, that quality measures focused on evaluating different cloth parts can reveal issues to solve, and that the success definition of a segment has to consider its influence on the ability to successfully perform the next segment of the action. This work receives funding from the Spanish State Research Agency through the BURG project (CHIST-ERA, PCIN2019-103447) and the María de Maeztu Seal of Excellence to IRI (MDM-2016-0656).
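    As a minimal sketch of what a simple, continuous metric for the placing-flat task might look like (the mask-based formulation and function names below are illustrative assumptions, not the paper's actual measures), one could score the final cloth configuration by comparing a segmented top-down cloth mask against the desired table region:

    ```python
    import numpy as np

    def coverage_score(cloth_mask: np.ndarray, target_mask: np.ndarray) -> float:
        """Fraction of the target table region covered by cloth (continuous, in [0, 1])."""
        covered = np.logical_and(cloth_mask, target_mask).sum()
        return float(covered) / max(int(target_mask.sum()), 1)

    def iou_score(cloth_mask: np.ndarray, target_mask: np.ndarray) -> float:
        """Intersection over union: also penalizes cloth spilling outside the target."""
        inter = np.logical_and(cloth_mask, target_mask).sum()
        union = np.logical_or(cloth_mask, target_mask).sum()
        return float(inter) / max(int(union), 1)
    ```

    Coverage rewards filling the target area, while IoU additionally penalizes cloth hanging outside it; both vary continuously rather than giving a binary pass/fail.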

    Bridging Action Space Mismatch in Learning from Demonstrations

    Learning from demonstrations (LfD) methods guide learning agents to a desired solution using demonstrations from a teacher. While some LfD methods can handle small mismatches in the action spaces of the teacher and student, here we address the case where the teacher demonstrates the task in an action space that can be substantially different from that of the student, thereby inducing a large action space mismatch. We bridge this gap with a framework, Morphological Adaptation in Imitation Learning (MAIL), that allows training an agent from demonstrations by other agents with significantly different morphologies (from the student or from each other). MAIL is able to learn from suboptimal demonstrations, so long as they provide some guidance towards a desired solution. We demonstrate MAIL on challenging household cloth manipulation tasks and introduce a new DRY CLOTH task: 3D cloth manipulation with obstacles. In these tasks, we train a visual control policy for a robot with one end-effector using demonstrations from a simulated agent with two end-effectors. MAIL shows up to 27% improvement over LfD and non-LfD baselines. It is deployed to a real Franka Panda robot and can handle multiple variations in cloth properties (color, thickness, size, material) and pose (rotation and translation). We further show generalizability to n-to-m end-effector transfers in the context of a simple rearrangement task.

    A 3D descriptor to detect task-oriented grasping points in clothing

    Manipulating textile objects with a robot is a challenging task, especially because garment perception is difficult due to the endless configurations a garment can adopt, coupled with a large variety of colors and designs. Most current approaches follow a multiple re-grasp strategy, in which clothes are sequentially grasped from different points until one of them yields a recognizable configuration. In this work we propose a method that combines 3D and appearance information to directly select a suitable grasping point for the task at hand, which in our case consists of hanging a shirt or a polo shirt from a hook. Our method follows a coarse-to-fine approach in which, first, the collar of the garment is detected and, next, a grasping point on the lapel is chosen using a novel 3D descriptor. In contrast to current 3D descriptors, ours can run in real time, even when it needs to be densely computed over the input image. Our central idea is to take advantage of the structured nature of the range images that most depth sensors provide and, by exploiting integral imaging, achieve speed-ups of two orders of magnitude with respect to competing approaches, while maintaining performance. This makes it especially adequate for robotic applications, as we thoroughly demonstrate in the experimental section.
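    The real-time claim rests on integral images: once a cumulative-sum table of the range image is built, the sum over any rectangular region costs four lookups regardless of region size, so region-based descriptor components can be evaluated densely. A minimal sketch of that general mechanism (illustrative only, not the paper's actual descriptor):

    ```python
    import numpy as np

    def integral_image(depth: np.ndarray) -> np.ndarray:
        """Summed-area table with a zero-padded first row/column for easy indexing."""
        ii = np.zeros((depth.shape[0] + 1, depth.shape[1] + 1), dtype=np.float64)
        ii[1:, 1:] = depth.cumsum(axis=0).cumsum(axis=1)
        return ii

    def region_sum(ii: np.ndarray, r0: int, c0: int, r1: int, c1: int) -> float:
        """Sum of depth[r0:r1, c0:c1] in O(1): four lookups regardless of size."""
        return float(ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0])
    ```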

    Robotic Fabric Flattening with Wrinkle Direction Detection

    Deformable Object Manipulation (DOM) is an important field of research as it contributes to practical tasks such as automatic cloth handling, cable routing, and surgical operation. Perception is considered one of the major challenges in DOM due to the complex dynamics and high degrees of freedom of deformable objects. In this paper, we develop a novel image-processing algorithm based on Gabor filters to extract useful features from cloth and, based on this, devise a strategy for cloth flattening tasks. We evaluate the overall framework experimentally and compare it with three human operators. The results show that our algorithm can determine the direction of wrinkles on the cloth accurately in simulation as well as in real robot experiments. Moreover, the robot executing the flattening task with the dewrinkling strategy given by our algorithm achieves satisfactory performance compared to other baseline methods. The experiment video is available at https://sites.google.com/view/robotic-fabric-flattening/hom
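    To make the Gabor-based feature extraction concrete, the sketch below applies a small bank of oriented Gabor kernels with OpenCV and keeps the orientation with the strongest response. The kernel parameters and the energy-based vote are assumptions for illustration; the paper's actual filter bank and decision strategy may differ:

    ```python
    import cv2
    import numpy as np

    def dominant_wrinkle_orientation(gray: np.ndarray, n_orientations: int = 8) -> float:
        """Return the filter orientation (radians) with the strongest Gabor response.

        gray: single-channel cloth image. Kernel parameters below are illustrative.
        """
        best_theta, best_energy = 0.0, -np.inf
        for i in range(n_orientations):
            theta = i * np.pi / n_orientations
            kernel = cv2.getGaborKernel(ksize=(31, 31), sigma=4.0, theta=theta,
                                        lambd=10.0, gamma=0.5, psi=0.0)
            response = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
            energy = float(np.sum(response ** 2))  # total response energy at this angle
            if energy > best_energy:
                best_theta, best_energy = theta, energy
        return best_theta
    ```

    A dewrinkling motion would then typically pull the cloth roughly perpendicular to the detected wrinkle orientation.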

    Bimanual robot control for surface treatment tasks

    This is an Author's Accepted Manuscript of an article published in International Journal of Systems Science [copyright Taylor & Francis], available online at: http://www.tandfonline.com/10.1080/00207721.2021.1938279

    This work develops a method to perform surface treatment tasks using a bimanual robotic system, i.e. two robot arms cooperatively performing the task. In particular, one robot arm holds the workpiece while the other robot arm has the treatment tool attached to its end-effector. Moreover, the human user teleoperates all six coordinates of the former robot arm and two coordinates of the latter, i.e. the teleoperator can move the treatment tool on the plane given by the workpiece surface. Furthermore, a force sensor attached to the treatment tool is used to automatically attain the desired pressure between the tool and the workpiece and to automatically keep the tool orientation orthogonal to the workpiece surface. In addition, to assist the human user during teleoperation, several constraints are defined for both robot arms in order to avoid exceeding the allowed workspace, e.g. to avoid collisions with other objects in the environment. The theory used in this work to develop the bimanual robot control relies on sliding mode control and task prioritisation. Finally, the feasibility and effectiveness of the method are shown through experimental results using two robot arms.

    This work was supported by Generalitat Valenciana [grant numbers ACIF/2019/007 and GV/2021/181] and the Spanish Ministry of Science and Innovation [grant number PID2020-117421RB-C21].

    García-Fernández, A.; Solanes, J.E.; Gracia Calandin, L.I.; Muñoz-Benavent, P.; Girbés-Juan, V.; Tornero, J. (2022). Bimanual robot control for surface treatment tasks. International Journal of Systems Science, 53(1), 74-107. https://doi.org/10.1080/00207721.2021.1938279
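    Task prioritisation of the kind mentioned above is commonly realised with null-space projection: the lower-priority task is only allowed to act within the redundancy left over by the higher-priority one. The following is a generic sketch of that standard scheme using damped pseudoinverses; it is illustrative only, and the paper additionally wraps the control law in sliding mode control:

    ```python
    import numpy as np

    def task_priority_dq(J1, v1, J2, v2, damping=1e-3):
        """Joint velocities serving a high-priority task exactly and a low-priority
        task only within the remaining redundancy.

        J1, J2: task Jacobians (m1 x n, m2 x n); v1, v2: desired task velocities.
        """
        def dpinv(J):
            # Damped least-squares pseudoinverse, robust near singularities.
            m = J.shape[0]
            return J.T @ np.linalg.inv(J @ J.T + damping * np.eye(m))

        n = J1.shape[1]
        J1_pinv = dpinv(J1)
        N1 = np.eye(n) - J1_pinv @ J1           # null-space projector of task 1
        dq1 = J1_pinv @ v1                      # priority task served first
        dq2 = dpinv(J2 @ N1) @ (v2 - J2 @ dq1)  # secondary task, projected
        return dq1 + dq2
    ```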