97 research outputs found

    RAMP: a benchmark for evaluating robotic assembly manipulation and planning

    We introduce RAMP, an open-source robotics benchmark inspired by real-world industrial assembly tasks. RAMP consists of beams that a robot must assemble into specified goal configurations using pegs as fasteners. As such, it assesses planning and execution capabilities, and poses challenges in perception, reasoning, manipulation, diagnostics, fault recovery, and goal parsing. RAMP has been designed to be accessible and extensible: parts are either 3D printed or otherwise constructed from readily obtainable materials, and the part designs and detailed instructions are publicly available. To broaden community engagement, RAMP incorporates fixtures such as AprilTags, which enable researchers to focus on individual sub-tasks of the assembly challenge if desired. We provide a full digital twin as well as rudimentary baselines to enable rapid progress. Our vision is for RAMP to form the substrate for a community-driven endeavour that evolves as capability matures.
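    The goal-parsing and sequencing side of the benchmark can be pictured with a small sketch. The representation below is hypothetical, not RAMP's actual goal format (which is documented with the benchmark): a goal is a set of peg-fastened joints between named beams, and a naive planner attaches a beam only once its partner is already in the assembly.

```python
from dataclasses import dataclass

# Hypothetical goal representation for a beam assembly; the real RAMP goal
# format is defined in the benchmark's public materials.
@dataclass(frozen=True)
class Joint:
    beam_a: str  # beam expected to be in the assembly first
    beam_b: str  # beam attached via this joint
    peg: str     # peg used as the fastener

def assembly_order(joints: list[Joint], base: str) -> list[str]:
    """Attach each beam only after its partner beam is already placed."""
    placed, order, pending = {base}, [base], list(joints)
    while pending:
        progress = False
        for j in list(pending):
            if j.beam_a in placed:
                if j.beam_b not in placed:
                    placed.add(j.beam_b)
                    order.append(j.beam_b)
                pending.remove(j)
                progress = True
        if not progress:  # goal references a beam no joint can reach
            raise ValueError(f"unreachable joints: {pending}")
    return order

goal = [Joint("beam0", "beam1", "peg0"), Joint("beam1", "beam2", "peg1")]
print(assembly_order(goal, base="beam0"))  # ['beam0', 'beam1', 'beam2']
```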

    Data-driven robotic manipulation of cloth-like deformable objects : the present, challenges and future prospects

    Manipulating cloth-like deformable objects (CDOs) is a long-standing problem in the robotics community. CDOs are flexible (non-rigid) objects that show no detectable compression strength when two points on the object are pushed towards each other; they include ropes (1D), fabrics (2D) and bags (3D). In general, the many degrees of freedom (DoF) of CDOs introduce severe self-occlusion and complex state–action dynamics, which are significant obstacles for perception and manipulation systems. These challenges exacerbate existing issues of modern robotic control methods such as imitation learning (IL) and reinforcement learning (RL). This review focuses on how data-driven control methods are applied to four major task families in this domain: cloth shaping, knot tying/untying, dressing, and bag manipulation. Furthermore, we identify specific inductive biases in these four domains that present challenges for more general IL and RL algorithms.

    Learning to Grasp Clothing Structural Regions for Garment Manipulation Tasks

    When performing cloth-related tasks, such as garment hanging, it is often important to identify and grasp certain structural regions -- a shirt's collar as opposed to its sleeve, for instance. However, due to cloth deformability, these manipulation activities, which are essential in domestic, health care, and industrial contexts, remain challenging for robots. In this paper, we focus on how to segment and grasp structural regions of clothes to enable manipulation tasks, using hanging tasks as a case study. To this end, a neural network-based perception system is proposed to segment a shirt's collar from the rest of the scene in a depth image. Trained on a 10-minute video of a human manipulating shirts, our perception system generalizes to other shirts regardless of texture, as well as to other types of collared garments. A novel grasping strategy is then proposed that uses the segmentation to determine the grasping pose. Experiments demonstrate that our grasping strategy achieves 92%, 80%, and 50% grasping success rates with one folded garment, one crumpled garment, and three crumpled garments, respectively. Our grasping strategy performs considerably better than tested baselines that do not take into account the structural nature of the garments. With the proposed region segmentation and grasping strategy, challenging garment hanging tasks are successfully implemented using an open-loop control policy. Supplementary material is available at https://sites.google.com/view/garment-hanging. (Accepted by the IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023.)
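    As a rough illustration of how a predicted collar mask plus a depth image can be turned into a grasp target, here is a minimal sketch. It is not the paper's method: the largest-region selection, the pinhole back-projection, and the intrinsics are generic assumptions.

```python
import numpy as np
from scipy import ndimage

def grasp_point_from_mask(mask, depth, fx, fy, cx, cy):
    """Pick a grasp target from a binary collar mask (H x W) and a depth
    image (H x W, metres); returns an (x, y, z) point in the camera frame.
    Illustrative only; not the paper's grasping strategy."""
    labels, n = ndimage.label(mask)                 # connected collar regions
    if n == 0:
        return None
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    region = labels == (int(np.argmax(sizes)) + 1)  # keep the largest region
    v, u = np.argwhere(region).mean(axis=0)         # pixel centroid (row, col)
    z = float(np.median(depth[region]))             # robust depth estimate
    # back-project the centroid through a pinhole camera model
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
```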

    Humanoid Robotic Manipulation Benchmarking and Bimanual Manipulation Workspace Analysis

    The growing adoption of robots for new applications has led to the use of robots in human environments for human-like tasks, applications well suited to humanoid robots, as they are designed to move like a human and operate in similar environments. However, a user must decide which robot and control algorithm is best suited to the task, motivating the need for standardized performance comparison through benchmarking. Typical humanoid scenarios in many household and industrial tasks involve manipulating objects with two hands: bimanual manipulation. Understanding how such tasks can be performed within a humanoid's workspace is especially challenging because the workspace is highly constrained by grasp and stability requirements, yet this understanding is essential for introducing humanoid robots into human environments for human-like tasks.

    The first topic this thesis focuses on is benchmarking manipulation for humanoid robots. The evaluation of humanoid manipulation can be considered for whole-body manipulation (manipulation while standing and remaining balanced) or loco-manipulation (taking steps during manipulation). As part of the EUROBENCH project, which aims to develop a unified benchmarking framework for robotic systems performing locomotion tasks, benchmarks for whole-body manipulation and loco-manipulation are proposed, consisting of standardized test beds, comprehensive experimental protocols, and insightful key performance indicators. For each of these benchmarks, partial initial benchmarks are performed to begin evaluating the difference in performance of the University of Waterloo's REEM-C, "Seven", using two different motion generation and control strategies. These partial benchmarks showed trade-offs between speed and efficiency on one hand and placement accuracy on the other.

    The second topic of interest is bimanual manipulation workspace analysis of humanoid robots. To evaluate the ability of a humanoid robot to bimanually manipulate a box while remaining balanced, a new metric for combined manipulability-stability is developed based on the volume of the manipulability ellipsoid and the distance of the capture point from the edge of the support polygon. Using this metric, visualizations of the workspace are performed for the following scenarios: when the center of mass of the humanoid has a velocity, manipulating objects of different size and mass, and manipulating objects using various grips. To examine bimanual manipulation with different fixed grasps, the manipulation of two different boxes, a broom, and a rolling pin is visualized to see how grip affects the feasibility and manipulability-stability quality of a task. Visualizations of REEM-C and TALOS are also performed for a general workspace and a box manipulation task to compare their workspaces, as the two robots have different kinematic structures. These visualizations provide a better understanding of how manipulability and stability are affected in a bimanual manipulation scenario.
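    Both ingredients of the combined metric are standard quantities, so a sketch can make them concrete: the Yoshikawa manipulability measure is proportional to the manipulability-ellipsoid volume, and the capture point follows the linear inverted pendulum model. How the thesis combines them is not reproduced here; the product below is only one plausible choice.

```python
import numpy as np

def manipulability(J):
    """Yoshikawa measure, proportional to the manipulability-ellipsoid volume."""
    return float(np.sqrt(np.linalg.det(J @ J.T)))

def capture_point(com, com_vel, g=9.81):
    """Linear inverted pendulum capture point: x_cp = x + v * sqrt(z/g)."""
    return com[:2] + com_vel[:2] * np.sqrt(com[2] / g)

def stability_margin(cp, polygon):
    """Distance from the capture point to the nearest support-polygon edge
    (polygon: N x 2 array of vertices in order)."""
    dists = []
    for a, b in zip(polygon, np.roll(polygon, -1, axis=0)):
        t = np.clip(np.dot(cp - a, b - a) / np.dot(b - a, b - a), 0.0, 1.0)
        dists.append(np.linalg.norm(cp - (a + t * (b - a))))
    return min(dists)

def combined_metric(J, com, com_vel, polygon):
    # one plausible combination: large when the pose is both dexterous
    # (big ellipsoid) and safely balanced (capture point far from the edge)
    return manipulability(J) * stability_margin(capture_point(com, com_vel),
                                                polygon)
```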

    Knowledge representation to enable high-level planning in cloth manipulation tasks

    Cloth manipulation is very relevant for domestic robotic tasks, but it presents many challenges due to the complexity of representing, recognizing, and predicting the behaviour of cloth under manipulation. In this work, we propose a generic, compact, and simplified representation of the states of cloth manipulation that allows tasks to be represented semantically as sequences of states and transitions. We also define a Cloth Manipulation Graph that encodes all the strategies to accomplish a task. Our novel representation is used to encode two different cloth manipulation tasks, learned from video data of an experiment with human subjects manipulating clothes. We show how our simplified representation yields a map of meaningful steps that can describe cloth manipulation tasks as domain models in PDDL, enabling high-level planning. Finally, we discuss the existing skills that could enable the sensorimotor grounding and the low-level execution of the plan.
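    The Cloth Manipulation Graph idea is easy to picture as code: nodes are semantic cloth states, edges are manipulation primitives, and a plan is a path from the observed state to the goal. The state and primitive names below are hypothetical, not the paper's ontology; in the paper this structure is ultimately expressed as a PDDL domain model.

```python
from collections import deque

# Hypothetical cloth-state graph: state -> [(primitive, next_state), ...]
GRAPH = {
    "crumpled":    [("flatten", "flat")],
    "flat":        [("fold_half", "half_folded"), ("grasp_corner", "held")],
    "half_folded": [("fold_half", "folded")],
    "held":        [("hang", "hung")],
}

def plan(start, goal):
    """Breadth-first search over the graph; returns a primitive sequence."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, actions = queue.popleft()
        if state == goal:
            return actions
        for primitive, nxt in GRAPH.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, actions + [primitive]))
    return None  # goal unreachable from the start state

print(plan("crumpled", "folded"))  # ['flatten', 'fold_half', 'fold_half']
```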

    Trying to Grasp a Sketch of a Brain for Grasping

    Ritter H, Haschke R, Steil JJ. Trying to Grasp a Sketch of a Brain for Grasping. In: Sendhoff B, ed. Creating Brain-Like Intelligence. Lecture Notes in Artificial Intelligence; 5436. Berlin, Heidelberg: Springer; 2009: 84-102.

    Differentiable Robot Neural Distance Function for Adaptive Grasp Synthesis on a Unified Robotic Arm-Hand System

    Grasping is a fundamental skill for robots to interact with their environment. While grasp execution requires coordinated movement of the hand and arm to achieve a collision-free and secure grip, many grasp synthesis studies address arm and hand motion planning independently, leading to potentially unreachable grasps in practical settings. The challenge of determining integrated arm-hand configurations arises from the problem's computational complexity and high dimensionality. We address this challenge by presenting a novel differentiable robot neural distance function. Our approach excels at capturing intricate geometry across various joint configurations while preserving differentiability. This representation proves instrumental in efficiently addressing downstream tasks with stringent contact constraints. Leveraging it, we introduce an adaptive grasp synthesis framework that exploits the full potential of the unified arm-hand system for diverse grasping tasks. Our neural joint-space distance function achieves an 84.7% error reduction compared to baseline methods. We validated our approach on a unified robotic arm-hand system consisting of a 7-DoF robot arm and a 16-DoF multi-fingered robotic hand. Results demonstrate that our approach empowers this high-DoF system to generate and execute various arm-hand grasp configurations that adapt to the size of the target objects while ensuring whole-body movements are collision-free. (Under review.)
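    The core object is a network mapping a joint configuration and a workspace query point to a distance, differentiable in both inputs so that contact and collision constraints can enter gradient-based grasp optimization. The sketch below is a generic assumption about such a function; the architecture, sizes, and training are not taken from the paper.

```python
import torch
import torch.nn as nn

class NeuralDistanceField(nn.Module):
    """Maps (q, p) -> approximate distance from point p to the robot's
    surface at joint configuration q. Illustrative architecture only."""
    def __init__(self, dof, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dof + 3, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, q, p):
        return self.net(torch.cat([q, p], dim=-1)).squeeze(-1)

model = NeuralDistanceField(dof=23)         # e.g. 7-DoF arm + 16-DoF hand
q = torch.randn(8, 23, requires_grad=True)  # batch of joint configurations
p = torch.randn(8, 3, requires_grad=True)   # batch of query points
d = model(q, p)
d.sum().backward()                 # gradients w.r.t. both q and p flow,
print(q.grad.shape, p.grad.shape)  # enabling contact/collision constraints
```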

    Bimanual robot control for surface treatment tasks

    This is the Author's Accepted Manuscript of an article published as: Alberto García, J. Ernesto Solanes, Luis Gracia, Pau Muñoz-Benavent, Vicent Girbés-Juan & Josep Tornero (2022) Bimanual robot control for surface treatment tasks, International Journal of Systems Science, 53(1), 74-107, DOI: 10.1080/00207721.2021.1938279 [copyright Taylor & Francis], available online at: http://www.tandfonline.com/10.1080/00207721.2021.1938279.

    This work develops a method to perform surface treatment tasks using a bimanual robotic system, i.e. two robot arms cooperatively performing the task. In particular, one robot arm holds the workpiece while the other robot arm has the treatment tool attached to its end-effector. Moreover, the human user teleoperates all six coordinates of the former robot arm and two coordinates of the latter, i.e. the teleoperator can move the treatment tool on the plane given by the workpiece surface. Furthermore, a force sensor attached to the treatment tool is used to automatically attain the desired pressure between the tool and the workpiece and to automatically keep the tool orientation orthogonal to the workpiece surface. In addition, to assist the human user during teleoperation, several constraints are defined for both robot arms in order to avoid exceeding the allowed workspace, e.g. to avoid collisions with other objects in the environment. The bimanual robot control developed in this work relies on sliding mode control and task prioritisation. Finally, the feasibility and effectiveness of the method are shown through experimental results using two robot arms.

    This work was supported by Generalitat Valenciana [grant numbers ACIF/2019/007 and GV/2021/181] and the Spanish Ministry of Science and Innovation [grant number PID2020117421RB-C21].
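    The two control ingredients named in the abstract, task prioritisation and sliding mode control, can be sketched generically. The functions below are illustrative assumptions, not the paper's controller: a classic null-space prioritized velocity resolution, and a smoothed first-order sliding-mode term regulating tool pressure along the surface normal.

```python
import numpy as np

def prioritized_qdot(J1, x1dot, J2, x2dot):
    """Joint velocities satisfying the primary task (J1, x1dot) exactly and
    the secondary task (J2, x2dot) within the primary task's null space."""
    J1_pinv = np.linalg.pinv(J1)
    qdot1 = J1_pinv @ x1dot
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1  # null-space projector of task 1
    qdot2 = np.linalg.pinv(J2 @ N1) @ (x2dot - J2 @ qdot1)
    return qdot1 + qdot2

def sliding_mode_pressure(f_meas, f_des, K=0.02, phi=0.5):
    """Tool velocity along the surface normal driving the sliding variable
    s = f_des - f_meas to zero; tanh smooths the switching term to limit
    chattering. Gains K and phi are placeholders."""
    s = f_des - f_meas
    return K * np.tanh(s / phi)
```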
