
    The Anthropomorphic Hand Assessment Protocol (AHAP)

    The progress in the development of anthropomorphic hands for robotic and prosthetic applications has not been matched by a parallel development of objective methods to evaluate their performance. The need for benchmarking in grasping research has been recognized by the robotics community as an important topic. In this study we present the Anthropomorphic Hand Assessment Protocol (AHAP) to address this need by providing a measure for quantifying the grasping ability of artificial hands and comparing hand designs. To this end, the AHAP uses 25 objects from the publicly available Yale-CMU-Berkeley Object and Model Set, thereby enabling replicability. It is composed of 26 postures/tasks involving grasping with the eight most relevant human grasp types and two non-grasping postures. The AHAP quantifies the anthropomorphism and functionality of artificial hands through a numerical Grasping Ability Score (GAS). The AHAP was tested with different hands: the first version of the hand of the humanoid robot ARMAR-6 in three configurations resulting from the attachment of pads to the fingertips and palm, as well as two versions of the KIT Prosthetic Hand. The benchmark was used to demonstrate the improvements of these hands in aspects such as the grasping surface, the grasp force, and the finger kinematics. The reliability, consistency, and responsiveness of the benchmark have been statistically analyzed, indicating that the AHAP is a powerful tool for evaluating and comparing different artificial hand designs.
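
    The abstract does not spell out how the GAS is computed, so the following is a minimal Python sketch under the assumption that each posture/object trial is scored on a simple 0 / 0.5 / 1 scale and that the GAS is the normalized mean expressed as a percentage; the actual AHAP scoring rules, and the posture/object names used below, are assumptions for illustration and should be taken from the paper itself.

    # Hypothetical GAS aggregation: each (posture, object) trial is assumed
    # to receive a score of 0, 0.5, or 1, and the GAS is the mean score as a
    # percentage. The real AHAP scoring scheme may differ.
    from statistics import mean

    def grasping_ability_score(trial_scores):
        """trial_scores: dict mapping (posture, object) -> score in {0, 0.5, 1}."""
        if not trial_scores:
            raise ValueError("no trials recorded")
        return 100.0 * mean(trial_scores.values())

    # Illustrative trials (names are hypothetical, not the AHAP object list).
    scores = {
        ("cylindrical_grip", "bottle"): 1.0,
        ("cylindrical_grip", "can"): 1.0,
        ("tripod", "ball"): 0.5,
        ("tripod", "marble"): 0.0,
    }
    print(f"GAS = {grasping_ability_score(scores):.1f}%")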

    More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch

    For humans, the process of grasping an object relies heavily on rich tactile feedback. Most recent robotic grasping work, however, has been based only on visual input, and thus cannot easily benefit from feedback after initiating contact. In this paper, we investigate how a robot can learn to use tactile information to iteratively and efficiently adjust its grasp. To this end, we propose an end-to-end action-conditional model that learns regrasping policies from raw visuo-tactile data. This model -- a deep, multimodal convolutional network -- predicts the outcome of a candidate grasp adjustment, and then executes a grasp by iteratively selecting the most promising actions. Our approach requires neither calibration of the tactile sensors nor any analytical modeling of contact forces, thus reducing the engineering effort required to obtain efficient grasping policies. We train our model with data from about 6,450 grasping trials on a two-finger gripper equipped with GelSight high-resolution tactile sensors on each finger. Across extensive experiments, our approach outperforms a variety of baselines at (i) estimating grasp adjustment outcomes, (ii) selecting efficient grasp adjustments for quick grasping, and (iii) reducing the amount of force applied at the fingers, while maintaining competitive performance. Finally, we study the choices made by our model and show that it has successfully acquired useful and interpretable grasping behaviors.
    Comment: 8 pages. Published in IEEE Robotics and Automation Letters (RAL). Website: https://sites.google.com/view/more-than-a-feelin
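
    As a rough illustration of the action-conditional idea described above, here is a hedged PyTorch sketch: visual and tactile images are encoded by small CNNs, fused with a candidate grasp-adjustment vector, and mapped to a predicted success probability, which a simple selection routine then maximizes over candidates. The layer sizes, the 4-dimensional action encoding, and all names are illustrative assumptions, not the architecture from the paper.

    # Hypothetical action-conditional grasp-outcome model (not the paper's network).
    import torch
    import torch.nn as nn

    class GraspOutcomeModel(nn.Module):
        def __init__(self, action_dim=4):
            super().__init__()
            def encoder():
                # Small CNN encoder shared in structure by both modalities.
                return nn.Sequential(
                    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
            self.vision_enc = encoder()
            self.tactile_enc = encoder()
            self.head = nn.Sequential(
                nn.Linear(32 + 32 + action_dim, 64), nn.ReLU(),
                nn.Linear(64, 1),
            )

        def forward(self, rgb, tactile, action):
            # rgb, tactile: (N, 3, H, W); action: (N, action_dim)
            feats = torch.cat([self.vision_enc(rgb),
                               self.tactile_enc(tactile), action], dim=1)
            return torch.sigmoid(self.head(feats))  # predicted grasp success probability

    def select_adjustment(model, rgb, tactile, candidate_actions):
        """Greedily pick the candidate adjustment with the highest predicted success.

        rgb, tactile: (1, 3, H, W) observations; candidate_actions: (N, action_dim).
        """
        with torch.no_grad():
            n = len(candidate_actions)
            probs = model(rgb.expand(n, -1, -1, -1),
                          tactile.expand(n, -1, -1, -1),
                          candidate_actions)
        return candidate_actions[probs.argmax()]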

    Densely Supervised Grasp Detector (DSGD)

    This paper presents the Densely Supervised Grasp Detector (DSGD), a deep learning framework that combines CNN structures with layer-wise feature fusion and produces grasps and their confidence scores at different levels of the image hierarchy (i.e., global, region, and pixel levels). Specifically, at the global level, DSGD uses the entire image information to predict a grasp. At the region level, DSGD uses a region proposal network to identify salient regions in the image and predicts a grasp for each salient region. At the pixel level, DSGD uses a fully convolutional network and predicts a grasp and its confidence at every pixel. During inference, DSGD selects the most confident grasp as the output. This selection from hierarchically generated grasp candidates overcomes the limitations of the individual models. DSGD outperforms state-of-the-art methods on the Cornell grasp dataset in terms of grasp accuracy. Evaluation on a multi-object dataset and real-world robotic grasping experiments show that DSGD produces highly stable grasps on a set of unseen objects in new environments. It achieves 97% grasp detection accuracy and 90% robotic grasping success rate with real-time inference speed.
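
    The inference step lends itself to a short sketch. Below is a hedged Python illustration of picking the single most confident grasp from candidates produced at the global, region, and pixel levels; the grasp parameterization (centre, angle, width) and all names are assumptions for illustration, and the actual DSGD heads are learned CNN branches rather than hand-written structures like these.

    # Hypothetical DSGD-style selection over hierarchically generated candidates.
    from dataclasses import dataclass

    @dataclass
    class GraspCandidate:
        x: float          # grasp centre, image x-coordinate (pixels)
        y: float          # grasp centre, image y-coordinate (pixels)
        theta: float      # gripper orientation (radians)
        width: float      # gripper opening (pixels)
        confidence: float # predicted grasp confidence in [0, 1]
        level: str        # "global", "region", or "pixel"

    def select_grasp(global_grasps, region_grasps, pixel_grasps):
        """Return the single most confident grasp across all hierarchy levels."""
        candidates = [*global_grasps, *region_grasps, *pixel_grasps]
        if not candidates:
            raise ValueError("no grasp candidates produced")
        return max(candidates, key=lambda g: g.confidence)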