
    Data-Driven Grasp Synthesis - A Survey

    We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on approaches based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of similarity matching to a set of previously encountered objects. Finally, for approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.
    Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotics

    More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch

    For humans, the process of grasping an object relies heavily on rich tactile feedback. Most recent robotic grasping work, however, has been based only on visual input and thus cannot easily benefit from feedback after initiating contact. In this paper, we investigate how a robot can learn to use tactile information to iteratively and efficiently adjust its grasp. To this end, we propose an end-to-end action-conditional model that learns regrasping policies from raw visuo-tactile data. This model -- a deep, multimodal convolutional network -- predicts the outcome of a candidate grasp adjustment and then executes a grasp by iteratively selecting the most promising actions. Our approach requires neither calibration of the tactile sensors nor any analytical modeling of contact forces, thus reducing the engineering effort required to obtain efficient grasping policies. We train our model with data from about 6,450 grasping trials on a two-finger gripper equipped with GelSight high-resolution tactile sensors on each finger. Across extensive experiments, our approach outperforms a variety of baselines at (i) estimating grasp adjustment outcomes, (ii) selecting efficient grasp adjustments for quick grasping, and (iii) reducing the amount of force applied at the fingers while maintaining competitive performance. Finally, we study the choices made by our model and show that it has successfully acquired useful and interpretable grasping behaviors.
    Comment: 8 pages. Published in IEEE Robotics and Automation Letters (RAL). Website: https://sites.google.com/view/more-than-a-feeling
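
    The core idea of this abstract lends itself to a short sketch: a multimodal network scores candidate grasp adjustments from raw visual and tactile input, and the robot greedily executes the highest-scoring one. Below is a minimal PyTorch sketch of that loop; the layer sizes, the 64x64 input resolution, and the 4-D action encoding are illustrative assumptions, not the authors' exact architecture.

    import torch
    import torch.nn as nn

    class GraspOutcomePredictor(nn.Module):
        """Scores a candidate grasp adjustment from raw visuo-tactile input.

        Layer sizes and input resolution are illustrative assumptions,
        not the architecture from the paper.
        """
        def __init__(self, action_dim=4):
            super().__init__()
            def encoder():  # small CNN, one copy per input modality
                return nn.Sequential(
                    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
            self.vision = encoder()   # encodes the camera image
            self.touch = encoder()    # encodes the GelSight tactile image
            # Fuse both feature vectors with the candidate action and
            # predict the probability that the adjusted grasp succeeds.
            self.head = nn.Sequential(
                nn.Linear(32 + 32 + action_dim, 64), nn.ReLU(),
                nn.Linear(64, 1),
            )

        def forward(self, image, tactile, action):
            feats = torch.cat(
                [self.vision(image), self.touch(tactile), action], dim=1)
            return torch.sigmoid(self.head(feats))

    # Greedy regrasping step: score a batch of sampled candidate
    # adjustments and pick the most promising one to execute.
    model = GraspOutcomePredictor()
    image = torch.rand(1, 3, 64, 64)
    tactile = torch.rand(1, 3, 64, 64)
    candidates = torch.randn(16, 4)  # e.g. (dx, dy, dz, dtheta)
    scores = model(image.expand(16, -1, -1, -1),
                   tactile.expand(16, -1, -1, -1), candidates)
    best_action = candidates[scores.argmax()]

    In a closed loop, the robot would execute best_action, collect a new tactile reading, and repeat until the predicted success probability is high enough to lift.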

    Novel Tactile-SIFT Descriptor for Object Shape Recognition

    Using a tactile array sensor to recognize an object often requires multiple touches at different positions. This process tends to move or rotate the object, which inevitably makes recognition harder. To cope with unknown object movement, this paper proposes a new tactile-SIFT descriptor that extracts features from gradients in the tactile image to represent objects, making the features invariant to object translation and rotation. Tactile-SIFT segments a tactile image into overlapping subpatches, each of which is represented by a dn-dimensional gradient vector, similar to the classic SIFT descriptor. Tactile-SIFT descriptors obtained from multiple touches form a dictionary of k words, and the bag-of-words method is then used to identify objects. The proposed method has been validated by classifying 18 real objects with data from an off-the-shelf tactile sensor. The parameters of the tactile-SIFT descriptor, including the dimension size dn and the number of subpatches sp, are studied. Taking both classification accuracy and time efficiency into consideration, the best performance is obtained with an 8-D descriptor and three subpatches. By employing tactile-SIFT, a recognition rate of 91.33% is achieved with a dictionary size of 50 clusters using only 15 touches.
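
    As a rough illustration of the pipeline described above, the sketch below extracts gradient-orientation histograms from overlapping subpatches, clusters them into a 50-word dictionary, and classifies objects by their bag-of-words histograms. The patch geometry, the simplified 8-bin descriptor, and the SVM classifier are assumptions for illustration; the paper's actual descriptor and classifier may differ.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    DN = 8  # descriptor dimension: 8 gradient-orientation bins

    def tactile_sift(image, patch=8, stride=4):
        """Extract DN-dimensional gradient-orientation descriptors
        from overlapping subpatches of one tactile image."""
        gy, gx = np.gradient(image.astype(float))
        mag = np.hypot(gx, gy)
        ori = np.arctan2(gy, gx)  # orientation in [-pi, pi]
        descriptors = []
        for r in range(0, image.shape[0] - patch + 1, stride):
            for c in range(0, image.shape[1] - patch + 1, stride):
                m = mag[r:r+patch, c:c+patch].ravel()
                o = ori[r:r+patch, c:c+patch].ravel()
                hist, _ = np.histogram(
                    o, bins=DN, range=(-np.pi, np.pi), weights=m)
                descriptors.append(hist / (hist.sum() + 1e-8))
        return np.array(descriptors)

    def bag_of_words(touches, kmeans):
        """Pool descriptors from several touches into one word histogram."""
        words = kmeans.predict(np.vstack([tactile_sift(t) for t in touches]))
        hist = np.bincount(words, minlength=kmeans.n_clusters)
        return hist / hist.sum()

    # Toy data: 18 objects with 15 simulated touches each (random
    # stand-ins for real tactile-array readings).
    rng = np.random.default_rng(0)
    objects = [[rng.random((16, 16)) for _ in range(15)] for _ in range(18)]

    # Build a 50-word dictionary from all training descriptors, then
    # train a classifier on per-object word histograms.
    all_desc = np.vstack([tactile_sift(t) for obj in objects for t in obj])
    kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(all_desc)
    X = np.array([bag_of_words(obj, kmeans) for obj in objects])
    clf = SVC().fit(X, np.arange(18))

    Because the descriptor is a histogram over local gradient orientations rather than absolute pixel positions, the resulting word counts change little when the object shifts under the sensor, which is the property the paper exploits.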