
    More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch

    For humans, the process of grasping an object relies heavily on rich tactile feedback. Most recent robotic grasping work, however, has been based only on visual input, and thus cannot easily benefit from feedback after initiating contact. In this paper, we investigate how a robot can learn to use tactile information to iteratively and efficiently adjust its grasp. To this end, we propose an end-to-end action-conditional model that learns regrasping policies from raw visuo-tactile data. This model -- a deep, multimodal convolutional network -- predicts the outcome of a candidate grasp adjustment, and then executes a grasp by iteratively selecting the most promising actions. Our approach requires neither calibration of the tactile sensors nor any analytical modeling of contact forces, thus reducing the engineering effort required to obtain efficient grasping policies. We train our model with data from about 6,450 grasping trials on a two-finger gripper equipped with GelSight high-resolution tactile sensors on each finger. Across extensive experiments, our approach outperforms a variety of baselines at (i) estimating grasp adjustment outcomes, (ii) selecting efficient grasp adjustments for quick grasping, and (iii) reducing the amount of force applied at the fingers, while maintaining competitive performance. Finally, we study the choices made by our model and show that it has successfully acquired useful and interpretable grasping behaviors.
    Comment: 8 pages. Published in IEEE Robotics and Automation Letters (RA-L). Website: https://sites.google.com/view/more-than-a-feelin
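    As a concrete illustration of the action-conditional idea, the sketch below builds a tiny multimodal network that scores candidate grasp adjustments and greedily picks the most promising one. This is a minimal sketch in PyTorch, not the authors' published architecture: the encoders, feature dimensions, and the (dx, dy, dz, dtheta) action parameterization are all assumptions made for illustration.

```python
# Minimal sketch (assumed architecture, not the paper's): encode vision and
# touch with small CNNs, fuse with a candidate action vector, and predict
# the probability that the adjusted grasp succeeds.
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    def __init__(self, in_ch=3, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, out_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class GraspOutcomeModel(nn.Module):
    """Action-conditional model: P(success | image, tactile, adjustment)."""
    def __init__(self, action_dim=4):
        super().__init__()
        self.vision = ImageEncoder()
        self.touch = ImageEncoder()  # GelSight output treated as an image
        self.head = nn.Sequential(
            nn.Linear(64 + 64 + action_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, rgb, tactile, action):
        z = torch.cat([self.vision(rgb), self.touch(tactile), action], dim=-1)
        return torch.sigmoid(self.head(z))

def select_action(model, rgb, tactile, candidates):
    """Greedy regrasping step: score sampled adjustments, pick the best."""
    with torch.no_grad():
        n = candidates.shape[0]
        scores = model(rgb.expand(n, -1, -1, -1),
                       tactile.expand(n, -1, -1, -1),
                       candidates)
    return candidates[scores.argmax()]

# Usage with random stand-in data:
model = GraspOutcomeModel()
rgb = torch.rand(1, 3, 64, 64)
tactile = torch.rand(1, 3, 64, 64)
candidates = torch.randn(16, 4)  # e.g. (dx, dy, dz, dtheta) adjustments
best = select_action(model, rgb, tactile, candidates)
```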

    Subject-specific finite element modelling of the human hand complex: muscle-driven simulations and experimental validation

    This paper aims to develop and validate a subject-specific framework for modelling the human hand. This was achieved by combining medical image-based finite element modelling with individualized muscle-force and kinematic measurements. Firstly, a subject-specific human hand finite element (FE) model was developed. The geometries of the phalanges, carpal bones, wrist bones, ligaments, tendons, subcutaneous tissue and skin were all included. The material properties were derived from in-vivo and in-vitro experimental results available in the literature. The boundary and loading conditions were defined from the kinematic data and muscle forces of a specific subject, captured during in-vivo grasping tests. The predicted contact pressure and contact area were in good agreement with the in-vivo test results of the same subject, with the relative errors for the contact pressures all below 20%. Finally, a sensitivity analysis was performed to investigate the effects of important modelling parameters on the predictions. The results showed that contact pressure and area were sensitive to the material properties and muscle forces. This FE human hand model can be used for a detailed, quantitative evaluation of the biomechanical and neurophysiological aspects of human hand contact during everyday perception and manipulation, and the findings can inform the future design of bionic hands and neuro-prosthetics.
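    The reported sensitivity of contact pressure and area to tissue stiffness and muscle force can be illustrated with a back-of-the-envelope Hertzian contact calculation. This is a minimal sketch under strong assumptions (a rigid sphere on an elastic half-space, with invented fingertip-like numbers), not the paper's FE pipeline.

```python
# Rough illustration (classical Hertz contact, not the paper's FE model) of
# how contact area and peak pressure scale with applied force and with the
# effective elastic modulus of the soft tissue.
import math

def hertz_contact(force_n, radius_m, e_star_pa):
    """Contact radius, contact area, and peak pressure for a rigid sphere
    pressed against an elastic half-space with effective modulus E*."""
    a = (3 * force_n * radius_m / (4 * e_star_pa)) ** (1 / 3)  # contact radius
    area = math.pi * a ** 2
    p0 = 3 * force_n / (2 * area)  # peak pressure = 1.5 * mean pressure
    return a, area, p0

# Fingertip-like numbers (assumed, order of magnitude only):
# ~8 mm tip radius, 25-100 kPa effective soft-tissue modulus.
for force in (1.0, 2.0, 4.0):             # grip force in newtons
    for e_star in (25e3, 50e3, 100e3):    # effective modulus in pascals
        a, area, p0 = hertz_contact(force, 0.008, e_star)
        print(f"F={force:.0f} N, E*={e_star/1e3:.0f} kPa -> "
              f"area={area*1e6:.1f} mm^2, p0={p0/1e3:.1f} kPa")
```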

    Capturing Hands in Action using Discriminative Salient Points and Physics Simulation

    Full text link
    Hand motion capture is a popular research field, recently gaining more attention due to the ubiquity of RGB-D sensors. However, even the most recent approaches focus on the case of a single isolated hand. In this work, we focus on hands that interact with other hands or objects, and present a framework that successfully captures motion in such interaction scenarios for both rigid and articulated objects. Our framework combines a generative model with discriminatively trained salient points to achieve a low tracking error, and with collision detection and physics simulation to achieve physically plausible estimates even in the case of occlusions and missing visual data. Since all components are unified in a single objective function which is almost everywhere differentiable, it can be optimized with standard optimization techniques. Our approach works for monocular RGB-D sequences as well as setups with multiple synchronized RGB cameras. For a qualitative and quantitative evaluation, we captured 29 sequences with a large variety of interactions and up to 150 degrees of freedom.
    Comment: Accepted for publication by the International Journal of Computer Vision (IJCV) on 16.02.2016 (submitted on 17.10.14). Combines an ECCV'12 multi-camera RGB paper and a monocular RGB-D GCPR'14 hand tracking paper into a single framework, with several extensions, additional experiments, and details.
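    The "single almost-everywhere-differentiable objective" idea can be sketched on a toy problem: align a rigid point set to observed salient points while a soft penalty keeps it from penetrating an obstacle, all minimized with an off-the-shelf gradient optimizer. Everything here (the 2-D rigid pose, the penalty weight, the optimizer choice) is an assumption for illustration, not the paper's hand model.

```python
# Toy sketch (2-D rigid alignment, not the paper's articulated hand model):
# one objective = salient-point data term + soft collision penalty, both
# differentiable, minimized jointly with a standard gradient optimizer.
import torch

template = torch.tensor([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])  # model points
observed = torch.tensor([[2.1, 1.0], [2.8, 1.7], [2.1, 2.4]])  # salient points
obstacle_c = torch.tensor([1.5, 1.5])  # circular "object" the model
obstacle_r = 0.5                       # must not penetrate

theta = torch.zeros(3, requires_grad=True)  # pose: (angle, tx, ty)

def transform(points, theta):
    c, s = torch.cos(theta[0]), torch.sin(theta[0])
    rot = torch.stack([torch.stack([c, -s]), torch.stack([s, c])])
    return points @ rot.T + theta[1:]

def objective(theta):
    pts = transform(template, theta)
    data = ((pts - observed) ** 2).sum()               # correspondence term
    dist = torch.linalg.norm(pts - obstacle_c, dim=1)  # distance to obstacle
    collision = torch.clamp(obstacle_r - dist, min=0).pow(2).sum()
    return data + 10.0 * collision                     # one unified objective

opt = torch.optim.Adam([theta], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    loss = objective(theta)
    loss.backward()
    opt.step()
print("final loss:", float(loss), "pose:", theta.detach().tolist())
```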