
    Active End-Effector Pose Selection for Tactile Object Recognition through Monte Carlo Tree Search

    This paper considers the problem of active object recognition using touch only. The focus is on adaptively selecting a sequence of wrist poses that achieves accurate recognition by enclosure grasps. It seeks to minimize the number of touches and maximize recognition confidence. The actions are formulated as wrist poses relative to each other, making the algorithm independent of absolute workspace coordinates. The optimal sequence is approximated by Monte Carlo tree search. We demonstrate results in a physics engine and on a real robot. In the physics engine, most object instances were recognized in at most 16 grasps. On a real robot, our method recognized objects in 2--9 grasps and outperformed a greedy baseline.
    Comment: Accepted to International Conference on Intelligent Robots and Systems (IROS) 201
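    The Monte Carlo tree search loop behind this kind of pose selection can be sketched generically. The following is a minimal UCT (upper-confidence-bound tree) toy, not the authors' implementation: the discrete pose set and the reward stub (which simply favours one starting pose) are illustrative stand-ins for the paper's recognition-confidence objective.

    ```python
    import math
    import random

    class Node:
        """Search-tree node; `pose` is the (hypothetical) wrist pose taken to reach it."""
        def __init__(self, pose, parent=None):
            self.pose = pose
            self.parent = parent
            self.children = []
            self.visits = 0
            self.value = 0.0  # accumulated reward (stand-in for recognition confidence)

        def ucb1(self, c=1.4):
            # UCB1 balances exploitation (mean reward) against exploration
            if self.visits == 0:
                return float("inf")
            return (self.value / self.visits
                    + c * math.sqrt(math.log(self.parent.visits) / self.visits))

    def path_poses(node):
        """Sequence of poses from the root down to `node`."""
        poses = []
        while node.parent is not None:
            poses.append(node.pose)
            node = node.parent
        return list(reversed(poses))

    def mcts(root, actions, reward_fn, iterations=200):
        """Generic UCT loop: select, expand, evaluate, backpropagate."""
        for _ in range(iterations):
            node = root
            # Selection: descend by UCB1 while the current node is fully expanded
            while node.children and len(node.children) == len(actions):
                node = max(node.children, key=Node.ucb1)
            # Expansion: try one action not yet attempted from this node
            tried = {c.pose for c in node.children}
            untried = [a for a in actions if a not in tried]
            if untried:
                node = Node(random.choice(untried), parent=node)
                node.parent.children.append(node)
            # Evaluation: score the pose sequence reached so far
            reward = reward_fn(path_poses(node))
            # Backpropagation: credit the reward all the way to the root
            while node is not None:
                node.visits += 1
                node.value += reward
                node = node.parent
        # Recommend the most-visited first action
        return max(root.children, key=lambda c: c.visits).pose

    # Toy run: four candidate poses; the stub reward favours starting at 90
    random.seed(0)
    best = mcts(Node(pose=None), actions=[0, 45, 90, 135],
                reward_fn=lambda poses: 1.0 if poses[:1] == [90] else 0.1)
    print(best)  # expected: 90
    ```

    Recommending the most-visited child (rather than the highest-mean one) is a common robustness choice in UCT, since visit counts are less noisy than value estimates.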

    Tactile Mapping and Localization from High-Resolution Tactile Imprints

    This work studies the problem of shape reconstruction and object localization using a vision-based tactile sensor, GelSlim. The main contributions are the recovery of local shapes from contact, an approach to reconstruct the tactile shape of objects from tactile imprints, and an accurate method for object localization of previously reconstructed objects. The algorithms can be applied to a large variety of 3D objects and provide accurate tactile feedback for in-hand manipulation. Results show that by exploiting the dense tactile information we can reconstruct the shape of objects with high accuracy and do on-line object identification and localization, opening the door to reactive manipulation guided by tactile sensing. We provide videos and supplemental information on the project's website http://web.mit.edu/mcube/research/tactile_localization.html.
    Comment: ICRA 2019, 7 pages, 7 figures. Website: http://web.mit.edu/mcube/research/tactile_localization.html Video: https://youtu.be/uMkspjmDbq

    The Recurrent Model of Bodily Spatial Phenomenology

    In this paper, we introduce and defend the recurrent model for understanding bodily spatial phenomenology. While Longo, Azañón and Haggard (2010) propose a bottom-up model, Bermúdez (2017) emphasizes the top-down aspect of the information processing loop. We argue that both are only half of the story. Section 1 introduces what the issues are. Section 2 starts by explaining why the top-down, descending direction is necessary, with an illustration from the ‘body-based tactile rescaling’ paradigm (de Vignemont, Ehrsson and Haggard, 2005). It then argues that the bottom-up, ascending direction is also necessary, and substantiates this view with recent research on skin space and the tactile field (Haggard et al., 2017). Section 3 discusses the model's application to body ownership and bodily self-representation. Implications also extend to topics such as sense modality individuation (Macpherson, 2011), the constancy-based view of perception (Burge, 2010), and the perception/cognition divide (Firestone and Scholl, 2016).

    Functional and structural brain differences associated with mirror-touch synaesthesia

    Observing touch is known to activate regions of the somatosensory cortex, but the interpretation of this finding is controversial (e.g. does it reflect the simulated action of touching or the simulated reception of touch?). For most people, observing touch is not linked to reported experiences of feeling touch, but in some people it is (mirror-touch synaesthetes). We conducted an fMRI study in which participants (mirror-touch synaesthetes, controls) watched movies of stimuli (face, dummy, object) being touched or approached. In addition, we examined whether mirror-touch synaesthesia is associated with local changes of grey and white matter volume in the brain using VBM (voxel-based morphometry). Both synaesthetes and controls activated the somatosensory system (primary and secondary somatosensory cortices, SI and SII) when viewing touch, and the same regions were activated (by a separate localiser) when feeling touch; that is, there is a mirror system for touch. However, when comparing the two groups, we found evidence that SII plays a particularly important role in mirror-touch synaesthesia: in synaesthetes, but not in controls, posterior SII was active when watching touch to a face (in addition to SI and the posterior temporal lobe); activity in SII correlated with subjective intensity measures of mirror-touch synaesthesia (taken outside the scanner); and we observed an increase in grey matter volume within the SII of the synaesthetes' brains. In addition, the synaesthetes showed hypo-activity in posterior SII when watching touch to a dummy. We conclude that the secondary somatosensory cortex plays a key role in this form of synaesthesia.