
    Brain-Inspired Coding of Robot Body Schema Through Visuo-Motor Integration of Touched Events

    Representing objects in space is difficult because sensorimotor events are anchored in different reference frames, which can be eye-, arm-, or target-centered. In the brain, Gain-Field (GF) neurons in the parietal cortex are involved in computing the spatial transformations needed to align tactile, visual, and proprioceptive signals. In reaching tasks, these GF neurons exploit a mechanism based on multiplicative interaction to bind touch events on the hand with simultaneous visual and proprioceptive information. By doing so, they can infer new reference frames to represent dynamically the location of body parts in visual space (i.e., the body schema) and of nearby targets (i.e., the peripersonal space). Along these lines, we propose a neural model based on GF neurons that integrates tactile events with arm postures and visual locations to construct hand- and target-centered receptive fields in visual space. In robotic experiments using an artificial skin, we show how our neural architecture reproduces the behavior of parietal neurons (1) by dynamically encoding the body schema of our robotic arm without any visual tags on it and (2) by estimating the relative orientation and distance of targets to it. We demonstrate how tactile information facilitates the integration of visual and proprioceptive signals for constructing the body space.
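    The multiplicative interaction the abstract describes lends itself to a compact sketch. The snippet below is a minimal illustration, assuming one-dimensional retinal and joint spaces with Gaussian population codes; all names, sizes, and tuning widths are hypothetical and not taken from the paper. Each gain-field unit multiplies a visual tuning curve by a postural gain, and a tactile event gates the binding.

```python
import numpy as np

# Minimal gain-field (GF) sketch, assuming 1-D retinal and joint spaces;
# names and sizes are illustrative assumptions, not the authors' code.

def gaussian_population(x, centers, sigma):
    """Population code: Gaussian tuning curves over preferred values."""
    return np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))

n = 50
retinal_prefs = np.linspace(-1.0, 1.0, n)  # preferred visual locations (eye-centered)
joint_prefs = np.linspace(-1.0, 1.0, n)    # preferred arm angles (proprioception)

def gain_field_layer(visual_x, joint_theta, touched, sigma=0.1):
    """Each GF unit multiplies a visual tuning curve by a postural gain;
    a tactile event gates the binding of the two modalities."""
    v = gaussian_population(visual_x, retinal_prefs, sigma)   # (n,)
    p = gaussian_population(joint_theta, joint_prefs, sigma)  # (n,)
    gf = np.outer(v, p)                                       # multiplicative interaction
    return gf * float(touched)                                # bind only on touch

# Example: a touch seen at retinal location 0.3 with the arm at angle -0.2
activity = gain_field_layer(0.3, -0.2, touched=True)
peak = np.unravel_index(activity.argmax(), activity.shape)
print("peak GF unit (visual, joint):",
      retinal_prefs[peak[0]], joint_prefs[peak[1]])
```

    The peak of the resulting activity map jointly encodes where the touch was seen and how the arm was posed, which is the kind of joint representation from which hand-centered receptive fields can be read out.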

    Neural learning of the topographic tactile sensory information of an artificial skin through a self-organizing map

    The sense of touch is considered an essential feature for robots to improve the quality of their physical and social interactions. For instance, tactile devices have to be fast enough to interact in real time, robust against noise to process rough sensory information, and adaptive enough to represent the structure and topography of the tactile sensor itself, i.e., the shape of the sensor surface and its dynamic resolution. In this paper, we conduct experiments with a self-organizing map (SOM) neural network that adapts to the structure of a tactile sheet and the spatial resolution of the input tactile device; this adaptation is faster and more robust against noise than image reconstruction techniques based on Electrical Impedance Tomography (EIT). Other advantages of this bio-inspired reconstruction algorithm are its simple mathematical formulation and its ability to self-calibrate its topographical organization without any a priori information about the input dynamics. Our results show that the spatial patterns of single and multiple contact points can be acquired and localized with enough speed and precision for pattern-recognition tasks during physical contact.
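    As a rough illustration of the approach, the sketch below trains a small SOM on synthetic tactile frames. The 8x8 taxel sheet, grid size, decay schedules, and Gaussian contact model are assumptions for illustration only, not the paper's setup; the point is that neighboring map units come to respond to neighboring contact locations, recovering the sensor topography without prior calibration.

```python
import numpy as np

# A rough SOM sketch, assuming an 8x8 taxel sheet and synthetic Gaussian
# contact blobs; all parameters are illustrative, not the paper's values.

rng = np.random.default_rng(0)
sheet = 8                             # taxel sheet is sheet x sheet
n_taxels = sheet * sheet              # dimensionality of one tactile frame
grid_w = grid_h = 8                   # SOM grid size
n_units = grid_w * grid_h

weights = rng.random((n_units, n_taxels))
coords = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)], float)

def contact_frame(cy, cx, width=1.5):
    """Synthetic tactile frame: Gaussian pressure blob centered at (cy, cx)."""
    yy, xx = np.mgrid[0:sheet, 0:sheet]
    blob = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * width ** 2))
    return blob.ravel()

def train_step(x, t, n_steps, lr0=0.5, sigma0=3.0):
    lr = lr0 * np.exp(-t / n_steps)                    # decaying learning rate
    sigma = sigma0 * np.exp(-t / n_steps)              # shrinking neighborhood
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
    d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)     # grid distance to BMU
    h = np.exp(-d2 / (2 * sigma ** 2))                 # neighborhood kernel
    weights[:] += lr * h[:, None] * (x - weights)      # pull units toward input

n_steps = 3000
for t in range(n_steps):
    cy, cx = rng.uniform(0, sheet - 1, size=2)
    train_step(contact_frame(cy, cx), t, n_steps)

# After training, each unit's weight vector resembles a localized contact
# pattern, i.e., the map has self-calibrated to the sensor's topography.
```

    Because the SOM only needs a stream of raw activation vectors, the same loop would apply to a sensor of unknown shape, which is what makes the approach attractive compared with EIT-style reconstruction that requires an explicit physical model.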