    The rubber hand illusion in microgravity and water immersion

    Our body has evolved in terrestrial gravity, and altered gravitational conditions may affect the sense of body ownership (SBO). By means of the rubber hand illusion (RHI), we investigated the SBO during water immersion and parabolic flights, where unconventional gravity is experienced. Our results show that unconventional gravity conditions remodulate the relative weights of visual, proprioceptive, and vestibular inputs in favor of vision, thus inducing an increased susceptibility to the RHI.

    Peripersonal space representation develops independently from visual experience

    Our daily-life actions are typically driven by vision. When acting upon an object, we need to represent its visual features (e.g., shape, orientation) and to map them into our own peripersonal space. But what happens with people who have never had any visual experience? How can they map object features into their own peripersonal space? Do they do it differently from sighted agents? To tackle these questions, we carried out a series of behavioral experiments with sighted and congenitally blind subjects. We took advantage of a spatial alignment effect paradigm, which typically refers to a decrease in reaction times when subjects perform an action (e.g., a reach-to-grasp pantomime) congruent with the one afforded by a presented object. To systematically examine peripersonal space mapping, we presented visual or auditory affording objects both within and outside subjects' reach. The results showed that sighted and congenitally blind subjects did not differ in mapping objects into their own peripersonal space. Strikingly, this mapping also occurred when objects were presented outside subjects' reach but within the peripersonal space of another agent. This suggests that (the lack of) visual experience does not significantly affect the development of the representation of either one's own or others' peripersonal space.

    The effects of visual control and distance in modulating peripersonal spatial representation

    In the presence of vision, finalized motor acts can trigger spatial remapping, i.e., reference frame transformations that allow for better interaction with targets. However, it remains unclear how the peripersonal space is encoded and remapped depending on the availability of visual feedback and on the target position within the individual's reachable space, and which cerebral areas subserve such processes. Here, functional magnetic resonance imaging (fMRI) was used to examine neural activity while healthy young participants performed reach-to-grasp movements with and without visual feedback and at different distances of the target from the effector (near the hand, about 15 cm from the starting position, vs. far from the hand, about 30 cm from the starting position). Brain response in the superior parietal lobule bilaterally, in the right dorsal premotor cortex, and in the anterior part of the right inferior parietal lobule was significantly greater during visually guided grasping of targets located at the far distance than during grasping of targets located near the hand. In the absence of visual feedback, the inferior parietal lobule exhibited greater activity during grasping of targets at the near than at the far distance. These results suggest that, in the presence of visual feedback, a visuo-motor circuit integrates visuo-motor information when targets are located farther away. Conversely, in the absence of visual feedback, encoding of space may demand multisensory remapping processes, even in the case of more proximal targets.

    The Remapping of Time by Active Tool-Use

    Multiple, action-based space representations each depend on the extent to which action is possible toward a specific sector of space, such as near/reachable and far/unreachable space. Studies on tool-use have revealed that the boundaries between these representations are dynamic. Space is not only multidimensional and dynamic; it is also known to interact with other dimensions of magnitude, such as time. However, whether time operates on similar action-driven multiple representations, and whether it can be modulated by tool-use, is still unknown. To address these issues, healthy participants performed a time bisection task in two spatial positions (near and far space) before and after an active tool-use training, which consisted of performing goal-directed actions while holding a tool with the right hand (Experiment 1). Before training, the perceived duration of stimuli was influenced by their spatial position as defined by action. Hence, a dissociation emerged between near/reachable and far/unreachable space. Strikingly, this dissociation disappeared after the active tool-use training, since temporal stimuli were now perceived as nearer. This remapping was not found when a passive tool-use training was executed (Experiment 2) or when the active tool-use training was performed with the left hand (Experiment 3). Moreover, no time remapping was observed following an equivalent active hand-training without a tool (Experiment 4). Taken together, our findings reveal that time processing is based on action-driven multiple representations. The dynamic nature of these representations is demonstrated by the remapping of time, which is action- and effector-dependent.