
    If I Were You: Perceptual Illusion of Body Swapping

    The concept of an individual swapping his or her body with that of another person has captured the imagination of writers and artists for decades. Although this topic has not been the subject of scientific investigation, it exemplifies the fundamental question of why we have an ongoing experience of being located inside our bodies. Here we report a perceptual illusion of body-swapping that directly addresses this issue. Manipulation of the visual perspective, in combination with the receipt of correlated multisensory information from the body, was sufficient to trigger the illusion that another person's body or an artificial body was one's own. This effect was so strong that people could experience being in another person's body when facing their own body and shaking hands with it. Our results are of fundamental importance because they identify the perceptual processes that produce the feeling of ownership of one's body.

    When Right Feels Left: Referral of Touch and Ownership between the Hands

    Feeling touch on a body part is paradigmatically considered to require stimulation of tactile afferents from the body part in question, at least in healthy non-synaesthetic individuals. In contrast to this view, we report a perceptual illusion where people experience “phantom touches” on a right rubber hand when they see it brushed simultaneously with brushes applied to their left hand. Such illusory duplication and transfer of touch from the left to the right hand was only elicited when a homologous (i.e., left and right) pair of hands was brushed in synchrony for an extended period of time. This stimulation caused the majority of our participants to perceive the right rubber hand as their own and to sense two distinct touches – one located on the right rubber hand and the other on their left (stimulated) hand. This effect was supported by quantitative subjective reports in the form of questionnaires, behavioral data from a task in which participants pointed to the felt location of their right hand, and physiological evidence obtained by skin conductance responses when threatening the model hand. Our findings suggest that visual information augments subthreshold somatosensory responses in the ipsilateral hemisphere, thus producing a tactile experience from the non-stimulated body part. This finding is important because it reveals a new bilateral multisensory mechanism for tactile perception and limb ownership.

    Does the Integration of Haptic and Visual Cues Reduce the Effect of a Biased Visual Reference Frame on the Subjective Head Orientation?

    The selection of appropriate frames of reference (FOR) is a key factor in the elaboration of spatial perception and the production of robust interaction with our environment. The extent to which we perceive the head axis orientation (subjective head orientation, SHO) with both accuracy and precision likely contributes to the efficiency of these spatial interactions. A first goal of this study was to investigate the relative contribution of the visual and the egocentric (centre-of-mass) FORs to SHO processing. A second goal was to investigate humans' ability to report SHO in various response modalities (visual, haptic and visuo-haptic), and the way these modalities modify reliance on either the visual or the egocentric FOR. A third goal was to ask whether subjects combined visual and haptic cues optimally to increase SHO certainty and to decrease the disruptive effect of the deviated FORs. Thirteen subjects were asked to indicate their SHO while the visual and/or egocentric FORs were deviated. Four results emerged from our study. First, visual rod settings to SHO were altered by the tilted visual frame but not by the egocentric FOR alteration, whereas haptic settings were altered neither by the egocentric FOR alteration nor by the tilted visual frame; these group-level results were qualified by individual analyses. Second, visual and egocentric FOR dependency appeared to be negatively correlated. Third, enriching the response modality appeared to improve SHO. Fourth, several rules for combining the visuo-haptic cues, such as Maximum Likelihood Estimation (MLE), Winner-Take-All (WTA) or the Unweighted Mean (UWM), seemed to account for the SHO improvements; however, the UWM rule best accounted for the improvement of the visuo-haptic estimates, especially in situations of high FOR incongruence. Finally, the data also indicated that FOR reliance resulted from application of the UWM rule, most notably in the visual-dependent subject. Conclusions: Taken together, these findings emphasize the importance of identifying individual spatial FOR preferences in order to assess the efficiency of our interactions with the environment whilst performing spatial tasks.
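
    To make the three candidate combination rules concrete, the sketch below contrasts their predictions for a single bimodal trial. The Python code and all numerical values (cue estimates and variances) are illustrative assumptions, not data or analysis code from the study.

    # Hypothetical single-trial estimates of subjective head orientation (deg)
    # and their variances; values chosen only to illustrate the three rules.
    visual_est, visual_var = 4.0, 9.0    # visual setting, biased by the tilted frame
    haptic_est, haptic_var = 1.0, 4.0    # haptic setting

    # Maximum Likelihood Estimation (MLE): reliability-weighted average
    w_vis = (1 / visual_var) / (1 / visual_var + 1 / haptic_var)
    mle_est = w_vis * visual_est + (1 - w_vis) * haptic_est
    mle_var = 1 / (1 / visual_var + 1 / haptic_var)   # predicted variance reduction

    # Winner-Take-All (WTA): keep only the more reliable cue
    wta_est = visual_est if visual_var < haptic_var else haptic_est

    # Unweighted Mean (UWM): simple average, ignoring cue reliability
    uwm_est = (visual_est + haptic_est) / 2

    print(mle_est, wta_est, uwm_est)   # approx. 1.92, 1.0, 2.5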

    Rubber Hands Feel Touch, but Not in Blind Individuals

    Psychology and neuroscience have a long-standing tradition of studying blind individuals to investigate how visual experience shapes perception of the external world. Here, we study how blind people experience their own body by exposing them to a multisensory body illusion: the somatic rubber hand illusion. In this illusion, healthy blindfolded participants experience that they are touching their own right hand with their left index finger, when in fact they are touching a rubber hand with their left index finger while the experimenter touches their right hand in a synchronized manner (Ehrsson et al. 2005). We compared the strength of this illusion in a group of blind individuals (n = 10), all of whom had experienced severe visual impairment or complete blindness from birth, and a group of age-matched blindfolded sighted participants (n = 12). The illusion was quantified subjectively using questionnaires and behaviorally by asking participants to point to the felt location of the right hand. The results showed that the sighted participants experienced a strong illusion, whereas the blind participants experienced no illusion at all, a difference that was evident in both tests employed. A further experiment testing the participants' basic ability to localize the right hand in space without vision (proprioception) revealed no difference between the two groups. Taken together, these results suggest that blind individuals with impaired visual development have a more veridical percept of self-touch and a less flexible and dynamic representation of their own body in space compared to sighted individuals. We speculate that the multisensory brain systems that re-map somatosensory signals onto external reference frames are less developed in blind individuals and therefore do not allow efficient fusion of tactile and proprioceptive signals from the two upper limbs into a single illusory experience of self-touch as in sighted individuals.

    Monkeys and Humans Share a Common Computation for Face/Voice Integration

    Speech production involves the movement of the mouth and other regions of the face, resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, this should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, whether they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models, such as the principle of inverse effectiveness and a “race” model, failed to account for their behavior patterns. In contrast, a “superposition model”, positing the linear summation of activity patterns in response to the visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates.
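
    As a rough illustration of the distinction drawn above, the toy simulation below contrasts a race model (the faster of two independent unimodal detection processes determines the response) with a superposition model (visual and auditory activity sum linearly before reaching a common threshold). The accumulator form, rates and threshold are assumptions chosen for illustration and are not the authors' fitted model.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    threshold = 1.0                                           # arbitrary activation threshold

    # Hypothetical rise-to-threshold accumulators: detection time = threshold / rate
    rate_aud = np.clip(rng.normal(5.0, 1.0, n), 0.1, None)    # auditory-alone rate
    rate_vis = np.clip(rng.normal(4.0, 1.0, n), 0.1, None)    # visual-alone rate
    rt_aud = threshold / rate_aud
    rt_vis = threshold / rate_vis

    # Race model: independent unimodal processes, the fastest one triggers detection
    rt_race = np.minimum(rt_aud, rt_vis)

    # Superposition model: activity patterns sum linearly, so the rates add
    rt_super = threshold / (rate_aud + rate_vis)

    print(rt_aud.mean(), rt_vis.mean(), rt_race.mean(), rt_super.mean())
    # Superposition predicts faster bimodal detection than the race model allows.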

    Multisensory effects on somatosensation: a trimodal visuo-vestibular-tactile interaction

    Vestibular information about self-motion is combined with other sensory signals. Previous research described both visuo-vestibular and vestibular-tactile bilateral interactions, but the simultaneous interaction between all three sensory modalities has not been explored. Here we exploit a previously reported visuo-vestibular integration to investigate multisensory effects on tactile sensitivity in humans. Tactile sensitivity was measured during passive whole body rotations alone or in conjunction with optic flow, creating either purely vestibular or visuo-vestibular sensations of self-motion. Our results demonstrate that tactile sensitivity is modulated by perceived self-motion, as provided by a combined visuo-vestibular percept and not by the visual and vestibular cues independently. We propose a hierarchical multisensory interaction that underpins somatosensory modulation: visual and vestibular cues are first combined to produce a multisensory self-motion percept. Somatosensory processing is then enhanced according to the degree of perceived self-motion.
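
    The proposed hierarchy can be summarised in two stages: visual and vestibular cues are fused into a single self-motion percept, and tactile sensitivity is then scaled by that percept. The sketch below is a minimal illustration of this idea; the reliability-weighted fusion, the linear sensitivity gain and every numerical value are assumptions, not the authors' model.

    def combined_self_motion(vestibular, visual, vestibular_var=1.0, visual_var=2.0):
        """Stage 1: reliability-weighted fusion of two self-motion estimates (deg/s)."""
        w = (1 / vestibular_var) / (1 / vestibular_var + 1 / visual_var)
        return w * vestibular + (1 - w) * visual

    def tactile_sensitivity(perceived_motion, baseline=1.0, gain=0.05):
        """Stage 2: hypothetical tactile sensitivity (e.g. d') that grows with the
        strength of the combined self-motion percept, not with either cue alone."""
        return baseline + gain * abs(perceived_motion)

    percept = combined_self_motion(vestibular=10.0, visual=14.0)   # deg/s
    print(percept, tactile_sensitivity(percept))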

    Peripersonal Space and Margin of Safety around the Body: Learning Visuo-Tactile Associations in a Humanoid Robot with Artificial Skin

    This paper investigates a biologically motivated model of peripersonal space through its implementation on a humanoid robot. Guided by the present understanding of the neurophysiology of the fronto-parietal system, we developed a computational model inspired by the receptive fields of polymodal neurons identified, for example, in brain areas F4 and VIP. The experiments on the iCub humanoid robot show that the peripersonal space representation i) can be learned efficiently and in real time via a simple interaction with the robot, ii) can lead to the generation of behaviors like avoidance and reaching, and iii) can contribute to understanding the biological principle of motor equivalence. More specifically, with respect to i), the present model contributes to hypothesizing a learning mechanism for peripersonal space. In relation to point ii), we show how a relatively simple controller can exploit the learned receptive fields to generate either avoidance of, or reaching toward, an incoming stimulus, and for iii) we show how the robot can select arbitrary body parts as the controlled end-point of an avoidance or reaching movement.
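
    A minimal sketch of what learning such a visuo-tactile association could look like: the probability of contact with one skin patch is estimated as a function of the distance of an approaching object, from paired visual (distance) and tactile (contact) events. The binning, the update rule and the simulated interaction below are illustrative assumptions and are not the iCub implementation described in the paper.

    import numpy as np

    bins = np.linspace(0.0, 0.4, 9)       # distance bins (metres) around one skin patch
    seen = np.zeros(len(bins) - 1)        # times an object was seen in each bin
    touched = np.zeros(len(bins) - 1)     # times that approach ended in contact

    def update(distance_m, contact):
        """One visuo-tactile event: object seen at distance_m, with contact outcome."""
        i = int(np.clip(np.digitize(distance_m, bins) - 1, 0, len(seen) - 1))
        seen[i] += 1
        touched[i] += float(contact)

    # Simulated interaction: nearer approaches are more likely to end in touch
    rng = np.random.default_rng(1)
    for _ in range(500):
        d = rng.uniform(0.0, 0.4)
        update(d, contact=rng.random() < np.exp(-d / 0.1))

    p_contact = touched / np.maximum(seen, 1)   # learned receptive-field profile
    print(np.round(p_contact, 2))               # high near the body, decaying with distance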

    The vestibular system modulates the contributions of head and torso to egocentric spatial judgements

    Egocentric representations allow us to describe the external world as experienced from an individual’s bodily location. We recently developed a novel method of quantifying the weight given to different body parts in egocentric judgements (the Misalignment Paradigm). We found that both the head and the torso contribute to simple alter-egocentric spatial judgements. We hypothesised that artificial stimulation of the vestibular system would provide a head-related signal, which might affect the weighting given to the head in egocentric spatial judgements. Bipolar Galvanic Vestibular Stimulation (GVS) was applied during the Misalignment Paradigm. A sham stimulation condition was also included to control for non-specific effects. Our data show that the weight given to the head was increased during left-anodal/right-cathodal GVS compared to the opposite GVS polarity and to sham stimulation. That is, the GVS polarity that preferentially activates vestibular areas in the right cerebral hemisphere influenced the relative weightings of head and torso in spatial judgements.
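
    The weighting idea behind the Misalignment Paradigm can be sketched as a weighted average of head and torso midlines, with GVS assumed to raise the head weight. The function below and all weights and angles are hypothetical illustrations, not the paradigm's actual analysis or the study's fitted values.

    def egocentric_midline(head_deg, torso_deg, head_weight):
        """Egocentric reference direction as a weighted mix of head and torso (degrees)."""
        return head_weight * head_deg + (1 - head_weight) * torso_deg

    head, torso = 15.0, 0.0   # head rotated 15 degrees relative to the torso
    print(egocentric_midline(head, torso, head_weight=0.40))   # e.g. sham stimulation
    print(egocentric_midline(head, torso, head_weight=0.55))   # e.g. left-anodal/right-cathodal GVS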