
    Rapid enhancement of touch from non-informative vision of the hand

    Processing in one sensory modality may modulate processing in another. Here we investigate how simply viewing the hand can influence the sense of touch. Previous studies showed that non-informative vision of the hand enhances tactile acuity, relative to viewing an object at the same location. However, it remains unclear whether this Visual Enhancement of Touch (VET) involves a phasic enhancement of tactile processing circuits triggered by the visual event of seeing the hand, or more prolonged, tonic neuroplastic changes, such as recruitment of additional cortical areas for tactile processing. We recorded somatosensory evoked potentials (SEPs) evoked by electrical stimulation of the right middle finger, both before and shortly after viewing either the right hand, or a neutral object presented via a mirror. Crucially, and unlike prior studies, our visual exposures were unpredictable and brief, in addition to being non-informative about touch. Viewing the hand, as opposed to viewing an object, enhanced tactile spatial discrimination measured using grating orientation judgements, and also the P50 SEP component, which has been linked to early somatosensory cortical processing. This was a trial-specific, phasic effect, occurring within a few seconds of each visual onset, rather than an accumulating, tonic effect. Thus, somatosensory cortical modulation can be triggered even by a brief, non-informative glimpse of one’s hand. Such rapid multisensory modulation reveals novel aspects of the specialised brain systems for functionally representing the body

    Representation of Neck Velocity and Neck–Vestibular Interactions in Pursuit Neurons in the Simian Frontal Eye Fields

    The smooth pursuit system must interact with the vestibular system to maintain the accuracy of eye movements in space (i.e., gaze-movement) during head movement. Normally, the head moves on the stationary trunk. Vestibular signals cannot distinguish whether the head or whole body is moving. Neck proprioceptive inputs provide information about head movements relative to the trunk. Previous studies have shown that the majority of pursuit neurons in the frontal eye fields (FEF) carry visual information about target velocity, vestibular information about whole-body movements, and signal eye- or gaze-velocity. However, it is unknown whether FEF neurons carry neck proprioceptive signals. By passive trunk-on-head rotation, we tested neck inputs to FEF pursuit neurons in 2 monkeys. The majority of FEF pursuit neurons tested that had horizontal preferred directions (87%) responded to horizontal trunk-on-head rotation. The modulation consisted predominantly of velocity components. Discharge modulation during pursuit and trunk-on-head rotation added linearly. During passive head-on-trunk rotation, modulation to vestibular and neck inputs also added linearly in most neurons, although in half of gaze-velocity neurons neck responses were strongly influenced by the context of neck rotation. Our results suggest that neck inputs could contribute to representing eye- and gaze-velocity FEF signals in trunk coordinates

    Fronto-parietal brain responses to visuotactile congruence in an anatomical reference frame

    Spatially and temporally congruent visuotactile stimulation of a fake hand together with one’s real hand may result in an illusory self-attribution of the fake hand. Although this illusion relies on a representation of the two touched body parts in external space, there is tentative evidence that, for the illusion to occur, the seen and felt touches also need to be congruent in an anatomical reference frame. We used functional magnetic resonance imaging and a somatotopical, virtual reality-based setup to isolate the neuronal basis of such a comparison. Participants’ index or little finger was synchronously touched with the index or little finger of a virtual hand, under congruent or incongruent orientations of the real and virtual hands. The left ventral premotor cortex responded significantly more strongly to visuotactile co-stimulation of the same versus different fingers of the virtual and real hand. Conversely, the left anterior intraparietal sulcus responded significantly more strongly to co-stimulation of different versus same fingers. Both responses were independent of hand orientation congruence and of spatial congruence of the visuotactile stimuli. Our results suggest that fronto-parietal areas previously associated with multisensory processing within peripersonal space and with tactile remapping evaluate the congruence of visuotactile stimulation on the body according to an anatomical reference frame

    Multisensory Integration in Self Motion Perception

    Self motion perception involves the integration of visual, vestibular, somatosensory and motor signals. This article reviews the findings from single unit electrophysiology, functional and structural magnetic resonance imaging and psychophysics to present an update on how the human and non-human primate brain integrates multisensory information to estimate one's position and motion in space. The results indicate that there is a network of regions in the non-human primate and human brain that processes self motion cues from the different sense modalities

    Multiple foci of spatial attention in multimodal working memory

    The maintenance of sensory information in working memory (WM) is mediated by the attentional activation of stimulus representations that are stored in perceptual brain regions. Using event-related potentials (ERPs), we measured tactile and visual contralateral delay activity (tCDA/CDA components) in a bimodal WM task to concurrently track the attention-based maintenance of information stored in anatomically segregated (somatosensory and visual) brain areas. Participants received tactile and visual sample stimuli on both sides and, in different blocks, memorized these samples on the same side or on opposite sides. After a retention delay, memory was unpredictably tested for touch or vision. In same-side blocks, tCDA and CDA components simultaneously emerged over the same hemisphere, contralateral to the memorized tactile/visual sample set. In opposite-side blocks, these two components emerged over different hemispheres, but had the same sizes and onset latencies as in the same-side condition. This finding indicates that distinct foci of tactile and visual spatial attention were concurrently maintained on task-relevant stimulus representations in WM. The independence of spatially-specific biasing mechanisms for tactile and visual WM content suggests that multimodal information is stored in distributed perceptual brain areas that are subject to modality-specific control processes, which can operate simultaneously and largely independently of each other

    If I Were You: Perceptual Illusion of Body Swapping

    The concept of an individual swapping his or her body with that of another person has captured the imagination of writers and artists for decades. Although this topic has not been the subject of investigation in science, it exemplifies the fundamental question of why we have an ongoing experience of being located inside our bodies. Here we report a perceptual illusion of body-swapping that directly addresses this issue. Manipulation of the visual perspective, in combination with the receipt of correlated multisensory information from the body, was sufficient to trigger the illusion that another person's body or an artificial body was one's own. This effect was so strong that people could experience being in another person's body when facing their own body and shaking hands with it. Our results are of fundamental importance because they identify the perceptual processes that produce the feeling of ownership of one's body

    The free-energy self: A predictive coding account of self-recognition

    Recognising and representing one's self as distinct from others is a fundamental component of self-awareness. However, current theories of self-recognition are not embedded within global theories of cortical function and therefore fail to provide a compelling explanation of how the self is processed. We present a theoretical account of the neural and computational basis of self-recognition that is embedded within the free-energy account of cortical function. In this account one's body is processed in a Bayesian manner as the most likely to be "me". Such probabilistic representation arises through the integration of information from hierarchically organised unimodal systems in higher-level multimodal areas. This information takes the form of bottom-up "surprise" signals from unimodal sensory systems that are explained away by top-down processes that minimise the level of surprise across the brain. We present evidence that this theoretical perspective may account for the findings of psychological and neuroimaging investigations into self-recognition and particularly evidence that representations of the self are malleable, rather than fixed as previous accounts of self-recognition might suggest
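    For readers unfamiliar with the free-energy framework, the standard identity below (textbook background rather than a quotation from this article) makes the link between free-energy minimisation and "explaining away surprise" explicit. With sensory input y, hidden causes x, the brain's generative model p, and its approximate posterior q(x), the variational free energy is

    F = \mathbb{E}_{q(x)}\left[\ln q(x) - \ln p(x, y)\right] = -\ln p(y) + \mathrm{KL}\left[q(x)\,\|\,p(x \mid y)\right] \;\ge\; -\ln p(y)

    Because the Kullback-Leibler term is non-negative, F is an upper bound on the surprise -ln p(y); adjusting q via top-down predictions tightens that bound, which is the formal sense in which bottom-up "surprise" signals are explained away.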

    Does the Integration of Haptic and Visual Cues Reduce the Effect of a Biased Visual Reference Frame on the Subjective Head Orientation?

    The selection of appropriate frames of reference (FOR) is a key factor in the elaboration of spatial perception and the production of robust interaction with our environment. The extent to which we perceive the head axis orientation (subjective head orientation, SHO) with both accuracy and precision likely contributes to the efficiency of these spatial interactions. A first goal of this study was to investigate the relative contribution of both the visual and egocentric FOR (centre-of-mass) to SHO processing. A second goal was to investigate humans' ability to process the SHO in various sensory response modalities (visual, haptic and visuo-haptic), and the way these modalities modify reliance on either the visual or the egocentric FOR. A third goal was to question whether subjects combined visual and haptic cues optimally to increase SHO certainty and to decrease the disruptive effect of the FORs. Thirteen subjects were asked to indicate their SHO while the visual and/or egocentric FOR was deviated. Four results emerged from our study. First, visual rod settings of the SHO were altered by the tilted visual frame but not by the egocentric FOR alteration, whereas haptic settings were altered by neither the egocentric FOR alteration nor the tilted visual frame; these results are modulated by individual analysis. Second, visual and egocentric FOR dependency appear to be negatively correlated. Third, enriching the response modality appears to improve the SHO. Fourth, several combination rules for the visuo-haptic cues, such as the Maximum Likelihood Estimation (MLE), Winner-Take-All (WTA) or Unweighted Mean (UWM) rule, seem to account for the SHO improvements; however, the UWM rule seems to best account for the improvement of the visuo-haptic estimates, especially in situations with high FOR incongruence. Finally, the data also indicated that FOR reliance resulted from the application of the UWM rule, observed most clearly in visually dependent subjects. Taken together, these findings emphasize the importance of identifying individual spatial FOR preferences to assess the efficiency of our interaction with the environment whilst performing spatial tasks
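    As background for the cue-combination rules named above (these are their standard general formulations, not equations quoted from the study, whose exact model fitting may differ): writing \hat{S}_V and \hat{S}_H for the visual and haptic SHO estimates with variances \sigma_V^2 and \sigma_H^2, all three rules yield a combined estimate of the form \hat{S}_{VH} = w_V \hat{S}_V + w_H \hat{S}_H with w_V + w_H = 1, and differ only in the weights:

    MLE (reliability weighting): w_V = \sigma_V^{-2} / (\sigma_V^{-2} + \sigma_H^{-2}), which minimises the variance of the combined estimate, \sigma_{VH}^2 = \sigma_V^2 \sigma_H^2 / (\sigma_V^2 + \sigma_H^2).
    UWM (unweighted mean): w_V = w_H = 1/2, irrespective of cue reliability.
    WTA (winner-take-all): w_V = 1 and w_H = 0 when \sigma_V < \sigma_H, and the reverse otherwise, so only the more reliable cue is used.

    On this reading, a better fit of the UWM rule, particularly under high FOR incongruence, would represent a deviation from statistically optimal (MLE) integration.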