
    Perceptual compasses: spatial navigation in multisensory environments

    Moving through space is a crucial activity in daily human life. The main objective of my Ph.D. project was to investigate how people exploit the available multisensory sources of information (vestibular, visual, auditory) to navigate efficiently. Specifically, my Ph.D. aimed at i) examining the multisensory integration mechanisms underlying spatial navigation; ii) establishing the crucial role of vestibular signals in spatial encoding and processing, and their interaction with environmental landmarks; iii) providing the neuroscientific basis for developing tailored assessment protocols and rehabilitation procedures that enhance orientation and mobility through the integration of different sensory modalities, particularly aimed at improving the compromised navigational performance of visually impaired (VI) people. To achieve these aims, we conducted behavioral experiments with adult participants, including psychophysical procedures, galvanic stimulation, and modeling. In particular, the experiments involved active spatial navigation tasks with audio-visual landmarks and self-motion discrimination tasks with and without acoustic landmarks, using a motion platform (Rotational-Translational Chair) and an acoustic virtual reality tool. We also applied Galvanic Vestibular Stimulation to directly modulate signals from the vestibular system during behavioral tasks that involved interaction with audio-visual landmarks. In addition, when appropriate, we compared the results with predictions from the Maximum Likelihood Estimation (MLE) model to test whether the available multisensory cues were integrated optimally. i) Results on multisensory navigation showed a sub-group of integrators and another of non-integrators, revealing inter-individual differences in audio-visual processing while moving through the environment. Finding these idiosyncrasies in a homogeneous sample of adults emphasizes the role of individual perceptual characteristics in multisensory perception and highlights the importance of planning rehabilitation protocols tailored to each individual’s perceptual preferences and experiences. ii) We also found a robust inherent overestimation bias when estimating passive self-motion stimuli. This finding sheds new light on how the brain processes the available cues to build a more functional representation of the world. We also demonstrated a novel impact of vestibular signals on the encoding of visual environmental cues in the absence of actual self-motion information. The role that vestibular inputs play in visual cue perception and space encoding has multiple consequences for humans’ ability to navigate in space and interact with environmental objects, especially when vestibular signals are impaired by intrinsic conditions (vestibular disorders) or environmental ones (altered gravity, e.g., spaceflight missions). Finally, iii) the combination of the Rotational-Translational Chair and the acoustic virtual reality tool revealed a slight improvement in self-motion perception for VI people when exploiting acoustic cues. This approach proves to be a promising technique for evaluating audio-vestibular perception and improving the spatial representation abilities of VI people, providing the basis for developing new rehabilitation procedures focused on multisensory perception. Overall, the findings from my Ph.D. project broaden scientific knowledge about spatial navigation in multisensory environments, yielding new insights into the brain mechanisms associated with mobility, orientation, and locomotion abilities.
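
    For context, the MLE model referred to above prescribes reliability-weighted averaging of the single-cue estimates. A standard formulation, under the usual assumption of independent, Gaussian-distributed cue noise (the symbols below are generic and not taken from the thesis), is:

    \hat{S}_{AV} = w_A \hat{S}_A + w_V \hat{S}_V, \qquad w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_V^2}, \qquad w_V = 1 - w_A
    \sigma_{AV}^2 = \frac{\sigma_A^2 \, \sigma_V^2}{\sigma_A^2 + \sigma_V^2} \;\le\; \min(\sigma_A^2, \sigma_V^2)

    Participants whose bimodal variability approaches the predicted \sigma_{AV}^2 would be classed as integrators, whereas non-integrators perform no better with both cues than with the more reliable single cue.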

    Does the Integration of Haptic and Visual Cues Reduce the Effect of a Biased Visual Reference Frame on the Subjective Head Orientation?

    The selection of appropriate frames of reference (FOR) is a key factor in the elaboration of spatial perception and the production of robust interaction with our environment. The extent to which we perceive the head axis orientation (subjective head orientation, SHO) with both accuracy and precision likely contributes to the efficiency of these spatial interactions. A first goal of this study was to investigate the relative contribution of both the visual and the egocentric FOR (centre-of-mass) to SHO processing. A second goal was to investigate humans' ability to process SHO in various sensory response modalities (visual, haptic and visuo-haptic), and the way these modalities modify the reliance on either the visual or the egocentric FOR. A third goal was to question whether subjects combined visual and haptic cues optimally to increase SHO certainty and to decrease the FOR disruption effect. Thirteen subjects were asked to indicate their SHO while the visual and/or egocentric FORs were deviated. Four results emerged from our study. First, visual rod settings to SHO were altered by the tilted visual frame but not by the egocentric FOR alteration, whereas haptic settings were not altered by either the egocentric FOR alteration or the tilted visual frame. These results were modulated by the individual analyses. Second, visual and egocentric FOR dependencies appear to be negatively correlated. Third, enriching the response modality appears to improve SHO estimates. Fourth, several rules for combining the visuo-haptic cues, such as Maximum Likelihood Estimation (MLE), Winner-Take-All (WTA) or the Unweighted Mean (UWM), seem to account for the SHO improvements. However, the UWM rule seems to best account for the improvement of visuo-haptic estimates, especially in situations with high FOR incongruence. Finally, the data also indicated that FOR reliance resulted from the application of the UWM rule, observed particularly in visually dependent subjects. Conclusions: Taken together, these findings emphasize the importance of identifying individual spatial FOR preferences to assess the efficiency of our interaction with the environment whilst performing spatial tasks.
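
    To make the three combination rules concrete, the sketch below (in Python) shows one way bimodal predictions could be generated from unimodal estimates; the function, variable names, numeric values, and the Gaussian-noise assumption are illustrative and not taken from the study.

    import numpy as np

    def combine(mu_v, sigma_v, mu_h, sigma_h, rule="MLE"):
        """Predict the bimodal (visuo-haptic) SHO estimate from unimodal ones.

        mu_*    : mean of the visual / haptic unimodal estimate (deg)
        sigma_* : standard deviation of that estimate (deg)
        Returns the predicted bimodal mean and standard deviation.
        """
        if rule == "MLE":
            # Reliability-weighted average: weights are inverse variances.
            w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_h**2)
            mu = w_v * mu_v + (1 - w_v) * mu_h
            sigma = np.sqrt((sigma_v**2 * sigma_h**2) / (sigma_v**2 + sigma_h**2))
        elif rule == "WTA":
            # Winner-take-all: the more reliable (lower-variance) cue dominates.
            mu, sigma = (mu_v, sigma_v) if sigma_v < sigma_h else (mu_h, sigma_h)
        elif rule == "UWM":
            # Unweighted mean: cues are averaged regardless of their reliability.
            mu = 0.5 * (mu_v + mu_h)
            sigma = 0.5 * np.sqrt(sigma_v**2 + sigma_h**2)  # assumes independent noise
        else:
            raise ValueError(f"unknown rule: {rule}")
        return mu, sigma

    # Example: a visual estimate biased by the tilted frame vs. an unbiased haptic one.
    for rule in ("MLE", "WTA", "UWM"):
        print(rule, combine(mu_v=6.0, sigma_v=2.0, mu_h=0.0, sigma_h=3.0, rule=rule))

    Comparing participants' bimodal means and variabilities against such predictions is how one can assess which rule best accounts for the observed SHO improvements.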

    Perceptual Strategies and Neuronal Underpinnings underlying Pattern Recognition through Visual and Tactile Sensory Modalities in Rats

    The aim of my PhD project was to investigate multisensory perception and multimodal recognition abilities in the rat, to better understand the underlying perceptual strategies and neuronal mechanisms. I chose to carry out this project on the laboratory rat for two reasons. First, the rat is a flexible and highly accessible experimental model, in which it is possible to combine state-of-the-art neurophysiological approaches (such as multi-electrode neuronal recordings) with behavioral investigation of perception and, more generally, cognition. Second, extensive research concerning multimodal integration has already been conducted in this species, at both the neurophysiological and behavioral levels. My thesis work was organized into two projects: a psychophysical assessment of object categorization abilities in rats, and a neurophysiological study of neuronal tuning in the primary visual cortex of anaesthetized rats. In both experiments, unisensory (visual and tactile) and multisensory (visuo-tactile) stimulation was used for training and testing, depending on the task. The first project required the development of a new experimental rig for studying object categorization in rats using solid objects, so that their recognition abilities could be assessed under different modalities: vision, touch, and both together. The second project involved an electrophysiological study of rat primary visual cortex during visual, tactile, and visuo-tactile stimulation, with the aim of understanding whether any interaction between these modalities exists in an area that is primarily devoted to one of them. The results of both studies are still preliminary, but they already offer some interesting insights into the defining features of these abilities.

    Early cross-modal interactions and adult human visual cortical plasticity revealed by binocular rivalry

    In this research, binocular rivalry is used as a tool to investigate different aspects of visual and multisensory perception. Several experiments presented here demonstrated that touch specifically interacts with vision during binocular rivalry and that the interaction likely occurs at early stages of visual processing, probably V1 or V2. Another line of research also presented here demonstrated that human adult visual cortex retains an unexpectedly high degree of experience-dependent plasticity, by showing that a brief period of monocular deprivation produced important perceptual consequences on the dynamics of binocular rivalry, reflecting homeostatic plasticity. In summary, this work shows that binocular rivalry is a powerful tool to investigate different aspects of visual perception and can be used to reveal unexpected properties of early visual cortex.

    Behavioral, Neural, and Computational Principles of Bodily Self-Consciousness

    Recent work in human cognitive neuroscience has linked self-consciousness to the processing of multisensory bodily signals (bodily self-consciousness [BSC]) in fronto-parietal cortex and more posterior temporo-parietal regions. We highlight the behavioral, neurophysiological, neuroimaging, and computational principles that underlie BSC in humans and non-human primates. We propose that BSC includes body-centered perception (hand, face, and trunk), based on the integration of proprioceptive, vestibular, and visual bodily inputs, and involves spatio-temporal mechanisms integrating multisensory bodily stimuli within peripersonal space (PPS). We develop four major constraints of BSC (proprioception, body-related visual information, PPS, and embodiment) and argue that the fronto-parietal and temporo-parietal processing of trunk-centered multisensory signals in PPS is of particular relevance for theoretical models and simulations of BSC and, eventually, of self-consciousness.

    Persistent perceptual delay for head movement onset relative to sound onset with and without vision

    Knowing when the head moves is crucial information for the central nervous system in order to maintain a veridical representation of the self in the world for perception and action. Our head is constantly in motion during everyday activities, and thus the central nervous system is challenged with determining the relative timing of multisensory events that arise from active movement of the head. The vestibular system plays an important role in the detection of head motion as well as in compensatory reflexive behaviours geared toward stabilizing the self and the representation of the world. Although the transduction of vestibular signals is very fast, previous studies have found that the perceived onset of an active head movement is delayed compared to other sensory stimuli such as sound, meaning that head movement onset has to precede a sound by approximately 80 ms in order for the two to be perceived as simultaneous. However, this past research was conducted with participants’ eyes closed. Given that most natural head movements occur with input from the visual system, could perceptual delays in head movement onset be the result of removing visual input? In the current study, we set out to examine whether the inclusion of visual information affects the perceived timing of vestibular-auditory stimulus pairs. Participants performed a series of temporal order judgment tasks between their active head movement and an auditory tone presented at various stimulus onset asynchronies. Visual information was either absent (eyes closed) or present while participants maintained fixation on an earth-fixed or head-fixed LED target, in the dark or in the light. Our results show that, with eyes closed, head movement onset has to precede a sound in order to be perceived as simultaneous with it. The results also suggest that head movement onset must still precede the sound when fixating targets in the dark, with a trend toward the head requiring less lead time when visual information was present, whether the VOR was active or suppressed. Together, these results suggest that the perception of head movement onset is persistently delayed and that this delay is not fully resolved by full-field visual input.
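
    As an illustration of how such lead times are typically estimated from temporal order judgments, the sketch below fits a cumulative-Gaussian psychometric function to simulated TOJ data and reads off the point of subjective simultaneity (PSS); the SOA values and response proportions are invented for illustration and are not data from this study.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    # Stimulus onset asynchronies (ms): positive = sound presented after head-movement onset.
    soa = np.array([-200.0, -100.0, -50.0, 0.0, 50.0, 100.0, 200.0])
    # Proportion of "sound second" responses at each SOA (illustrative values only).
    p_sound_second = np.array([0.02, 0.08, 0.20, 0.35, 0.48, 0.62, 0.90])

    def psychometric(x, pss, jnd):
        """Cumulative Gaussian: PSS is the 50% point, JND the standard deviation."""
        return norm.cdf(x, loc=pss, scale=jnd)

    (pss, jnd), _ = curve_fit(psychometric, soa, p_sound_second, p0=(0.0, 50.0))
    print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
    # A positive PSS here means the sound must lag head-movement onset (i.e. the head
    # must start moving first) for the two events to be perceived as simultaneous,
    # matching the direction of the ~80 ms lead described above.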