
    Quantifying Self Perception: Multisensory Temporal Asynchrony Discrimination As A Measure of Body Ownership

    There are diffuse and distinct cortical networks involved in the various aspects of body representation that organize information from multiple sensory inputs and resolve conflicts when faced with incongruent situations. This coherence is typically maintained as we maneuver around the world, as our bodies change over the years, and as we gain experience. An important aspect of a congruent representation of the body in the brain is the visual perspective in which we are able to directly view our own body. There is a clear separation between the cortical networks involved in seeing our own body and those involved in seeing the body of another person. For the projects presented in my dissertation, I used an experimental design in which participants were required to make a multisensory temporal asynchrony discrimination after self-generated movements. I measured sensitivity for detecting visual delay between the movement (proprioceptive, efferent, and afferent information) and the visual image of that movement under differing visual, proprioceptive, and vestibular conditions. The self-advantage is a signature of body ownership and is characterized by a significantly lower threshold for delay detection for views of the body that are considered self compared to those that are regarded as other. Overall, the results from the collection of studies suggest that the tolerance for temporally matching visual, proprioceptive, and efference copy information about the perceived position of body parts depends on: whether one is viewing one's own body or someone else's; the perspective in which the body is viewed; the dominant hand; and the reliability of vestibular cues, which help us situate our body in space. Further, the self-advantage provides a robust measure of body ownership. The experiments provide a window onto, and support for, the malleable nature of the representation of the body in the brain.
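The self-advantage described here is, in essence, a difference between two psychometric thresholds. As a rough illustration of how such a delay-detection threshold can be estimated, here is a minimal Python sketch that fits a cumulative Gaussian to hypothetical "delayed" responses for self and other views; the data, function names, and the 75% criterion are assumptions, not the dissertation's actual analysis.

```python
# Minimal sketch: estimating a visual-delay detection threshold from
# yes/no "delayed" responses and comparing "self" vs "other" views.
# Hypothetical data; a cumulative Gaussian stands in for the
# psychometric function. Threshold = delay at 75% "delayed" responses.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(delay_ms, mu, sigma):
    """Probability of reporting 'delayed' as a function of visual delay."""
    return norm.cdf(delay_ms, loc=mu, scale=sigma)

delays = np.array([0, 33, 66, 100, 133, 166, 200])  # ms, hypothetical

# Proportion of 'delayed' reports per delay (hypothetical observers)
p_self = np.array([0.02, 0.10, 0.45, 0.80, 0.95, 0.99, 1.00])
p_other = np.array([0.01, 0.05, 0.20, 0.50, 0.75, 0.90, 0.98])

for label, p in [("self", p_self), ("other", p_other)]:
    (mu, sigma), _ = curve_fit(psychometric, delays, p, p0=[100, 40])
    thr75 = mu + sigma * norm.ppf(0.75)  # 75% "delayed" threshold
    print(f"{label}: delay-detection threshold ~ {thr75:.0f} ms")
# A lower 'self' threshold than 'other' would be the self-advantage.
```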

    The effect of GVS on path trajectory and body rotation in the absence of visual cues during a spatial navigation task

    Background: The vestibular system has been shown to contribute to mechanisms of locomotion such as distance perception. Galvanic vestibular stimulation (GVS) is a tool used to perturb the vestibular system, and it causes significant deviations in path trajectory during locomotion. Previous research has suggested that applying GVS during straight-line locomotion tasks is not sufficient to determine the effects of the vestibular system on locomotion. Spatial navigation, by contrast, challenges one's ability to move through the environment using idiothetic cues to constantly update one's position. The purpose of the current study was to determine the effects of GVS on both path trajectory and body rotation during a spatial navigation task in the absence of visual cues, and how accuracy on this task is affected by dance training. It was hypothesized that the delivery of GVS would significantly increase errors during the triangle completion task, and that this increase would be more pronounced in the control participants than in the dancers. Methods: Participants (n=34, all female, 18-30 years) were divided into two groups: controls (n=18) had no experience with sport-specific training, while dancers (n=16) had previous dance training (M = 15.6 years, SD = 4.1) and were still training in dance (M = 11.5 hours/week, SD = 7.3). Monofilament testing (Touch-Test Six Piece Foot Kit) was used to determine the plantar surface cutaneous sensitivity threshold, and a joint angle-matching task was used to quantify the proprioceptive awareness of each individual. Participants completed trials of the triangle completion task in VR (via Oculus Rift DK2), during which they navigated along the first two legs of one of two triangles using visual input, and then attempted to navigate back to their initial position without the use of vision. GVS was delivered at three times the participant's threshold in either the left or right direction, from just prior to the final body rotation until the participant reached their end position. The task was completed six times for each GVS condition (with and without GVS, with the experimental condition further divided into right and left perturbation trials), for each of the two triangles, in both the right and left triangle directions, for a total of 48 trials (6 trials × 2 GVS conditions × 2 triangles × 2 directions). Whole-body kinematic data were collected at 60 Hz using an NDI Optotrak motion tracking system. Results: No significant differences were observed between control subjects and dancers with respect to arrival error, angular error, path variability, cutaneous sensitivity, or proprioceptive awareness. However, there was a significant effect of GVS on both arrival error and angular error. Conditions without GVS had significantly smaller angular error than both conditions with GVS. In addition, GVS conditions with the perturbation in the same direction as the final body rotation had significantly greater arrival error than both the condition without GVS and the condition with the current in the opposite direction of the final body rotation. There was no significant difference between GVS conditions in path variability during the return to the initial position. Conclusions: The significant effect of GVS on both arrival error and angular error demonstrates that vestibular perturbation reduced the accuracy of the triangle completion task. These findings suggest that the vestibular system plays a major role in both path trajectory and body rotation during spatial navigation tasks in the absence of vision.
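For illustration, the two main outcome measures can be computed from endpoint kinematics roughly as follows. This is a minimal sketch with hypothetical coordinates and an assumed geometry convention, not the study's analysis code.

```python
# Minimal sketch: arrival error and angular error for one triangle
# completion trial, from 2D position data. Names and the geometry
# convention are assumptions for illustration only.
import numpy as np

def arrival_error(end_xy, start_xy):
    """Euclidean distance between return endpoint and true start (m)."""
    return np.linalg.norm(np.asarray(end_xy) - np.asarray(start_xy))

def angular_error(turn_xy, end_xy, start_xy):
    """Angle between the ideal homing vector (turn point -> start) and
    the produced return vector (turn point -> endpoint), in degrees."""
    ideal = np.asarray(start_xy) - np.asarray(turn_xy)
    taken = np.asarray(end_xy) - np.asarray(turn_xy)
    cosang = ideal @ taken / (np.linalg.norm(ideal) * np.linalg.norm(taken))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical trial: start at the origin, turn point after two legs,
# participant ends 0.5 m away from the start.
start, turn, end = (0.0, 0.0), (2.0, 2.0), (0.3, -0.4)
print(f"arrival error: {arrival_error(end, start):.2f} m")
print(f"angular error: {angular_error(turn, end, start):.1f} deg")
```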

    Perceptual compasses: spatial navigation in multisensory environments

    Moving through space is a crucial activity in daily human life. The main objective of my Ph.D. project was to investigate how people exploit the available multisensory sources of information (vestibular, visual, auditory) to navigate efficiently. Specifically, my Ph.D. aimed at i) examining the multisensory integration mechanisms underlying spatial navigation; ii) establishing the crucial role of vestibular signals in spatial encoding and processing, and their interaction with environmental landmarks; and iii) providing the neuroscientific basis to develop tailored assessment protocols and rehabilitation procedures that enhance orientation and mobility through the integration of different sensory modalities, especially aimed at improving the compromised navigational performance of visually impaired (VI) people. To achieve these aims, we conducted behavioral experiments on adult participants, including psychophysics procedures, galvanic stimulation, and modeling. In particular, the experiments involved active spatial navigation tasks with audio-visual landmarks and self-motion discrimination tasks with and without acoustic landmarks, using a motion platform (Rotational-Translational Chair) and an acoustic virtual reality tool. Additionally, we applied Galvanic Vestibular Stimulation to directly modulate signals coming from the vestibular system during behavioral tasks that involved interaction with audio-visual landmarks. Where appropriate, we compared the obtained results with predictions from the Maximum Likelihood Estimation model, to verify potential optimal integration between the available multisensory cues. i) Results on multisensory navigation showed a sub-group of integrators and another of non-integrators, revealing inter-individual differences in audio-visual processing while moving through the environment. Finding these idiosyncrasies in a homogeneous sample of adults emphasizes the role of individual perceptual characteristics in multisensory perception, highlighting how important it is to plan tailored rehabilitation protocols that take each individual's perceptual preferences and experiences into account. ii) We also found a robust inherent overestimation bias when estimating passive self-motion stimuli. This finding sheds new light on how our brain processes and elaborates the available cues to build a more functional representation of the world. We also demonstrated a novel impact of vestibular signals on the encoding of visual environmental cues in the absence of actual self-motion information. The role that vestibular inputs play in visual cue perception and space encoding has multiple consequences for humans' ability to navigate functionally in space and interact with environmental objects, especially when vestibular signals are impaired due to intrinsic conditions (vestibular disorders) or environmental ones (altered gravity, e.g. spaceflight missions). Finally, iii) the combination of the Rotational-Translational Chair and the acoustic virtual reality tool revealed a slight improvement in self-motion perception for VI people when exploiting acoustic cues. This approach proves to be a successful technique for evaluating audio-vestibular perception and improving the spatial representation abilities of VI people, providing the basis for new rehabilitation procedures focused on multisensory perception. Overall, the findings resulting from my Ph.D. project broaden scientific knowledge about spatial navigation in multisensory environments, yielding new insights into the brain mechanisms associated with mobility, orientation, and locomotion abilities.
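The Maximum Likelihood Estimation prediction mentioned above has a standard closed form: each cue is weighted by its relative reliability (inverse variance), and the combined estimate has lower variance than either cue alone. A minimal sketch with illustrative numbers rather than the study's data:

```python
# Minimal sketch of the MLE prediction used to test optimal
# audio-visual integration: a reliability-weighted average whose
# variance is below that of either single cue.
import numpy as np

def mle_combine(est_a, var_a, est_b, var_b):
    """Optimal (minimum-variance) combination of two unbiased cues."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)  # reliability weight
    w_b = 1 - w_a
    combined = w_a * est_a + w_b * est_b
    combined_var = (var_a * var_b) / (var_a + var_b)
    return combined, combined_var

# Hypothetical heading estimates (deg) from auditory and visual landmarks
est, var = mle_combine(est_a=10.0, var_a=16.0, est_b=14.0, var_b=4.0)
print(f"combined estimate: {est:.1f} deg, variance: {var:.1f} deg^2")
# 'Integrators' should show variance near this prediction;
# 'non-integrators' should instead track one cue alone.
```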

    Balancing Interoception and Exteroception: Vestibular and Spatial Contributions to the Bodily Self

    Experiencing the body as a coherent, stable entity involves the dynamic integration of information from several internal (i.e. interoceptive) and external (i.e. exteroceptive) sensory sources, producing the feelings that the body is mine (sense of body ownership), that I am in control (sense of agency), and that I am aware of its movements (motor awareness). However, the exact contribution of these different sensory sources to self-consciousness, as well as the context in which we experience them, is still a matter of debate. This thesis aimed to investigate the neurocognitive mechanisms of body ownership, agency, and motor awareness, including interoceptive (via affective touch), proprioceptive, exteroceptive (visuo-spatial), and vestibular contributions to body representation, in both healthy subjects and brain-damaged patients. To examine the role of the vestibular and interoceptive systems in body ownership, a series of studies in healthy subjects was devised using multisensory illusions (i.e. the rubber hand illusion; RHI), which involve the integration of interoceptive and exteroceptive sensory sources, and using electrical stimulation of the vestibular system (i.e. Galvanic Vestibular Stimulation; GVS). To investigate ownership, agency, and motor awareness in neuropsychological patients with disorders of ownership and/or unawareness of motor deficits, behavioural manipulations of body ownership (via a rubber hand) and visual perspective (via a mirror) were tested. Finally, to explore the mechanisms underlying awareness of one's own performance (i.e. meta-cognition), two studies were carried out in healthy subjects using behavioural manipulations of spatial reference frames (either centred on the subject, i.e. egocentric, or world-centred, i.e. allocentric). The results of these studies indicate that the vestibular system balances vision and proprioception according to contextual relevance: when there is no tactile stimulation, visual cues are stronger than proprioceptive ones (i.e. proprioceptive drifts are greater); when touch is delivered synchronously, this effect is enhanced (even more so when touch is affective rather than neutral). However, when touch is only felt but not seen, the vestibular system downregulates vision in favour of proprioception (i.e. proprioceptive drifts are smaller), whilst the opposite happens when touch is only vicariously perceived via vision. Nevertheless, when the rubber hand is positioned in a biomechanically impossible fashion, there appears to be no difference in proprioceptive drifts compared with anatomically plausible positions, suggesting that such rebalancing may be more related to basic multisensory integration processes underlying body representation. In patients with disorders of the self, visual cues seem to dominate over proprioceptive ones, leading to strong feelings of ownership of a rubber hand following mere exposure to it; however, the same is not true for agency, which seems to be more susceptible to changes in the environment (i.e. the presence or absence of visual feedback following attempted movement). Moreover, manipulating visual perspective using a mirror (from 1st- to 3rd-person) seems to lead to a temporary remission of dis-ownership but not of motor unawareness, suggesting that awareness may not be influenced by online changes in visual perspective. Finally, when judging their own performance in a visuo-proprioceptive task from an egocentric rather than an allocentric perspective, healthy subjects appear less objective prospectively than during the task itself (i.e. their belief updating is biased when judging their ability to complete a task egocentrically). In sum, the work described above adds to the evidence that the sense of self derives from a complex integration of several sensory modalities, flexibly adjusting to the environment. Following brain damage, such flexibility may be impaired, even though it can be influenced by spatial perspective. Similarly, the point of reference from which we perceive stimuli affects the way we judge our own perceptual choices. Hence, the way we represent our bodily self is a dynamic process, constantly updated by incoming exteroceptive and interoceptive stimuli and regulated by the vestibular system. These findings could open new avenues for rehabilitating disorders of the self (such as unawareness and dis-ownership).
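Proprioceptive drift, the dependent measure referred to throughout these RHI studies, is conventionally computed as the post- minus pre-stimulation shift of the felt hand position toward the rubber hand. A minimal sketch with hypothetical condition labels and values:

```python
# Minimal sketch: proprioceptive drift in the rubber hand illusion.
# Positive values indicate drift toward the rubber hand. All data
# are hypothetical.
import numpy as np

def proprioceptive_drift(pre_cm, post_cm):
    """Mean post- minus pre-stimulation pointing estimate (cm)."""
    return np.mean(post_cm) - np.mean(pre_cm)

pre = np.array([1.2, 0.8, 1.0, 1.1])         # baseline judgments (cm)
post_sync = np.array([3.4, 2.9, 3.1, 3.3])   # after synchronous touch
post_async = np.array([1.5, 1.3, 1.6, 1.2])  # after asynchronous touch

print(f"sync drift:  {proprioceptive_drift(pre, post_sync):.1f} cm")
print(f"async drift: {proprioceptive_drift(pre, post_async):.1f} cm")
```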

    A Review of Electrostimulation-based Cybersickness Mitigations

    With the development of consumer virtual reality (VR), people have increasing opportunities to experience cybersickness (CS), a kind of visually induced motion sickness (MS). In view of the importance of CS mitigation (CSM), this paper reviews methods of electrostimulation-based CSM (e-CSM), broadly categorised as either “VR-centric” or “Human-centric”. “VR-centric” refers to approaches where knowledge of the visual motion being experienced in VR directly shapes how the neurostimulation is delivered, whereas “Human-centric” approaches focus on inhibiting or enhancing human functions per se, without knowledge of the experienced visual motion. We found that 1) most e-CSM approaches are based on visual-vestibular sensory conflict theory, one of the generally accepted aetiologies of MS; 2) the majority of e-CSM approaches are vestibular system-centric, either stimulating it to compensate for the mismatched vestibular sensory responses, or inhibiting it to induce an artificial, temporary dysfunction of the vestibular sensory organs or cortical areas; 3) vestibular sensory organ-based solutions can mitigate CS with immediate effect, while the real-time effect of vestibular cortical area-based methods remains unclear due to limited public data; and 4) based on subjective assessment, VR-centric approaches can relieve all three kinds of symptoms (nausea, oculomotor, and disorientation), which appears superior to the Human-centric ones, which only alleviate one of the symptom types or have just an overall relief effect. Finally, we propose promising future research directions for the development of e-CSM.

    The search for instantaneous vection: An oscillating visual prime reduces vection onset latency

    Typically it takes 10 seconds or more to induce a visual illusion of self-motion (vection). However, for this vection to be most useful in virtual reality and vehicle simulation, it needs to be induced quickly, if not immediately. This study examined whether vection onset latency could be reduced towards zero using visual display manipulations alone. In the main experiments, visual self-motion simulations were presented to observers via either a large external display or a head-mounted display (HMD). Priming observers with visually simulated viewpoint oscillation for just ten seconds before the main self-motion display was found to markedly reduce vection onset latencies (and also increase ratings of vection strength) in both experiments. As in earlier studies, incorporating this simulated viewpoint oscillation into the self-motion displays themselves was also found to improve vection. Average onset latencies were reduced from 8-9 s in the no-oscillation control condition to as little as 4.6 s (for external displays) or 1.7 s (for HMDs) in the combined oscillation condition (when both the visual prime and the main self-motion display were oscillating). As these display manipulations did not appear to increase the likelihood or severity of motion sickness in the current study, they could be used to enhance computer-generated simulation experiences and training in the future, at no additional cost.
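The simulated viewpoint oscillation is essentially a periodic offset added to the rendered camera position. The following toy sketch shows one way such a prime could be generated; the amplitude, frequency, axis, and frame rate are assumptions for illustration, not the values used in the study.

```python
# Minimal sketch: generating a simulated viewpoint oscillation prime
# to be shown before (or added to) a self-motion display.
import numpy as np

def oscillating_viewpoint(t, amp_m=0.035, freq_hz=1.0):
    """Vertical camera offset (m) at time t (s): a sinusoidal 'bounce'
    superimposed on the display's forward self-motion."""
    return amp_m * np.sin(2 * np.pi * freq_hz * t)

# 10 s prime rendered at an assumed 90 Hz before the main display
frame_times = np.arange(0, 10, 1 / 90)
offsets = oscillating_viewpoint(frame_times)
# per frame: camera.y = base_height + offsets[i]  (engine-specific)
print(f"peak-to-peak oscillation: {np.ptp(offsets) * 100:.1f} cm")
```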

    Persistent perceptual delay for head movement onset relative to sound onset with and without vision

    Knowing when the head moves is crucial information for the central nervous system in order to maintain a veridical representation of the self in the world for perception and action. Our head is constantly in motion during everyday activities, and thus the central nervous system is challenged with determining the relative timing of multisensory events that arise from active movement of the head. The vestibular system plays an important role in the detection of head motion, as well as in compensatory reflexive behaviours geared to stabilizing the self and the representation of the world. Although the transduction of vestibular signals is very fast, previous studies have found that the perceived onset of an active head movement is delayed compared to other sensory stimuli such as sound, meaning that head movement onset has to precede a sound by approximately 80 ms in order to be perceived as simultaneous. However, this past research was conducted with participants' eyes closed. Given that most natural head movements occur with input from the visual system, could perceptual delays in head movement onset be the result of removing visual input? In the current study, we set out to examine whether the inclusion of visual information affects the perceived timing of vestibular-auditory stimulus pairs. Participants performed a series of temporal order judgment tasks between their active head movement and an auditory tone presented at various stimulus onset asynchronies. Visual information was either absent (eyes closed) or present, with participants maintaining fixation on an earth-fixed or head-fixed LED target, in the dark or in the light. Our results show that head movement onset had to precede a sound with the eyes closed. They also suggest that head movement onset must still precede a sound when fixating targets in the dark, with a trend toward a shorter required lead time when visual information was available, whether the VOR was active or suppressed. Together, these results suggest that the perception of head movement onset is persistently delayed and is not fully resolved by full-field visual input.
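The roughly 80 ms lead described above corresponds to the point of subjective simultaneity (PSS) estimated from temporal order judgments. A minimal sketch of such an estimate, fitting a cumulative Gaussian to hypothetical response proportions across stimulus onset asynchronies:

```python
# Minimal sketch: estimating the PSS from temporal order judgments
# between head-movement onset and a tone. Negative SOA = head moves
# first. Data are hypothetical, not the study's results.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

soas_ms = np.array([-200, -120, -80, -40, 0, 40, 120])  # head lead < 0
p_sound_first = np.array([0.05, 0.20, 0.50, 0.70, 0.85, 0.95, 1.00])

def cum_gauss(soa, pss, sigma):
    """Probability of reporting 'sound first' at a given SOA."""
    return norm.cdf(soa, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cum_gauss, soas_ms, p_sound_first, p0=[-80, 60])
print(f"PSS ~ {pss:.0f} ms")  # about -80 ms: the head must lead the sound
```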

    Sensory Conflict: Effects on the Perceived Onset of Motion and Cybersickness in Virtual Reality

    The perception of self-motion involves the integration of multisensory information; however, there are scenarios in which the sensory feedback we receive from these different sources can conflict. For example, when inside the cabin of a ship at sea or playing a game in virtual reality (VR), sensory signals for self-motion from the visual and vestibular systems may not be congruent. It is well documented that such scenarios are associated with feelings of discomfort and alterations in our perception of motion, but the mechanisms leading to these perceptual consequences remain uncertain. The goal of this dissertation is to explore the effect of sensory conflict between vestibular and visual signals on the perception of self-motion, and its implications for cybersickness. Chapter Two examined the effect of sensory conflict on the perceived timing of a passive whole-body rotation paired with both congruent and incongruent visual feedback in VR. It was found that the visual signal only influenced the perception of movement onset when the direction of the visual motion did not match the expected equal-and-opposite response relative to the physical rotation. In Chapter Three, the effect of sensory conflict between visual, vestibular, and body cues on the perceived timing of visual motion was explored. The results revealed that changing the orientation of the body relative to gravity, so as to dissociate vestibular and body cues of upright, delays the perceived onset of visual yaw rotation in VR by an additional 30 ms compared to an upright posture. Lastly, Chapter Four investigated the relationship between sensory conflict and sensory reweighting through measures of cybersickness and sensory perception after exposure to VR gameplay. The results indicated that the perception of subjective vertical was significantly influenced by an intense VR experience and that sensory reweighting may play a role in this effect, also offering a potential explanation for individual differences in cybersickness severity. Altogether, this dissertation highlights some of the perceptual consequences of sensory conflict between vestibular and visual signals and provides insight into the potential mechanisms that determine the perception of self-motion and cybersickness in VR.
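The congruent condition in Chapter Two relies on visual feedback that is equal and opposite to the physical rotation; the conflict conditions break that relationship. A toy sketch of this mapping, with hypothetical names and gains rather than the study's stimulus code:

```python
# Minimal sketch: congruent vs. incongruent visual feedback for a
# passive whole-body yaw rotation in VR. Congruent feedback rotates
# the scene equal-and-opposite to the chair; flipping the sign (or
# scaling the gain) produces conflict conditions.
import numpy as np

def visual_yaw(chair_yaw_deg, gain=1.0, congruent=True):
    """World yaw presented in the headset for a given chair yaw (deg)."""
    sign = -1.0 if congruent else 1.0  # congruent = equal and opposite
    return sign * gain * chair_yaw_deg

chair = np.linspace(0, 30, 4)  # chair rotates 30 deg to the right
print("congruent  :", visual_yaw(chair))                    # scene sweeps left
print("incongruent:", visual_yaw(chair, congruent=False))   # scene follows
```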

    Characterizing the dynamics of vestibular reflex gain modulation using balance-relevant sensory conflict

    Electrical vestibular stimulation (EVS) can be used to evoke reflexive body sway as a probe of the vestibular control of balance. However, EVS introduces sensory conflict by decoupling vestibular input from actual body motion, prompting the central nervous system (CNS) to potentially treat vestibular signals as less reliable. In contrast, light touch reduces sway by providing reliable feedback about body motion and spatial orientation. The juxtaposition of reliable and unreliable sensory cues enables exploration of multisensory integration during balance control. I hypothesized that when light touch is available, coherence and gain between EVS input and center of pressure (CoP) output would decrease as the CNS reduces the weighting of vestibular cues. Additionally, I hypothesized that the CNS would require less than 0.5 seconds to adjust the weighting of sensory cues upon the introduction or removal of light touch. In two experiments, participants stood as still as possible while receiving continuous stochastic EVS (frequency 0-25 Hz, amplitude ±4 mA, duration 200-300 seconds) while either: lightly touching a load cell (<2 N); holding their hand above a load cell; or intermittently switching between touching and not touching the load cell. Anterior-posterior (AP) CoP and linear accelerations from body-worn accelerometers were collected to calculate the root mean square (RMS) of AP CoP, as well as the coherence and gain between EVS input and AP CoP or acceleration outputs. Light touch led to a decrease in CoP RMS (mean 49% decrease) both with and without EVS. Significant coherence between EVS and AP CoP was observed between 0.5 Hz and 24 Hz in the NO TOUCH condition, and between 0.5 Hz and 30 Hz in the TOUCH condition, with TOUCH showing significantly greater coherence from 11 to 30 Hz. In contrast to coherence, EVS-AP CoP gain decreased in the TOUCH condition between 0.5 and 8 Hz (mean decrease 63%). Among the available acceleration data, only the head exhibited a significant increase in coherence above 10 Hz in the TOUCH condition compared to the NO TOUCH condition. Light touch thus reduced CoP displacement but increased the proportion of variation in the CoP signal that can be explained by the EVS input. Light touch may cause the CNS to attribute EVS signals to head movements and therefore up-weight vestibulocollic responses while down-weighting vestibulospinal balance responses. Changes in coherence and gain started before the transition to the NO TOUCH condition and after the transition to the TOUCH condition; the loss of sensory information may be more destabilizing than its addition, necessitating anticipatory adjustments. These findings demonstrate the ability of one sensory modality to modulate the CNS's use of another, and highlight asymmetries in the timing of responses to the introduction and removal of sensory information, which may impact behavior.
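Coherence and gain between the EVS input and the CoP output are standard frequency-domain measures and can be computed with off-the-shelf spectral estimators. A minimal sketch on synthetic signals; the sampling rate, window length, and the toy response model are assumptions, not the study's pipeline.

```python
# Minimal sketch: coherence and gain between a stochastic EVS input
# and anterior-posterior CoP output, on synthetic data.
import numpy as np
from scipy import signal

fs = 1000                                  # Hz, assumed sampling rate
t = np.arange(0, 200, 1 / fs)              # one 200 s trial
rng = np.random.default_rng(0)
evs = rng.standard_normal(t.size)          # stochastic EVS input (a.u.)

# Toy "body" response: low-pass filtered, delayed EVS plus noise
b, a = signal.butter(2, 10 / (fs / 2))
cop = signal.lfilter(b, a, np.roll(evs, 100)) + 0.5 * rng.standard_normal(t.size)

f, coh = signal.coherence(evs, cop, fs=fs, nperseg=4096)
f, pxy = signal.csd(evs, cop, fs=fs, nperseg=4096)
f, pxx = signal.welch(evs, fs=fs, nperseg=4096)
gain = np.abs(pxy) / pxx                   # |cross-spectrum| / input PSD

band = (f >= 0.5) & (f <= 25)
print(f"mean coherence 0.5-25 Hz: {coh[band].mean():.2f}")
print(f"mean gain      0.5-25 Hz: {gain[band].mean():.2f}")
```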