58 research outputs found

    Systematic biases in human heading estimation.

    Heading estimation is vital to everyday navigation and locomotion. Despite extensive behavioral and physiological research on both visual and vestibular heading estimation over more than two decades, the accuracy of heading estimation has not yet been systematically evaluated. Therefore, human visual and vestibular heading estimation was assessed in the horizontal plane using a motion platform and stereo visual display. Heading angle was overestimated during forward movements and underestimated during backward movements in response to both visual and vestibular stimuli, indicating an overall multimodal bias toward lateral directions. Lateral biases are consistent with the overrepresentation of lateral preferred directions observed in neural populations that carry visual and vestibular heading information, including MSTd and otolith afferent populations. Due to this overrepresentation, population vector decoding yields patterns of bias remarkably similar to those observed behaviorally. Lateral biases are inconsistent with standard Bayesian accounts, which predict that estimates should be biased toward the most common, straight-forward heading direction. Nevertheless, lateral biases may be functionally relevant. They effectively constitute a perceptual scale expansion around straight ahead, which could allow for more precise estimation and provide a high-gain feedback signal to facilitate maintenance of straight-forward heading during everyday navigation and locomotion.
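The decoding argument above can be sketched numerically: if preferred directions are oversampled near ±90°, a population vector estimate of heading is pulled toward lateral directions. A minimal sketch, assuming rectified-cosine tuning and a Gaussian clustering of preferred directions around ±90°; the population size, noise-free responses, and sampling density are illustrative assumptions, not the study's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: preferred directions oversampled near lateral
# (+/-90 deg) headings, as reported for MSTd and otolith afferents.
n = 2000
lateral = rng.normal(np.pi / 2, 0.6, n // 2)
prefs = np.concatenate([lateral, -lateral])  # symmetric about straight ahead

def population_vector_estimate(heading, prefs):
    """Decode heading from rectified-cosine-tuned responses via the
    population vector (rate-weighted sum of preferred-direction vectors)."""
    rates = np.clip(np.cos(heading - prefs), 0, None)
    x = np.sum(rates * np.cos(prefs))
    y = np.sum(rates * np.sin(prefs))
    return np.arctan2(y, x)

# Forward headings are overestimated, i.e. pushed toward +/-90 deg.
for true_deg in [10, 30, 60]:
    est = np.degrees(population_vector_estimate(np.radians(true_deg), prefs))
    print(f"true {true_deg:3d} deg -> decoded {est:5.1f} deg")
```

Because the lateral clusters contribute disproportionately to the vector sum, decoded headings land between the true heading and ±90°, reproducing the lateral bias qualitatively.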

    Vestibular heading discrimination and sensitivity to linear acceleration in head and world coordinates

    Effective navigation and locomotion depend critically on an observer's ability to judge direction of linear self-motion, i.e., heading. The vestibular cue to heading is the direction of inertial acceleration that accompanies transient linear movements. This cue is transduced by the otolith organs. The otoliths also respond to gravitational acceleration, so vestibular heading discrimination could depend on (1) the direction of movement in head coordinates (i.e., relative to the otoliths), (2) the direction of movement in world coordinates (i.e., relative to gravity), or (3) body orientation (i.e., the direction of gravity relative to the otoliths). To quantify these effects, we measured vestibular and visual discrimination of heading along azimuth and elevation dimensions with observers oriented both upright and side-down relative to gravity. We compared vestibular heading thresholds with corresponding measurements of sensitivity to linear motion along lateral and vertical axes of the head (coarse direction discrimination and amplitude discrimination). Neither heading nor coarse direction thresholds depended on movement direction in world coordinates, demonstrating that the nervous system compensates for gravity. Instead, they depended similarly on movement direction in head coordinates (better performance in the horizontal plane) and on body orientation (better performance in the upright orientation). Heading thresholds were correlated with, but significantly larger than, predictions based on sensitivity in the coarse discrimination task. Simulations of a neuron/anti-neuron pair with idealized cosine-tuning properties show that heading thresholds larger than those predicted from coarse direction discrimination could be accounted for by an amplitude-response nonlinearity in the neural representation of inertial motion.
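The neuron/anti-neuron idea at the end of the abstract can be illustrated with a toy simulation. Everything below is a hypothetical sketch, not the study's model: the noise level `sigma` and the nonlinearity `exponent` are made-up parameters. The point it demonstrates is only directional: an expansive amplitude-response nonlinearity shrinks the pair's differential signal near straight ahead and so raises the predicted heading threshold.

```python
import numpy as np

def discrimination_threshold(sigma=0.2, exponent=1.0):
    """Smallest heading (deg) at which an idealized neuron/anti-neuron
    pair (rectified-cosine tuning at +/-90 deg) reaches d' = 1 for
    left-vs-right discrimination about straight ahead.

    sigma    -- assumed response noise (hypothetical value)
    exponent -- hypothetical amplitude-response nonlinearity;
                1.0 recovers pure cosine tuning.
    """
    headings = np.linspace(0.01, 89.9, 5000)
    h = np.radians(headings)
    # For rightward headings only the +90 deg neuron is active after
    # rectification, so the pair's differential response is sin(h)**exponent.
    dprime = np.sin(h) ** exponent / (np.sqrt(2) * sigma)
    return headings[np.argmax(dprime >= 1.0)]

print(discrimination_threshold(exponent=1.0))  # linear pair
print(discrimination_threshold(exponent=2.0))  # nonlinearity -> higher threshold
```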

    Optic flow detection is not influenced by visual-vestibular congruency

    Optic flow patterns generated by self-motion relative to the stationary environment result in congruent visual-vestibular self-motion signals. Incongruent signals can arise due to object motion, vestibular dysfunction, or artificial stimulation, which are less common. Hence, we are predominantly exposed to congruent rather than incongruent visual-vestibular stimulation. If the brain takes advantage of this probabilistic association, we expect observers to be more sensitive to visual optic flow that is congruent with ongoing vestibular stimulation. We tested this expectation by measuring the motion coherence threshold: the percentage of signal (versus noise) dots necessary to detect an optic flow pattern. Observers seated on a hexapod motion platform in front of a screen experienced two sequential intervals. One interval contained optic flow with a given motion coherence and the other contained noise dots only. Observers had to indicate which interval contained the optic flow pattern. The motion coherence threshold was measured for detection of laminar and radial optic flow during leftward/rightward and fore/aft linear self-motion, respectively. We observed no dependence of coherence thresholds on vestibular congruency for either radial or laminar optic flow. Prior studies using similar methods reported both decreases and increases in coherence thresholds in response to congruent vestibular stimulation; our results do not confirm either of these prior reports. While methodological differences may explain the diversity of results, another possibility is that motion coherence thresholds are mediated by neural populations that are either not modulated by vestibular stimulation or that are modulated in a manner that does not depend on congruency.
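For concreteness, motion coherence can be operationalized as the fraction of dots that follow the flow field on each frame while the remainder are replotted at random. A minimal sketch; the dot count, step size, and frame-update rule are illustrative assumptions (actual random-dot-kinematogram algorithms vary):

```python
import numpy as np

def coherence_frame_step(positions, coherence, flow_fn, rng):
    """Advance a dot field one frame: a `coherence` fraction of dots
    follows the flow field, the rest are redrawn at random (noise dots)."""
    n = len(positions)
    signal = rng.random(n) < coherence
    new = positions.copy()
    new[signal] += flow_fn(positions[signal])
    new[~signal] = rng.uniform(-1, 1, (np.count_nonzero(~signal), 2))
    return new

rng = np.random.default_rng(2)
dots = rng.uniform(-1, 1, (500, 2))
radial = lambda p: 0.02 * p  # expansion: dots step away from the center
dots = coherence_frame_step(dots, 0.3, radial, rng)  # 30% coherence
```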

    Quantification of Head Movement Predictability and Implications for Suppression of Vestibular Input during Locomotion

    Achieved motor movement can be estimated using both sensory and motor signals. The value of motor signals for estimating movement should depend critically on the stereotypy or predictability of the resulting actions. As predictability increases, motor signals become more reliable indicators of achieved movement, so weight attributed to sensory signals should decrease accordingly. Here we describe a method to quantify this predictability for head movement during human locomotion by measuring head motion with an inertial measurement unit (IMU), and calculating the variance explained by the mean movement over one stride, i.e., a metric similar to the coefficient of determination. Predictability exhibits differences across activities, being most predictable during running, and changes over the course of a stride, being least predictable around the time of heel-strike and toe-off. In addition to quantifying predictability, we relate this metric to sensory-motor weighting via a statistically optimal model based on two key assumptions: (1) average head movement provides a conservative estimate of the efference copy prediction, and (2) noise on sensory signals scales with signal magnitude. The model suggests that differences in predictability should lead to changes in the weight attributed to vestibular sensory signals for estimating head movement. In agreement with the model, prior research reports that vestibular perturbations have greatest impact at the time points and during activities where high vestibular weight is predicted. Thus, we propose a unified explanation for time- and activity-dependent modulation of vestibular effects that was lacking previously. Furthermore, the proposed predictability metric constitutes a convenient general method for quantifying any kind of kinematic variability. The probabilistic model is also general; it applies to any situation in which achieved movement is estimated from both motor signals and zero-mean sensory signals with signal-dependent noise.
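The predictability metric described above (variance explained by the mean movement over one stride, analogous to a coefficient of determination) can be sketched as follows. The synthetic stride data, noise levels, and function name are illustrative, not taken from the paper:

```python
import numpy as np

def stride_predictability(strides):
    """Variance in head movement explained by the mean stride profile.

    `strides` is an (n_strides, n_samples) array of one head-motion
    channel (e.g., one IMU axis), resampled so every stride has equal
    length. Returns a coefficient-of-determination-like score (<= 1).
    """
    strides = np.asarray(strides, dtype=float)
    mean_profile = strides.mean(axis=0)   # stereotyped movement template
    residual = strides - mean_profile     # stride-to-stride deviation
    ss_res = np.sum(residual ** 2)
    ss_tot = np.sum((strides - strides.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic example: a shared sinusoidal stride profile plus noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 100)
template = np.sin(t)
low_noise = template + 0.1 * rng.standard_normal((50, 100))
high_noise = template + 1.0 * rng.standard_normal((50, 100))
print(stride_predictability(low_noise))   # close to 1: highly predictable
print(stride_predictability(high_noise))  # much lower
```

Under the paper's logic, the higher this score, the more a motor-based prediction can be trusted, and the less weight an optimal estimator should give vestibular input.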

    The effect of supine body position on human heading perception

    The use of virtual environments in functional imaging experiments is a promising method to investigate and understand the neural basis of human navigation and self-motion perception. However, the supine position in the fMRI scanner is unnatural for everyday motion. In particular, the head-horizontal self-motion plane is parallel rather than perpendicular to gravity. Earlier studies have shown that perception of heading from visual self-motion stimuli, such as optic flow, can be modified due to visuo-vestibular interactions. With this study, we aimed to identify the effects of the supine body position on visual heading estimation, which is a basic component of human navigation. Visual and vestibular heading judgments were measured separately in 11 healthy subjects in upright and supine body positions. We measured two planes of self-motion, the transverse and the coronal plane, and found that, although vestibular heading perception was strongly modified in a supine position, visual performance, in particular for the preferred head-horizontal (i.e., transverse) plane, did not change. This provides behavioral evidence in humans that direction estimation from self-motion-consistent optic flow is not modified by supine body orientation, demonstrating that visual heading estimation is one component of human navigation that is not influenced by the supine body position required for functional brain imaging experiments.

    Insufficient compensation for self-motion during perception of object speed: The vestibular Aubert-Fleischl phenomenon

    To estimate object speed with respect to the self, retinal signals must be summed with extraretinal signals that encode the speed of eye and head movement. Prior work has shown that differences in perceptual estimates of object speed based on retinal and oculomotor signals lead to biased percepts such as the Aubert-Fleischl phenomenon (AF), in which moving targets appear slower when pursued. During whole-body movement, additional extraretinal signals, such as those from the vestibular system, may be used to transform object speed estimates from a head-centered to a world-centered reference frame. Here we demonstrate that whole-body pursuit in the form of passive yaw rotation, which stimulates the semicircular canals of the vestibular system, leads to a slowing of perceived object speed similar to the classic oculomotor AF. We find that the magnitude of the vestibular and oculomotor AF is comparable across a range of speeds, despite the different types of input signal involved. This covariation might hint at a common modality-independent mechanism underlying the AF in both cases.

    Vestibulo-Ocular Responses and Dynamic Visual Acuity During Horizontal Rotation and Translation

    Dynamic visual acuity (DVA) provides an overall functional measure of visual stabilization performance that depends on the vestibulo-ocular reflex (VOR), but also on other processes, including catch-up saccades and likely visual motion processing. Capturing the efficiency of gaze stabilization against head movement as a whole, it is potentially valuable in the clinical context, where assessment of overall patient performance provides an important indication of factors impacting patient participation and quality of life. DVA during head rotation (rDVA) has been assessed previously, but to our knowledge, DVA during horizontal translation (tDVA) has not been measured. tDVA can provide a valuable measure of how otolith, rather than canal, function impacts visual acuity. In addition, comparison of DVA during rotation and translation can shed light on whether common factors are limiting DVA performance in both cases. We therefore measured and compared DVA during both passive head rotations (head impulse test) and translations in the same set of healthy subjects (n = 7). In addition to DVA, we computed average VOR gain and retinal slip within and across subjects. We observed that during translation, VOR gain was reduced (VOR during rotation, mean ± SD: position gain = 1.05 ± 0.04, velocity gain = 0.97 ± 0.07; VOR during translation, mean ± SD: position gain = 0.21 ± 0.08, velocity gain = 0.51 ± 0.16), retinal slip was increased, and tDVA was worse than during rotation (average rDVA = 0.32 ± 0.15 logMAR; average tDVA = 0.56 ± 0.09 logMAR, p = 0.02). This suggests that reduced VOR gain leads to worse tDVA, as expected. We conclude with speculation about non-oculomotor factors that could vary across individuals and affect performance similarly during both rotation and translation.
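As a rough illustration of the velocity-gain measure reported above, VOR velocity gain can be computed as the regression slope of (sign-corrected) eye velocity on head velocity over the impulse. The velocity profile, gain value, and function name below are synthetic placeholders, not the study's data or analysis code:

```python
import numpy as np

def vor_velocity_gain(head_vel, eye_vel):
    """VOR velocity gain: least-squares slope (through the origin) of
    eye velocity regressed on head velocity. With compensatory eye
    velocity stored as positive, a gain of 1.0 means eye rotation
    perfectly counters head rotation.
    """
    head_vel = np.asarray(head_vel, dtype=float)
    eye_vel = np.asarray(eye_vel, dtype=float)
    return np.dot(eye_vel, head_vel) / np.dot(head_vel, head_vel)

# Synthetic head impulse: ~150 deg/s peak velocity over 150 ms;
# the eye counter-rotates at 97% of head velocity (rotational VOR).
t = np.linspace(0, 0.15, 150)
head = 150 * np.sin(np.pi * t / 0.15)
eye = 0.97 * head
print(vor_velocity_gain(head, eye))  # ~0.97
```

On real recordings, gains well below 1 (as in the translation condition above) leave residual image motion, i.e. retinal slip, which is what degrades acuity.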

    Vestibular Facilitation of Optic Flow Parsing

    Simultaneous object motion and self-motion give rise to complex patterns of retinal image motion. In order to estimate object motion accurately, the brain must parse this complex retinal motion into self-motion and object motion components. Although this computational problem can be solved, in principle, through purely visual mechanisms, extra-retinal information that arises from the vestibular system during self-motion may also play an important role. Here we investigate whether combining vestibular and visual self-motion information improves the precision of object motion estimates. Subjects were asked to discriminate the direction of object motion in the presence of simultaneous self-motion, depicted either by visual cues alone (i.e., optic flow) or by combined visual/vestibular stimuli. We report a small but significant improvement in object motion discrimination thresholds with the addition of vestibular cues. This improvement was greatest for eccentric heading directions and negligible for forward movement, a finding that could reflect increased relative reliability of vestibular versus visual cues for eccentric heading directions. Overall, these results are consistent with the hypothesis that vestibular inputs can help parse retinal image motion into self-motion and object motion components.
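The parsing computation can be illustrated with the standard pinhole-camera flow equation for pure translation: subtracting the self-motion flow predicted at an image point (given heading velocity and depth) from the measured retinal motion leaves the object-motion component. A sketch under those assumptions (focal length 1, translation only, no eye rotation; the function name and inputs are illustrative, not the study's model):

```python
import numpy as np

def parse_object_motion(retinal_vel, point, heading_vel, depth):
    """Subtract the predicted self-motion (optic flow) component from
    measured retinal motion, leaving the object-motion component.

    For pure translation T = (Tx, Ty, Tz), a point at image position
    (x, y) and depth Z moves on the image plane (focal length 1) with
    velocity ((-Tx + x*Tz) / Z, (-Ty + y*Tz) / Z).
    """
    x, y = point
    tx, ty, tz = heading_vel
    flow = np.array([(-tx + x * tz) / depth, (-ty + y * tz) / depth])
    return np.asarray(retinal_vel, dtype=float) - flow

# Stationary point during forward self-motion: retinal motion equals the
# predicted flow, so the recovered object motion is zero.
print(parse_object_motion((0.1, 0.05), (0.2, 0.1), (0.0, 0.0, 1.0), 2.0))
```

In this framing, a vestibular heading estimate sharpens the predicted flow term, which is one way vestibular input could improve object-motion precision.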