Vestibular discrimination results.
<p><b>A)</b> Example results for an individual subject, shown in the same format used for the individual identification data (Fig. 2A). The plot illustrates the physical stimulus (x-axis; PSE) perceived as equivalent to each investigated reference direction (y-axis). Error bars represent JNDs. <b>B)</b> Mean (+/−SD) bias across subjects in the discrimination procedure (black). Horizontal error bars represent the SD of the PSEs. For comparison, the mean bias in the identification procedure (grey) is replotted from Fig. 3A. <b>C)</b> Mean (+/−SD) variability (i.e., JND) across subjects in the discrimination procedure (black). Horizontal error bars represent the SD of the PSEs. For comparison, the mean variability in the identification procedure (grey) is replotted from Fig. 3D.</p>
Illustrations of experimental procedures.
<p><b>A)</b> Example movement. On each trial, subjects experienced a movement in the horizontal plane, for example 45°, as shown here. <b>B)</b> Response dial for identification task. Subjects indicated their heading direction after each movement by adjusting the orientation of an arrow within a dial on the screen. The setting shown here matches the movement from panel A). <b>C)</b> Investigated heading directions for the identification procedure (grey) and the control procedure (black).</p>
Vestibular discrimination results for one subject at one investigated direction (−90°).
<p><b>A)</b> Trial history for the 1U1D staircase block. Upper and lower panels represent the two interleaved staircases that converged from above and below the investigated direction. The dashed line indicates the mean of the staircase reversals, which approximates the PSE. <b>B)</b> Trial history for the 1U2D and 2U1D staircase block. Upper and lower panels represent the two interleaved staircases that converged from above and below the mean PSE from the 1U1D block. <b>C)</b> Psychometric function fit to all data from the staircase blocks shown in A) and B). Error bars show 95% and 99% confidence intervals of the fit at the 20%, 50%, and 80% correct points.</p>
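As an illustration of how the 1U1D track in panel A) homes in on the PSE, here is a minimal sketch, assuming a hypothetical simulated observer and arbitrary step size and reversal count (none of these values are taken from the study):
<pre><code>
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulated_response(heading_deg, pse_true=-92.0, sigma=6.0):
    # Hypothetical observer: probability of judging the heading as
    # "rightward of the reference" follows a cumulative Gaussian.
    p_right = norm.cdf(heading_deg, loc=pse_true, scale=sigma)
    return p_right >= rng.random()

def staircase_1u1d(start_deg, step_deg=4.0, n_reversals=10):
    # 1-up-1-down staircase: step left after a "rightward" response and
    # right after a "leftward" response, so the track converges on the
    # 50% point of the psychometric function (the PSE).
    heading, prev_resp, reversals = start_deg, None, []
    while len(reversals) != n_reversals:
        resp = simulated_response(heading)
        if prev_resp is not None and resp != prev_resp:
            reversals.append(heading)
        heading += -step_deg if resp else step_deg
        prev_resp = resp
    # PSE approximated as the mean of the reversal values (dashed line in A).
    return np.mean(reversals)

print(staircase_1u1d(start_deg=-60.0))   # track converging from above
print(staircase_1u1d(start_deg=-120.0))  # track converging from below
</code></pre>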
Visual and vestibular bias (top row) and variability (bottom row) across modalities.
<p>Error bars represent SD. <b>A, D)</b> Vestibular identification procedure. <b>B, E)</b> Visual identification procedure. <b>C)</b> Data from A) and B) re-plotted without error bars to facilitate comparison. <b>F)</b> Data from D) and E) re-plotted without error bars to facilitate comparison. Note: asterisks indicate heading angles for which bias was <i>most</i> significant, i.e., p<0.05 before Bonferroni correction; the correction is omitted here for illustrative purposes only.</p>
Bayesian model and predictions.
<p><b>A)</b> Standard deviation of visual (dotted) and vestibular (solid) likelihoods as a function of heading angle used in the model (adapted from Gu et al. 2010). <b>B)</b> Best-fitting prior distributions for visual and vestibular identification data. Each curve is a sum of two Gaussians centered at +90° and −90°, with equal SD. σ<sub>prior</sub> equals 17° and 50° for visual and vestibular priors, respectively. <b>C)</b> Predicted (black) and observed (grey) vestibular biases. <b>D)</b> Predicted and observed visual biases.</p>
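The caption specifies the prior (a sum of two Gaussians at +/−90° with σ<sub>prior</sub> of 17° or 50°) but not the estimator; below is a minimal sketch of the prediction step, assuming a Gaussian likelihood with a hypothetical width and the posterior mean as the percept (the study's exact likelihood widths and read-out may differ):
<pre><code>
import numpy as np
from scipy.stats import norm

def bimodal_prior(theta_deg, sigma_prior):
    # Prior over heading: sum of two Gaussians centered at +90 and -90 deg
    # with equal SD, as described for panel B.
    return norm.pdf(theta_deg, 90.0, sigma_prior) + norm.pdf(theta_deg, -90.0, sigma_prior)

def predicted_bias(true_heading_deg, sigma_likelihood, sigma_prior):
    # Bayesian observer: posterior proportional to likelihood * prior on a
    # grid of candidate headings; the percept is taken here as the posterior
    # mean, and the bias is percept minus true heading.
    grid = np.linspace(-180.0, 180.0, 3601)
    likelihood = norm.pdf(grid, true_heading_deg, sigma_likelihood)
    posterior = likelihood * bimodal_prior(grid, sigma_prior)
    posterior /= posterior.sum()
    percept = np.sum(grid * posterior)
    return percept - true_heading_deg

# Vestibular condition with the caption's sigma_prior of 50 deg and a
# hypothetical likelihood SD of 20 deg at a 45 deg heading.
print(predicted_bias(45.0, sigma_likelihood=20.0, sigma_prior=50.0))
</code></pre>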
Distribution of preferred directions (top row) and resulting population vector decoding predictions (bottom row).
<p><b>A)</b> Preferred directions of otolith afferents. <b>B)</b> Preferred directions of MSTd neurons for vestibular heading stimuli. <b>C)</b> Preferred directions of MSTd neurons for visual heading stimuli. <b>D)</b> Afferent-predicted (black) and observed (grey) vestibular bias. <b>E)</b> MSTd-predicted and observed vestibular bias. <b>F)</b> MSTd-predicted and observed visual bias. Note: panels B), C), and the predictions in E) and F) are reproduced from Gu et al. 2010.</p>
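A minimal sketch of population vector decoding of the kind used for panels D)-F), assuming a hypothetical clustered distribution of preferred directions and rectified cosine tuning (the actual afferent and MSTd tuning data are those of the cited studies, not reproduced here):
<pre><code>
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical distribution of preferred directions (radians), clustered
# around +/-90 deg; the real afferent and MSTd distributions are those
# shown in panels A-C.
preferred = np.concatenate([
    rng.normal(np.pi / 2.0, 0.6, 200),
    rng.normal(-np.pi / 2.0, 0.6, 200),
])

def population_vector_decode(true_heading_deg):
    # Each unit responds according to rectified cosine tuning (a common
    # simplification); the decoded heading is the angle of the
    # response-weighted vector sum of the preferred directions.
    theta = np.deg2rad(true_heading_deg)
    responses = np.maximum(np.cos(theta - preferred), 0.0)
    x = np.sum(responses * np.cos(preferred))
    y = np.sum(responses * np.sin(preferred))
    return np.rad2deg(np.arctan2(y, x))

for heading in (0.0, 30.0, 60.0, 90.0):
    decoded = population_vector_decode(heading)
    print(f"true {heading:5.1f}  decoded {decoded:6.1f}  bias {decoded - heading:+6.1f} (deg)")
</code></pre>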
Specificity of learning: detection sensitivity.
<p>d-primes measured before and after one week of training with the target presented with collinear flankers (first and second columns); d-primes for the untrained conditions with the target presented with orthogonal flankers (third and fourth columns), with stimuli presented at different global orientations (fifth and sixth columns), and at different retinal positions (seventh and eighth columns) with respect to the trained condition. The data refer to the Gabor with a spatial frequency of 4 cpd and a target-to-flanker distance of 3λ. Examples of the stimuli used are illustrated at the top of the figure.</p>
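For reference, the sensitivity measure plotted here can be computed from hit and false-alarm rates as the difference of their z-transforms; a minimal sketch with hypothetical trial counts (the study's exact computation is not given in the caption):
<pre><code>
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Detection sensitivity: d' = z(hit rate) - z(false-alarm rate).
    # The log-linear correction below keeps rates away from 0 and 1; the
    # study's exact correction, if any, is not specified in the caption.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example with hypothetical trial counts:
print(d_prime(hits=40, misses=10, false_alarms=8, correct_rejections=42))
</code></pre>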
Contrast thresholds for target flanked by collinear and orthogonal flankers, before (pre) and after (post) training.
<p>Mean detection thresholds corresponding to probabilities of 0.6 (top row) and 0.8 (bottom row), as a function of the target-to-flanker distances (λ), for the target flanked by collinear flankers (left column) or orthogonal flankers (right column). Data refer to Gabors with a spatial frequency of 4 cpd. Filled circles refer to pre-training measurements, and open circles refer to post-training measurements. Error bars represent ±1 s.e.m.</p>
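A minimal sketch of how thresholds at fixed detection probabilities such as 0.6 and 0.8 can be read off a fitted psychometric function, using a cumulative Gaussian in log contrast and hypothetical data (the study's fitting procedure is not specified in the caption):
<pre><code>
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(log_contrast, mu, sigma):
    # Cumulative-Gaussian psychometric function for detection probability
    # as a function of log contrast (a common choice; the study's exact
    # fitting function is not given in the caption).
    return norm.cdf(log_contrast, mu, sigma)

# Hypothetical data: log10 contrast levels and proportion of targets detected.
log_c = np.log10(np.array([0.01, 0.02, 0.04, 0.08, 0.16]))
p_detect = np.array([0.10, 0.30, 0.60, 0.85, 0.97])

(mu, sigma), _ = curve_fit(psychometric, log_c, p_detect, p0=[-1.5, 0.3])

# Contrast thresholds at the 0.6 and 0.8 detection-probability levels.
for p in (0.6, 0.8):
    print(f"threshold at p={p}: {10 ** norm.ppf(p, mu, sigma):.3f}")
</code></pre>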
Specificity of learning: contrast thresholds.
<p>Contrast thresholds measured before and after one week of training, with the target presented with collinear flankers (first and second columns); contrast thresholds for the untrained conditions with the target presented with orthogonal flankers (third and fourth columns), with stimuli presented at different global orientations (fifth and sixth columns), and at different retinal positions (seventh and eighth columns) with respect to the trained condition. The data refer to a Gabor with a spatial frequency of 4 cpd and a target-to-flanker distance of 3λ. Examples of the stimuli used are illustrated at the top of the figure.</p>
Detection sensitivity for target flanked by collinear flankers, before (pre) and after (post) training.
<p>Mean d-primes as a function of the target-to-flanker distances (λ) for the target flanked by collinear flankers. Data refer to Gabors with a spatial frequency of 4 cpd. Filled circles refer to pre-training measurements, and open circles refer to post-training measurements. Error bars represent ±1 s.e.m.</p>