5 research outputs found

    Vertical Binocular Disparity is Encoded Implicitly within a Model Neuronal Population Tuned to Horizontal Disparity and Orientation

    Primary visual cortex is often viewed as a “cyclopean retina”, performing the initial encoding of binocular disparities between left and right images. Because the eyes are set apart horizontally in the head, binocular disparities are predominantly horizontal. Yet, especially in the visual periphery, a range of non-zero vertical disparities do occur and can influence perception. It has therefore been assumed that primary visual cortex must contain neurons tuned to a range of vertical disparities. Here, I show that this is not necessarily the case. Many disparity-selective neurons are most sensitive to changes in disparity orthogonal to their preferred orientation. That is, the disparity tuning surfaces, mapping their response to different two-dimensional (2D) disparities, are elongated along the cell's preferred orientation. Because of this, even if a neuron's optimal 2D disparity has zero vertical component, the neuron will still respond best to a non-zero vertical disparity when probed with a sub-optimal horizontal disparity. This property can be used to decode 2D disparity, even allowing for realistic levels of neuronal noise. Even if all V1 neurons at a particular retinotopic location are tuned to the expected vertical disparity there (for example, zero at the fovea), the brain could still decode the magnitude and sign of departures from that expected value. This provides an intriguing counter-example to the common wisdom that, in order for a neuronal population to encode a quantity, its members must be tuned to a range of values of that quantity. It demonstrates that populations of disparity-selective neurons encode much richer information than previously appreciated. It suggests a possible strategy for the brain to extract rarely-occurring stimulus values, while concentrating neuronal resources on the most commonly-occurring situations.
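The elongated-tuning argument in this abstract can be illustrated with a toy numerical sketch (not the paper's actual model; all parameter values below are illustrative assumptions): a tuning surface modeled as a 2D Gaussian elongated along a preferred orientation of 45°, with optimal 2D disparity at (0, 0). Probing it with a sub-optimal horizontal disparity makes the best vertical disparity non-zero, even though the optimal vertical disparity is zero.

```python
import numpy as np

# Toy disparity tuning surface: a 2D Gaussian elongated along the cell's
# preferred orientation (45 deg here), peaked at 2D disparity (0, 0),
# i.e. zero vertical component. Parameters are illustrative assumptions.
theta = np.deg2rad(45.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
cov = R @ np.diag([1.0, 0.04]) @ R.T      # elongated along the orientation
cov_inv = np.linalg.inv(cov)

def response(dx, dy):
    """Response to a 2D disparity (dx horizontal, dy vertical)."""
    d = np.array([dx, dy])
    return np.exp(-0.5 * d @ cov_inv @ d)

# Probe with a sub-optimal horizontal disparity and sweep vertical disparity.
dx_probe = 0.3
dys = np.linspace(-1.0, 1.0, 2001)
best_dy = dys[np.argmax([response(dx_probe, dy) for dy in dys])]
print(round(best_dy, 3))   # prints 0.277: non-zero, though the optimal dy is 0
```

Because the Gaussian ridge runs along the 45° orientation, the best vertical disparity at fixed horizontal disparity dx is (Σyx/Σxx)·dx, which is non-zero whenever the probe is off-peak; this is the property the abstract says can be exploited to decode 2D disparity.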

    Spatial Stereoresolution for Depth Corrugations May Be Set in Primary Visual Cortex

    Stereo “3D” depth perception requires the visual system to extract binocular disparities between the two eyes' images. Several current models of this process, based on the known physiology of primary visual cortex (V1), do this by computing a piecewise-frontoparallel local cross-correlation between the left and right eye's images. The size of the “window” within which detectors examine the local cross-correlation corresponds to the receptive field size of V1 neurons. This basic model has successfully captured many aspects of human depth perception. In particular, it accounts for the low human stereoresolution for sinusoidal depth corrugations, suggesting that the limit on stereoresolution may be set in primary visual cortex. An important feature of the model, reflecting a key property of V1 neurons, is that the initial disparity encoding is performed by detectors tuned to locally uniform patches of disparity. Such detectors respond better to square-wave depth corrugations, since these are locally flat, than to sinusoidal corrugations which are slanted almost everywhere. Consequently, for any given window size, current models predict better performance for square-wave disparity corrugations than for sine-wave corrugations at high amplitudes. We have recently shown that this prediction is not borne out: humans perform no better with square-wave than with sine-wave corrugations, even at high amplitudes. The failure of this prediction raised the question of whether stereoresolution may actually be set at later stages of cortical processing, perhaps involving neurons tuned to disparity slant or curvature. Here we extend the local cross-correlation model to include existing physiological and psychophysical evidence indicating that larger disparities are detected by neurons with larger receptive fields (a size/disparity correlation). We show that this simple modification succeeds in reconciling the model with human results, confirming that stereoresolution for disparity gratings may indeed be limited by the size of receptive fields in primary visual cortex.
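The windowed local cross-correlation idea at the heart of this model can be sketched in a few lines (a minimal 1D illustration, not the authors' implementation; the window size, disparity range, and synthetic stimulus are assumptions): correlate a window of the left image row with the right row shifted by each candidate disparity, and report the best-correlating shift. The window plays the role of the V1 receptive field.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_disparity(left, right, center, window, max_disp):
    """Piecewise-frontoparallel local cross-correlation (1D sketch):
    pick the shift d that maximizes the normalized correlation between
    a window of the left row and the correspondingly shifted right row."""
    w = np.arange(center - window // 2, center + window // 2 + 1)
    best, best_d = -np.inf, 0
    for d in range(-max_disp, max_disp + 1):
        c = np.corrcoef(left[w], right[w + d])[0, 1]  # normalized correlation
        if c > best:
            best, best_d = c, d
    return best_d

# Synthetic stereo pair: the right row is the left shifted right by 3 samples.
left = rng.standard_normal(200)
right = np.roll(left, 3)
print(estimate_disparity(left, right, center=100, window=21, max_disp=8))  # prints 3
```

Because every sample inside the window is assumed to share one disparity, the detector is tuned to locally uniform disparity patches, which is exactly why the unmodified model favors locally flat square-wave corrugations over everywhere-slanted sine waves.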

    The spatial resolutions of stereo and motion perception and their neural basis

    PhD Thesis
    Depth perception requires finding matching features between the two eyes’ images to estimate binocular disparity. This process has been successfully modelled using local cross-correlation. The model is based on the known physiology of primary visual cortex (V1) and has explained many aspects of stereo vision, including why spatial stereoresolution is low compared to the resolution for luminance patterns, suggesting that the limit on spatial stereoresolution is set in V1. We predicted that this model would perform better at detecting square-wave disparity gratings, consisting of regions of locally constant disparity, than sine-waves, which are slanted almost everywhere. We confirmed this through computational modelling and performed psychophysical experiments to test whether human performance followed the predictions of the model. We found that humans perform equally well with both waveforms. This contradicted the model’s predictions, raising the question of whether spatial stereoresolution may not be limited in V1 after all, or whether changing the model to include more of the known physiology may make it consistent with human performance. We incorporated the known size-disparity correlation into the model, giving disparity detectors with larger preferred disparities larger correlation windows, and found that this modified model explained the new human results. This provides further evidence that spatial stereoresolution is limited in V1. Based on previous evidence that MT neurons respond well to transparent motion in different depth planes, we predicted that the spatial resolution of joint motion/disparity perception would be limited by the significantly larger MT receptive field sizes and therefore be much lower than the resolution for pure disparity. We tested this using a new joint motion/disparity grating, designed to require the detection of conjunctions between motion and disparity. We found little difference between the resolutions for disparity and joint gratings, contradicting our predictions and suggesting that an area other than MT was used.
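The size-disparity correlation described in this abstract can be sketched by letting each candidate disparity be evaluated with its own correlation window (a toy 1D illustration under assumed parameters; the linear size rule, its constants, and the synthetic stimulus are not taken from the thesis): detectors tuned to larger disparities get proportionally larger windows.

```python
import numpy as np

rng = np.random.default_rng(1)

def window_for_disparity(d, base=9, slope=2):
    """Illustrative linear size-disparity rule (constants are assumptions):
    larger preferred disparities -> larger correlation windows."""
    return base + slope * abs(d)

def estimate_disparity(left, right, center, max_disp=8):
    """Local cross-correlation where each candidate disparity d is scored
    with a window whose size grows with |d|."""
    best, best_d = -np.inf, 0
    for d in range(-max_disp, max_disp + 1):
        half = window_for_disparity(d) // 2
        w = np.arange(center - half, center + half + 1)
        c = np.corrcoef(left[w], right[w + d])[0, 1]
        if c > best:
            best, best_d = c, d
    return best_d

# Synthetic stereo pair: the right row is the left shifted right by 4 samples.
left = rng.standard_normal(300)
right = np.roll(left, 4)
print(estimate_disparity(left, right, center=150))  # prints 4
```

Coupling window size to preferred disparity means small disparities are measured with small windows, which is the property that lets the modified model match human stereoresolution for sine-wave as well as square-wave corrugations.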
