    Characterizing receptive field selectivity in area V2

    The computations performed by neurons in area V1 are reasonably well understood, but computation in subsequent areas such as V2 has been more difficult to characterize. When stimulated with visual stimuli traditionally used to investigate V1, such as sinusoidal gratings, V2 neurons exhibit selectivity similar to that of V1 neurons, but with larger receptive fields and weaker responses. However, we find that V2 responses to synthetic stimuli designed to produce naturalistic patterns of joint activity in a model V1 population are more vigorous than responses to control stimuli that lack this naturalistic structure (Freeman et al., 2013). Armed with this signature of V2 computation, we have been investigating how it might arise from canonical computational elements commonly used to explain V1 responses. The invariance of V1 complex cell responses to spatial phase has previously been captured by summing over multiple “subunits” (rectified responses of simple-cell-like filters with the same orientation and spatial frequency selectivity, but differing in their receptive field locations). We modeled V2 responses using a similar architecture: V2 subunits were formed from the rectified responses of filters computing the derivatives of the V1 response map over frequencies, orientations, and spatial positions. A “V2 complex cell” sums the output of such subunits across frequency, orientation, and position. This model can qualitatively account for much of the behavior of our sample of recorded V2 neurons, including their V1-like spectral tuning in response to sinusoidal gratings as well as the pattern of increased sensitivity to naturalistic images.
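
As a concrete illustration of the architecture described above, here is a minimal numerical sketch (my own illustration, not the authors' code): a model V1 response map indexed by spatial frequency, orientation, and position is differentiated along each of those dimensions, the derivatives are half-wave rectified to form V2 “subunit” responses, and a model “V2 complex cell” sums those subunit outputs. The array shape and the finite-difference stand-in for the derivative filters are illustrative assumptions.

```python
import numpy as np

def v2_complex_cell_response(v1_map):
    """v1_map: model V1 population responses indexed as
    (spatial frequency, orientation, x position, y position)."""
    total = 0.0
    for axis in range(v1_map.ndim):
        # Derivative of the V1 response map along one dimension
        # (a finite difference stands in for the derivative filter).
        deriv = np.diff(v1_map, axis=axis)
        # Half-wave rectify the derivative responses to form V2 "subunits".
        subunits = np.maximum(deriv, 0.0)
        # Sum subunit outputs across frequency, orientation, and position.
        total += subunits.sum()
    return total

# Example with a random stand-in for a V1 response map
# (8 frequencies x 16 orientations x 32 x 32 positions).
rng = np.random.default_rng(0)
v1_map = rng.random((8, 16, 32, 32))
print(v2_complex_cell_response(v1_map))
```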

    Early stages in spatial vision

    Despite the ease with which we perceive, it is not clear how the distribution of light across the visual field is analyzed by eye and brain to give rise to our meaningful percepts. One approach to studying visual perception is to focus on the very first stages of information processing. Without precise knowledge of the computational processes that take place in early vision, it is difficult, if not impossible, to understand the more complex later stages in detail. Much evidence has accumulated suggesting that the early visual system comprises spatial frequency-selective channels. It is generally assumed that, under contrast detection conditions, these channels behave linearly. This assumption has been challenged by one study that reported contrast detection facilitation in the presence of weak levels of luminance noise (Blackwell, 1998). Improved information transmission in the presence of certain levels of externally added noise, i.e. stochastic resonance, is the signature of nonlinear information processing systems. Contrast detection facilitation in the presence of weak levels of 2-D, white luminance noise is explored in Part I. Comparison of contrast thresholds measured in two different tasks, namely signal detection in Experiment 1a and orientation discrimination in Experiment 1b, but in otherwise identical circumstances, demonstrates that the origin of this facilitation effect is not to be found in a higher-level task strategy. Minimization of spatial and temporal uncertainty about signal occurrence and the provision of feedback in Experiment 2 suggest that the visibility of low-contrast signals is truly enhanced in the presence of weak noise levels. It is demonstrated that these data are consistent with a model of our visual system in which a linear filter is followed by a nonlinear post-filter stage. The implication of this finding for critical band masking, a technique to infer filter shape from detection-in-noise data, is explored in Part II. In Experiment 3, by making use of low-pass filtered noise, i.e. noise from which all high-frequency components have been removed, it is shown that the noise need not be white for detection facilitation to occur; activation of the signal-processing channel by the noise is sufficient. Experiment 4 demonstrates empirically that the power spectral density of the noise does matter in the critical band masking paradigm: different noise levels lead to different results. A novel way to infer filter shape from these data is suggested. Sinusoidal contrast discrimination in the presence of luminance noise is explored in Part III. Experiment 5 demonstrates that the well-known dipper-shaped threshold-vs.-contrast function (TvC function) changes somewhat in the presence of weak and strong noise levels. To consider the effects of noise on contrast discrimination irrespective of performance level, a computational contrast perception model was fitted to the data. Analysis of model deviances reveals that the addition of weak noise diminishes the facilitating effect of a low-contrast masker, while the addition of strong noise causes a small rightward shift of the TvC function on double-logarithmic coordinates. Whatever mechanism underlies contrast discrimination, future contrast perception models should aim to explain these findings. Recent work has questioned the validity of the current set of models of human spatial vision (Henning & Wichmann, 2007).
The framework of a possible new model that might overcome some of these modelling problems is presented in Part IV. The central idea is to combine V1-neuron-like spatial frequency filters with a population output decision rule. Simulations show that this yields a model that reconciles constant tuning of the building blocks, as has often been postulated for cortical cells, with the dynamic tuning properties of psychophysical channels.
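
The linear-filter-plus-nonlinear-post-filter account summarized above (Part I) can be illustrated with a minimal one-dimensional simulation. This is my own sketch under stated assumptions, not the thesis code: filter-stage samples are passed through a hard threshold (standing in for the post-filter nonlinearity), and a weak, subthreshold signal is detected best at an intermediate level of added noise, i.e. stochastic resonance. The signal strength, threshold, sample count, and noise levels are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def post_filter_response(signal_present, noise_sd, signal=0.5, threshold=1.0,
                         n_samples=50):
    """Filter-stage samples pushed through a hard threshold; the count of
    threshold crossings stands in for the post-filter decision variable."""
    x = (signal if signal_present else 0.0) + rng.normal(0.0, noise_sd, n_samples)
    return np.sum(x > threshold)

def percent_correct(noise_sd, n_trials=2000):
    """Two-interval forced choice: choose the interval with the larger
    post-nonlinearity response (ties broken at random)."""
    correct = 0
    for _ in range(n_trials):
        r_sig = post_filter_response(True, noise_sd)
        r_blank = post_filter_response(False, noise_sd)
        correct += r_sig > r_blank or (r_sig == r_blank and rng.random() < 0.5)
    return 100.0 * correct / n_trials

# Performance rises and then falls again as external noise power grows.
for sd in (0.05, 0.2, 0.4, 0.8, 1.6, 3.2):
    print(f"external noise sd {sd:4.2f}: {percent_correct(sd):5.1f}% correct")
```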

    Some observations on contrast detection in noise

    The standard psychophysical model of our early visual system consists of a linear filter stage, followed by a nonlinearity and an internal noise source. If a rectification mechanism is introduced at the output of the linear filter stage, as has been suggested on some occasions, this model actually predicts that human performance in a classical contrast detection task might benefit from the addition of weak levels of noise. Here, this prediction was tested and confirmed in two contrast detection tasks. In Experiment 1, observers had to discriminate a low-contrast Gabor pattern from a blank. In Experiment 2, observers had to discriminate two low-contrast Gabor patterns identical on all dimensions except orientation (-45 degrees vs. +45 degrees). In both experiments, weak-to-modest levels of 2-D, white noise were added to the stimuli. Detection thresholds varied nonmonotonically with noise power, i.e., some noise levels improved contrast detection performance. Both simple uncertainty reduction and an energy discrimination strategy can be excluded as possible explanations for this effect. We present a quantitative model consistent with the effects and discuss the implications.
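
For concreteness, the sketch below (an illustration under assumed parameters, not the paper's code) constructs the two ±45° Gabor targets with added 2-D white luminance noise and computes the linear filter-stage responses by projecting the noisy image onto each Gabor template. Patch size, contrast, and noise level are illustrative; as noted in the comments, a purely linear decision on these outputs can only deteriorate as noise power grows, so the facilitation described above requires a nonlinearity after this stage.

```python
import numpy as np

def gabor(size=64, wavelength=8.0, sigma=8.0, theta_deg=45.0):
    """A unit-energy Gabor patch at the given orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half].astype(float)
    xr = x * np.cos(np.deg2rad(theta_deg)) + y * np.sin(np.deg2rad(theta_deg))
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)
    return g / np.linalg.norm(g)

rng = np.random.default_rng(2)
t_plus, t_minus = gabor(theta_deg=+45.0), gabor(theta_deg=-45.0)

# A low-contrast +45 degree target embedded in 2-D white luminance noise.
contrast, noise_sd = 0.05, 0.02
stimulus = contrast * t_plus + rng.normal(0.0, noise_sd, t_plus.shape)

# Linear filter stage: template projections (cross-correlation at one location).
r_plus = np.sum(stimulus * t_plus)
r_minus = np.sum(stimulus * t_minus)
print(f"+45 deg filter: {r_plus:+.4f}   -45 deg filter: {r_minus:+.4f}")

# Deciding directly on these linear outputs predicts performance that only
# worsens as noise power rises; the reported facilitation needs a nonlinear
# stage (e.g. rectification) after the filters.
```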

    A neurophysiologically plausible population code model for human contrast discrimination

    The pedestal effect is the improvement in the detectability of a sinusoidal grating in the presence of another grating of the same orientation, spatial frequency and phase – usually called the pedestal. Recent evidence has demonstrated that the pedestal effect is differently modified by spectrally flat and notch-filtered noise: the pedestal effect is reduced in flat noise, but virtually disappears in the presence of notched noise (Henning & Wichmann, 2007). Here we consider a network consisting of units whose contrast response functions resemble those of the cortical cells believed to underlie human pattern vision and demonstrate that, when the outputs of multiple units are combined by simple weighted summation – a heuristic decision rule that resembles optimal information combination and produces a contrast-dependent weighting profile – the network produces contrast-discrimination data consistent with psychophysical observations: the pedestal effect is present without noise, reduced in broadband noise, and almost disappears in notched noise. These findings follow naturally from the normalization model of simple cells in primary visual cortex, followed by response-based pooling, and suggest that in processing even low-contrast sinusoidal gratings, the visual system may combine information across neurons tuned to different spatial frequencies and orientations.
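
The following is a minimal sketch of the kind of model described above (my own illustration with made-up parameters, not the authors' published implementation): a small population of units with Naka-Rushton-style contrast response functions is pooled by weighted summation, with weights proportional to each unit's response to the pedestal, and the increment threshold is read out from the pooled signal-to-noise ratio. With an accelerating-then-compressive response function, this readout yields a dipper-shaped threshold-vs.-contrast curve in the no-noise case.

```python
import numpy as np

# A small population of units differing in semisaturation contrast.
c50 = np.array([0.02, 0.05, 0.1, 0.2, 0.4])
r_max, p, q, sigma_unit = 30.0, 2.4, 2.0, 1.0

def response(c):
    """Naka-Rushton-like contrast response function for every unit."""
    return r_max * c**p / (c**q + c50**q)

def threshold(c_ped, criterion=1.0):
    """Smallest contrast increment for which the change in the pooled
    response reaches `criterion` times the pooled noise SD."""
    r_ped = response(c_ped)
    w = r_ped / r_ped.sum() if r_ped.sum() > 0 else np.full(c50.size, 1.0 / c50.size)
    noise_sd = np.sqrt(np.sum(w**2) * sigma_unit**2)
    for dc in np.logspace(-4, 0, 2000):        # grid search over increments
        if np.sum(w * (response(c_ped + dc) - r_ped)) / noise_sd >= criterion:
            return dc
    return np.nan

# Threshold first dips below the no-pedestal value, then rises again.
for c in (0.0, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.4):
    print(f"pedestal contrast {c:5.3f} -> increment threshold {threshold(c):.4f}")
```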