    A reevaluation of achromatic spatio-temporal vision: nonoriented filters are monocular, they adapt, and can be used for decision making at high flicker speeds

    Masking, adaptation, and summation paradigms have been used to investigate the characteristics of early spatio-temporal vision. Each has been taken to provide evidence for (i) oriented and (ii) nonoriented spatial-filtering mechanisms. However, subsequent findings suggest that the evidence for nonoriented mechanisms has been misinterpreted: those experiments might have revealed the characteristics of suppression (eg, gain control), not excitation, or merely the isotropic subunits of the oriented detecting mechanisms. To shed light on this, we used all three paradigms to focus on the ‘high-speed’ corner of spatio-temporal vision (low spatial frequency, high temporal frequency), where cross-oriented achromatic effects are greatest. We used flickering Gabor patches as targets and a 2IFC procedure for monocular, binocular, and dichoptic stimulus presentations. To account for our results, we devised a simple model involving an isotropic monocular filter-stage feeding orientation-tuned binocular filters. Both filter stages are adaptable, and their outputs are available to the decision stage following nonlinear contrast transduction. However, the monocular isotropic filters (i) adapt only to high-speed stimuli—consistent with a magnocellular subcortical substrate—and (ii) benefit decision making only for high-speed stimuli (ie, isotropic monocular outputs are available only for high-speed stimuli). According to this model, the visual processes revealed by masking, adaptation, and summation are related but not identical.
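
    A minimal sketch of the kind of two-stage architecture described above, assuming a MAX-rule decision stage, illustrative transducer exponents, and simple multiplicative adaptation gains (none of these values are the authors' fits):

        def transduce(resp, p=2.4, q=2.0, z=1.0):
            # Nonlinear contrast transducer (accelerating then compressive);
            # exponents are illustrative only.
            return resp**p / (z + resp**q)

        def decision_variable(c_left, c_right, high_speed=False,
                              mono_gain=1.0, bino_gain=1.0):
            # Stage 1: isotropic monocular filters (linear in contrast, with an
            # adaptable gain that is reduced after adaptation to high-speed stimuli).
            mono_left = mono_gain * c_left
            mono_right = mono_gain * c_right
            # Stage 2: orientation-tuned binocular filter summing the two
            # monocular outputs, with its own adaptable gain.
            bino = bino_gain * (mono_left + mono_right)
            # Outputs reach the decision stage after nonlinear transduction, but
            # the isotropic monocular outputs are available only for high-speed
            # (low spatial frequency, high temporal frequency) stimuli.
            candidates = [transduce(bino)]
            if high_speed:
                candidates += [transduce(mono_left), transduce(mono_right)]
            return max(candidates)   # MAX-rule decision over available outputs

    In a simulated 2IFC trial, this decision variable would be computed for each interval and the interval with the larger value chosen.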

    A Reaction-Diffusion Model to Capture Disparity Selectivity in Primary Visual Cortex

    Decades of experimental studies are available on disparity-selective cells in the visual cortex of macaque and cat. Recently, a local disparity map for iso-orientation sites with near-vertical edge preference was reported in area 18 of cat visual cortex, but no complete disparity map in V1 has yet been reported experimentally. A disparity map for layer IV of V1 could provide insight into how disparity-selective complex cell receptive fields are organized from simple cell subunits. Although a substantial amount of experimental data on disparity-selective cells is available, no model of receptive field development for such cells, or of disparity map development, exists in the literature. We model disparity selectivity in layer IV of cat V1 using a reaction-diffusion two-eye paradigm. In this model, the wiring between the LGN and cortical layer IV is determined by the resource each LGN cell has for supporting connections to cortical cells and by competition for target space in layer IV. While competing for target space, LGN cells of the same type, irrespective of whether they belong to the left-eye-specific or right-eye-specific LGN layer, cooperate with one another while pushing off cells of the other type. Our model captures realistic 2D disparity-selective simple cell receptive fields and their response properties, and produces a disparity map along with orientation and ocular dominance maps. There is a lack of correlation between ocular dominance and disparity selectivity at the cell population level. At the map level, the disparity selectivity topography is not random but weakly clustered for similar preferred disparities, similar to the experimental result reported for macaque. The details of the weakly clustered disparity selectivity map in V1 indicate two types of complex cell receptive field organization.
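
    As a rough illustration of the reaction-diffusion idea only (not the authors' equations, which include LGN resource constraints and cooperation between same-type cells), the sketch below integrates a generic two-species competition-diffusion system in which left-eye and right-eye afferent densities diffuse over a model cortical sheet and compete for the same target space:

        import numpy as np

        def laplacian(f):
            # Five-point Laplacian with periodic boundaries on a 2D cortical sheet.
            return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                    np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

        def step(L, R, dt=0.1, D=0.2, growth=1.0):
            # One Euler step: each afferent density diffuses and grows
            # logistically against the total local occupancy (L + R), so the
            # two eyes compete for the same cortical target space.
            dL = D * laplacian(L) + growth * L * (1.0 - L - R)
            dR = D * laplacian(R) + growth * R * (1.0 - R - L)
            return L + dt * dL, R + dt * dR

        rng = np.random.default_rng(0)
        L = 0.5 + 0.01 * rng.standard_normal((64, 64))
        R = 0.5 + 0.01 * rng.standard_normal((64, 64))
        for _ in range(2000):
            L, R = step(L, R)
        ocular_dominance = L - R   # sign indicates the locally dominant eye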

    Multi-Timescale Perceptual History Resolves Visual Ambiguity

    When visual input is inconclusive, does previous experience aid the visual system in attaining an accurate perceptual interpretation? Prolonged viewing of a visually ambiguous stimulus causes perception to alternate between conflicting interpretations. When viewed intermittently, however, ambiguous stimuli tend to evoke the same percept on many consecutive presentations. This perceptual stabilization has been suggested to reflect persistence of the most recent percept throughout the blank that separates two presentations. Here we show that the memory trace that causes stabilization reflects not just the latest percept, but perception during a much longer period. That is, the choice between competing percepts at stimulus reappearance is determined by an elaborate history of prior perception. Specifically, we demonstrate a seconds-long influence of the latest percept, as well as a more persistent influence based on the relative proportion of dominance during a preceding period of at least one minute. When short-term and long-term perceptual history are opposed (because perception has recently switched after prolonged stabilization), the long-term influence recovers after the effect of the latest percept has worn off, indicating independence between time scales. We accommodate these results by adding two positive adaptation terms, one with a short time constant and one with a long time constant, to a standard model of perceptual switching.
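
    One way to read the proposed model is sketched below: a standard two-population competition model of perceptual switching, extended with two facilitatory ('positive adaptation') traces per percept, one with a short and one with a long time constant. All parameters and the specific rate equations are illustrative assumptions, not the published model:

        import numpy as np

        def simulate(drive, duration, dt=0.002, tau=0.02, tau_adapt=1.0,
                     tau_short=5.0, tau_long=60.0,
                     beta=3.0, k_adapt=2.0, k_short=0.3, k_long=0.3):
            # Two competing populations (one per percept) with mutual inhibition
            # and conventional negative adaptation, plus two added positive
            # traces that bias the competition toward the recently and the
            # historically dominant percept.
            n = int(duration / dt)
            E = np.zeros(2)                  # population activities
            A = np.zeros(2)                  # conventional (negative) adaptation
            Fs = np.zeros(2)                 # short-time-constant positive trace
            Fl = np.zeros(2)                 # long-time-constant positive trace
            dominant = np.zeros(n, dtype=int)
            for t in range(n):
                inp = (drive - beta * E[::-1] - k_adapt * A
                       + k_short * Fs + k_long * Fl)
                E += dt / tau * (-E + np.maximum(inp, 0.0))
                A += dt / tau_adapt * (-A + E)
                Fs += dt / tau_short * (-Fs + E)
                Fl += dt / tau_long * (-Fl + E)
                dominant[t] = int(E[1] > E[0])
            return dominant

        # Usage: equal drive to both percepts produces alternations; setting
        # drive to zero between presentations simulates intermittent viewing.
        switches = simulate(np.array([1.0, 1.0]), duration=120.0)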

    Vertical Binocular Disparity is Encoded Implicitly within a Model Neuronal Population Tuned to Horizontal Disparity and Orientation

    Primary visual cortex is often viewed as a “cyclopean retina”, performing the initial encoding of binocular disparities between left and right images. Because the eyes are set apart horizontally in the head, binocular disparities are predominantly horizontal. Yet, especially in the visual periphery, a range of non-zero vertical disparities do occur and can influence perception. It has therefore been assumed that primary visual cortex must contain neurons tuned to a range of vertical disparities. Here, I show that this is not necessarily the case. Many disparity-selective neurons are most sensitive to changes in disparity orthogonal to their preferred orientation. That is, the disparity tuning surfaces, mapping their response to different two-dimensional (2D) disparities, are elongated along each cell's preferred orientation. Because of this, even if a neuron's optimal 2D disparity has zero vertical component, the neuron will still respond best to a non-zero vertical disparity when probed with a sub-optimal horizontal disparity. This property can be used to decode 2D disparity, even allowing for realistic levels of neuronal noise. Even if all V1 neurons at a particular retinotopic location are tuned to the expected vertical disparity there (for example, zero at the fovea), the brain could still decode the magnitude and sign of departures from that expected value. This provides an intriguing counter-example to the common wisdom that, in order for a neuronal population to encode a quantity, its members must be tuned to a range of values of that quantity. It demonstrates that populations of disparity-selective neurons encode much richer information than previously appreciated. It suggests a possible strategy for the brain to extract rarely-occurring stimulus values, while concentrating neuronal resources on the most commonly-occurring situations.
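
    The decoding argument can be made concrete with a toy population, sketched below: every model neuron prefers zero vertical disparity, but its 2D disparity tuning surface is a Gaussian elongated along its preferred orientation, so the pattern of activity across the population still identifies a non-zero vertical disparity. The tuning widths, population size, and template-matching decoder are all assumptions for illustration, not the paper's model:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 200
        pref_h = rng.uniform(-1.0, 1.0, n)      # preferred horizontal disparity (deg)
        theta = rng.uniform(0.0, np.pi, n)      # preferred orientation
        sig_para, sig_orth = 0.6, 0.15          # widths along / across the orientation

        def population_response(dh, dv):
            # Elongated 2D Gaussian tuning surfaces centred at (pref_h, 0):
            # broad along each cell's preferred orientation, narrow across it.
            du = (dh - pref_h) * np.cos(theta) + dv * np.sin(theta)
            dw = -(dh - pref_h) * np.sin(theta) + dv * np.cos(theta)
            return np.exp(-0.5 * (du / sig_para) ** 2 - 0.5 * (dw / sig_orth) ** 2)

        def decode(r_obs, grid=np.linspace(-1.0, 1.0, 81)):
            # Template matching: the 2D disparity whose noiseless population
            # response is closest to the observed one.
            best, best_err = (0.0, 0.0), np.inf
            for dh in grid:
                for dv in grid:
                    err = np.sum((population_response(dh, dv) - r_obs) ** 2)
                    if err < best_err:
                        best, best_err = (dh, dv), err
            return best

        # A stimulus with a non-zero vertical disparity is recovered despite
        # every neuron preferring zero vertical disparity.
        r = population_response(0.3, 0.2) + 0.05 * rng.standard_normal(n)
        print(decode(r))   # approximately (0.3, 0.2)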

    Adaptive Filtering Enhances Information Transmission in Visual Cortex

    Sensory neuroscience seeks to understand how the brain encodes natural environments. However, neural coding has largely been studied using simplified stimuli. In order to assess whether the brain's coding strategy depends on the stimulus ensemble, we apply a new information-theoretic method that allows unbiased calculation of neural filters (receptive fields) from responses to natural scenes or other complex signals with strong multipoint correlations. In the cat primary visual cortex we compare responses to natural inputs with those to noise inputs matched for luminance and contrast. We find that neural filters adaptively change with the input ensemble so as to increase the information carried by the neural response about the filtered stimulus. Adaptation affects the spatial frequency composition of the filter, enhancing sensitivity to under-represented frequencies in agreement with optimal encoding arguments. Adaptation occurs over 40 s to many minutes, longer than most previously reported forms of adaptation.
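
    The quantity at the heart of such methods can be sketched as follows: for any candidate filter, one can measure how much information the filtered stimulus (the projection of each stimulus frame onto the filter) carries about spiking, and compare or adapt filters on that basis. The toy linear-nonlinear neuron and bin count below are assumptions for illustration only; the estimation method used in the paper optimizes a quantity of this kind over filters:

        import numpy as np

        def filter_information(stimuli, spikes, v, nbins=15):
            # Information (bits per spike) that the projection of the stimulus
            # onto filter v carries about spiking:
            #   I = sum_x P(x | spike) * log2[ P(x | spike) / P(x) ]
            x = stimuli @ v
            edges = np.linspace(x.min(), x.max(), nbins + 1)
            p_all, _ = np.histogram(x, edges)
            p_spk, _ = np.histogram(x, edges, weights=spikes)
            p_all = p_all / p_all.sum()
            p_spk = p_spk / p_spk.sum()
            ok = (p_spk > 0) & (p_all > 0)
            return np.sum(p_spk[ok] * np.log2(p_spk[ok] / p_all[ok]))

        # Toy linear-nonlinear neuron (assumed, for illustration only).
        rng = np.random.default_rng(0)
        S = rng.standard_normal((20000, 50))                       # stimulus ensemble
        true_v = np.exp(-0.5 * ((np.arange(50) - 25) / 4.0) ** 2)  # its true filter
        rate = 1.0 / (1.0 + np.exp(-(S @ true_v - 2.0)))
        spikes = (rng.random(20000) < rate).astype(float)
        print(filter_information(S, spikes, true_v))                    # high for the true filter
        print(filter_information(S, spikes, rng.standard_normal(50)))   # near zero for a random one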

    The Effect of Interocular Phase Difference on Perceived Contrast

    Binocular vision is traditionally treated as two processes: the fusion of similar images, and the interocular suppression of dissimilar images (e.g. binocular rivalry). Recent work has demonstrated that interocular suppression is phase-insensitive, whereas binocular summation occurs only when stimuli are in phase. But how do these processes affect our perception of binocular contrast? We measured perceived contrast using a matching paradigm for a wide range of interocular phase offsets (0–180°) and matching contrasts (2–32%). Our results revealed a complex interaction between contrast and interocular phase. At low contrasts, perceived contrast decreased monotonically with increasing phase offset, by up to a factor of 1.6. At higher contrasts the pattern was non-monotonic: perceived contrast was veridical for in-phase and antiphase conditions, and for monocular presentation, but increased slightly at intermediate phase angles. These findings challenge a recent model in which contrast perception is phase-invariant. The results were predicted by a binocular contrast gain control model. The model involves monocular gain controls with interocular suppression from positive and negative phase channels, followed by summation across eyes and then across space. Importantly, this model (applied to conditions with vertical disparity) has only a single (zero) disparity channel and embodies both fusion and suppression processes within a single framework.
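
    A rough sketch of this class of model follows: monocular gain control, phase-insensitive interocular suppression, phase-specific binocular summation, and a second nonlinear output stage. The four-channel phase decomposition, exponents, and weights below are illustrative assumptions, not the published equations or fitted parameters:

        import numpy as np

        PHASES = np.deg2rad([0.0, 90.0, 180.0, 270.0])   # half-wave phase channels

        def phase_drive(contrast, phase):
            # Half-wave rectified projection of a grating at the given phase
            # onto the four phase channels.
            return contrast * np.maximum(np.cos(phase - PHASES), 0.0)

        def binocular_response(c_left, c_right, phase_deg,
                               m=1.3, p=2.4, q=2.0, s1=1.0, s2=0.2, w=1.0):
            d_l = phase_drive(c_left, 0.0)                      # left eye defines phase 0
            d_r = phase_drive(c_right, np.deg2rad(phase_deg))
            # Stage 1: monocular gain control; the suppressive term pools the
            # other eye's drive across all phase channels (phase-insensitive).
            r_l = d_l**m / (s1 + d_l + w * d_r.sum())
            r_r = d_r**m / (s1 + d_r + w * d_l.sum())
            # Phase-specific excitatory summation across eyes, pooling across
            # channels, then a second nonlinear (output) stage.
            b = np.linalg.norm(r_l + r_r)
            return b**p / (s2 + b**q)

        # In-phase signals sum fully; antiphase and quadrature signals do not,
        # while suppression acts regardless of phase.
        for phase in (0, 90, 180):
            print(phase, binocular_response(0.08, 0.08, phase))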

    Adaptation and Selective Information Transmission in the Cricket Auditory Neuron AN2

    Sensory systems adapt their neural code to changes in the sensory environment, often on multiple time scales. Here, we report a new form of adaptation in a first-order auditory interneuron (AN2) of crickets. We characterize the response of the AN2 neuron to amplitude-modulated sound stimuli and find that adaptation shifts the stimulus–response curves toward higher stimulus intensities, with a time constant of 1.5 s for adaptation and recovery. The spike responses were thus reduced for low-intensity sounds. We then address the question of whether adaptation leads to an improvement of the signal's representation and compare the experimental results with the predictions of two competing hypotheses: infomax, which predicts that information conveyed about the entire signal range should be maximized, and selective coding, which predicts that “foreground” signals should be enhanced while “background” signals should be selectively suppressed. We test how adaptation changes the input–response curve when presenting signals with two or three peaks in their amplitude distributions, for which selective coding and infomax predict conflicting changes. By means of Bayesian data analysis, we quantify the shifts of the measured response curves and also find a slight reduction of their slopes. These decreases in slope are smaller, and the absolute response thresholds higher, than predicted by infomax. Most remarkably, and in contrast to the infomax principle, adaptation actually reduces the amount of encoded information when considering the whole range of input signals. The response curve changes are also not consistent with the selective coding hypothesis, because the amount of information conveyed about the loudest part of the signal does not increase as predicted but remains nearly constant. Less information is transmitted about signals with lower intensity.
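
    The infomax benchmark in this comparison can be sketched as follows: for a noiseless neuron with a limited response range, the information-maximizing intensity–response curve follows the cumulative distribution of stimulus intensities, and the information actually transmitted by any deterministic, discretized curve equals the entropy of its response distribution. The bimodal toy ensemble and sigmoid curves below are made-up illustrations, not the measured data:

        import numpy as np

        def infomax_curve(intensities, r_max=1.0):
            # Infomax prediction: the optimal curve is proportional to the
            # cumulative distribution of stimulus intensities, so that all
            # response levels are used equally often.
            s = np.sort(intensities)
            return s, r_max * np.arange(1, len(s) + 1) / len(s)

        def transmitted_bits(intensities, response_fn, n_levels=16):
            # For a deterministic curve, the mutual information between the
            # stimulus and the discretized response equals the response entropy.
            r = response_fn(intensities)
            counts, _ = np.histogram(r, bins=n_levels)
            p = counts / counts.sum()
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        # Two-peaked ("background"/"foreground") intensity ensemble, in dB SPL.
        rng = np.random.default_rng(0)
        stim = np.concatenate([rng.normal(60.0, 3.0, 5000), rng.normal(80.0, 3.0, 5000)])
        x_opt, r_opt = infomax_curve(stim)             # infomax-optimal curve for this ensemble
        sigmoid = lambda x0: (lambda x: 1.0 / (1.0 + np.exp(-0.5 * (x - x0))))
        print(transmitted_bits(stim, sigmoid(70.0)))   # curve centred between the peaks
        print(transmitted_bits(stim, sigmoid(78.0)))   # curve shifted toward the louder peak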

    Relative contributions to vergence eye movements of two binocular cues for motion-in-depth

    When we track an object moving in depth, our eyes rotate in opposite directions. This type of "disjunctive" eye movement is called horizontal vergence. The sensory control signals for vergence arise from multiple visual cues, two of which, changing binocular disparity (CD) and inter-ocular velocity differences (IOVD), are specifically binocular. While it is well known that the CD cue triggers horizontal vergence eye movements, the role of the IOVD cue has only recently been explored. To better understand the relative contribution of CD and IOVD cues in driving horizontal vergence, we recorded vergence eye movements from ten observers in response to four types of stimuli that isolated or combined the two cues to motion-in-depth, using stimulus conditions and CD/IOVD stimuli typical of behavioural motion-in-depth experiments. An analysis of the slopes of the vergence traces and the consistency of the directions of vergence and stimulus movements showed that under our conditions IOVD cues provided very little input to vergence mechanisms. The eye movements that did occur during the presentation of IOVD stimuli were likely not a response to stimulus motion but a phoria initiated by the absence of a disparity signal.
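
    A minimal sketch of the two trial-level measures mentioned, the slope of the vergence trace and the direction consistency, assuming an arbitrary analysis window and sign convention (not necessarily those used in the study):

        import numpy as np

        def vergence_slope(t, vergence, t_start=0.2, t_end=0.8):
            # Least-squares slope (deg/s) of the vergence trace within an
            # analysis window; window limits here are arbitrary.
            win = (t >= t_start) & (t <= t_end)
            slope, _intercept = np.polyfit(t[win], vergence[win], 1)
            return slope

        def direction_consistency(slopes, stimulus_directions):
            # Fraction of trials on which the sign of the vergence slope matches
            # the sign of the stimulus motion in depth (+1 approaching, -1 receding).
            slopes = np.asarray(slopes)
            dirs = np.asarray(stimulus_directions)
            return np.mean(np.sign(slopes) == np.sign(dirs))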

    Spatial Stereoresolution for Depth Corrugations May Be Set in Primary Visual Cortex

    Stereo “3D” depth perception requires the visual system to extract binocular disparities between the two eyes' images. Several current models of this process, based on the known physiology of primary visual cortex (V1), do this by computing a piecewise-frontoparallel local cross-correlation between the left and right eyes' images. The size of the “window” within which detectors examine the local cross-correlation corresponds to the receptive field size of V1 neurons. This basic model has successfully captured many aspects of human depth perception. In particular, it accounts for the low human stereoresolution for sinusoidal depth corrugations, suggesting that the limit on stereoresolution may be set in primary visual cortex. An important feature of the model, reflecting a key property of V1 neurons, is that the initial disparity encoding is performed by detectors tuned to locally uniform patches of disparity. Such detectors respond better to square-wave depth corrugations, since these are locally flat, than to sinusoidal corrugations which are slanted almost everywhere. Consequently, for any given window size, current models predict better performance for square-wave disparity corrugations than for sine-wave corrugations at high amplitudes. We have recently shown that this prediction is not borne out: humans perform no better with square-wave than with sine-wave corrugations, even at high amplitudes. The failure of this prediction raised the question of whether stereoresolution may actually be set at later stages of cortical processing, perhaps involving neurons tuned to disparity slant or curvature. Here we extend the local cross-correlation model to include existing physiological and psychophysical evidence indicating that larger disparities are detected by neurons with larger receptive fields (a size/disparity correlation). We show that this simple modification succeeds in reconciling the model with human results, confirming that stereoresolution for disparity gratings may indeed be limited by the size of receptive fields in primary visual cortex.
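
    The extension can be sketched as a windowed cross-correlation in which the Gaussian window used to test a candidate disparity grows with the magnitude of that disparity (the size/disparity correlation). One-dimensional image rows, integer pixel disparities, and the constants below are simplifying assumptions for illustration:

        import numpy as np

        def correlation_disparity(left, right, x, disparities, base_sigma=4.0, k=0.5):
            # Piecewise-frontoparallel local cross-correlation at position x:
            # for each candidate disparity d, shift the right image by d and
            # correlate within a Gaussian window whose width grows with |d|.
            xs = np.arange(len(left))
            scores = []
            for d in disparities:
                sigma = base_sigma + k * abs(d)          # size/disparity correlation
                w = np.exp(-0.5 * ((xs - x) / sigma) ** 2)
                l = left * w
                r = np.roll(right, d) * w
                scores.append(np.sum(l * r) /
                              (np.sqrt(np.sum(l * l) * np.sum(r * r)) + 1e-12))
            scores = np.array(scores)
            return disparities[int(np.argmax(scores))], scores

        # Toy check: a noise row with a uniform 6-pixel shift is recovered.
        rng = np.random.default_rng(0)
        left = rng.standard_normal(512)
        right = np.roll(left, -6)
        d_hat, _ = correlation_disparity(left, right, x=256,
                                         disparities=list(range(-12, 13)))
        print(d_hat)   # 6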