
    Characterizing receptive field selectivity in area V2

    The computations performed by neurons in area V1 are reasonably well understood, but computation in subsequent areas such as V2 has been more difficult to characterize. When stimulated with visual stimuli traditionally used to investigate V1, such as sinusoidal gratings, V2 neurons exhibit selectivity similar to that of V1 neurons, but with larger receptive fields and weaker responses. However, we find that V2 responses to synthetic stimuli designed to produce naturalistic patterns of joint activity in a model V1 population are more vigorous than responses to control stimuli that lack this naturalistic structure (Freeman et al., 2013). Armed with this signature of V2 computation, we have been investigating how it might arise from canonical computational elements commonly used to explain V1 responses. The invariance of V1 complex cell responses to spatial phase has previously been captured by summing over multiple “subunits” (rectified responses of simple-cell-like filters with the same orientation and spatial frequency selectivity, but differing in their receptive field locations). We modeled V2 responses using a similar architecture: V2 subunits were formed from the rectified responses of filters computing derivatives of the V1 response map over frequencies, orientations, and spatial positions. A “V2 complex cell” sums the output of such subunits across frequency, orientation, and position. This model can qualitatively account for much of the behavior of our sample of recorded V2 neurons, including their V1-like spectral tuning in response to sinusoidal gratings as well as their increased sensitivity to naturalistic images.
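    The subunit architecture described above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' model: the V1 response map, its dimensions, and the use of simple finite differences as derivative filters are all assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical V1 response map indexed by (frequency, orientation, x, y).
# A real model would populate this with filter-bank responses to an image;
# random values here just exercise the architecture.
v1_map = rng.random((4, 8, 16, 16))

def v2_complex_cell(v1_responses):
    """Sketch of the abstract's architecture: form subunits by rectifying
    derivatives of the V1 response map along each axis (frequency,
    orientation, and the two spatial positions), then sum the subunit
    outputs across all of those dimensions."""
    total = 0.0
    for axis in range(v1_responses.ndim):
        deriv = np.diff(v1_responses, axis=axis)  # local derivative filter
        subunits = np.maximum(deriv, 0.0)         # rectification
        total += subunits.sum()                   # pool across subunits
    return total

response = v2_complex_cell(v1_map)
```

    A flat V1 map (no variation across frequency, orientation, or position) yields zero derivative subunits and hence no V2 response, which is the qualitative behavior the pooling scheme is meant to capture.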

    Opposing effects of selectivity and invariance in peripheral vision

    Sensory processing necessitates discarding some information, in service of preserving and reformatting more behaviorally relevant information. Sensory neurons seem to achieve this by responding selectively to particular combinations of features in their inputs, while averaging over or ignoring irrelevant combinations. Here, we expose the perceptual implications of this tradeoff between selectivity and invariance, using stimuli and tasks that explicitly reveal their opposing effects on discrimination performance. We generated texture stimuli with statistics derived from natural photographs, and asked observers to perform two different tasks: discrimination between images drawn from families with different statistics, and discrimination between image samples with identical statistics. For both tasks, the performance of an ideal observer improves with stimulus size. In contrast, humans become better at family discrimination but worse at sample discrimination. We demonstrate through simulations that these behaviors arise naturally in an observer model that relies on a common set of physiologically plausible local statistical measurements for both tasks.
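    The opposing effect described above can be illustrated with a toy simulation. This is not the study's observer model: the single local statistic (patch variance), the noise textures, and the image sizes are placeholder assumptions chosen to show why pooling a local measurement over a larger image helps family discrimination while hurting sample discrimination.

```python
import numpy as np

rng = np.random.default_rng(1)

def pooled_stat(image, p=4):
    """A single local statistical measurement (variance within p-by-p
    patches), averaged over the whole image."""
    h, w = image.shape
    patches = image.reshape(h // p, p, w // p, p)
    return patches.var(axis=(1, 3)).mean()

def texture(family_var, size):
    """Stand-in texture sample: noise whose variance defines the family."""
    return rng.normal(0.0, np.sqrt(family_var), (size, size))

# Spread of the pooled statistic across many samples of one family,
# for a small and a large stimulus.
spread = {size: np.std([pooled_stat(texture(1.0, size)) for _ in range(200)])
          for size in (8, 32)}
```

    As size grows, the pooled statistic concentrates around its family value: families with different statistics become easier to tell apart, while two samples from the same family, which differ only in the fluctuations the pooling averages away, become harder to distinguish.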

    Representing “stuff” in visual cortex


    Slow gain fluctuations limit benefits of temporal integration in visual cortex
