
    Computing motion in the primate's visual system

    Computing motion on the basis of the time-varying image intensity is a difficult problem for both artificial and biological vision systems. We show how one well-known gradient-based computer algorithm for estimating visual motion can be implemented within the primate's visual system. This relaxation algorithm computes the optical flow field by minimizing a variational functional of a form commonly encountered in early vision, and proceeds in two stages: in the first, local motion is computed; in the second, spatial integration occurs. Neurons in the second stage represent the optical flow field via a population-coding scheme, such that the vector sum of all neurons at each location codes for the direction and magnitude of the velocity at that location. The resulting network maps onto the magnocellular pathway of the primate visual system, in particular onto cells in the primary visual cortex (V1) as well as onto cells in the middle temporal area (MT). Our algorithm mimics a number of psychophysical phenomena and illusions (perception of coherent plaids, motion capture, motion coherence) as well as electrophysiological recordings. Thus, a single unifying principle, ‘the final optical flow should be as smooth as possible’ (except at isolated motion discontinuities), explains a large number of phenomena and links single-cell behavior with perception and computational theory.
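
    A well-known instance of such a gradient-based relaxation on a smoothness functional is Horn and Schunck's algorithm; a minimal numpy sketch of that iteration follows (whether this is the paper's exact functional and neural mapping is an assumption):

        import numpy as np

        def _neighbor_avg(f):
            # 4-neighbour average used by the Jacobi-style relaxation update
            return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                    np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0

        def horn_schunck(I1, I2, alpha=1.0, n_iter=200):
            # Estimate flow (u, v) between two grayscale frames by minimizing
            # the data term (Ix*u + Iy*v + It)^2 plus alpha^2 times a
            # smoothness term on the flow field.
            I1, I2 = I1.astype(float), I2.astype(float)
            Iy, Ix = np.gradient(I1)       # spatial intensity gradients
            It = I2 - I1                   # temporal intensity gradient
            u = np.zeros_like(I1)
            v = np.zeros_like(I1)
            for _ in range(n_iter):
                u_bar, v_bar = _neighbor_avg(u), _neighbor_avg(v)
                resid = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
                u = u_bar - Ix * resid
                v = v_bar - Iy * resid
            return u, v

    In the population-coding reading, each (u, v) estimate would be carried by a pool of direction-tuned units whose vector sum equals the local velocity.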

    From receptive profiles to a metric model of V1

    In this work we show how to construct connectivity kernels induced by the receptive profiles of simple cells of the primary visual cortex (V1). These kernels are directly defined by the shape of such profiles: this provides a metric model for the functional architecture of V1, whose global geometry is determined by the reciprocal interactions between local elements. Our construction adapts to any bank of filters chosen to represent a set of receptive profiles, since it does not require any structure on the parameterization of the family. The connectivity kernel that we define carries a geometrical structure consistent with the well-known properties of long-range horizontal connections in V1, and it is compatible with the perceptual rules synthesized by the concept of association field. These characteristics are still present when the kernel is constructed from a bank of filters arising from an unsupervised learning algorithm.
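
    One simple way to realize a kernel induced by a filter bank, with no structure assumed on the parameterization, is to take normalized inner products between the profiles. The sketch below uses a hypothetical Gabor bank as the profiles; the paper's actual construction may differ (e.g., metric rather than correlational):

        import numpy as np

        def gabor(size, theta, freq=0.2, sigma=3.0):
            # a standard Gabor model of a simple-cell receptive profile
            r = np.arange(size) - size // 2
            X, Y = np.meshgrid(r, r)
            x_rot = X * np.cos(theta) + Y * np.sin(theta)
            envelope = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
            return envelope * np.cos(2 * np.pi * freq * x_rot)

        def connectivity_kernel(profiles):
            # K[i, j] = <psi_i, psi_j> over unit-normalized profiles; works
            # for any filter bank, parametric or learned
            P = np.stack([p.ravel() for p in profiles])
            P = P / np.linalg.norm(P, axis=1, keepdims=True)
            return P @ P.T

        bank = [gabor(33, th) for th in np.linspace(0, np.pi, 8, endpoint=False)]
        K = connectivity_kernel(bank)   # largest entries link similar orientations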

    Rapid mapping of visual receptive fields by filtered back-projection: application to multi-neuronal electrophysiology and imaging

    Neurons in the visual system vary widely in the spatiotemporal properties of their receptive fields (RFs), and understanding these variations is key to elucidating how visual information is processed. We present a new approach for mapping RFs based on filtered back-projection (FBP), an algorithm used for tomographic reconstructions. To estimate RFs, a series of bars was flashed across the retina at pseudo-random positions and at a minimum of five orientations. We apply this method to retinal neurons and show that it can accurately recover the spatial RF and impulse response of ganglion cells recorded on a multi-electrode array. We also demonstrate its utility for in vivo imaging by mapping the RFs of an array of bipolar cell synapses expressing a genetically encoded Ca2+ indicator. We find that FBP offers several advantages over the commonly used spike-triggered average (STA): (i) ON and OFF components of a RF can be separated; (ii) the impulse response can be reconstructed at sample rates of 125 Hz, rather than at the refresh rate of a monitor; (iii) FBP reveals the response properties of neurons that are not evident using STA, including those that display orientation selectivity or fire at low mean spike rates; and (iv) the FBP method is fast, allowing the RFs of all the bipolar cell synaptic terminals in a field of view to be reconstructed in under 4 min. Use of FBP will benefit investigations of the visual system that employ electrophysiology or optical reporters to measure activity across populations of neurons.
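
    Treating the mean response to each flashed bar as one sample of a tomographic projection, the spatial RF can be recovered with a standard parallel-beam FBP. A minimal sketch under that assumption (function and variable names hypothetical; temporal impulse-response recovery and ON/OFF separation omitted):

        import numpy as np
        from scipy.ndimage import rotate

        def fbp_rf(sinogram, angles_deg):
            # sinogram[i, j]: mean response to a bar at orientation
            # angles_deg[i] and position j (>= 5 orientations, per the paper)
            n_angles, n_pos = sinogram.shape
            # ramp-filter each projection in the Fourier domain
            ramp = np.abs(np.fft.fftfreq(n_pos))
            filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp,
                                           axis=1))
            rf = np.zeros((n_pos, n_pos))
            for proj, ang in zip(filtered, angles_deg):
                # back-projection: smear each filtered profile across the image
                rf += rotate(np.tile(proj, (n_pos, 1)), ang,
                             reshape=False, order=1)
            return rf * np.pi / (2 * n_angles)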

    A Model of the Ventral Visual System Based on Temporal Stability and Local Memory

    The cerebral cortex is a remarkably homogeneous structure, suggesting a rather generic computational machinery. Indeed, under a variety of conditions, functions attributed to specialized areas can be supported by other regions. However, a host of studies have laid out an ever more detailed map of functional cortical areas. This leaves us with the puzzle of whether different cortical areas are intrinsically specialized, or whether they differ mostly by their position in the processing hierarchy and their inputs but apply the same computational principles. Here we show that the computational principle of optimal stability of sensory representations, combined with local memory, gives rise to a hierarchy of processing stages resembling the ventral visual pathway when it is exposed to continuous natural stimuli. Early processing stages show receptive fields similar to those observed in the primary visual cortex. Subsequent stages are selective for increasingly complex configurations of local features, as observed in higher visual areas. The last stage of the model displays place fields as observed in entorhinal cortex and hippocampus. The results suggest that functionally heterogeneous cortical areas can be generated by only a few computational principles, and they highlight the importance of the variability of the input signals in forming functional specialization.
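
    The optimal-stability objective is closely related to slow feature analysis: find the directions in a whitened signal whose temporal derivative has the least variance. A one-layer linear sketch of that idea (the full model stacks such stages hierarchically and adds the local-memory component, which is omitted here):

        import numpy as np

        def slowest_features(X, n_out=4, eps=1e-8):
            # X: (T, d) sensory time series; returns the n_out most
            # temporally stable linear features
            X = X - X.mean(axis=0)
            evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
            Z = X @ (evecs / np.sqrt(evals + eps))       # whiten the signal
            d_evals, d_evecs = np.linalg.eigh(
                np.cov(np.diff(Z, axis=0), rowvar=False))
            # eigh sorts eigenvalues ascending, so the first columns are
            # the slowest (most stable) directions
            return Z @ d_evecs[:, :n_out]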

    A Computational Study Of The Role Of Spatial Receptive Field Structure In Processing Natural And Non-Natural Scenes

    The center-surround receptive field structure, ubiquitous in the visual system, is hypothesized to be evolutionarily advantageous in image processing tasks. We address the potential functional benefits and shortcomings of spatial localization and center-surround antagonism in the context of an integrate-and-fire neuronal network model with image-based forcing. Exploiting the sparsity of natural scenes, we derive a compressive-sensing framework for reconstructing the input image from evoked neuronal firing rates. We investigate how the accuracy of input encoding depends on the receptive field architecture, and demonstrate that spatial localization in visual stimulus sampling facilitates marked improvements in natural scene processing beyond uniformly random excitatory connectivity. However, for specific classes of images, we show that the spatial localization inherent in physiological receptive fields, combined with information loss through nonlinear neuronal network dynamics, may underlie common optical illusions, giving a novel explanation for their manifestation. In the context of signal processing, we expect this work may suggest new sampling protocols useful for extending conventional compressive sensing theory.
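
    The compressive-sensing core of such a framework is sparse recovery of a stimulus x from rates r ≈ A x, where each row of A is one cell's (localized) receptive field. A minimal sketch using iterative soft-thresholding (ISTA); the paper's network is nonlinear, so this stands only for the linear recovery step:

        import numpy as np

        def ista(A, r, lam=0.05, n_iter=500):
            # minimize 0.5*||A x - r||^2 + lam*||x||_1
            L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = x - A.T @ (A @ x - r) / L   # gradient step on the data term
                # soft-thresholding enforces sparsity of the reconstruction
                x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
            return x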

    Context-Sensitive Binding by the Laminar Circuits of V1 and V2: A Unified Model of Perceptual Grouping, Attention, and Orientation Contrast

    A detailed neural model is presented of how the laminar circuits of visual cortical areas V1 and V2 implement context-sensitive binding processes such as perceptual grouping and attention. The model proposes how specific laminar circuits allow the responses of visual cortical neurons to be determined not only by the stimuli within their classical receptive fields, but also to be strongly influenced by stimuli in the extra-classical surround. This context-sensitive visual processing can greatly enhance the analysis of visual scenes, especially those containing targets that are low contrast, partially occluded, or crowded by distractors. We show how interactions of feedforward, feedback, and horizontal circuitry can implement several types of contextual processing simultaneously, using shared laminar circuits. In particular, we present computer simulations which suggest how top-down attention and preattentive perceptual grouping, two processes that are fundamental for visual binding, can interact, with attentional enhancement selectively propagating along groupings of both real and illusory contours, thereby showing how attention can selectively enhance object representations. These simulations also illustrate how attention may have a stronger facilitatory effect on low-contrast than on high-contrast stimuli, and how pop-out from orientation contrast may occur. The specific functional roles which the model proposes for the cortical layers allow several testable neurophysiological predictions to be made. The results presented here simulate only the boundary grouping system of adult cortical architecture. However, we also discuss how this model contributes to a larger neural theory of vision which suggests how intracortical and intercortical feedback help to stabilize development and learning within these cortical circuits. Although feedback plays a key role, fast feedforward processing is possible in response to unambiguous information. Model circuits are capable of synchronizing quickly, but context-sensitive persistence of previous events can influence how synchrony develops. Although these results focus on how the interblob cortical processing stream controls boundary grouping and attention, related modeling of the blob cortical processing stream suggests how visible surfaces are formed, and modeling of the motion stream suggests how transient responses to scenic changes can control long-range apparent motion and also attract spatial attention.
    Funding: Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI 94-01659, IRI 97-20333); ONR (N00014-92-J-1309, N00014-95-1-0657).
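
    Two properties the simulations highlight, modulatory attention (it can enhance but not create activity) and a stronger boost for low-contrast stimuli, can be caricatured in a toy 1-D rate model. This is an illustrative analogue under simplifying assumptions, not the model's laminar circuitry:

        import numpy as np

        def attend_grouping(ff, attn, n_iter=30, h=0.6):
            # ff: bottom-up drive per position; attn: top-down gain per position
            act = np.zeros_like(ff, dtype=float)
            for _ in range(n_iter):
                # horizontal facilitation from neighbours (grouping support)
                lateral = h * (np.roll(act, 1) + np.roll(act, -1)) / 2
                # attention multiplies the bottom-up drive, so with ff == 0
                # attention alone produces no activity
                drive = ff * (1.0 + attn) + lateral
                act = drive / (1.0 + drive)   # shunting-style saturation
            return act

        for contrast in (0.2, 2.0):           # low vs. high contrast
            ff = np.full(9, contrast)
            base = attend_grouping(ff, np.zeros(9))[4]
            att = attend_grouping(ff, np.full(9, 1.0))[4]
            # the relative boost (att / base) is larger at low contrast,
            # because the shunting term saturates high-contrast responses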

    Laminar Cortical Dynamics of Visual Form and Motion Interactions During Coherent Object Motion Perception

    How do visual form and motion processes cooperate to compute object motion when each process separately is insufficient? A 3D FORMOTION model specifies how 3D boundary representations, which separate figures from backgrounds within cortical area V2, capture motion signals at the appropriate depths in MT; how motion signals in MT disambiguate boundaries in V2 via MT-to-V1-to-V2 feedback; how sparse feature-tracking signals are amplified; and how a spatially anisotropic motion grouping process propagates across perceptual space via MT-MST feedback to integrate feature-tracking and ambiguous motion signals to determine a global object motion percept. Simulated data include: the degree of motion coherence of rotating shapes observed through apertures, the coherent vs. element motion percepts separated in depth during the chopsticks illusion, and the rigid vs. non-rigid appearance of rotating ellipses.
    Funding: Air Force Office of Scientific Research (F49620-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (BCS-02-35398, SBE-0354378); Office of Naval Research (N00014-95-1-0409, N00014-01-1-0624).
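
    The amplification and spatial propagation of sparse feature-tracking signals over ambiguous, aperture-limited motion signals can be caricatured in one dimension. A toy sketch, not the 3D FORMOTION circuitry (names and parameters hypothetical):

        import numpy as np

        def motion_capture_1d(ambiguous, tracker, is_tracker,
                              n_iter=200, gain=10.0):
            # ambiguous: normal-flow velocity guesses along a contour
            # tracker / is_tracker: sparse unambiguous velocities and their sites
            v = ambiguous.astype(float).copy()
            for _ in range(n_iter):
                spread = (np.roll(v, 1) + np.roll(v, -1)) / 2.0  # motion grouping
                # amplified tracker signals dominate where present; elsewhere
                # the grouped estimate gradually captures the ambiguous one
                v = np.where(is_tracker,
                             (gain * tracker + spread) / (gain + 1.0),
                             spread)
            return v

        amb = np.full(11, 0.3)                     # aperture-limited estimates
        trk = np.zeros(11); trk[[0, -1]] = 1.0     # true velocity at line ends
        mask = np.zeros(11, bool); mask[[0, -1]] = True
        v = motion_capture_1d(amb, trk, mask)      # pulled toward 1.0 everywhere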