7,082 research outputs found

    A Nonlinear Model of Spatiotemporal Retinal Processing: Simulations of X and Y Retinal Ganglion Cell Behavior

    This article describes a nonlinear model of neural processing in the vertebrate retina, comprising model photoreceptors, model push-pull bipolar cells, and model ganglion cells. Previous analyses and simulations have shown that with a choice of parameters that mimics beta cells, the model exhibits X-like linear spatial summation (null response to contrast-reversed gratings) in spite of photoreceptor nonlinearities; on the other hand, a choice of parameters that mimics alpha cells leads to Y-like frequency doubling. This article extends the previous work by showing that the model can replicate qualitatively many of the original findings on X and Y cells with a fixed choice of parameters. The results generally support the hypothesis that X and Y cells can be seen as functional variants of a single neural circuit. The model also suggests that both depolarizing and hyperpolarizing bipolar cells converge onto both ON and OFF ganglion cell types. The push-pull connectivity enables ganglion cells to remain sensitive to deviations about the mean output level of nonlinear photoreceptors. These and other properties of the push-pull model are discussed in the general context of retinal processing of spatiotemporal luminance patterns. Alfred P. Sloan Research Fellowship (BR-3122); Air Force Office of Scientific Research (F49620-92-J-0499)
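A minimal sketch of the push-pull idea summarized above: differencing the nonlinear photoreceptor signal against a mean-adapted pathway leaves the ganglion input signed about the mean luminance. The Naka-Rushton-style saturation and the use of the stimulus mean as the opposing pathway are illustrative assumptions, not the paper's actual equations.

```python
import numpy as np

def photoreceptor(I, sigma=1.0):
    # Saturating (Naka-Rushton-style) nonlinearity: an assumed stand-in
    # for the model photoreceptors, not the paper's exact formulation.
    return I / (I + sigma)

def push_pull_ganglion(I, sigma=1.0):
    # "Push": photoreceptor-driven depolarizing pathway.
    # "Pull": an opposing pathway adapted to the mean level (assumption:
    # the mean of the stimulus stands in for the hyperpolarizing pathway).
    on = photoreceptor(I, sigma)
    off = photoreceptor(I.mean(), sigma)
    return on - off

# A luminance grating about a mean level: despite photoreceptor saturation,
# the push-pull output is positive above the mean and negative below it.
mean_lum = 2.0
contrast = 0.5
grating = mean_lum * (1 + contrast * np.cos(np.linspace(0, 2 * np.pi, 64)))
resp = push_pull_ganglion(grating)
print(resp.max(), resp.min())
```

The differencing removes the common saturated baseline, which is one way to read the claim that push-pull connectivity keeps ganglion cells sensitive to deviations about the mean photoreceptor output.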

    A Unified Model of Spatiotemporal Processing in the Retina

    A computational model of visual processing in the vertebrate retina provides a unified explanation of a range of data previously treated by disparate models. Three results are reported here: the model proposes a functional explanation for the primary feed-forward retinal circuit found in vertebrate retinae, it shows how this retinal circuit combines nonlinear adaptation with the desirable properties of linear processing, and it accounts for the origin of parallel transient (nonlinear) and sustained (linear) visual processing streams as simple variants of the same retinal circuit. The retina, owing to its accessibility and to its fundamental role in the initial transduction of light into neural signals, is among the most extensively studied neural structures in the nervous system. Since the pioneering anatomical work by Ramón y Cajal at the turn of the last century[1], technological advances have abetted detailed descriptions of the physiological, pharmacological, and functional properties of many types of retinal cells. However, the relationship between structure and function in the retina is still poorly understood. This article outlines a computational model developed to address fundamental constraints of biological visual systems. Neurons that process nonnegative input signals, such as retinal illuminance, are subject to an inescapable tradeoff between accurate processing in the spatial and temporal domains. Accurate processing in both domains can be achieved with a model that combines nonlinear mechanisms for temporal and spatial adaptation within three layers of feed-forward processing. The resulting architecture is structurally similar to the feed-forward retinal circuit connecting photoreceptors to retinal ganglion cells through bipolar cells. This similarity suggests that the three-layer structure observed in all vertebrate retinae[2] is a required minimal anatomy for accurate spatiotemporal visual processing.
This hypothesis is supported through computer simulations showing that the model's output layer accounts for many properties of retinal ganglion cells[3],[4],[5],[6]. Moreover, the model shows how the retina can extend its dynamic range through nonlinear adaptation while exhibiting seemingly linear behavior in response to a variety of spatiotemporal input stimuli. This property is the basis for the prediction that the same retinal circuit can account for both sustained (X) and transient (Y) cat ganglion cells[7] by simple morphological changes. The ability to generate distinct functional behaviors by simple changes in cell morphology suggests that different functional pathways originating in the retina may have evolved from a unified anatomy designed to cope with the constraints of low-level biological vision. Sloan Fellowship

    A Neural Network Model for the Spatial and Temporal Response of Retinal Ganglion Cells

    This article introduces a quantitative model of early visual system function. The model is formulated to unify analyses of spatial and temporal information processing by the nervous system. Functional constraints of the model suggest mechanisms analogous to photoreceptors, bipolar cells, and retinal ganglion cells, which can be formally represented with first order differential equations. Preliminary numerical simulations and analytical results show that the same formal mechanisms can explain the behavior of both X (linear) and Y (nonlinear) retinal ganglion cell classes by simple changes in the relative width of the receptive field (RF) center and surround mechanisms. Specifically, an increase in the width of the RF center results in a change from X-like to Y-like response, in agreement with anatomical data on the relationship between α- and β-cell RF profiles. Simulations of model response to various spatio-temporal input patterns replicate many of the classical properties of X and Y cells, including transient (Y) versus sustained (X) responses, null-phase responses to alternating gratings in X cells, on-off or frequency doubling responses in Y cells, and phase-independent on-off responses in Y cells at high spatial frequencies. The model's formal mechanisms may be used in other portions of the visual system and more generally in nervous system structures involved with spatio-temporal information processing.
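The X/Y signatures named above (null responses versus frequency doubling to alternating gratings) can be illustrated with a toy stand-in: a linear RF summing a contrast-reversing grating at the null phase responds not at all, while a pool of rectified subunits responds at twice the stimulus frequency. The subunit layout and rectification here are illustrative assumptions, not the paper's differential-equation model.

```python
import numpy as np

# Contrast-reversing grating at the odd-symmetric (null) spatial phase,
# modulated at 4 Hz over a 1 s window.
x = np.linspace(-np.pi, np.pi, 256, endpoint=False)
t = np.linspace(0, 1, 200, endpoint=False)
f_temporal = 4.0
stim = np.sin(x)[None, :] * np.sin(2 * np.pi * f_temporal * t)[:, None]

# X-like: linear spatial summation over the RF -> exact cancellation (null).
linear_resp = stim.sum(axis=1)

# Y-like: half-wave rectified subunits pooled by the ganglion cell ->
# an on-off response at twice the stimulus frequency (frequency doubling).
y_resp = np.maximum(stim, 0).sum(axis=1)

spec = np.abs(np.fft.rfft(y_resp - y_resp.mean()))
print("linear |resp| max:", np.abs(linear_resp).max())
print("Y-like peak frequency:", np.argmax(spec), "Hz")  # bins are 1 Hz apart
```

Rectification before pooling is the standard textbook route to frequency doubling; the paper's point is that the same circuit can move between these regimes via RF geometry alone.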

    On the complex dynamics of intracellular ganglion cell light responses in the cat retina

    We recorded intracellular responses from cat retinal ganglion cells to sinusoidal flickering lights and compared the response dynamics to a theoretical model based on coupled nonlinear oscillators. Flicker responses for several different spot sizes were separated into a 'smooth' generator (G) potential and corresponding spike trains. We have previously shown that the G-potential reveals complex, stimulus dependent, oscillatory behavior in response to sinusoidally flickering lights. Such behavior could be simulated by a modified van der Pol oscillator. In this paper, we extend the model to account for spike generation as well, by including extended Hodgkin-Huxley equations describing local membrane properties. We quantified spike responses by several parameters describing the mean and standard deviation of spike burst duration, timing (phase shift) of bursts, and the number of spikes in a burst. The dependence of these response parameters on stimulus frequency and spot size could be reproduced in great detail by coupling the van der Pol oscillator and Hodgkin-Huxley equations. The model mimics many experimentally observed response patterns, including non-phase-locked irregular oscillations. Our findings suggest that the information in the ganglion cell spike train reflects both intraretinal processing (simulated by the van der Pol oscillator) and local membrane properties described by Hodgkin-Huxley equations. The interplay between these complex processes can be simulated by changing the coupling coefficients between the two oscillators. Our simulations therefore show that irregularities in spike trains, which normally are considered to be noise, may be interpreted as complex oscillations that might carry information. Whitehall Foundation (S93-24)
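For orientation, the building block of the model above is the van der Pol oscillator. A minimal, unforced, uncoupled integration is sketched below (forward Euler, illustrative parameters); the paper's modified, stimulus-driven version coupled to Hodgkin-Huxley equations is not reproduced here.

```python
import numpy as np

def van_der_pol_step(x, v, mu, dt):
    # dx/dt = v ;  dv/dt = mu * (1 - x^2) * v - x
    # (nonlinear damping: energy is injected for |x| < 1, dissipated for |x| > 1)
    return x + dt * v, v + dt * (mu * (1 - x**2) * v - x)

mu, dt, n = 1.0, 1e-3, 100_000
x, v = 0.1, 0.0          # small initial perturbation
xs = np.empty(n)
for i in range(n):
    x, v = van_der_pol_step(x, v, mu, dt)
    xs[i] = x

# The trajectory is attracted onto a limit cycle of amplitude close to 2,
# regardless of the (nonzero) initial condition.
print(xs[n // 2:].max(), xs[n // 2:].min())
```

The self-sustained limit cycle is what makes this oscillator a natural candidate for the stimulus-dependent oscillations reported in the G-potential.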

    A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes

    Animals avoid obstacles and approach goals in novel cluttered environments using visual information, notably optic flow, to compute heading, or direction of travel, with respect to objects in the environment. We present a neural model of how heading is computed that describes interactions among neurons in several visual areas of the primate magnocellular pathway, from retina through V1, MT+, and MSTd. The model produces outputs which are qualitatively and quantitatively similar to human heading estimation data in response to complex natural scenes. The model estimates heading to within 1.5° in random dot or photo-realistically rendered scenes and within 3° in video streams from driving in real-world environments. Simulated rotations of less than 1 degree per second do not affect model performance, but faster simulated rotation rates degrade performance, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments. National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)
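The geometric principle behind heading recovery from optic flow can be shown in a few lines: under pure translation, every flow vector points away from the focus of expansion (FOE), so the FOE is the least-squares intersection of the flow directions. This toy is only the geometry; the paper's MSTd-based neural model is far richer.

```python
import numpy as np

rng = np.random.default_rng(0)
true_foe = np.array([0.3, -0.2])            # heading point in image coordinates
pts = rng.uniform(-1, 1, size=(100, 2))     # sampled image locations
flow = pts - true_foe                       # pure-translation radial flow field

# Each flow vector is parallel to (p - foe), so its perpendicular gives one
# linear constraint:  perp(flow) . foe = perp(flow) . p
perp = np.stack([-flow[:, 1], flow[:, 0]], axis=1)
A = perp
b = np.sum(perp * pts, axis=1)
foe_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(foe_hat)
```

With observer rotation added, the flow field is no longer radial and this direct solve fails, which is why rotation rates matter in the human and model data quoted above.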

    A bio-inspired image coder with temporal scalability

    We present a novel bio-inspired and dynamic coding scheme for static images. Our coder aims at reproducing the main steps of the visual stimulus processing in the mammalian retina taking into account its time behavior. The main novelty of this work is to show how to exploit the time behavior of the retina cells to ensure, in a simple way, scalability and bit allocation. To do so, our main source of inspiration will be the biologically plausible retina model called Virtual Retina. Following a similar structure, our model has two stages. The first stage is an image transform which is performed by the outer layers in the retina. Here it is modelled by filtering the image with a bank of difference of Gaussians with time-delays. The second stage is a time-dependent analog-to-digital conversion which is performed by the inner layers in the retina. Thanks to its conception, our coder enables scalability and bit allocation across time. Also, our decoded images do not show annoying artefacts such as ringing and block effects. As a whole, this article shows how to capture the main properties of a biological system, here the retina, in order to design a new efficient coder. Comment: 12 pages; Advanced Concepts for Intelligent Vision Systems (ACIVS 2011)
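The first-stage transform named above, difference-of-Gaussians (DOG) filtering, can be sketched in one dimension as follows. The kernel widths are illustrative assumptions, and the time-delay structure of the paper's filter bank is omitted.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def dog_response(signal, sigma_c, sigma_s):
    # Difference of Gaussians: narrow excitatory center minus broad
    # inhibitory surround. Both Gaussians are normalized, so the DOG
    # kernel sums to ~0 and transmits no DC (uniform regions map to 0).
    r = int(4 * sigma_s)
    k = gaussian_kernel(sigma_c, r) - gaussian_kernel(sigma_s, r)
    return np.convolve(signal, k, mode='same')

# A luminance step: the band-pass DOG responds only near the edge,
# with a biphasic (negative-then-positive) profile across it.
step = np.r_[np.zeros(50), np.ones(50)]
resp = dog_response(step, sigma_c=1.0, sigma_s=3.0)
print(resp[45:55])
```

Discarding the DC component and concentrating energy at edges is what makes a bank of such filters a useful front end for a coder.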

    Intensity Coding in Two-Dimensional Excitable Neural Networks

    In the light of recent experimental findings that gap junctions are essential for low level intensity detection in the sensory periphery, the Greenberg-Hastings cellular automaton is employed to model the response of a two-dimensional sensory network to external stimuli. We show that excitable elements (sensory neurons) with a small individual dynamical range give rise to a large collective dynamical range. Therefore the network transfer (gain) function (which is Hill or Stevens law-like) is an emergent property generated from a pool of small dynamical range cells, providing a basis for a "neural psychophysics". The growth of the dynamical range with the system size is approximately logarithmic, suggesting a functional role for electrical coupling. For a fixed number of neurons, the dynamical range displays a maximum as a function of the refractory period, which suggests experimental tests for the model. A biological application to ephaptic interactions in olfactory nerve fascicles is proposed. Comment: 17 pages, 5 figures
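A minimal Greenberg-Hastings automaton illustrating the collective-amplification effect described above: with neighbor coupling, a weak external stimulus recruits propagating excitation waves, so network activity far exceeds what the same cells produce in isolation. Lattice size, refractory length, and stimulus rate are illustrative, not the paper's parameters.

```python
import numpy as np

def simulate(p, steps=200, L=40, n_states=3, coupled=True, seed=1):
    # Greenberg-Hastings states: 0 = quiescent, 1 = excited, 2 = refractory.
    # A quiescent cell fires with probability p (external stimulus), or
    # deterministically if any of its 4 neighbors is excited (gap-junction-like
    # coupling); excited/refractory cells advance one state per step.
    rng = np.random.default_rng(seed)
    s = np.zeros((L, L), dtype=int)
    activity = 0.0
    for _ in range(steps):
        excited = (s == 1)
        neigh = np.zeros_like(excited)
        if coupled:
            for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                neigh |= np.roll(excited, shift, axis=(0, 1))
        ext = rng.random((L, L)) < p
        new = np.where(s > 0, (s + 1) % n_states, 0)
        new = np.where((s == 0) & (ext | neigh), 1, new)
        s = new
        activity += excited.mean()
    return activity / steps

weak = 1e-3
rho_coupled = simulate(weak, coupled=True)
rho_uncoupled = simulate(weak, coupled=False)
print(rho_coupled, rho_uncoupled)
```

Sweeping `p` over several decades and reading off the stimulus levels that yield 10% and 90% of maximal activity would give the dynamical-range measure the paper analyzes.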