
    Lack of the Sodium-Driven Chloride Bicarbonate Exchanger NCBE Impairs Visual Function in the Mouse Retina

    Regulation of ion and pH homeostasis is essential for normal neuronal function. The sodium-driven chloride bicarbonate exchanger NCBE (Slc4a10), a member of the SLC4 family of bicarbonate transporters, uses the transmembrane gradient of sodium to drive cellular net uptake of bicarbonate and to extrude chloride, thereby modulating both intracellular pH (pHi) and chloride concentration ([Cl-]i) in neurons. Here we show that NCBE is strongly expressed in the retina. As GABAA receptors conduct both chloride and bicarbonate, we hypothesized that NCBE may be relevant for GABAergic transmission in the retina. Importantly, we found a differential expression of NCBE in bipolar cells: whereas NCBE was expressed on ON and OFF bipolar cell axon terminals, it only localized to dendrites of OFF bipolar cells. On these compartments, NCBE colocalized with the main neuronal chloride extruder KCC2, which renders GABA hyperpolarizing. NCBE was also expressed in starburst amacrine cells, but was absent from neurons known to depolarize in response to GABA, like horizontal cells. Mice lacking NCBE showed decreased visual acuity and contrast sensitivity in behavioral experiments and smaller b-wave amplitudes and longer latencies in electroretinograms. Ganglion cells from NCBE-deficient mice also showed altered temporal response properties. In summary, our data suggest that NCBE may serve to maintain intracellular chloride and bicarbonate concentrations in retinal neurons. Consequently, lack of NCBE in the retina may result in changes in pHi regulation and chloride-dependent inhibition, leading to altered signal transmission and impaired visual function.
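
    Because GABAA receptors conduct both chloride and bicarbonate, the functional consequence of a shifted [Cl-]i and [HCO3-]i can be made concrete with the standard Goldman-Hodgkin-Katz reversal potential for a channel permeable to two anions. The sketch below is illustrative only: the concentrations, the P_HCO3/P_Cl ratio, and the helper name e_gaba are assumptions, not values or code from this study.

        import math

        # Reversal potential of a channel permeable to the anions Cl- and HCO3-,
        # from the Goldman-Hodgkin-Katz voltage equation:
        #   E = (RT/F) * ln((P_Cl*[Cl]_i + P_HCO3*[HCO3]_i) /
        #                   (P_Cl*[Cl]_o + P_HCO3*[HCO3]_o))

        RT_OVER_F = 26.7  # mV at body temperature

        def e_gaba(cl_in, hco3_in, cl_out=120.0, hco3_out=25.0, p_ratio=0.2):
            """E_GABA in mV; concentrations in mM; p_ratio = P_HCO3/P_Cl (assumed)."""
            return RT_OVER_F * math.log((cl_in + p_ratio * hco3_in) /
                                        (cl_out + p_ratio * hco3_out))

        # Illustrative values: low [Cl-]i maintained by extruders such as KCC2/NCBE.
        print(round(e_gaba(cl_in=6.0, hco3_in=15.0), 1))   # about -70 mV: hyperpolarizing GABA
        # If chloride extrusion and bicarbonate uptake are impaired, [Cl-]i rises,
        # [HCO3-]i falls, and E_GABA moves toward rest, weakening inhibition.
        print(round(e_gaba(cl_in=15.0, hco3_in=10.0), 1))  # about -53 mV

    Under these assumed numbers the shift is roughly 17 mV toward rest, the kind of weakened chloride-dependent inhibition the authors invoke to explain the altered signal transmission.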

    How lateral inhibition and fast retinogeniculo-cortical oscillations create vision: A new hypothesis

    The physiological processes underlying human vision remain incompletely explained in the current literature. Unanswered questions about vision include: 1) whether there is more to lateral inhibition than previously proposed, 2) the role of the discs in rods and cones, 3) how inverted images on the retina are converted to erect images for visual perception, 4) what portion of the image formed on the retina is actually processed in the brain, 5) the reason we have an after-image with antagonistic colors, and 6) how we remember space. This theoretical article attempts to clarify some of the physiological processes involved in human vision. The global integration of visual information is conceptual; therefore, we include illustrations to present our theory. Universally, the eyeball is 2.4 cm in diameter and works together with membrane potential, correspondingly representing the retinal layers, photoreceptors, and cortex. Images formed within the photoreceptors must first be converted into chemical signals on the photoreceptors’ individual discs, and the signals at each disc are transduced from light photons into electrical signals. We contend that the discs code the electrical signals into accurate distances, as shown in our figures. The pre-existing oscillations among the various cortices, including the striate and parietal cortices, and the retina work in unison to create an infrastructure of visual space that functionally “places” the objects within this “neural” space. The horizontal layers integrate all discs accurately to create a retina that is pre-coded for distance. Our theory suggests that image inversion never takes place on the retina; rather, images fall onto the retina compressed and coiled and are then amplified through lateral inhibition, via intensification and amplification at the OFF-center cones. The intensified and amplified images are decompressed and expanded in the brain, becoming the images we perceive as external vision.

    Distinct roles for inhibition in spatial and temporal tuning of local edge detectors in the rabbit retina.

    This paper examines the role of inhibition in generating the receptive-field properties of local edge detector (LED) ganglion cells in the rabbit retina. We confirm that the feed-forward inhibition is largely glycinergic but, contrary to a recent report, our data demonstrate that the glycinergic inhibition contributes to temporal tuning for the OFF and ON inputs to the LEDs by delaying the onset of spiking; this delay was more pronounced for the ON inputs (∼340 ms) than the OFF inputs (∼12 ms). Blocking glycinergic transmission reduced the delay to spike onset and increased the responses to flickering stimuli at high frequencies. Analysis of the synaptic conductances indicates that glycinergic amacrine cells affect temporal tuning through both postsynaptic inhibition of the LEDs and presynaptic modulation of the bipolar cells that drive the LEDs. The results also confirm that presynaptic GABAergic transmission contributes significantly to the concentric surround antagonism in LEDs; however, unlike presumed LEDs in the mouse retina, the surround is only partly generated by spiking amacrine cells.
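
    The reported shift in spike onset can be pictured with a toy drive model, not the conductance analysis in the paper: if feedforward inhibition is recruited at stimulus onset and decays more slowly than excitation rises, the net drive crosses spike threshold late, and removing the inhibition (as when glycinergic transmission is blocked) shortens that delay. The time constants, weights, and threshold below are arbitrary placeholders.

        import numpy as np

        dt = 1.0                          # ms
        t = np.arange(0.0, 500.0, dt)
        tau_exc, tau_inh = 30.0, 120.0    # rise/decay time constants (ms), assumed
        onset = 50.0                      # stimulus onset (ms)

        step = (t >= onset).astype(float)
        g_exc = step * (1.0 - np.exp(-(t - onset) / tau_exc))   # rising excitatory drive
        g_inh = step * np.exp(-(t - onset) / tau_inh)           # slowly decaying inhibitory drive

        def first_spike_delay(inh_weight, threshold=0.4):
            """Time after onset at which the net drive first exceeds threshold."""
            drive = g_exc - inh_weight * g_inh
            above = np.nonzero(drive > threshold)[0]
            return t[above[0]] - onset if above.size else np.inf

        print(first_spike_delay(inh_weight=1.0))   # ~78 ms: inhibition delays spike onset
        print(first_spike_delay(inh_weight=0.0))   # ~16 ms: "blocking" inhibition shortens it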

    Information recovery from rank-order encoded images

    The time to detection of a visual stimulus by the primate eye is recorded at 100-150 ms. This near-instantaneous recognition occurs in spite of the considerable processing required by the several stages of the visual pathway to recognise and react to a visual scene. How this is achieved is still a matter of speculation. Rank-order codes have been proposed as a means of encoding by the primate eye in the rapid transmission of the initial burst of information from the sensory neurons to the brain. We study the efficiency of rank-order codes in encoding perceptually important information in an image. VanRullen and Thorpe built a model of the ganglion cell layers of the retina to simulate and study the viability of rank-order as a means of encoding by retinal neurons. We validate their model and quantify the information retrieved from rank-order encoded images in terms of the visually important information recovered. Towards this goal, we apply the ‘perceptual information preservation algorithm’ proposed by Petrovic and Xydeas, with slight modification. We observe low information recovery due to losses suffered during the rank-order encoding and decoding processes. We propose to minimise these losses to recover maximum information in minimum time from rank-order encoded images. We first maximise information recovery by using the pseudo-inverse of the filter-bank matrix to minimise losses during rank-order decoding. We then apply the biological principle of lateral inhibition to minimise losses during rank-order encoding; in doing so, we propose the Filter-overlap Correction algorithm. To test the performance of rank-order codes in a biologically realistic model, we design and simulate a model of the foveal-pit ganglion cells of the retina, keeping close to biological parameters. We use this as a rank-order encoder and analyse its performance relative to VanRullen and Thorpe’s retinal model.
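
    The decoding step described here, recovering an image from a truncated rank-order code with the pseudo-inverse of the filter-bank matrix, can be sketched in a few lines. This is a simplified stand-in rather than the VanRullen and Thorpe scheme itself: it keeps the actual coefficient amplitudes instead of a rank-based lookup, and the random patch and filter bank are placeholders.

        import numpy as np

        def rank_order_encode(image, filters):
            """Project an image onto a filter bank and return the filter indices
            sorted by decreasing response magnitude, plus the ranked coefficients."""
            coeffs = filters @ image
            order = np.argsort(-np.abs(coeffs))      # most strongly driven filter first
            return order, coeffs[order]

        def rank_order_decode(order, ranked_coeffs, filters, keep=None):
            """Reconstruct with the pseudo-inverse of the filter bank, so decoding
            losses come only from truncating the rank-order code."""
            keep = len(order) if keep is None else keep
            coeffs = np.zeros(filters.shape[0])
            coeffs[order[:keep]] = ranked_coeffs[:keep]
            return np.linalg.pinv(filters) @ coeffs

        # Toy example with a random patch and a random, full-rank filter bank.
        rng = np.random.default_rng(0)
        patch = rng.standard_normal(64)
        bank = rng.standard_normal((64, 64))

        order, ranked = rank_order_encode(patch, bank)
        full = rank_order_decode(order, ranked, bank)              # every rank kept
        partial = rank_order_decode(order, ranked, bank, keep=16)  # first 16 ranks only

        print(np.allclose(full, patch))         # True: pseudo-inverse undoes a complete code
        print(np.linalg.norm(partial - patch))  # residual error caused by truncation

    In the abstract's terms, the pseudo-inverse targets losses on the decoding side; the Filter-overlap Correction algorithm targets the overlap-induced losses on the encoding side, which this toy, uncorrelated filter bank does not model.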

    A Unified Model of Spatiotemporal Processing in the Retina

    A computational model of visual processing in the vertebrate retina provides a unified explanation of a range of data previously treated by disparate models. Three results are reported here: the model proposes a functional explanation for the primary feed-forward retinal circuit found in vertebrate retinae, it shows how this retinal circuit combines nonlinear adaptation with the desirable properties of linear processing, and it accounts for the origin of parallel transient (nonlinear) and sustained (linear) visual processing streams as simple variants of the same retinal circuit. The retina, owing to its accessibility and to its fundamental role in the initial transduction of light into neural signals, is among the most extensively studied neural structures in the nervous system. Since the pioneering anatomical work by Ramón y Cajal at the turn of the last century [1], technological advances have enabled detailed descriptions of the physiological, pharmacological, and functional properties of many types of retinal cells. However, the relationship between structure and function in the retina is still poorly understood. This article outlines a computational model developed to address fundamental constraints of biological visual systems. Neurons that process nonnegative input signals, such as retinal illuminance, are subject to an inescapable tradeoff between accurate processing in the spatial and temporal domains. Accurate processing in both domains can be achieved with a model that combines nonlinear mechanisms for temporal and spatial adaptation within three layers of feed-forward processing. The resulting architecture is structurally similar to the feed-forward retinal circuit connecting photoreceptors to retinal ganglion cells through bipolar cells. This similarity suggests that the three-layer structure observed in all vertebrate retinae [2] is a required minimal anatomy for accurate spatiotemporal visual processing. This hypothesis is supported through computer simulations showing that the model's output layer accounts for many properties of retinal ganglion cells [3], [4], [5], [6]. Moreover, the model shows how the retina can extend its dynamic range through nonlinear adaptation while exhibiting seemingly linear behavior in response to a variety of spatiotemporal input stimuli. This property is the basis for the prediction that the same retinal circuit can account for both sustained (X) and transient (Y) cat ganglion cells [7] by simple morphological changes. The ability to generate distinct functional behaviors by simple changes in cell morphology suggests that different functional pathways originating in the retina may have evolved from a unified anatomy designed to cope with the constraints of low-level biological vision. Sloan Fellowship.
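
    As a rough illustration of the kind of architecture described, and not the model's actual equations, the sketch below chains a divisively adapting "photoreceptor" stage, a difference-of-Gaussians "bipolar" stage, and a rectified "ganglion" output over a one-dimensional spatial stimulus; every parameter is a placeholder.

        import numpy as np

        def three_layer_retina(stimulus, tau_adapt=20.0, sigma_c=1.0, sigma_s=3.0, dt=1.0):
            """Toy three-stage feed-forward cascade on a (time x space) stimulus:
            adapting receptors -> centre-surround filtering -> rectified output."""
            T, N = stimulus.shape
            x = np.arange(N)

            def dog_row(i):  # centre-surround weights for the unit centred at pixel i
                c = np.exp(-(x - i) ** 2 / (2 * sigma_c ** 2))
                s = np.exp(-(x - i) ** 2 / (2 * sigma_s ** 2))
                return c / c.sum() - s / s.sum()

            W = np.stack([dog_row(i) for i in range(N)])

            a = np.zeros(N)                 # slow adaptation state of each receptor
            out = np.zeros_like(stimulus)
            for k in range(T):
                a += dt / tau_adapt * (stimulus[k] - a)   # low-pass estimate of the input
                p = stimulus[k] / (1.0 + a)               # divisive (shunting-like) adaptation
                out[k] = np.maximum(W @ p, 0.0)           # centre-surround, then rectification
            return out

        # Usage: a step of light switched on over the right half of space.
        stim = np.zeros((200, 64))
        stim[50:, 32:] = 10.0
        print(three_layer_retina(stim).shape)   # (200, 64)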

    A Nonlinear Model of Spatiotemporal Retinal Processing: Simulations of X and Y Retinal Ganglion Cell Behavior

    This article describes a nonlinear model of neural processing in the vertebrate retina, comprising model photoreceptors, model push-pull bipolar cells, and model ganglion cells. Previous analyses and simulations have shown that with a choice of parameters that mimics beta cells, the model exhibits X-like linear spatial summation (a null response to contrast-reversed gratings) in spite of photoreceptor nonlinearities; on the other hand, a choice of parameters that mimics alpha cells leads to Y-like frequency doubling. This article extends the previous work by showing that the model can replicate qualitatively many of the original findings on X and Y cells with a fixed choice of parameters. The results generally support the hypothesis that X and Y cells can be seen as functional variants of a single neural circuit. The model also suggests that both depolarizing and hyperpolarizing bipolar cells converge onto both ON and OFF ganglion cell types. The push-pull connectivity enables ganglion cells to remain sensitive to deviations about the mean output level of nonlinear photoreceptors. These and other properties of the push-pull model are discussed in the general context of retinal processing of spatiotemporal luminance patterns. Alfred P. Sloan Research Fellowship (BR-3122); Air Force Office of Scientific Research (F49620-92-J-0499).
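
    The X/Y signature referred to here, a null response to contrast-reversed gratings under linear spatial summation versus frequency doubling when rectified subunits are summed, can be reproduced with a generic toy model; this is the textbook diagnostic, not the push-pull circuit itself.

        import numpy as np

        # A contrast-reversing (counterphase) grating placed at the null phase of
        # an even-symmetric receptive field.
        x = np.linspace(0, 2 * np.pi, 64, endpoint=False)    # one spatial period
        t = np.linspace(0, 2 * np.pi, 200, endpoint=False)   # one temporal period
        stim = np.sin(x)[None, :] * np.sin(t)[:, None]       # stim[time, space]

        # X-like behaviour: linear summation over a symmetric receptive field.
        rf = np.exp(-(x - np.pi) ** 2 / 0.5)
        x_cell = stim @ rf                                    # cancels at the null phase

        # Y-like behaviour: summing half-wave-rectified subunits.
        y_cell = np.maximum(stim, 0.0).sum(axis=1)            # responds at twice the frequency

        print(np.abs(x_cell).max())    # ~0: the X-like null response
        peaks = np.sum((y_cell[1:-1] > y_cell[:-2]) & (y_cell[1:-1] > y_cell[2:]))
        print(peaks)                   # 2 peaks per stimulus cycle: frequency doubling

    Shifting the grating away from the null phase restores modulation at the stimulus frequency in the linear cell, while the rectified sum keeps its doubled-frequency response, which is the classical X-versus-Y diagnostic.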

    Dendritic and axonal targeting patterns of a genetically-specified class of retinal ganglion cells that participate in image-forming circuits.

    Get PDF
    Background: There are numerous functional types of retinal ganglion cells (RGCs), each participating in circuits that encode a specific aspect of the visual scene. This functional specificity is derived from distinct RGC morphologies and selective synapse formation with other retinal cell types; yet, how these properties are established during development remains unclear. Islet2 (Isl2) is a LIM-homeodomain transcription factor expressed in the developing retina, including approximately 40% of all RGCs, and has previously been implicated in the subtype specification of spinal motor neurons. Based on this, we hypothesized that Isl2+ RGCs represent a related subset that share a common function. Results: We morphologically and molecularly characterized Isl2+ RGCs using a transgenic mouse line that expresses GFP in the cell bodies, dendrites and axons of Isl2+ cells (Isl2-GFP). Isl2-GFP RGCs have distinct morphologies and dendritic stratification patterns within the inner plexiform layer (IPL) and project to selective visual nuclei. Targeted filling of individual cells reveals that the majority of Isl2-GFP RGCs have dendrites that are monostratified in layer S3 of the IPL, suggesting they are not ON-OFF direction-selective ganglion cells. Molecular analysis shows that most alpha-RGCs, indicated by expression of SMI-32, are also Isl2-GFP RGCs. Isl2-GFP RGCs project to most retino-recipient nuclei during early development, but specifically innervate the dorsal lateral geniculate nucleus and superior colliculus (SC) at eye opening. Finally, we show that the segregation of Isl2+ and Isl2- RGC axons in the SC leads to the segregation of functional RGC types. Conclusions: Taken together, these data suggest that Isl2+ RGCs comprise a distinct class and support a role for Isl2 as an important component of a transcription factor code specifying functional visual circuits. Furthermore, this study describes a novel genetically-labeled mouse line that will be a valuable resource in future investigations of the molecular mechanisms of visual circuit formation.

    General features of the retinal connectome determine the computation of motion anticipation

    Get PDF
    Motion anticipation allows the visual system to compensate for the slow speed of phototransduction so that a moving object can be accurately located. This correction is already present in the signal that ganglion cells send from the retina, but the biophysical mechanisms underlying this computation are not known. Here we demonstrate that motion anticipation is computed autonomously within the dendritic tree of each ganglion cell and relies on feedforward inhibition. The passive and non-linear interaction of excitatory and inhibitory synapses enables the somatic voltage to encode the actual position of a moving object instead of its delayed representation. General rather than specific features of the retinal connectome govern this computation: an excess of inhibitory inputs over excitatory ones, with both randomly distributed, allows tracking of all directions of motion, while the average distance between inputs determines the object velocities that can be compensated for.
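
    As a cartoon of the peak-advance effect, not the authors' compartmental dendrite model, the sketch below treats the excitatory drive produced by a bar crossing the receptive field as a delayed bump in time and feedforward inhibition as a slightly later, stronger bump; rectifying their difference pulls the response peak back toward the true crossing time. All times and weights are invented for illustration.

        import numpy as np

        t = np.arange(0.0, 300.0, 1.0)    # time (ms)
        bar_crossing = 40.0               # bar crosses the receptive-field centre (ms)
        photo_delay = 60.0                # phototransduction delay (ms)
        sigma = 30.0                      # temporal width of the synaptic drive (ms)

        def bump(center, amplitude):
            return amplitude * np.exp(-(t - center) ** 2 / (2 * sigma ** 2))

        excitation = bump(bar_crossing + photo_delay, 1.0)          # delayed excitatory drive
        inhibition = bump(bar_crossing + photo_delay + 15.0, 1.4)   # later but stronger

        v_without = excitation
        v_with = np.maximum(excitation - inhibition, 0.0)           # rectified somatic voltage

        print(t[np.argmax(v_without)])   # 100 ms: fully delayed (40 ms crossing + 60 ms delay)
        print(t[np.argmax(v_with)])      # ~35 ms earlier: peak pulled back toward the crossing time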