Dynamic predictive coding by the retina
Retinal ganglion cells convey the visual image from the eye to the brain. They generally encode local differences in space and changes in time rather than the raw image intensity. This can be seen as a strategy of predictive coding, adapted through evolution to the average image statistics of the natural environment. Yet animals encounter many environments with visual statistics different from the average scene. Here we show that when this happens, the retina adjusts its processing dynamically. The spatio-temporal receptive fields of retinal ganglion cells change after a few seconds in a new environment. The changes are adaptive, in that the new receptive field improves predictive coding under the new image statistics. We show that a network model with plastic synapses can account for the large variety of observed adaptations.
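The predictive-coding logic can be illustrated with a one-dimensional sketch: the best linear filter for transmitting prediction errors depends on the signal's correlation structure, so when the statistics change, the filter must change with them. Everything below (the AR(1) "environment", the single-tap predictor) is an illustrative stand-in, not the paper's network model:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1(rho, n=100_000):
    # Gaussian signal with lag-1 correlation rho: a 1-D stand-in for the
    # "image statistics" of an environment
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
    return x

def optimal_weight(x):
    # least-squares predictor of x[t] from x[t-1]; this is the "adapted" filter
    return np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])

for rho in (0.9, 0.3):                # two environments, different statistics
    x = ar1(rho)
    w = optimal_weight(x)             # filter re-adapted to the new statistics
    residual = x[1:] - w * x[:-1]     # transmit prediction errors, not raw input
    print(f"rho={rho}: weight {w:.2f}, error var {residual.var():.2f} "
          f"vs signal var {x.var():.2f}")
```

The adapted weight tracks the correlation of the current environment (w ≈ rho), and the transmitted error always has lower variance than the raw signal, which is the benefit predictive coding is after. A filter tuned to rho = 0.9 but used in the rho = 0.3 environment would forfeit that benefit, which is why dynamic re-adaptation pays off.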
Retinal Adaptation to Object Motion
Due to fixational eye movements, the image on the retina is always in motion, even when one views a stationary scene. When an object moves within the scene, the corresponding patch of retina experiences a different motion trajectory than the surrounding region. Certain retinal ganglion cells respond selectively to this condition, when the motion in the cell's receptive field center is different from that in the surround. Here we show that this response is strongest at the very onset of differential motion, followed by gradual adaptation with a time course of several seconds. Different subregions of a ganglion cell's receptive field can adapt independently. The circuitry responsible for differential motion adaptation lies in the inner retina. Several candidate mechanisms were tested, and the adaptation most likely results from synaptic depression at the synapse from bipolar to ganglion cell. Similar circuit mechanisms may act more generally to emphasize novel features of a visual stimulus.
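The proposed mechanism, synaptic depression at the bipolar-to-ganglion-cell synapse, can be sketched with a minimal vesicle-depletion model in the spirit of standard depression models. The parameters (recovery time constant, release fraction) are illustrative, not fitted values from the paper:

```python
import numpy as np

def depressing_synapse(stimulus, dt=0.1, tau_rec=4.0, u=0.5):
    """Release from a depressing synapse: the resource pool R is depleted by
    each release and recovers toward 1 with time constant tau_rec (seconds).
    All parameter values are illustrative placeholders."""
    R = 1.0
    out = []
    for s in stimulus:
        release = u * R * s                  # output scales with remaining pool
        R += dt * (1.0 - R) / tau_rec - release
        out.append(release)
    return np.array(out)

# Sustained differential-motion drive: the modeled response is strongest at
# onset, then declines over a few seconds, matching the observed adaptation.
dt = 0.1
drive = np.ones(int(10 / dt))                # 10 s of constant drive
resp = depressing_synapse(drive, dt)
print(resp[0], resp[-1])                     # onset response >> adapted response
```

With these numbers the output falls from its onset value to a much smaller steady state within a few seconds, reproducing the qualitative time course described above.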
A Retinal Circuit That Computes Object Motion
Certain ganglion cells in the retina respond sensitively to differential motion between the receptive field center and surround, as produced by an object moving over the background, but are strongly suppressed by global image motion, as produced by the observer's head or eye movements. We investigated the circuit basis for this object motion sensitive (OMS) response by recording intracellularly from all classes of retinal interneurons while simultaneously recording the spiking output of many ganglion cells. Fast, transient bipolar cells respond linearly to motion in the receptive field center. The synaptic output from their terminals is rectified and then pooled by the OMS ganglion cell. A type of polyaxonal amacrine cell is driven by motion in the surround, again via pooling of rectified inputs, but from a different set of bipolar cell terminals. By direct intracellular current injection, we found that these polyaxonal amacrine cells selectively suppress the synaptic input of OMS ganglion cells. A quantitative model of these circuit elements and their interactions explains how an important visual computation is accomplished by retinal neurons and synapses. (Molecular and Cellular Biology)
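The circuit logic above (rectified bipolar subunits pooled by the ganglion cell, a rectified amacrine pool driven by the surround, subtractive suppression) can be sketched as a toy rate model. The trajectory generator and all parameters are illustrative assumptions, not the paper's quantitative model:

```python
import numpy as np

rng = np.random.default_rng(1)

def trajectory(n):
    # fixational-jitter-like motion signal: smoothed random velocity
    v = rng.standard_normal(n)
    return np.convolve(v, np.ones(5) / 5, mode="same")

def oms_rate(center_traj, surround_traj, k=1.0):
    """Toy OMS response: rectified bipolar drive from center motion,
    rectified polyaxonal-amacrine drive from surround motion, subtractive
    suppression, then output rectification. Illustrative parameters."""
    center = np.maximum(0, center_traj)      # pooled rectified bipolar output
    amacrine = np.maximum(0, surround_traj)  # pooled rectified surround drive
    return np.maximum(0, center - k * amacrine).mean()

n = 10_000
obj, bg = trajectory(n), trajectory(n)       # independent motion trajectories
global_resp = oms_rate(bg, bg)               # eye movement: center == surround
differential = oms_rate(obj, bg)             # object moving over the background
print(global_resp, differential)
```

Under global motion the amacrine suppression arrives in lockstep with the center excitation and cancels it; under differential motion the two trajectories are uncorrelated, excitation and suppression fall out of register, and a response survives.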
The Projective Field of a Retinal Amacrine Cell
In sensory systems, neurons are generally characterized by their receptive field, namely the sensitivity to activity patterns at the input of the circuit. To assess the role of the neuron in the system, one must also know its projective field, namely the spatiotemporal effects the neuron exerts on all of the outputs of the circuit. We studied both the receptive and projective fields of an amacrine interneuron in the salamander retina. This amacrine type has a sustained OFF response with a small receptive field, but its output projects over a much larger region. Unlike other amacrine cells, this type is remarkably promiscuous and affects nearly every ganglion cell within reach of its dendrites. Its activity modulates the sensitivity of visual responses in ganglion cells but leaves their kinetics unchanged. The projective field displays a center-surround structure: depolarizing a single amacrine suppresses the visual sensitivity of ganglion cells nearby and enhances it at greater distances. This change in sign is seen even within the receptive field of one ganglion cell; thus, the modulation occurs presynaptically on bipolar cell terminals, most likely via GABAB receptors. Such an antagonistic projective field could contribute to the mechanisms of the retina for predictive coding.
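The antagonistic center-surround shape of the projective field, suppression nearby and enhancement farther away, can be sketched as a difference of Gaussians. The amplitudes and widths below are arbitrary placeholders, not measured values from the study:

```python
import numpy as np

def projective_field(distance, a_near=1.0, s_near=50.0, a_far=0.4, s_far=150.0):
    """Difference-of-Gaussians sketch of the antagonistic projective field
    (illustrative amplitudes, widths in arbitrary distance units).
    Negative values = suppressed ganglion-cell sensitivity near the
    depolarized amacrine; positive values = enhancement at a distance."""
    suppress = a_near * np.exp(-(distance / s_near) ** 2)   # narrow suppression
    enhance = a_far * np.exp(-(distance / s_far) ** 2)      # broad enhancement
    return enhance - suppress

for d in (0.0, 100.0, 300.0):
    print(d, projective_field(d))    # sign flips from negative to positive
```

The single sign change with distance is the qualitative feature reported above; real amplitudes and spatial scales would have to come from the recordings.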
HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution
Genomic (DNA) sequences encode an enormous amount of information for gene regulation and protein synthesis. Similar to natural language models, researchers have proposed foundation models in genomics to learn generalizable features from unlabeled genome data that can then be fine-tuned for downstream tasks such as identifying regulatory elements. Due to the quadratic scaling of attention, previous Transformer-based genomic models have used 512 to 4k tokens as context (<0.001% of the human genome), significantly limiting the modeling of long-range interactions in DNA. In addition, these methods rely on tokenizers to aggregate meaningful DNA units, losing single-nucleotide resolution, where subtle genetic variations can completely alter protein function via single nucleotide polymorphisms (SNPs). Recently, Hyena, a large language model based on implicit convolutions, was shown to match attention in quality while allowing longer context lengths and lower time complexity. Leveraging Hyena's new long-range capabilities, we present HyenaDNA, a genomic foundation model pretrained on the human reference genome with context lengths of up to 1 million tokens at the single-nucleotide level, an up to 500x increase over previous dense attention-based models. HyenaDNA scales sub-quadratically in sequence length (training up to 160x faster than a Transformer), uses single-nucleotide tokens, and has full global context at each layer. We explore what longer context enables, including the first use of in-context learning in genomics for simple adaptation to novel tasks without updating pretrained model weights. On fine-tuned benchmarks from the Nucleotide Transformer, HyenaDNA reaches state of the art (SotA) on 12 of 17 datasets using a model with orders of magnitude fewer parameters and less pretraining data. On the GenomicBenchmarks, HyenaDNA surpasses SotA on all 8 datasets, on average by +9 accuracy points.
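Two points in the abstract lend themselves to a quick sketch: single-nucleotide (character-level) tokenization, and the quadratic cost that keeps dense attention from reaching million-token contexts. The vocabulary and the op-count formula below are simplified assumptions for illustration, not HyenaDNA's actual implementation:

```python
# Character-level DNA tokenization: one token per nucleotide, so no
# resolution is lost to aggregation. (Minimal vocabulary assumed here;
# a real model would add special tokens.)
VOCAB = {c: i for i, c in enumerate("ACGTN")}

def tokenize(seq):
    return [VOCAB[c] for c in seq.upper()]

print(tokenize("acgtn"))          # [0, 1, 2, 3, 4]

def attention_ops(L, d=128):
    # dense self-attention needs an L x L score matrix: O(L^2 * d) multiply-adds
    return L * L * d

for L in (4_096, 1_000_000):
    print(f"L={L:>9,}: ~{attention_ops(L):.1e} ops per attention layer")
```

Growing the context from 4k to 1M tokens multiplies the attention cost by roughly (1e6 / 4096)^2, about 60,000x, which is why sub-quadratic operators like Hyena's implicit convolutions are needed at that scale.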
Segregation of object and background motion in the retina
An important task in vision is to detect objects moving within a stationary scene. During normal viewing this is complicated by the presence of eye movements that continually scan the image across the retina, even during fixation. To detect moving objects, the brain must distinguish local motion within the scene from the global retinal image drift due to fixational eye movements. We have found that this process begins in the retina: a subset of retinal ganglion cells responds to motion in the receptive field centre, but only if the wider surround moves with a different trajectory. This selectivity for differential motion is independent of direction, and can be explained by a model of retinal circuitry that invokes pooling over nonlinear interneurons. The suppression by global image motion is probably mediated by polyaxonal, wide-field amacrine cells with transient responses. We show how a population of ganglion cells selective for differential motion can rapidly flag moving objects, and even segregate multiple moving objects.
A Synaptic Mechanism for Temporal Filtering of Visual Signals
The visual system transmits information about fast and slow changes in light intensity through separate neural pathways. We used in vivo imaging to investigate how bipolar cells transmit these signals to the inner retina. We found that the volume of the synaptic terminal is an intrinsic property that contributes to different temporal filters. Individual cells transmit through multiple terminals varying in size, but smaller terminals generate faster and larger calcium transients to trigger vesicle release with higher initial gain, followed by more profound adaptation. Smaller terminals transmitted higher stimulus frequencies more effectively. Modeling global calcium dynamics triggering vesicle release indicated that variations in the volume of presynaptic compartments contribute directly to all these differences in response dynamics. These results indicate how one neuron can transmit different temporal components in the visual signal through synaptic terminals of varying geometries with different adaptational properties.
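The volume effect can be illustrated with a single-compartment sketch: calcium influx scales with membrane area while dilution scales with volume, so the surface-to-volume ratio (3/r for a sphere) sets the transient's amplitude. The spherical geometry, the step stimulus, and all parameter values are illustrative assumptions; this toy version captures only the amplitude difference, not the full kinetic differences reported above:

```python
import numpy as np

def calcium_transient(radius_um, t, influx=1.0, tau=0.5):
    """Global calcium in a spherical terminal of radius radius_um
    (illustrative units): dCa/dt = influx * (A/V) - Ca / tau, with
    surface-to-volume ratio A/V = 3 / r. Closed-form response to a step
    of influx starting at t = 0."""
    sv = 3.0 / radius_um
    return influx * sv * tau * (1.0 - np.exp(-t / tau))

t = np.linspace(0, 2, 100)
small = calcium_transient(0.5, t)    # small terminal, high A/V
large = calcium_transient(2.0, t)    # large terminal, low A/V
print(small[-1] / large[-1])         # amplitude ratio = (3/0.5)/(3/2.0) = 4
```

A 4x smaller radius gives a 4x larger steady-state transient for the same per-area influx, consistent with smaller terminals driving release with higher initial gain.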