Network adaptation improves temporal representation of naturalistic stimuli in Drosophila eye: II Mechanisms
Retinal networks must adapt constantly to best present the ever-changing visual world to the brain. Here we test the hypothesis that adaptation is the result of different mechanisms at several synaptic connections within the network. In a companion paper (Part I), we showed that adaptation in the photoreceptors (R1-R6) and large monopolar cells (LMCs) of the Drosophila eye improves sensitivity to under-represented signals within seconds by enhancing both the amplitude and frequency distribution of the LMCs' voltage responses to repeated naturalistic contrast series. In this paper, we show that such adaptation requires both the light-mediated conductance and the feedback-mediated synaptic conductance. A faulty feedforward pathway in histamine-receptor mutant flies speeds up the LMC output, mimicking extreme light adaptation. A faulty feedback pathway from L2 LMCs to photoreceptors slows down the LMC output, mimicking dark adaptation. These results underline the importance of network adaptation for efficient coding, and as a mechanism for selectively regulating the size and speed of signals in neurons. We suggest that the concerted action of many different mechanisms and neural connections is responsible for adaptation to visual stimuli. Further, our results demonstrate the need for detailed circuit reconstructions, such as that of the Drosophila lamina, to understand how networks process information.
Consequences of converting graded to action potentials upon neural information coding and energy efficiency
Information is encoded in neural circuits using both graded and action potentials, and neural circuits convert between the two within single neurons and across successive processing layers. This conversion is accompanied by information loss and a drop in energy efficiency. We investigate the biophysical causes of this loss of information and efficiency by comparing spiking neuron models, containing stochastic voltage-gated Na+ and K+ channels, with generator potential and graded potential models lacking voltage-gated Na+ channels. We identify three causes of information loss in the generator potential that are by-products of action potential generation: (1) the voltage-gated Na+ channels necessary for action potential generation increase intrinsic noise, (2) these channels introduce non-linearities, and (3) the finite duration of the action potential creates a ‘footprint’ in the generator potential that obscures incoming signals. These three processes reduce the information rate of the generator potential by ~50%, to ~3 times that of the spike train. Both generator potentials and graded potentials consume almost an order of magnitude less energy per second than spike trains. Because of their lower information rates, generator potentials are substantially less energy efficient than graded potentials. However, both are an order of magnitude more efficient than spike trains, due to the higher energy costs and low information content of spikes, emphasizing that converting analogue to digital carries a two-fold cost: information loss and cost inflation.
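To make the efficiency comparison above concrete, here is a rough, purely illustrative calculation treating energy efficiency as information rate divided by energy consumption. All numbers are hypothetical placeholders chosen only to respect the approximate ratios quoted in the abstract (~50% information loss in the generator potential, ~3 times the spike-train rate, roughly an order of magnitude higher energy cost for spiking); they are not the paper's measured values.

```python
# Hypothetical figures illustrating efficiency = information rate / energy use.
signals = {
    #  name                  bits/s   energy/s (arbitrary units)
    "graded potential":    (1200.0,  1.0),
    "generator potential":  (600.0,  1.0),   # ~50% of the graded rate
    "spike train":          (200.0, 10.0),   # ~1/3 of generator rate, ~10x energy
}

for name, (bits_per_s, energy_per_s) in signals.items():
    efficiency = bits_per_s / energy_per_s   # bits per unit energy
    print(f"{name:20s}  {efficiency:7.1f} bits per unit energy")
```

With these placeholder figures the graded potential comes out most efficient, the generator potential intermediate, and the spike train least efficient, matching the ordering described above.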
The Natural Variation of a Neural Code
The way information is represented by the action potential sequences of spiking neurons is determined not only by the input each neuron receives but also by its biophysics and the specifics of the circuit in which it is embedded. Even the “code” of identified neurons can vary considerably from individual to individual. Here we compared the neural codes of the identified H1 neuron in the visual systems of two families of flies, blow flies and flesh flies, and explored the effect of the sensory environment that the flies were exposed to during development on the H1 code. We found that the two families differed considerably in the temporal structure of the code, its content and energetic efficiency, as well as in the temporal delay of the neural response. Differences in the environmental conditions during the flies' development had no significant effect. Our results may thus reflect an instance of a family-specific design of the neural code. They may also suggest that individual variability in information processing by this specific neuron, in terms of both form and content, is regulated genetically.
Second Order Dimensionality Reduction Using Minimum and Maximum Mutual Information Models
Conventional methods used to characterize multidimensional neural feature selectivity, such as spike-triggered covariance (STC) or maximally informative dimensions (MID), are limited to Gaussian stimuli or are only able to identify a small number of features due to the curse of dimensionality. To overcome these issues, we propose two new dimensionality reduction methods that use minimum and maximum information models. These methods are information-theoretic extensions of STC that can be used with non-Gaussian stimulus distributions to find relevant linear subspaces of arbitrary dimensionality. We compare these new methods to the conventional methods in two ways: with biologically inspired simulated neurons responding to natural images and with recordings from macaque retinal and thalamic cells responding to naturalistic time-varying stimuli. With non-Gaussian stimuli, the minimum and maximum information methods significantly outperform STC in all cases, whereas MID performs best in the regime of low-dimensional feature spaces.
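For readers unfamiliar with the conventional baseline the new methods are compared against, the following Python sketch shows a minimal spike-triggered covariance (STC) analysis. It is a generic illustration of STC, not the authors' implementation; the white-noise stimulus, toy temporal filter, window length, and Poisson spike generation are all hypothetical.

```python
import numpy as np

def spike_triggered_covariance(stimulus, spikes, window=20):
    """Return the STA and the eigen-decomposition of the spike-triggered covariance.

    stimulus : 1-D array of stimulus values (e.g. temporal contrast)
    spikes   : 1-D array of spike counts, same length as stimulus
    window   : number of stimulus samples preceding each time bin
    """
    T = len(stimulus)
    # Matrix of stimulus windows preceding each time bin.
    X = np.array([stimulus[t - window:t] for t in range(window, T)])
    n = spikes[window:T]

    sta = (n @ X) / n.sum()                      # spike-triggered average
    prior_cov = np.cov(X, rowvar=False)          # covariance of all stimulus windows
    Xc = X - sta                                 # centre windows on the STA
    stc = (Xc * n[:, None]).T @ Xc / n.sum()     # spike-weighted covariance

    # Eigenvectors whose eigenvalues deviate strongly from the prior
    # covariance spectrum are candidate stimulus features.
    eigvals, eigvecs = np.linalg.eigh(stc - prior_cov)
    return sta, eigvals, eigvecs

# Toy usage: white-noise stimulus driving a hypothetical one-filter neuron.
rng = np.random.default_rng(0)
stim = rng.standard_normal(50_000)
true_filter = np.hanning(20)                     # hypothetical temporal filter
drive = np.convolve(stim, true_filter[::-1], mode="full")[:len(stim)]
rate = np.exp(0.5 * drive - 1.0)                 # exponential nonlinearity
spikes = rng.poisson(np.clip(rate, 0.0, 10.0))   # Poisson spike counts per bin
sta, eigvals, eigvecs = spike_triggered_covariance(stim, spikes)
```

Eigenvectors of the difference between the spike-triggered and prior covariance matrices with strongly deviating eigenvalues are taken as the relevant stimulus dimensions; the minimum and maximum information models described above extend this idea to non-Gaussian stimulus distributions.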
Modeling convergent ON and OFF pathways in the early visual system
To understand the computation and function of single neurons in sensory systems, one needs to investigate how sensory stimuli are related to a neuron’s response and which biological mechanisms underlie this relationship. Mathematical models of the stimulus–response relationship have proved very useful in approaching these issues in a systematic, quantitative way. A starting point for many such analyses has been provided by phenomenological “linear–nonlinear” (LN) models, which comprise a linear filter followed by a static nonlinear transformation. The linear filter is often associated with the neuron’s receptive field. However, the structure of the receptive field is generally the result of inputs from many presynaptic neurons, which may form parallel signal-processing pathways. In the retina, for example, certain ganglion cells receive excitatory inputs from ON-type as well as OFF-type bipolar cells. Recent experiments have shown that the convergence of these pathways leads to intriguing response characteristics that cannot be captured by a single linear filter. One approach to adjusting the LN model to the biological circuit structure is to use multiple parallel filters that capture ON and OFF bipolar inputs. Here, we review these new developments in modeling neuronal responses in the early visual system and provide details about one particular technique for obtaining the required sets of parallel filters from experimental data.
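As a rough illustration of the modeling idea reviewed here, the sketch below contrasts a classical single-filter LN model with a version that uses separate, individually rectified ON and OFF filters whose outputs converge. The filter shapes, the half-wave rectification, and all parameter values are hypothetical choices for illustration, not the specific technique detailed in the review.

```python
import numpy as np

def ln_single(stimulus, k, nonlinearity):
    """Classical LN model: one linear filter followed by a static nonlinearity."""
    g = np.convolve(stimulus, k, mode="same")
    return nonlinearity(g)

def ln_on_off(stimulus, k_on, k_off, rectify=lambda x: np.maximum(x, 0.0)):
    """Multi-filter LN model with converging ON and OFF pathways.

    Each pathway is filtered and rectified separately (mimicking ON and OFF
    bipolar cell inputs) before the two signals are summed into one drive.
    """
    g_on = rectify(np.convolve(stimulus, k_on, mode="same"))
    g_off = rectify(np.convolve(stimulus, k_off, mode="same"))
    return g_on + g_off          # combined ganglion-cell drive

# Hypothetical temporal filters: the OFF filter is a sign-flipped, slower variant.
t = np.linspace(0, 0.3, 30)
k_on = np.sin(2 * np.pi * t / 0.3) * np.exp(-t / 0.1)
k_off = -np.sin(2 * np.pi * t / 0.35) * np.exp(-t / 0.12)

rng = np.random.default_rng(1)
stim = rng.standard_normal(1000)
rate_single = ln_single(stim, k_on, np.exp)
rate_on_off = ln_on_off(stim, k_on, k_off)
```

The key design point is that each pathway passes through its own nonlinearity before summation, which is exactly the kind of response characteristic that a single linear filter followed by one static nonlinearity cannot reproduce.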
A Structured Model of Video Reproduces Primary Visual Cortical Organisation
The visual system must learn to infer the presence of objects and features in the world from the images it encounters, and as such it must, either implicitly or explicitly, model the way these elements interact to create the image. Do the response properties of cells in the mammalian visual system reflect this constraint? To address this question, we constructed a probabilistic model in which the identity and attributes of simple visual elements were represented explicitly, and we learnt the parameters of this model from unparsed, natural video sequences. After learning, the behaviour and grouping of variables in the probabilistic model corresponded closely to functional and anatomical properties of simple and complex cells in the primary visual cortex (V1). In particular, feature identity variables were activated in a way that resembled the activity of complex cells, while feature attribute variables responded much like simple cells. Furthermore, the grouping of the attributes within the model closely paralleled the reported anatomical grouping of simple cells in cat V1. Thus, this generative model makes explicit an interpretation of complex and simple cells as elements in the segmentation of a visual scene into basic independent features, along with a parametrisation of their moment-by-moment appearances. We speculate that such a segmentation may form the initial stage of a hierarchical system that progressively separates the identity and appearance of more articulated visual elements, culminating in view-invariant object recognition.
Spike-Triggered Covariance Analysis Reveals Phenomenological Diversity of Contrast Adaptation in the Retina
When visual contrast changes, retinal ganglion cells adapt by adjusting their sensitivity as well as their temporal filtering characteristics. The latter has classically been described by contrast-induced gain changes that depend on temporal frequency. Here, we explored a new perspective on contrast-induced changes in temporal filtering by using spike-triggered covariance analysis to extract multiple parallel temporal filters for individual ganglion cells. Based on multielectrode-array recordings from ganglion cells in the isolated salamander retina, we found that contrast adaptation of temporal filtering can largely be captured by contrast-invariant sets of filters with contrast-dependent weights. Moreover, differences among the ganglion cells in the filter sets and their contrast-dependent contributions allowed us to phenomenologically distinguish three types of filter changes. The first type is characterized by newly emerging features at higher contrast, which can be reproduced by computational models that contain response-triggered gain-control mechanisms. The second type follows from stronger adaptation in the Off pathway as compared to the On pathway in On-Off-type ganglion cells. Finally, we found that, in a subset of neurons, contrast-induced filter changes are governed by particularly strong spike-timing dynamics, most notably pronounced stimulus-dependent latency shifts observed in these cells. Together, our results show that the contrast dependence of temporal filtering in retinal ganglion cells has a multifaceted phenomenology and that a multi-filter analysis can provide a useful basis for capturing the underlying signal-processing dynamics.
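To illustrate the central descriptive claim, that contrast adaptation of temporal filtering can largely be captured by contrast-invariant filters with contrast-dependent weights, here is a minimal, hypothetical sketch that fits per-contrast weights onto a fixed filter basis by least squares. The basis filters, the "measured" filters, and the fitting procedure are illustrative assumptions, not the paper's analysis pipeline.

```python
import numpy as np

def fit_contrast_weights(filter_basis, measured_filters):
    """Fit contrast-dependent weights onto a contrast-invariant filter basis.

    filter_basis     : (n_time, n_filters) array shared across contrasts
    measured_filters : dict mapping contrast level -> (n_time,) filter
    Returns a dict mapping contrast level -> (n_filters,) weight vector.
    """
    weights = {}
    for contrast, f in measured_filters.items():
        w, *_ = np.linalg.lstsq(filter_basis, f, rcond=None)
        weights[contrast] = w
    return weights

# Hypothetical basis: one fast biphasic and one slow monophasic temporal filter.
t = np.linspace(0, 0.4, 40)
biphasic = np.sin(2 * np.pi * t / 0.3) * np.exp(-t / 0.1)
monophasic = np.exp(-t / 0.15)
basis = np.stack([biphasic, monophasic], axis=1)

# Hypothetical "measured" filters: high contrast mixes in more of the fast component.
measured = {
    "low contrast":  0.3 * biphasic + 1.0 * monophasic,
    "high contrast": 1.0 * biphasic + 0.4 * monophasic,
}
print(fit_contrast_weights(basis, measured))
```

In this toy setting, the change in temporal filtering across contrast levels is expressed entirely through the recovered weight vectors, while the basis filters themselves stay fixed.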