Spike-Triggered Covariance Analysis Reveals Phenomenological Diversity of Contrast Adaptation in the Retina
When visual contrast changes, retinal ganglion cells adapt by adjusting their sensitivity as well as their temporal filtering characteristics. The latter has classically been described by contrast-induced gain changes that depend on temporal frequency. Here, we explored a new perspective on contrast-induced changes in temporal filtering by using spike-triggered covariance analysis to extract multiple parallel temporal filters for individual ganglion cells. Based on multielectrode-array recordings from ganglion cells in the isolated salamander retina, we found that contrast adaptation of temporal filtering can largely be captured by contrast-invariant sets of filters with contrast-dependent weights. Moreover, differences among the ganglion cells in the filter sets and their contrast-dependent contributions allowed us to phenomenologically distinguish three types of filter changes. The first type is characterized by newly emerging features at higher contrast, which can be reproduced by computational models that contain response-triggered gain-control mechanisms. The second type follows from stronger adaptation in the Off pathway than in the On pathway in On-Off-type ganglion cells. Finally, we found that, in a subset of neurons, contrast-induced filter changes are governed by especially strong spike-timing dynamics, in particular by pronounced stimulus-dependent latency shifts. Together, our results show that the contrast dependence of temporal filtering in retinal ganglion cells has a multifaceted phenomenology and that a multi-filter analysis can provide a useful basis for capturing the underlying signal-processing dynamics.
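The core analysis the abstract relies on, spike-triggered covariance, can be illustrated with a minimal simulation. The sketch below is not the authors' pipeline: it uses a hypothetical energy-model cell with two made-up temporal filters and Gaussian white-noise stimulation, and recovers the filters' subspace from the eigenvectors of the covariance difference between the spike-triggered and raw stimulus ensembles.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 200_000, 40  # time bins and filter length (illustrative sizes)

# Two orthogonal temporal filters, stand-ins for a cell's true features
t = np.arange(D)
f1 = np.sin(2 * np.pi * t / D); f1 /= np.linalg.norm(f1)
f2 = np.cos(2 * np.pi * t / D); f2 /= np.linalg.norm(f2)

# Gaussian white-noise stimulus, arranged as sliding temporal windows
stim = rng.standard_normal(T + D - 1)
X = np.lib.stride_tricks.sliding_window_view(stim, D)  # shape (T, D)

# Energy-model neuron: spikes when the summed squared projections are large
g = (X @ f1) ** 2 + (X @ f2) ** 2
spikes = g > np.quantile(g, 0.95)

# Spike-triggered covariance: covariance of the spike-triggered ensemble
# minus the covariance of the full stimulus ensemble
dC = np.cov(X[spikes].T) - np.cov(X.T)

# Eigenvectors with the largest |eigenvalue| span the relevant subspace
w, V = np.linalg.eigh(dC)
subspace = V[:, np.argsort(-np.abs(w))[:2]]  # two recovered filters

# Sanity check: f1 should lie (almost) entirely within the recovered subspace,
# so the norm of its projection should be close to 1
print(np.linalg.norm(subspace @ (subspace.T @ f1)))
```

With enough samples, both ground-truth filters project almost entirely into the recovered two-dimensional subspace, which is the property the multi-filter analysis in the abstract builds on.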
Recommended from our members
A low-rank method for characterizing high-level neural computations
The signal transformations that take place in high-level sensory regions of the brain remain enigmatic because of the many nonlinear transformations that separate responses of these neurons from the input stimuli. One would like to have dimensionality reduction methods that can describe responses of such neurons in terms of operations on a large but still manageable set of relevant input features. A number of methods have been developed for this purpose, but often these methods rely on the expansion of the input space to capture as many relevant stimulus components as statistically possible. This expansion leads to a lower effective sampling density, thereby reducing the accuracy of the estimated components. Alternatively, so-called low-rank methods explicitly search for a small number of components in the hope of achieving higher estimation accuracy. Even with these methods, however, noise in the neural responses can force the models to estimate more components than necessary, again reducing the methods' accuracy. Here we describe how a flexible regularization procedure, together with an explicit rank constraint, can strongly improve the estimation accuracy compared to previous methods suitable for characterizing neural responses to natural stimuli. Applying the proposed low-rank method to responses of auditory neurons in the songbird brain, we find multiple relevant components making up the receptive field for each neuron and characterize their computations in terms of logical OR and AND computations. The results highlight potential differences in how invariances are constructed in visual and auditory systems.
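The general idea of a rank-constrained, regularized receptive-field estimate can be sketched as follows. This is a simplified stand-in, not the paper's method: it fits a hypothetical rank-1 spectro-temporal receptive field by alternating ridge-regularized least squares between its spectral and temporal factors, with all sizes and the regularization weight chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
F, D, T = 12, 15, 20_000  # frequency bins, time lags, samples (illustrative)

# Ground-truth rank-1 spectro-temporal receptive field: W = u v^T
u = np.exp(-(np.arange(F) - 4) ** 2 / 4.0)
v = np.sin(np.arange(D) / 2.0) * np.exp(-np.arange(D) / 5.0)
W_true = np.outer(u, v)

# Random spectrogram-like stimulus and noisy linear response
X = rng.standard_normal((T, F, D))
y = np.einsum('tfd,fd->t', X, W_true) + 0.5 * rng.standard_normal(T)

# Alternating least squares with ridge regularization: holding one factor
# fixed makes the problem linear in the other factor
lam = 1.0
u_hat = rng.standard_normal(F)
v_hat = rng.standard_normal(D)
for _ in range(20):
    A = X @ v_hat                              # (T, F): stimulus filtered by v
    u_hat = np.linalg.solve(A.T @ A + lam * np.eye(F), A.T @ y)
    B = np.einsum('tfd,f->td', X, u_hat)       # (T, D): stimulus filtered by u
    v_hat = np.linalg.solve(B.T @ B + lam * np.eye(D), B.T @ y)

W_hat = np.outer(u_hat, v_hat)
# Correlation between estimated and true receptive field, close to 1 here
corr = np.corrcoef(W_hat.ravel(), W_true.ravel())[0, 1]
print(round(corr, 3))
```

The explicit rank constraint means only F + D parameters are estimated instead of F × D, which is the source of the accuracy gain the abstract describes; the regularization additionally suppresses noise-driven components.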
Identifying Functional Bases for Multidimensional Neural Computations
Current dimensionality reduction methods can identify relevant subspaces for neural computations, but do not favor one basis over another within the relevant subspace. Finding the appropriate basis can further simplify the description of the nonlinear computation with respect to the relevant variables, making it easier to elucidate the underlying neural computation and to form hypotheses about the neural circuitry giving rise to the observed responses. Part of the problem is that, although some dimensionality reduction methods can identify many of the relevant dimensions, it is usually difficult to map out or interpret the nonlinear transformation with respect to more than a few relevant dimensions simultaneously without simplifying assumptions. While recent approaches make it possible to create predictive models based on many relevant dimensions simultaneously, there remains a need to relate such predictive models to mechanistic descriptions of the underlying neural circuitry. Here we demonstrate that transforming to a basis within the relevant subspace in which the neural computation is best described by a given nonlinear function often makes it easier to interpret the computation and to describe it with a small number of parameters. We refer to the corresponding basis as the functional basis and illustrate the utility of such a transformation in the context of logical OR and logical AND functions. We show that although dimensionality reduction methods such as spike-triggered covariance can find a relevant subspace, they often produce dimensions that are difficult to interpret and do not correspond to a functional basis. The functional features can instead be found using a maximum likelihood approach. The results are illustrated using simulated neurons and recordings from retinal ganglion cells. The resulting features are uniquely defined, non-orthogonal, and make it easier to relate computational and mechanistic models to each other.
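The gap between an STC subspace and a functional basis can be demonstrated with a toy simulation. The sketch below is an illustration under simplifying assumptions, not the paper's estimator: a simulated logical-OR neuron with two made-up features, a fixed (rather than fitted) threshold and sigmoid steepness in the likelihood model, and a grid search over in-plane rotations standing in for a full maximum likelihood optimization.

```python
import numpy as np

rng = np.random.default_rng(2)
T, D = 50_000, 20
theta, k = 1.5, 8.0  # threshold and sigmoid steepness (assumed known here)

# Two orthonormal ground-truth features (hypothetical)
f1 = np.zeros(D); f1[:10] = 1.0; f1 /= np.linalg.norm(f1)
f2 = np.zeros(D); f2[10:] = 1.0; f2 /= np.linalg.norm(f2)

X = rng.standard_normal((T, D))
spikes = (X @ f1 > theta) | (X @ f2 > theta)  # logical-OR neuron

# STC recovers the relevant subspace, but by symmetry its eigenvectors come
# out as mixtures (roughly f1 +/- f2), not as the functional features
dC = np.cov(X[spikes].T) - np.cov(X.T)
w, V = np.linalg.eigh(dC)
E = V[:, np.argsort(-np.abs(w))[:2]]
Z = X @ E  # projections onto the estimated subspace

def loglik(alpha, s):
    """Bernoulli log-likelihood of an OR-of-sigmoids model in a rotated basis."""
    z1 = np.cos(alpha) * Z[:, 0] + np.sin(alpha) * Z[:, 1]
    z2 = s * (-np.sin(alpha) * Z[:, 0] + np.cos(alpha) * Z[:, 1])
    sig = lambda x: 0.5 * (1.0 + np.tanh(x / 2.0))  # numerically stable sigmoid
    p = 1 - (1 - sig(k * (z1 - theta))) * (1 - sig(k * (z2 - theta)))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return np.where(spikes, np.log(p), np.log(1 - p)).sum()

# Grid search over rotations (and a reflection) within the STC subspace
grid = [(a, s) for a in np.linspace(0, 2 * np.pi, 181) for s in (1, -1)]
alpha, s = max(grid, key=lambda g: loglik(*g))
b1 = np.cos(alpha) * E[:, 0] + np.sin(alpha) * E[:, 1]

# The maximum-likelihood basis vector should align with one true feature,
# even though the raw STC eigenvectors do not
print(max(b1 @ f1, b1 @ f2))
```

The maximum of the likelihood picks out the rotation within the STC subspace where the nonlinearity factorizes into independent one-dimensional thresholds, which is the sense in which the functional basis simplifies the description of the computation.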