
    Optimal Population Coding, Revisited

    Cortical circuits perform the computations underlying rapid perceptual decisions within a few dozen milliseconds, with each neuron emitting only a few spikes. Under these conditions, the theoretical analysis of neural population codes is challenging, as the most commonly used theoretical tool, Fisher information, can lead to erroneous conclusions about the optimality of different coding schemes. Here we revisit the effect of tuning function width and correlation structure on neural population codes, based on ideal observer analysis in both a discrimination and a reconstruction task. We show that the optimal tuning function width and the optimal correlation structure in both paradigms depend strongly, and in a very similar way, on the available decoding time. In contrast, population codes optimized for Fisher information do not depend on decoding time and are severely suboptimal when only a few spikes are available. In addition, we use the neurometric functions of the ideal observer in the classification task to investigate the differential coding properties of these Fisher-optimal codes for fine and coarse discrimination. We find that the discrimination error for these codes does not decrease to zero with increasing population size, even in simple coarse discrimination tasks. Our results suggest that quite different population codes may be optimal for rapid decoding in cortical computations than those inferred from the optimization of Fisher information.
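
    As a point of reference (a standard expression, not taken from the paper, assuming independent Poisson spike counts with tuning curves f_i(\theta) decoded over a window of length T), the Fisher information of such a population is

        J(\theta) = T \sum_{i=1}^{N} \frac{f_i'(\theta)^2}{f_i(\theta)}.

    Because T enters only as a multiplicative factor, a code that maximizes J(\theta) is the same for every decoding time, which matches the observation above that Fisher-optimal codes are insensitive to decoding time while the ideal-observer error is not.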

    Bayesian estimation of orientation preference maps

    Imaging techniques such as optical imaging of intrinsic signals, two-photon calcium imaging and voltage-sensitive dye imaging can be used to measure the functional organization of visual cortex across different spatial and temporal scales. Here, we present Bayesian methods based on Gaussian processes for extracting topographic maps from functional imaging data. In particular, we focus on the estimation of orientation preference maps (OPMs) from intrinsic signal imaging data. We model the underlying map as a bivariate Gaussian process, with a prior covariance function that reflects known properties of OPMs and a noise covariance adjusted to the data. The posterior mean can be interpreted as an optimally smoothed estimate of the map and can be used for model-based interpolation of the map from sparse measurements. By sampling from the posterior distribution, we can obtain error bars on statistical properties such as preferred orientations, pinwheel locations or pinwheel counts. Finally, the use of an explicit probabilistic model facilitates interpretation of parameters and quantitative model comparisons. We demonstrate our model both on simulated data and on intrinsic signal imaging data from ferret visual cortex.
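
    To make the smoothing step concrete, here is a minimal numerical sketch (Python/NumPy, not the authors' implementation; the function name and the i.i.d. noise assumption are ours) of the Gaussian process posterior mean, i.e. the "optimally smoothed estimate" referred to above. For an OPM the map has two components (equivalently, a complex value) per pixel, and the same computation is applied to each component.

        import numpy as np

        def gp_posterior_mean(K_prior, noise_var, y):
            """Posterior mean of a GP observed under i.i.d. Gaussian noise.

            K_prior   : (n, n) prior covariance between map pixels
            noise_var : scalar measurement noise variance
            y         : (n,) vectorized imaging data (one component of the map)
            """
            n = y.shape[0]
            # Solve (K + sigma^2 I) alpha = y rather than forming an explicit inverse.
            alpha = np.linalg.solve(K_prior + noise_var * np.eye(n), y)
            return K_prior @ alpha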

    Bayesian decoding of populations of integrate-and-fire neurons


    Toolbox for inference in generalized linear models of spiking neurons

    Generalized linear models (GLMs) are increasingly used to analyze neural data and to characterize the stimulus dependence and functional connectivity of both single neurons and neural populations. One way to extend the computational complexity of these models is to expand the stimulus, and possibly the representation of the spiking history, into high-dimensional feature spaces. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach to regularization. In this work, we present a MATLAB toolbox which provides efficient inference methods for parameter fitting. This includes standard maximum a posteriori estimation for Gaussian and Laplacian priors, also referred to as L2- and L1-regularization, respectively. Furthermore, it implements approximate inference techniques for both prior distributions based on the expectation propagation algorithm [1]. In order to model refractoriness and functional couplings between neurons, the spiking history within a population is often represented as responses to a set of predefined basis functions. Most of the basis function sets used so far are non-orthogonal. Commonly, priors are specified without taking the properties of the basis functions into account (uncorrelated Gaussian, independent Laplace). However, if basis functions overlap, the coefficients are correlated. As an example application of this toolbox, we analyze the effect of independent prior distributions when the set of basis functions is non-orthogonal, and compare the performance to the orthogonal setting.
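
    To make the regularization concrete, the following is a minimal sketch of MAP estimation for a Poisson GLM with an exponential nonlinearity and a zero-mean Gaussian prior, i.e. L2-regularized maximum likelihood. It is written in Python for illustration only; the toolbox itself is a MATLAB package, and the function and parameter names below are ours.

        import numpy as np
        from scipy.optimize import minimize

        def fit_poisson_glm_map(X, y, prior_prec=1.0):
            """MAP weights of a Poisson GLM with exponential nonlinearity.

            X          : (T, d) design matrix (stimulus and spike-history features)
            y          : (T,)   observed spike counts per bin
            prior_prec : precision of the zero-mean Gaussian prior (L2 penalty)
            """
            def neg_log_posterior(w):
                rate = np.exp(X @ w)                   # conditional intensity per bin
                nll = np.sum(rate - y * (X @ w))       # Poisson neg. log-likelihood (up to a constant)
                return nll + 0.5 * prior_prec * w @ w  # Gaussian prior = quadratic (L2) penalty

            def grad(w):
                rate = np.exp(X @ w)
                return X.T @ (rate - y) + prior_prec * w

            w0 = np.zeros(X.shape[1])
            res = minimize(neg_log_posterior, w0, jac=grad, method="L-BFGS-B")
            return res.x

    A Laplacian prior would replace the quadratic penalty with an L1 term, which is non-smooth and therefore requires a different optimizer than the L-BFGS routine used here.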

    Bayesian Population Decoding of Spiking Neurons


    A joint maximum-entropy model for binary neural population patterns and continuous signals

    Second-order maximum-entropy models have recently gained much interest for describing the statistics of binary spike trains. Here, we extend this approach to take continuous stimuli into account as well. By constraining the joint second-order statistics, we obtain a joint Gaussian-Boltzmann distribution over continuous stimuli and binary neural firing patterns, for which we also compute marginal and conditional distributions. This model has the same computational complexity as pure binary models, and fitting it to data is a convex problem. We show that the model can be seen as an extension of the classical spike-triggered average and can be used as a non-linear method for extracting the features to which a neural population is sensitive. Further, by calculating the posterior distribution of stimuli given an observed neural response, the model can be used to decode stimuli and yields a natural spike-train metric. Therefore, extending the framework of maximum-entropy models to continuous variables allows us to gain novel insights into the relationship between the firing patterns of neural ensembles and the stimuli they are processing.
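
    One parameterization consistent with this description (the notation below is ours, not necessarily the authors') is, for binary patterns x in {0,1}^N and a continuous stimulus s,

        p(x, s) \propto \exp\!\big( h^\top x + \tfrac{1}{2} x^\top J x + x^\top W s + b^\top s - \tfrac{1}{2} s^\top K s \big).

    Conditioning on s yields a second-order binary maximum-entropy (Boltzmann) distribution over x with stimulus-shifted biases h + W s, while conditioning on x yields a Gaussian over s with precision K and mean K^{-1}(b + W^\top x); the latter is what links the model to the spike-triggered average and makes stimulus decoding tractable.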

    Support vector machines for an efficient representation of voltage band constraints

    Future Smart Grids emphasize the (at least) partial coordination of a large number of stochastic consumers and producers to balance consumption and generation of electrical energy. However, this coordinated and often time-synchronous activation and deactivation of demand and supply may result in violations of the grid's feasibility constraints, since it may increase statistical simultaneity above the level assumed in the initial design of such networks. The detection and avoidance of such operational infeasibilities is essential for secure grid operation. In order to handle the complexity of the optimization problem at hand, autonomous software agents will play a vital role in future power management systems. The system's complexity is reduced to several less complex sub-problems, which may then be solved in parallel based on locally available information. In this paper we present ongoing work on the development of autonomous grid management under the stability constraints imposed by the power grid.
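
    As an illustration of the basic idea of representing a feasibility region with a learned classifier (a toy sketch under assumed data and names, not the formulation used in the paper), a support vector machine can be trained on simulated operating points labelled by whether they respect the voltage band, and then queried cheaply by an agent before a schedule is committed:

        import numpy as np
        from sklearn.svm import SVC

        # Hypothetical training data: each row is an aggregated operating point
        # (e.g. active power per controllable unit), labelled by a grid simulation
        # as 1 = voltage band respected, 0 = violated.
        rng = np.random.default_rng(0)
        operating_points = rng.uniform(-1.0, 1.0, size=(500, 8))
        labels = (np.abs(operating_points.sum(axis=1)) < 2.0).astype(int)  # toy feasibility rule

        feasibility_model = SVC(kernel="rbf", C=10.0).fit(operating_points, labels)

        # An agent can now test a candidate schedule against the learned constraint.
        candidate = rng.uniform(-1.0, 1.0, size=(1, 8))
        print("feasible" if feasibility_model.predict(candidate)[0] == 1 else "infeasible")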

    Neurometric function analysis of population codes

    The relative merits of different population coding schemes have mostly been analyzed in the framework of stimulus reconstruction using Fisher information. Here, we consider the case of stimulus discrimination in a two-alternative forced choice paradigm and compute neurometric functions in terms of the minimal discrimination error and the Jensen-Shannon information to study neural population codes. We first explore the relationship between minimum discrimination error, Jensen-Shannon information and Fisher information, and show that the discrimination framework is more informative about coding accuracy than Fisher information, as it defines an error for any pair of possible stimuli; in particular, it includes Fisher information as a special case. Second, we use the framework to study population codes of angular variables. Specifically, we assess the impact of different noise correlation structures on coding accuracy in long versus short decoding time windows: for long time windows we use the common Gaussian noise approximation, while for short time windows we analyze an Ising model with an identical noise correlation structure. In this way, we provide a new, rigorous framework for assessing the functional consequences of noise correlation structures for the representational accuracy of neural population codes, one that is in particular applicable to short-time population coding.
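
    For reference, the two neurometric quantities can be written in their standard form, with p_1 = p(r \mid \theta_1) and p_2 = p(r \mid \theta_2) the response distributions under the two stimuli and equal priors assumed:

        E_{\min} = \tfrac{1}{2}\Big(1 - \tfrac{1}{2}\sum_r \big| p_1(r) - p_2(r) \big|\Big),

        D_{\mathrm{JS}}(p_1, p_2) = \tfrac{1}{2} D_{\mathrm{KL}}(p_1 \,\|\, m) + \tfrac{1}{2} D_{\mathrm{KL}}(p_2 \,\|\, m), \qquad m = \tfrac{1}{2}(p_1 + p_2).

    Unlike Fisher information, which is a local quantity tied to infinitesimally close stimuli, both expressions are defined for an arbitrary pair of stimuli, which is what makes the coarse-discrimination analysis described above possible.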