
    Stimulus-dependent maximum entropy models of neural population codes

    Neural populations encode information about their stimulus in a collective fashion, through joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. To infer a model for this distribution from large-scale neural recordings, we introduce a stimulus-dependent maximum entropy (SDME) model: a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. The model captures single-cell response properties as well as correlations in neural spiking due to shared stimulus and due to effective neuron-to-neuron connections. Here we show that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. As a result, the SDME model gives a more accurate account of single-cell responses and, in particular, outperforms uncoupled models in reproducing the distributions of codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like surprise and information transmission in a neural population.
    Comment: 11 pages, 7 figures
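The probability model at the heart of the SDME approach can be made concrete on a toy population. The sketch below (population size, stimulus filters, and coupling values are illustrative assumptions, not fitted quantities) enumerates all binary codewords and evaluates P(sigma | s) with stimulus-dependent fields and fixed pairwise couplings:

```python
import numpy as np

# Minimal sketch of a stimulus-dependent maximum entropy (SDME) model for a
# small population of binary neurons; all parameters are illustrative.
rng = np.random.default_rng(0)
n = 5                                   # toy population size

# Linear filters mapping the stimulus to per-neuron fields h_i(s),
# as in a linear-nonlinear front end (hypothetical values).
filters = rng.normal(size=(n, 10))
J = 0.1 * rng.normal(size=(n, n))       # pairwise couplings J_ij
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)

def sdme_probs(stimulus):
    """P(sigma | s) over all 2^n codewords:
    proportional to exp( sum_i h_i(s) sigma_i + sum_{i<j} J_ij sigma_i sigma_j ).
    """
    h = filters @ stimulus              # stimulus-dependent fields h_i(s)
    # enumerate all binary codewords sigma in {0,1}^n
    codewords = ((np.arange(2**n)[:, None] >> np.arange(n)) & 1).astype(float)
    logw = codewords @ h + 0.5 * np.einsum('ki,ij,kj->k', codewords, J, codewords)
    p = np.exp(logw - logw.max())       # subtract max for numerical stability
    return codewords, p / p.sum()

stim = rng.normal(size=10)              # one frame of white-noise stimulus
codewords, p = sdme_probs(stim)
print(p.sum())                          # probabilities normalize to 1
```

Brute-force enumeration is only feasible for small n; for populations of 100 cells, as in the paper, fitting and sampling require Monte Carlo methods.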

    Transformation of stimulus correlations by the retina

    Redundancies and correlations in the responses of sensory neurons seem to waste neural resources, but they can carry cues about structured stimuli and may help the brain to correct for response errors. To assess how the retina negotiates this tradeoff, we measured simultaneous responses from populations of ganglion cells presented with natural and artificial stimuli that varied greatly in correlation structure. We found that pairwise correlations in the retinal output remained similar across stimuli with widely different spatio-temporal correlations, including white noise and natural movies. Meanwhile, stimuli with purely spatial correlations tended to increase correlations in the retinal response. Responding to more correlated stimuli, ganglion cells had faster temporal kernels and tended to have stronger surrounds. These properties of individual cells, along with gain changes that opposed changes in effective contrast at the ganglion cell input, largely explained the similarity of pairwise correlations across stimuli where receptive field measurements were possible.
    Comment: author list corrected in metadata
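The basic quantity compared across stimulus ensembles here, the pairwise correlation between binned spike trains, can be computed as follows (a sketch on synthetic spike trains; the shared-rate model is an assumption for illustration only):

```python
import numpy as np

# Sketch: pairwise output correlations from binned spike trains.
# Cells share a common rate modulation, mimicking shared stimulus drive.
rng = np.random.default_rng(1)
n_cells, n_bins = 20, 5000
shared = rng.normal(size=n_bins)                  # shared stimulus drive
rates = np.clip(0.05 + 0.02 * shared, 0.0, 1.0)   # per-bin firing probability
spikes = (rng.random((n_cells, n_bins)) < rates).astype(float)

corr = np.corrcoef(spikes)        # n_cells x n_cells correlation matrix
iu = np.triu_indices(n_cells, k=1)
print(corr[iu].mean())            # mean pairwise correlation (off-diagonal)
```

The shared rate induces weak positive pairwise correlations, the quantity whose surprising stability across stimulus ensembles the paper reports.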

    What do we mean by the dimensionality of behavior?

    There is a growing effort in the "physics of behavior" that aims at complete quantitative characterization of animal movements under more complex, naturalistic conditions. One reaction to the resulting explosion of data is the search for low-dimensional structure. Here I try to define more clearly what we mean by the dimensionality of behavior, where observable behavior may consist either of continuous trajectories or of sequences of discrete states. This discussion also serves to isolate situations in which the dimensionality of behavior is effectively infinite. I conclude with some more general perspectives on the importance of quantitative phenomenology.
    Comment: Based in part on a presentation at the Physics of Behavior Virtual Workshop (30 April 2020). Videos of the lectures and discussion are available at https://www.youtube.com/watch?v=xSwWAgp2Vd
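One commonly used operational estimate of dimensionality for continuous trajectories, the participation ratio of the covariance eigenvalue spectrum, illustrates the kind of definition at stake (a sketch on synthetic data; this is one estimator among several, not necessarily the definition the paper settles on):

```python
import numpy as np

# Participation ratio: PR = (sum_i lambda_i)^2 / sum_i lambda_i^2,
# where lambda_i are eigenvalues of the trajectory covariance matrix.
# Synthetic example: 2-D latent motion embedded in 10 observed coordinates.
rng = np.random.default_rng(2)
T, d = 2000, 10
latent = rng.normal(size=(T, 2))          # truly 2-dimensional motion
mixing = rng.normal(size=(2, d))
traj = latent @ mixing + 0.01 * rng.normal(size=(T, d))

lam = np.linalg.eigvalsh(np.cov(traj.T))  # covariance spectrum
pr = lam.sum()**2 / (lam**2).sum()
print(pr)                                  # between 1 and 2 for two latent modes
```

The participation ratio is bounded above by the number of dominant covariance modes, so it recovers roughly 2 here; behaviors with heavy-tailed spectra give the "effectively infinite" regime discussed in the text.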

    Building population models for large-scale neural recordings: opportunities and pitfalls

    Modern recording technologies now enable simultaneous recording from large numbers of neurons. This has driven the development of new statistical models for analyzing and interpreting neural population activity. Here we provide a broad overview of recent developments in this area. We compare and contrast different approaches, highlight strengths and limitations, and discuss the biological and mechanistic insights that these methods provide.

    Combining Experiments and Simulations Using the Maximum Entropy Principle

    A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights into the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
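The core maximum entropy step, reweighting a simulated ensemble so that the average of an observable matches experiment while staying as close as possible (in relative entropy) to the original ensemble, can be sketched as follows (synthetic observable values and target; scalar-observable case only):

```python
import numpy as np

# Maximum entropy reweighting: given observable values f_i from simulation
# frames and a target experimental average f_exp, the minimally perturbed
# weights take the exponential form w_i proportional to exp(lambda * f_i).
# We solve for lambda so that the reweighted average matches f_exp.
rng = np.random.default_rng(3)
f = rng.normal(loc=1.0, scale=0.5, size=1000)  # simulated observable per frame
f_exp = 1.2                                     # "experimental" target average

def reweighted_mean(lam):
    w = np.exp(lam * (f - f.max()))             # stable exponentials
    w /= w.sum()
    return w @ f

# <f>_w(lambda) is monotone increasing in lambda, so bisection suffices.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if reweighted_mean(mid) < f_exp:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
print(reweighted_mean(lam))      # matches f_exp to high precision
```

With several observables, a vector of Lagrange multipliers replaces the scalar lambda, but the exponential form of the weights and the convex matching problem are the same.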

    Maximum Entropy Technique and Regularization Functional for Determining the Pharmacokinetic Parameters in DCE-MRI

    This paper aims to solve the arterial input function (AIF) determination in dynamic contrast-enhanced MRI (DCE-MRI), an important linear ill-posed inverse problem, using the maximum entropy technique (MET) and regularization functionals. Estimating the pharmacokinetic parameters from DCE-MR image investigations requires precise information about the AIF: the concentration of the contrast agent in the left ventricular blood pool measured over time. The main idea is therefore to show how to find a unique solution of a linear system of equations of the general form y = Ax + b, which becomes an ill-conditioned linear system after discretization of the integral equations that appear in many tomographic image restoration and reconstruction problems. Here, a new algorithm is described that estimates an appropriate probability distribution function for the AIF according to the MET and regularization functionals for the contrast agent concentration, applying a Bayesian estimation approach to estimate two different pharmacokinetic parameters. Moreover, analysis of simulated and real breast-tumor datasets with the proposed approach indicates that Bayesian inference (which captures the uncertainties of the computed solutions and specific knowledge of the noise and errors), combined with the regularization functional of the maximum entropy problem, improves the convergence behavior and leads to more consistent morphological and functional statistics. Finally, in comparison to the exponential distribution based on MET and Newton's method, or the Weibull distribution via MET and teaching-learning-based optimization (MET/TLBO) in previous studies, the family of Gamma and Erlang distributions estimated by the new algorithm provides more appropriate and robust AIFs.
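A toy version of the entropy-regularized linear inverse problem makes the setup concrete (the kernel, noise level, regularization weight, and solver below are illustrative assumptions, not the paper's actual AIF model or algorithm):

```python
import numpy as np

# Toy entropy-regularized inverse problem for y = A x + noise:
# minimize ||A x - y||^2 + alpha * sum_i x_i log x_i over x > 0
# by projected gradient descent. The entropy term favors smooth,
# positive solutions of the ill-conditioned system.
rng = np.random.default_rng(4)
n = 30
t = np.linspace(0.0, 1.0, n)
A = np.exp(-np.abs(t[:, None] - t[None, :]) / 0.05)  # smooth, ill-conditioned kernel
x_true = np.exp(-((t - 0.3) / 0.1) ** 2)             # ground-truth "concentration"
y = A @ x_true + 0.01 * rng.normal(size=n)

alpha, step = 1e-3, 1e-3
x = np.full(n, 0.5)                                  # positive initial guess
r0 = np.linalg.norm(A @ x - y)                       # initial data misfit
for _ in range(5000):
    grad = 2.0 * A.T @ (A @ x - y) + alpha * (np.log(x) + 1.0)
    x = np.clip(x - step * grad, 1e-8, None)         # entropy needs x > 0

print(np.linalg.norm(A @ x - y) / r0)                # misfit shrinks substantially
```

Without the entropy term, small singular values of A amplify the noise; the regularization trades a small bias for a stable, positivity-respecting estimate.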

    Searching for collective behavior in a network of real neurons

    Maximum entropy models are the least structured probability distributions that exactly reproduce a chosen set of statistics measured in an interacting network. Here we use this principle to construct probabilistic models which describe the correlated spiking activity of populations of up to 120 neurons in the salamander retina as it responds to natural movies. Already in groups as small as 10 neurons, interactions between spikes can no longer be regarded as small perturbations in an otherwise independent system; for 40 or more neurons, pairwise interactions need to be supplemented by a global interaction that controls the distribution of synchrony in the population. Here we show that such "K-pairwise" models, systematic extensions of the previously used pairwise Ising models, provide an excellent account of the data. We explore the properties of the neural vocabulary by: 1) estimating its entropy, which constrains the population's capacity to represent visual information; 2) classifying activity patterns into a small set of metastable collective modes; 3) showing that the neural codeword ensembles are extremely inhomogeneous; 4) demonstrating that the state of individual neurons is highly predictable from the rest of the population, allowing the capacity for error correction.
    Comment: 24 pages, 19 figures
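The K-pairwise extension of the Ising model and the entropy estimate it supports can be sketched on a small population (all parameters below are illustrative, not fitted to retinal data; real populations require Monte Carlo rather than enumeration):

```python
import numpy as np

# Sketch of a K-pairwise maximum entropy model: the pairwise Ising energy
# is supplemented by a potential V(K) on the total spike count
# K = sum_i sigma_i, which controls the distribution of synchrony.
rng = np.random.default_rng(5)
n = 10
h = rng.normal(scale=0.5, size=n)            # per-neuron fields
J = 0.05 * rng.normal(size=(n, n))           # pairwise couplings
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
V = 0.1 * (np.arange(n + 1) - 2.0) ** 2      # synchrony potential V(K)

# Enumerate all 2^n codewords and their (unnormalized) log-probabilities.
codewords = ((np.arange(2**n)[:, None] >> np.arange(n)) & 1).astype(float)
K = codewords.sum(axis=1).astype(int)
logw = codewords @ h + 0.5 * np.einsum('ki,ij,kj->k', codewords, J, codewords) - V[K]
p = np.exp(logw - logw.max())
p /= p.sum()

pK = np.bincount(K, weights=p, minlength=n + 1)   # synchrony distribution P(K)
entropy_bits = -(p * np.log2(p)).sum()            # entropy of the codeword ensemble
print(entropy_bits)                               # bounded above by n bits
```

The entropy of the codeword distribution, computed exactly here, is the quantity that for 100+ neurons must be estimated indirectly and that bounds the population's representational capacity.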