
    Searching for collective behavior in a network of real neurons

    Maximum entropy models are the least structured probability distributions that exactly reproduce a chosen set of statistics measured in an interacting network. Here we use this principle to construct probabilistic models which describe the correlated spiking activity of populations of up to 120 neurons in the salamander retina as it responds to natural movies. Already in groups as small as 10 neurons, interactions between spikes can no longer be regarded as small perturbations in an otherwise independent system; for 40 or more neurons, pairwise interactions need to be supplemented by a global interaction that controls the distribution of synchrony in the population. Here we show that such "K-pairwise" models--being systematic extensions of the previously used pairwise Ising models--provide an excellent account of the data. We explore the properties of the neural vocabulary by: 1) estimating its entropy, which constrains the population's capacity to represent visual information; 2) classifying activity patterns into a small set of metastable collective modes; 3) showing that the neural codeword ensembles are extremely inhomogeneous; 4) demonstrating that the state of individual neurons is highly predictable from the rest of the population, allowing for error correction. Comment: 24 pages, 19 figures
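
    The general form of such a K-pairwise model can be written as below (a schematic reconstruction in our notation, not copied from the paper): binary variables \sigma_i mark whether neuron i spikes, the fields h_i and couplings J_{ij} are fitted to reproduce firing rates and pairwise correlations, and the potential V of the summed activity matches the distribution of synchrony; setting V to zero recovers the standard pairwise Ising model.

```latex
P(\boldsymbol{\sigma}) \;=\; \frac{1}{Z}\,
\exp\!\Bigg( \sum_i h_i \sigma_i \;+\; \sum_{i<j} J_{ij}\,\sigma_i \sigma_j
\;+\; V\!\Big( \sum_i \sigma_i \Big) \Bigg),
\qquad \sigma_i \in \{0,1\}
```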

    Optimal Encoding in Stochastic Latent-Variable Models.

    In this work we explore encoding strategies learned by statistical models of sensory coding in noisy spiking networks. Early stages of sensory communication in neural systems can be viewed as encoding channels in the information-theoretic sense. However, neural populations face constraints not commonly considered in communications theory. Using restricted Boltzmann machines as a model of sensory encoding, we find that networks with sufficient capacity learn to balance precision and noise-robustness in order to adaptively communicate stimuli with varying information content. Mirroring the variability suppression observed in sensory systems, the networks encode informative stimuli with high precision, at the cost of more variable responses to frequent, and hence less informative, stimuli. Curiously, we also find that statistical criticality in the neural population code emerges at model sizes where the input statistics are well captured. These phenomena have well-defined thermodynamic interpretations, and we discuss their connection to prevailing theories of coding and statistical criticality in neural populations.
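
    A minimal sketch of the modelling setup described above, assuming a standard Bernoulli RBM (the layer sizes, initialisation, and lack of training are illustrative, not the models used in the paper): the stochastic hidden-layer sample plays the role of the noisy code word for a stimulus presented at the visible layer.

```python
# Hypothetical sketch of an RBM used as a stochastic encoder (not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

class BernoulliRBM:
    def __init__(self, n_visible, n_hidden):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible biases
        self.c = np.zeros(n_hidden)    # hidden biases

    def encode(self, v):
        """Stochastic encoding: sample binary hidden units given a visible stimulus."""
        p_h = 1.0 / (1.0 + np.exp(-(v @ self.W + self.c)))
        return (rng.random(p_h.shape) < p_h).astype(float), p_h

    def decode(self, h):
        """Mean visible activity given a hidden code (the decoder's reconstruction)."""
        return 1.0 / (1.0 + np.exp(-(h @ self.W.T + self.b)))

rbm = BernoulliRBM(n_visible=64, n_hidden=32)
stimulus = (rng.random(64) < 0.2).astype(float)   # surrogate binary stimulus
code, p_h = rbm.encode(stimulus)                  # noisy population response
reconstruction = rbm.decode(code)                 # estimate of the stimulus from the code
```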

    Statistical modelling of neuronal population activity: from data analysis to network function

    The term statistical modelling refers to a number of abstract models designed to reproduce and understand the statistical properties of the activity of neuronal networks at the population level. Large-scale recordings by multielectrode arrays (MEAs) have now made it possible to scale their use to larger groups of neurons. The initial step in this work focused on improving the data analysis pipeline that leads from the experimental protocol used in dense MEA recordings to a clean dataset of sorted spike times, to be used in model training. In collaboration with experimentalists, I contributed to developing a fast and scalable algorithm for spike sorting, which is based on action potential shapes and on the estimated location of each spike. Using the resulting datasets, I investigated the use of restricted Boltzmann machines in the analysis of neural data, finding that they can be used as a tool in the detection of neural ensembles or low-dimensional activity subspaces. I further studied the physical properties of RBMs fitted to neural activity, finding they exhibit signatures of criticality, as observed before in similar models. I discussed possible connections between this phenomenon and the "dynamical" criticality often observed in neuronal networks that exhibit emergent behaviour. Finally, I applied what I found about the structure of the parameter space in statistical models to the discovery of a learning rule that helps long-term storage of previously learned memories in Hopfield networks during sequential learning tasks. Overall, this work aimed to contribute to the computational tools used for analysing and modelling large neuronal populations, on different levels: starting from raw experimental recordings and gradually proceeding towards theoretical aspects.
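
    For context on the last point, the sketch below shows standard Hebbian storage and asynchronous recall in a Hopfield network; the sequential-learning rule developed in the thesis is not reproduced here, and all sizes are illustrative.

```python
# Illustrative Hopfield network with Hebbian storage (not the thesis' learning rule).
import numpy as np

rng = np.random.default_rng(1)
N = 100                                      # number of binary (+/-1) units
patterns = rng.choice([-1, 1], size=(5, N))  # memories to store

# Hebbian weight matrix with zero self-coupling
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0.0)

def recall(cue, n_sweeps=10):
    """Asynchronous updates until the state settles near a stored memory."""
    s = cue.copy()
    for _ in range(n_sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

noisy = patterns[0] * np.where(rng.random(N) < 0.1, -1, 1)  # flip ~10% of the bits
print(np.mean(recall(noisy) == patterns[0]))                # fraction of bits recovered
```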

    Visual scene recognition with biologically relevant generative models

    This research focuses on developing visual object categorization methodologies that are based on machine learning techniques and biologically inspired generative models of visual scene recognition. Modelling the statistical variability in visual patterns, in the space of features extracted from them by an appropriate low-level signal processing technique, is an important matter of investigation for both humans and machines. To study this problem, we have examined in detail two recent probabilistic models of vision: a simple multivariate Gaussian model suggested by Karklin & Lewicki (2009) and a restricted Boltzmann machine (RBM) proposed by Hinton (2002). Both models have been widely used for visual object classification and scene analysis tasks before. This research highlights that these models are not sufficient on their own to perform the classification task, and suggests the Fisher kernel as a means of adding discriminative power to these models. Our empirical results on standard benchmark data sets reveal that the classification performance of these generative models can be significantly boosted to near state-of-the-art performance by deriving a Fisher kernel from compact generative models, which computes the data labels in a fraction of the total computation time. We compare the proposed technique with other distance-based and kernel-based classifiers to show how computationally efficient the Fisher kernels are. To the best of our knowledge, a Fisher kernel has not been derived from an RBM before, so the work presented in this thesis is novel in both its idea and its application to vision problems.
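
    A hedged illustration of the Fisher-kernel construction referred to above: per-sample gradients of a generative model's log-likelihood are used as feature vectors for a discriminative classifier. A diagonal Gaussian stands in for the Gaussian and RBM models studied in the thesis; all names and sizes are ours.

```python
# Fisher-kernel sketch with a diagonal Gaussian as the generative model.
import numpy as np

def fisher_scores(X, mu, var):
    """Per-sample gradient of log N(x; mu, diag(var)) w.r.t. mu and var."""
    d_mu = (X - mu) / var
    d_var = 0.5 * (((X - mu) ** 2) / var**2 - 1.0 / var)
    return np.hstack([d_mu, d_var])

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 10))          # surrogate feature vectors
mu, var = X.mean(axis=0), X.var(axis=0)     # fitted generative-model parameters

phi = fisher_scores(X, mu, var)             # Fisher-score features
K = phi @ phi.T                             # linear (un-normalised) Fisher kernel
# K can now be handed to any kernel classifier, e.g. an SVM.
```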

    From statistical mechanics to machine learning: effective models for neural activity

    In the retina, the activity of ganglion cells, which feed information through the optic nerve to the rest of the brain, is all that our brain will ever know about the visual world. The interactions between many neurons are essential to processing visual information, and a growing body of evidence suggests that the activity of populations of retinal ganglion cells cannot be understood from knowledge of the individual cells alone. Modelling the probability of which cells in a population will fire or remain silent at any moment in time is a difficult problem because of the exponentially many possible states that can arise, many of which we will never even observe in finite recordings of retinal activity. To model this activity, maximum entropy models have been proposed, which provide probabilistic descriptions over all possible states but can be fitted using relatively few well-sampled statistics. Maximum entropy models have the appealing property of being the least biased explanation of the available information, in the sense that they maximise the information-theoretic entropy. We investigate this use of maximum entropy models and examine the population sizes and constraints that they require in order to learn nontrivial insights from finite data. Going beyond maximum entropy models, we investigate autoencoders, which provide computationally efficient means of simplifying the activity of retinal ganglion cells.
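
    As a rough illustration of the autoencoder approach mentioned at the end (the architecture, layer sizes, and optimiser are assumptions, not those used in the thesis), a one-hidden-layer autoencoder compressing binary population activity into a low-dimensional code might look like this:

```python
# Toy autoencoder for binary spike-pattern vectors (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_cells, n_latent, lr = 50, 8, 0.1
X = (rng.random((1000, n_cells)) < 0.1).astype(float)   # surrogate spike patterns

W_enc = 0.1 * rng.standard_normal((n_cells, n_latent))
W_dec = 0.1 * rng.standard_normal((n_latent, n_cells))

for _ in range(200):
    H = sigmoid(X @ W_enc)        # low-dimensional code
    X_hat = sigmoid(H @ W_dec)    # reconstructed activity
    err = X_hat - X               # gradient of cross-entropy w.r.t. output pre-activation
    grad_dec = H.T @ err / len(X)
    grad_enc = X.T @ ((err @ W_dec.T) * H * (1 - H)) / len(X)
    W_enc -= lr * grad_enc
    W_dec -= lr * grad_dec
```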

    Probabilistic models for neural populations that naturally capture global coupling and criticality

    Advances in multi-unit recordings pave the way for statistical modeling of activity patterns in large neural populations. Recent studies have shown that the summed activity of all neurons strongly shapes the population response. A separate recent finding has been that neural populations also exhibit criticality, an anomalously large dynamic range for the probabilities of different population activity patterns. Motivated by these two observations, we introduce a class of probabilistic models which takes into account the prior knowledge that the neural population could be globally coupled and close to critical. These models consist of an energy function which parametrizes interactions between small groups of neurons, and an arbitrary positive, strictly increasing, and twice differentiable function which maps the energy of a population pattern to its probability. We show that: 1) augmenting a pairwise Ising model with a nonlinearity yields an accurate description of the activity of retinal ganglion cells which outperforms previous models based on the summed activity of neurons; 2) prior knowledge that the population is critical translates to prior expectations about the shape of the nonlinearity; 3) the nonlinearity admits an interpretation in terms of a continuous latent variable, globally coupling the system, whose distribution we can infer from data. Our method is independent of the underlying system's state space; hence, it can be applied to other systems, such as natural scenes or amino acid sequences of proteins, which are also known to exhibit criticality.
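
    Schematically, and in our own notation, this model class takes the form below, where E is a pairwise (Ising-like) energy and V is the arbitrary positive, strictly increasing, twice differentiable nonlinearity; choosing V(E) = E recovers the pairwise Ising model.

```latex
P(\mathbf{x}) \;=\; \frac{1}{Z}\, e^{-V\left(E(\mathbf{x})\right)},
\qquad
E(\mathbf{x}) \;=\; -\sum_i h_i x_i \;-\; \sum_{i<j} J_{ij}\, x_i x_j
```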

    Linear response for spiking neuronal networks with unbounded memory

    We establish a general linear response relation for spiking neuronal networks, based on chains with unbounded memory. This relation allows us to predict the influence of weak-amplitude, time-dependent external stimuli on spatio-temporal spike correlations, from the spontaneous statistics (without stimulus) in a general context where the memory in spike dynamics can extend arbitrarily far into the past. Using this approach, we show how linear response is explicitly related to neuronal dynamics with an example, the gIF model, introduced by M. Rudolph and A. Destexhe. This example illustrates the collective effect of the stimuli, intrinsic neuronal dynamics, and network connectivity on spike statistics. We illustrate our results with numerical simulations. Comment: 60 pages, 8 figures
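
    In schematic form (our notation; the paper's exact kernel for chains with unbounded memory is more involved), the linear response relation predicts the first-order change of an observable's average under a weak stimulus S as a convolution whose kernel is determined by the spontaneous spatio-temporal spike correlations:

```latex
\delta \langle f \rangle (t)
\;\equiv\; \langle f \rangle_{S}(t) \;-\; \langle f \rangle_{\mathrm{sp}}
\;\approx\; \int_{-\infty}^{t} \kappa_f(t - s)\, S(s)\, \mathrm{d}s
```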

    Benchmarking spike-based visual recognition: a dataset and evaluation

    Today, increasing attention is being paid to research into spike-based neural computation, both to gain a better understanding of the brain and to explore biologically-inspired computation. Within this field, the primate visual pathway and its hierarchical organisation have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have been successfully applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. A new set of vision benchmarks for spike-based neural processing is now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarks and using digits from the MNIST database. This dataset is compatible with the current state of research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance. Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware implementations. With this dataset we hope to (1) promote meaningful comparison between algorithms in the field of neural computation, (2) allow comparison with conventional image recognition methods, (3) provide an assessment of the state of the art in spike-based visual recognition, and (4) help researchers identify future directions and advance the field.
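
    As a concrete illustration of the first encoding technique listed above (rate-based Poisson spike generation; the maximum rate, duration, and time step are illustrative values, not the benchmark's settings), each pixel intensity sets the rate of an independent Poisson spike train:

```python
# Rate-based Poisson spike generation from an image (illustrative parameters).
import numpy as np

def poisson_spike_trains(image, duration=1.0, dt=0.001, max_rate=100.0):
    """image: pixel intensities in [0, 1]; returns a (pixels, timesteps) 0/1 array."""
    rng = np.random.default_rng(4)
    rates = image.ravel() * max_rate                 # firing rate in Hz per pixel
    n_steps = int(duration / dt)
    spike_prob = rates[:, None] * dt                 # Bernoulli approximation per time bin
    return (rng.random((rates.size, n_steps)) < spike_prob).astype(np.uint8)

fake_digit = np.random.default_rng(5).random((28, 28))  # stand-in for an MNIST digit
spikes = poisson_spike_trains(fake_digit)
print(spikes.shape, int(spikes.sum()))                   # (784, 1000) and total spike count
```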