The population tracking model: a simple, scalable statistical model for neural population data
Our understanding of neural population coding has been limited by a lack of analysis methods for characterizing spiking data from large populations. The biggest challenge comes from the fact that the number of possible network activity patterns scales exponentially with the number of neurons recorded (2^N). Here we introduce a new statistical method for characterizing neural population activity that requires semi-independent fitting of only as many parameters as the square of the number of neurons, so it demands drastically smaller data sets and minimal computation time. The model works by matching the population rate (the number of neurons synchronously active) and the probability that each individual neuron fires given the population rate. We found that this model can accurately fit synthetic data from up to 1000 neurons. We also found that the model could rapidly decode visual stimuli from neural population data recorded in macaque primary visual cortex, about 65 ms after stimulus onset. Finally, we used the model to estimate the entropy of neural population activity in developing mouse somatosensory cortex and, surprisingly, found that the entropy first increases and then decreases during development. This statistical model opens new options for interrogating neural population data and can bolster the use of modern large-scale in vivo Ca²⁺ and voltage imaging tools.
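The fitting procedure the abstract describes — estimate the distribution of the population rate, plus each neuron's conditional firing probability given that rate — can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not the authors' code; the function name and the default-0.5 fill for unseen population counts are ours.

```python
import numpy as np

def fit_population_tracking(spikes):
    """Fit a simplified population-tracking model.
    spikes: (T, N) binary array, T time bins x N neurons.
    Returns p_k (distribution of the population count K) and
    p_i_given_k[k, i] = P(neuron i active | K = k)."""
    T, N = spikes.shape
    k = spikes.sum(axis=1)                      # population count per time bin
    p_k = np.bincount(k, minlength=N + 1) / T   # empirical P(K = k)
    # Conditional firing probabilities; counts never observed keep a 0.5 placeholder
    p_i_given_k = np.full((N + 1, N), 0.5)
    for kk in range(N + 1):
        mask = k == kk
        if mask.any():
            p_i_given_k[kk] = spikes[mask].mean(axis=0)
    return p_k, p_i_given_k
```

Parameter count is (N + 1) + (N + 1) * N, i.e. of order N², matching the scaling the abstract claims.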
Sample Path Analysis of Integrate-and-Fire Neurons
Computational neuroscience is concerned with answering two intertwined questions that are based on the assumption that spatio-temporal patterns of spikes form the universal language of the nervous system. First, what function does a specific neural circuitry perform in the elaboration of a behavior? Second, how do neural circuits process behaviorally-relevant information? Non-linear system analysis has proven instrumental in understanding the coding strategies of early neural processing in various sensory modalities. Yet, at higher levels of integration, it fails to help in deciphering the response of assemblies of neurons to complex naturalistic stimuli. While neural activity can be assumed to be primarily stimulus-driven at early stages of processing, at the cortical level the intrinsic activity of neural circuits interacts with their high-dimensional input and transforms it in a stochastic, non-linear fashion. As a consequence, any attempt to fully understand the brain through a system-analysis approach becomes illusory. However, it is increasingly advocated that neural noise plays a constructive role in neural processing, facilitating information transmission. This motivates studying the stochasticity of neuronal activity, viewed as biologically relevant, to gain insight into the neural code. Such an endeavor requires guiding theoretical principles to assess the potential benefits of neural noise. In this context, meeting the requirements of biological relevance and computational tractability, while providing a stochastic description of neural activity, prescribes the adoption of the integrate-and-fire model. In this thesis, building on the path-wise description of neuronal activity, we propose to further the stochastic analysis of the integrate-and-fire model through a combination of numerical and theoretical techniques.
To begin, we expand upon the path-wise construction of linear diffusions, which offers a natural setting for describing leaky integrate-and-fire neurons as inhomogeneous Markov chains. Based on the theoretical analysis of the first-passage problem, we then explore the interplay between internal neuronal noise and the statistics of injected perturbations at the single-unit level, and examine its implications for neural coding. At the population level, we also develop an exact event-driven implementation of a Markov network of perfect integrate-and-fire neurons with time-delayed, instantaneous interactions and arbitrary topology. We hope our approach will provide new paradigms for understanding how sensory inputs perturb intrinsic neural activity, and will help develop new techniques for identifying relevant patterns of population activity. From a perturbative perspective, our study shows how injecting frozen noise in different flavors can help characterize internal neuronal noise, which is presumably functionally relevant to information processing. From a simulation perspective, our event-driven framework is amenable to scrutinizing the stochastic behavior of simple recurrent motifs as well as the temporal dynamics of large-scale networks under spike-timing-dependent plasticity.
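The event-driven scheme mentioned above exploits the fact that, between events, a perfect integrate-and-fire neuron's potential grows linearly in time, so the next threshold crossing can be computed exactly instead of being discovered by fixed-step integration. Below is a minimal sketch under assumed conventions (positive constant drive per neuron, a single shared synaptic delay, stale spike predictions invalidated by a per-neuron version counter); all names are ours and this is not the thesis's implementation.

```python
import heapq

def simulate_pif_network(drive, weights, theta=1.0, delay=0.1, t_max=5.0):
    """Event-driven simulation of perfect integrate-and-fire neurons.
    drive[i]: constant positive input current to neuron i.
    weights[i][j]: jump added to neuron j's potential `delay` after i spikes.
    Returns a time-ordered list of (spike_time, neuron_index)."""
    n = len(drive)
    v = [0.0] * n          # potentials, valid at times t_last
    t_last = [0.0] * n
    version = [0] * n      # bumped on every state change; invalidates old predictions
    spikes, heap = [], []

    def advance(i, t):                      # integrate constant drive up to time t
        v[i] += drive[i] * (t - t_last[i])
        t_last[i] = t

    def schedule_spike(i, t):               # exact next crossing of theta
        heapq.heappush(heap, (t + (theta - v[i]) / drive[i], version[i], 0, i))

    for i in range(n):
        schedule_spike(i, 0.0)

    while heap:
        t, ver, kind, payload = heapq.heappop(heap)
        if t > t_max:
            break
        if kind == 0:                       # predicted threshold crossing
            i = payload
            if ver != version[i]:
                continue                    # prediction made obsolete by a synapse
            spikes.append((t, i))
            t_last[i], v[i] = t, 0.0        # reset after spike
            version[i] += 1
            schedule_spike(i, t)
            for j in range(n):              # queue delayed deliveries
                if weights[i][j] != 0.0:
                    heapq.heappush(heap, (t + delay, -1, 1, (j, weights[i][j])))
        else:                               # synaptic arrival
            j, w = payload
            advance(j, t)
            v[j] = min(v[j] + w, theta)     # clip; crossing handled by reschedule
            version[j] += 1
            schedule_spike(j, t)
    return spikes
```

The priority queue makes the cost per event logarithmic in the number of pending events, which is what makes the approach exact and scalable at the same time.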
Flexible Bayesian Dynamic Modeling of Correlation and Covariance Matrices
Modeling correlation (and covariance) matrices can be challenging due to the
positive-definiteness constraint and potential high-dimensionality. Our
approach is to decompose the covariance matrix into the correlation and
variance matrices and propose a novel Bayesian framework based on modeling the
correlations as products of unit vectors. By specifying a wide range of
distributions on a sphere (e.g. the squared-Dirichlet distribution), the
proposed approach induces flexible prior distributions for covariance matrices
(that go beyond the commonly used inverse-Wishart prior). For modeling
real-life spatio-temporal processes with complex dependence structures, we
extend our method to dynamic cases and introduce unit-vector Gaussian process
priors in order to capture the evolution of correlation among components of a
multivariate time series. To handle the intractability of the resulting
posterior, we introduce the adaptive Spherical Hamiltonian Monte
Carlo. We demonstrate the validity and flexibility of our proposed framework in
a simulation study of periodic processes and an analysis of rats' local field
potential activity in a complex sequence memory task.
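The construction described above — correlations as products of unit vectors — yields a valid correlation matrix by design: unit-length rows put ones on the diagonal, and the Gram-matrix form guarantees positive semi-definiteness. A minimal sketch of that construction (function names ours; no priors or MCMC, just the parameterization):

```python
import numpy as np

def correlation_from_unit_vectors(U):
    """Build a correlation matrix R with R[i, j] = u_i . u_j from row
    vectors U, after projecting each row onto the unit sphere.
    Valid by construction: diag(R) = 1 and R = U U^T is PSD."""
    U = np.asarray(U, dtype=float)
    U = U / np.linalg.norm(U, axis=1, keepdims=True)
    return U @ U.T

def covariance_from(U, stds):
    """Recompose a covariance matrix as D R D from the correlation part
    and a vector of standard deviations."""
    D = np.diag(stds)
    return D @ correlation_from_unit_vectors(U) @ D
```

Placing a distribution on the unit vectors (e.g. on the sphere, as in the abstract) then induces a prior on correlation matrices without ever checking positive-definiteness explicitly.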
Information-theoretic investigation of multi-unit activity properties under different stimulus conditions in mouse primary visual cortex
Primary visual cortex (V1) is the first cortical processing stage receiving topographically mapped input from the retina, relayed through the thalamus. Electrophysiological studies established its important role in early sensory processing, particularly edge detection by single cells. However, little has been investigated about how these activities relate at the population level. Orientation tuning in mouse V1 has long been reported as salt-and-pepper organised, lacking the apparent structure found in e.g. cats or primates.
This thesis is a novel synthesis of specially designed in-vivo electrophysiological experiments aimed at making certain information-theoretic data-analysis approaches viable. Sophisticated state-of-the-art data-analysis techniques are applied to answer questions about stimulus information in mouse V1. Multi-unit electrophysiological experiments were devised, performed and evaluated in left-hemisphere V1 of both the anaesthetised and the awake behaving, head-fixed mouse. A detailed laboratory and computational analysis is presented validating the use of Multi-Unit Activity (MUA) and information-theoretic measures. Our results indicate that left forward drifting gratings (moving from the temporal to the nasal visual field) elicit consistently the highest neuronal responses across cortical layers and columns, challenging the common understanding of random organisation. These directional biases of MUA were also observable at the population level.
In addition to individual multi-unit analyses, population responses in terms of binary word distributions appear more similar between spontaneous activity and responses to natural movies than either is to responses to moving gratings, suggesting that mouse V1 processes natural scenes differently from sinusoidal drifting gratings. Response-pattern distributions for different gratings emerge to be spatially but not orientationally clustered. Further computational analysis suggests that population firing rates can partially account for these differences. Electrophysiological experiments in the awake behaving mouse indicate that V1 contains information about behavioural outcome in a GO/NOGO task. This, along with other statistical measures, is examined with statistical models such as the population tracking model, which suggest that population interactions are required to explain these observations.
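The binary-word analysis referred to above treats each time bin's population pattern as a "word" and compares stimulus conditions via the resulting distributions. A hypothetical sketch of the bookkeeping (the distance shown, total variation, is our choice for illustration; the thesis may use other divergences):

```python
from collections import Counter

import numpy as np

def binary_word_distribution(spikes):
    """Empirical distribution over binary population words, one word
    per time bin. spikes: (T, N) binary array."""
    words = [tuple(row) for row in spikes]
    T = len(words)
    return {w: c / T for w, c in Counter(words).items()}

def total_variation(p, q):
    """Total variation distance between two word distributions,
    a number in [0, 1]; 0 means identical distributions."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(w, 0.0) - q.get(w, 0.0)) for w in keys)
```

Comparing, say, spontaneous activity against natural-movie and grating responses then reduces to comparing such distances between conditions.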
Unsupervised multilingual learning
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 241-254).
For centuries, scholars have explored the deep links among human languages. In this thesis, we present a class of probabilistic models that exploit these links as a form of naturally occurring supervision. These models allow us to substantially improve performance for core text-processing tasks, such as morphological segmentation, part-of-speech tagging, and syntactic parsing. Besides these traditional NLP tasks, we also present a multilingual model for lost-language decipherment. We test this model on the ancient Ugaritic language. Our results show that we can automatically uncover much of the historical relationship between Ugaritic and Biblical Hebrew, a known related language.
by Benjamin Snyder, Ph.D.
Learning Structure in Time Series for Neuroscience and Beyond
Advances in neuroscience are producing data at an astounding rate - data which are fiendishly complex both to process and to interpret. Biological neural networks are high-dimensional, nonlinear, noisy, heterogeneous, and in nearly every way defy the simplifying assumptions of standard statistical methods. In this dissertation we address a number of issues in understanding the structure of neural populations, from the abstract level of how to uncover structure in generic time series, to the practical matter of finding relevant biological structure in state-of-the-art experimental techniques. To learn the structure of generic time series, we develop a new statistical model, which we dub the probabilistic deterministic infinite automata (PDIA), which uses tools from nonparametric Bayesian inference to learn a very general class of sequence models. We show that the models learned by the PDIA often offer better predictive performance and faster inference than Hidden Markov Models, while being significantly more compact than models that simply memorize contexts. For large populations of neurons, models like the PDIA become unwieldy, and we instead investigate ways to robustly reduce the dimensionality of the data. In particular, we adapt the generalized linear model (GLM) framework for regression to the case of matrix completion, which we call the low-dimensional GLM. We show that subspaces and dynamics of neural activity can be accurately recovered from model data, and that even with only minimal assumptions about the structure of the dynamics the model can still achieve good predictive performance on real data. Finally, to bridge the gap between recording technology and analysis, particularly as recordings from ever-larger populations of neurons become the norm, automated methods for extracting activity from raw recordings become a necessity.
We present a number of methods for automatically segmenting biological units from optical imaging data, with applications to light-sheet recording of genetically encoded calcium indicator fluorescence in the larval zebrafish, and optical electrophysiology using genetically encoded voltage indicators in culture. Together, these methods are a powerful set of tools for addressing the diverse challenges of modern neuroscience.
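The low-dimensional GLM idea above can be illustrated with a toy Poisson version: model spike counts as Poisson with rate exp(U Vᵀ) for low-rank factors U and V, fitted by gradient ascent on the log-likelihood. This is a simplified sketch of the general technique under our own conventions, not the dissertation's implementation:

```python
import numpy as np

def fit_low_dim_glm(Y, rank=2, steps=200, lr=0.05, seed=0):
    """Toy low-rank Poisson GLM. Y: (T, N) count matrix, modeled as
    Y[t, n] ~ Poisson(exp((U @ V.T)[t, n])) with U: (T, rank), V: (N, rank).
    Fitted by gradient ascent on the Poisson log-likelihood."""
    rng = np.random.default_rng(seed)
    T, N = Y.shape
    U = 0.01 * rng.standard_normal((T, rank))   # small random init
    V = 0.01 * rng.standard_normal((N, rank))
    for _ in range(steps):
        R = np.exp(U @ V.T)        # current rate estimates
        G = Y - R                  # gradient of loglik w.r.t. (U @ V.T)
        U = U + lr * (G @ V) / N
        V = V + lr * (G.T @ U) / T
    return U, V

def poisson_loglik(Y, U, V):
    """Poisson log-likelihood up to the constant log(Y!) term."""
    M = U @ V.T
    return float(np.sum(Y * M - np.exp(M)))
```

The same factors U recovered this way serve as a low-dimensional summary of the population trajectory, which is the dimensionality-reduction role the abstract describes.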
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.