A Moment-Based Maximum Entropy Model for Fitting Higher-Order Interactions in Neural Data
Correlations in neural activity have been demonstrated to have profound consequences for sensory encoding. To understand how neural populations represent stimulus information, it is therefore necessary to model how pairwise and higher-order spiking correlations between neurons contribute to the collective structure of population-wide spiking patterns. Maximum entropy models are an increasingly popular method for capturing collective neural activity by including successively higher-order interaction terms. However, incorporating higher-order interactions in these models is difficult in practice due to two factors. First, the number of parameters increases exponentially as higher orders are added. Second, because triplet (and higher) spiking events occur infrequently, estimates of higher-order statistics may be contaminated by sampling noise. To address this, we extend previous work on the Reliable Interaction class of models to develop a normalized variant that adaptively identifies the specific pairwise and higher-order moments that can be estimated from a given dataset at a specified confidence level. The resulting 'Reliable Moment' model is able to capture cortical-like distributions of population spiking patterns. Finally, we show that, compared with the Reliable Interaction model, the Reliable Moment model infers fewer strong spurious higher-order interactions and is better able to predict the frequencies of previously unobserved spiking patterns.
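As an illustration of the moment-selection step, here is a minimal Python sketch; the function name, the Gaussian approximation to the sampling error, and the thresholding rule are our assumptions, not the authors' implementation. A moment is kept only if its empirical estimate exceeds its sampling error at the requested confidence level.

```python
import numpy as np
from itertools import combinations
from scipy.stats import norm

def reliable_moments(spikes, confidence=0.95, max_order=3):
    """Keep only pairwise and higher-order moments whose empirical
    estimate is reliably above zero at the given confidence level.

    spikes: (T, N) binary array of recorded spike patterns.
    Returns {tuple of neuron indices: empirical moment}.
    """
    T, N = spikes.shape
    z = norm.ppf(confidence)  # one-sided Gaussian critical value
    moments = {}
    for order in range(2, max_order + 1):
        for idx in combinations(range(N), order):
            p = spikes[:, list(idx)].all(axis=1).mean()  # empirical moment
            se = np.sqrt(p * (1 - p) / T)                # Bernoulli standard error
            if p - z * se > 0:                           # reliably non-zero
                moments[idx] = p
    return moments
```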
Signatures of criticality arise in simple neural population models with correlations
Large-scale recordings of neuronal activity make it possible to gain insights
into the collective activity of neural ensembles. It has been hypothesized that
neural populations might be optimized to operate at a 'thermodynamic critical
point', and that this property has implications for information processing.
Support for this notion has come from a series of studies which identified
statistical signatures of criticality in the ensemble activity of retinal
ganglion cells. What are the underlying mechanisms that give rise to these
observations? Here we show that signatures of criticality arise even in simple
feed-forward models of retinal population activity. In particular, they occur
whenever neural population data exhibits correlations and is randomly
sub-sampled during data analysis. These results show that signatures of
criticality are not necessarily indicative of an optimized coding strategy, and
challenge the utility of analysis approaches based on equilibrium
thermodynamics for understanding partially observed biological systems.
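The mechanism is simple enough to reproduce in a few lines. The sketch below uses a dichotomized-Gaussian surrogate for correlated population activity (our parameter choices, not the paper's); because the surrogate population is exchangeable, drawing a population of size n is equivalent to randomly sub-sampling n cells from a larger one. The 'specific heat' grows with n, a criticality-like signature produced by correlations alone.

```python
import numpy as np
from math import comb, log
from scipy.stats import norm

rng = np.random.default_rng(0)

def heat_capacity(n, rho=0.1, p_fire=0.1, trials=50_000):
    """Heat capacity C = Var[log p(pattern)] / n of a homogeneously
    correlated binary population (dichotomized-Gaussian surrogate)."""
    shared = rng.standard_normal((trials, 1))   # common input -> correlations
    z = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.standard_normal((trials, n))
    spikes = z > norm.ppf(1 - p_fire)           # threshold sets the mean rate
    k = spikes.sum(axis=1)                      # population spike counts
    pk = np.bincount(k, minlength=n + 1) / trials
    # exchangeability: p(pattern with count k) = P(k) / C(n, k)
    logp = np.array([log(pk[ki]) - log(comb(n, ki)) for ki in k])
    return logp.var() / n

for n in (20, 40, 80):
    print(n, round(heat_capacity(n), 3))
```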
Spatio-temporal spike trains analysis for large scale networks using maximum entropy principle and Monte-Carlo method
Understanding the dynamics of neural networks is a major challenge in
experimental neuroscience. For that purpose, a model of the recorded
activity that reproduces the main statistics of the data is required. In the
first part, we review recent results on spike train statistics analysis
using maximum entropy models (MaxEnt). Most of these
studies have focused on modelling synchronous spike patterns, leaving
aside the temporal dynamics of the neural activity. However, the maximum
entropy principle can be generalized to the temporal case, leading to Markovian
models where memory effects and time correlations in the dynamics are properly
taken into account. In the second part, we present a new method based on
Monte-Carlo sampling that is suited to fitting large-scale
spatio-temporal MaxEnt models. The formalism and the tools presented here will
be essential to fit MaxEnt spatio-temporal models to large neural ensembles.
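To give the flavour of such a fitting procedure, here is a stripped-down moment-matching loop, reduced to the synchronous pairwise (Ising-like) case; the spatio-temporal models of the paper add time-lagged features, but the structure, Monte-Carlo estimation of model moments inside a gradient ascent on the likelihood, is the same. Names and step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def model_moments(h, J, sweeps=2000, burn=500):
    """Estimate <s_i> and <s_i s_j> of P(s) ~ exp(h.s + s.J.s/2)
    (J symmetric, zero diagonal) by single-spin-flip Metropolis."""
    n = len(h)
    s = rng.integers(0, 2, n).astype(float)
    m1, m2, kept = np.zeros(n), np.zeros((n, n)), 0
    for t in range(sweeps):
        for i in range(n):
            d = (1 - 2 * s[i]) * (h[i] + J[i] @ s)  # log-prob change of a flip
            if np.log(rng.random()) < d:            # Metropolis acceptance
                s[i] = 1 - s[i]
        if t >= burn:
            m1 += s; m2 += np.outer(s, s); kept += 1
    return m1 / kept, m2 / kept

def fit_maxent(data, iters=40, lr=0.5):
    """Match model moments to the empirical moments of (T, n) binary data."""
    emp1 = data.mean(axis=0)
    emp2 = data.T @ data / len(data)
    h, J = np.zeros(data.shape[1]), np.zeros((data.shape[1],) * 2)
    for _ in range(iters):
        m1, m2 = model_moments(h, J)
        h += lr * (emp1 - m1)
        dJ = lr * (emp2 - m2); np.fill_diagonal(dJ, 0.0)
        J += dJ
    return h, J
```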
Disentangling causal webs in the brain using functional Magnetic Resonance Imaging: A review of current approaches
In the past two decades, functional Magnetic Resonance Imaging has been used
to relate neuronal network activity to cognitive processing and behaviour.
Recently this approach has been augmented by algorithms that allow us to infer
causal links between component populations of neuronal networks. Multiple
inference procedures have been proposed to approach this research question but
so far, each method has limitations when it comes to establishing whole-brain
connectivity patterns. In this work, we discuss eight ways to infer causality
in fMRI research: Bayesian Nets, Dynamical Causal Modelling, Granger Causality,
Likelihood Ratios, LiNGAM, Patel's Tau, Structural Equation Modelling, and
Transfer Entropy. We finish by formulating recommendations for future
research directions in this area.
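Of the methods listed, Granger causality is the simplest to sketch. The toy linear test below is ours and omits the significance testing a real analysis would need: the past of x 'Granger-causes' y if adding lagged x to an autoregressive model of y shrinks the residual variance.

```python
import numpy as np

def granger_causality(x, y, lag=2):
    """Log ratio of residual variances: positive values suggest that
    past x improves linear prediction of y (a toy Granger test)."""
    T = len(y)
    Y = y[lag:]
    own = np.column_stack([y[lag - k: T - k] for k in range(1, lag + 1)])
    full = np.column_stack([own] + [x[lag - k: T - k] for k in range(1, lag + 1)])
    r_own = Y - own @ np.linalg.lstsq(own, Y, rcond=None)[0]
    r_full = Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]
    return np.log(r_own.var() / r_full.var())

rng = np.random.default_rng(2)
x = rng.standard_normal(5000)
y = np.zeros(5000)
for t in range(1, 5000):
    y[t] = 0.5 * x[t - 1] + 0.2 * y[t - 1] + rng.standard_normal()
print(granger_causality(x, y), granger_causality(y, x))  # large vs ~0
```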
Revealing networks from dynamics: an introduction
What can we learn from the collective dynamics of a complex network about its
interaction topology? Taking the perspective from nonlinear dynamics, we
briefly review recent progress on how to infer structural connectivity (direct
interactions) from accessing the dynamics of the units. Potential applications
range from interaction networks in physics, to chemical and metabolic
reactions, protein and gene regulatory networks as well as neural circuits in
biology and electric power grids or wireless sensor networks in engineering.
Moreover, we briefly mention some standard ways of inferring effective or
functional connectivity.
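For linear (or linearized) dynamics, the simplest instance of this programme is a regression of observed derivatives on observed states. A self-contained toy example, with a hypothetical random network and our parameter choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# ground-truth sparse coupling matrix of a toy linear network
n, T, dt = 10, 50_000, 0.01
A = (rng.random((n, n)) < 0.2) * rng.normal(0, 1, (n, n))
np.fill_diagonal(A, -3.0)            # stabilizing self-decay

# simulate dx/dt = A x + noise (Euler-Maruyama)
X = np.zeros((T, n))
for t in range(T - 1):
    X[t + 1] = X[t] + dt * (X[t] @ A.T) + np.sqrt(dt) * rng.standard_normal(n)

# infer direct interactions by least squares on finite differences
dX = (X[1:] - X[:-1]) / dt
A_hat = np.linalg.lstsq(X[:-1], dX, rcond=None)[0].T
print("relative error:", np.linalg.norm(A_hat - A) / np.linalg.norm(A))
```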
Binary versus non-binary information in real time series: empirical results and maximum-entropy matrix models
The dynamics of complex systems, from financial markets to the brain, can be
monitored in terms of multiple time series of activity of the constituent
units, such as stocks or neurons respectively. While the main focus of time
series analysis is on the magnitude of temporal increments, a significant piece
of information is encoded into the binary projection (i.e. the sign) of such
increments. In this paper we provide further evidence of this by showing strong
nonlinear relations between binary and non-binary properties of financial time
series. These relations are a novel quantification of the fact that extreme
price increments occur more often when most stocks move in the same direction.
We then introduce an information-theoretic approach to the analysis of the
binary signature of single and multiple time series. Through the definition of
maximum-entropy ensembles of binary matrices and their mapping to spin models
in statistical physics, we quantify the information encoded into the simplest
binary properties of real time series and identify the most informative
property given a set of measurements. Our formalism is able to accurately
replicate, and mathematically characterize, the observed binary/non-binary
relations. We also obtain a phase diagram allowing us to identify, based only
on the instantaneous aggregate return of a set of multiple time series, a
regime where the so-called 'market mode' has an optimal interpretation in terms
of collective (endogenous) effects, a regime where it is parsimoniously
explained by pure noise, and a regime where it can be regarded as a combination
of endogenous and exogenous factors. Our approach allows us to connect spin
models, simple stochastic processes, and ensembles of time series inferred from
partial information.
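The binary/non-binary relation is easy to see on surrogate data. In the sketch below (our parameters; a one-factor 'market mode' model standing in for real returns), the coherence of the signs across series is strongly correlated with the mean absolute increment, i.e. extreme increments co-occur with most series moving in the same direction.

```python
import numpy as np

rng = np.random.default_rng(4)

# surrogate returns with a common market factor (hypothetical parameters)
T, N, beta = 2000, 50, 0.6
market = rng.standard_normal(T)
returns = beta * market[:, None] + rng.standard_normal((T, N))

signs = np.sign(returns)                 # binary projection of increments
coherence = np.abs(signs.mean(axis=1))   # fraction moving in the same direction
magnitude = np.abs(returns).mean(axis=1) # average absolute increment

print(np.corrcoef(coherence, magnitude)[0, 1])  # strongly positive
```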
General anesthesia reduces complexity and temporal asymmetry of the informational structures derived from neural recordings in Drosophila
We apply techniques from the field of computational mechanics to evaluate the
statistical complexity of neural recording data from fruit flies. First, we
connect statistical complexity to the flies' level of conscious arousal, which
is manipulated by general anesthesia (isoflurane). We show that the complexity
of even single channel time series data decreases under anesthesia. The
observed difference in complexity between the two states of conscious arousal
increases as higher orders of temporal correlations are taken into account. We
then go on to show that, in addition to reducing complexity, anesthesia also
modulates the informational structure between the forward- and reverse-time
neural signals. Specifically, using three distinct notions of temporal
asymmetry we show that anesthesia reduces temporal asymmetry on
information-theoretic and information-geometric grounds. In contrast to prior
work, our results show that: (1) Complexity differences can emerge at very
short timescales and across broad regions of the fly brain, thus heralding the
macroscopic state of anesthesia in a previously unforeseen manner, and (2)
general anesthesia also modulates the temporal asymmetry of neural signals.
Together, our results demonstrate that anesthetized brains become both less
structured and more reversible.
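One of the simplest irreversibility indices of this information-theoretic kind compares the distribution of consecutive symbol pairs of a discretized signal with its time reversal. The sketch below uses our own discretization and smoothing choices, not the paper's exact measures.

```python
import numpy as np

def temporal_asymmetry(x, bins=4):
    """KL divergence between forward and time-reversed distributions of
    consecutive symbol pairs; zero for a statistically reversible signal."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    s = np.digitize(x, edges)                       # symbols in 0..bins-1
    pairs = s[:-1] * bins + s[1:]
    counts = np.bincount(pairs, minlength=bins * bins).astype(float) + 1e-9
    p_fwd = (counts / counts.sum()).reshape(bins, bins)
    p_rev = p_fwd.T                                 # reversal swaps pair order
    return np.sum(p_fwd * np.log(p_fwd / p_rev))

rng = np.random.default_rng(5)
noise = rng.standard_normal(100_000)                # reversible
saw = np.tile(np.linspace(0, 1, 10), 10_000)        # slow rise, sharp fall
print(temporal_asymmetry(noise), temporal_asymmetry(saw))  # ~0 vs clearly > 0
```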
Neural Network Parameterizations of Electromagnetic Nucleon Form Factors
The electromagnetic nucleon form-factor data are studied with artificial
feed-forward neural networks. As a result, unbiased, model-independent
form-factor parametrizations are obtained together with their uncertainties.
The Bayesian approach for neural networks is adapted to a chi-squared
error-like function and applied to the data analysis. A sequence of
feed-forward neural networks with one hidden layer is considered, where each
network represents a particular form-factor parametrization. The so-called
evidence (a measure of how much the data favor a given statistical model) is
computed within the Bayesian framework and used to determine the best
form-factor parametrization.
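A crude sketch of the ingredients: synthetic data with a dipole shape as toy truth, a one-hidden-layer tanh network in which, for simplicity, only the output weights are fitted (by weighted least squares on the chi-squared error), and chi-squared printed per network size as a stand-in for the evidence comparison. None of this is the authors' code.

```python
import numpy as np

rng = np.random.default_rng(6)

# synthetic stand-in for form-factor measurements (Q^2, G, sigma)
Q2 = np.linspace(0.1, 10.0, 40)
G_true = 1.0 / (1.0 + Q2 / 0.71) ** 2      # dipole form as toy truth
sigma = 0.02 + 0.02 * G_true
data = G_true + sigma * rng.standard_normal(Q2.size)

def chi2_of_network(n_hidden):
    """One-hidden-layer tanh parametrization with random hidden units;
    output weights solved by chi^2-weighted least squares."""
    W1 = rng.normal(0.0, 1.0, (1, n_hidden))
    b1 = rng.normal(0.0, 1.0, n_hidden)
    H = np.tanh(Q2[:, None] @ W1 + b1)
    H = np.column_stack([H, np.ones(Q2.size)])      # output bias column
    w = np.linalg.lstsq(H / sigma[:, None], data / sigma, rcond=None)[0]
    return (((H @ w - data) / sigma) ** 2).sum()

# larger networks lower chi^2 but risk over-fitting; the paper selects
# the size by Bayesian evidence, which this sketch does not compute
for n_hidden in (1, 2, 4, 8):
    print(n_hidden, round(chi2_of_network(n_hidden), 1))
```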
- …