Approximate Inference for Time-Varying Interactions and Macroscopic Dynamics of Neural Populations
Models in statistical physics, such as the Ising model, offer a convenient way to characterize the stationary activity of neural populations. Such stationary activity may be expected in recordings from in vitro slices or anesthetized animals. Modeling the activity of cortical circuits in awake animals has been more challenging, however, because both spike rates and interactions can change with sensory stimulation, behavior, or the internal state of the brain. Previous approaches to modeling the dynamics of neural interactions suffer from high computational cost; their application was therefore limited to only about a dozen neurons. Here, by introducing multiple analytic approximation methods to a state-space model of neural population activity, we make it possible to estimate dynamic pairwise interactions of up to 60 neurons. More specifically, we applied the pseudolikelihood approximation to the state-space model and combined it with the Bethe or TAP mean-field approximation to make sequential Bayesian estimation of the model parameters possible. The large-scale analysis allows us to investigate the dynamics of macroscopic properties of neural circuits underlying stimulus processing and behavior. We show on simulated data that the model accurately estimates the dynamics of network properties such as sparseness, entropy, and heat capacity, and demonstrate the utility of these measures by analyzing the activity of monkey V4 neurons as well as a simulated balanced network of spiking neurons.
DFG, 103586207, GRK 1589: Processing of Sensory Information in Neuronal Systems
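The pseudolikelihood step at the heart of this approach can be sketched for the static case. The following is a minimal illustration of pseudolikelihood fitting of an Ising model to binary spike data, not the authors' implementation: the function name `fit_pseudolikelihood` and the gradient-ascent settings are our own, and the state-space, Bethe/TAP, and sequential Bayesian machinery are omitted.

```python
import numpy as np

def fit_pseudolikelihood(S, lr=0.05, n_iter=500):
    """Fit Ising parameters (h, J) to spike data S (T x N, entries +/-1)
    by gradient ascent on the log-pseudolikelihood
    sum_{t,i} log P(s_i(t) | s_{-i}(t)).  Static sketch only."""
    T, N = S.shape
    h = np.zeros(N)
    J = np.zeros((N, N))
    for _ in range(n_iter):
        H = h + S @ J              # local field of each neuron per time bin
        err = S - np.tanh(H)       # gradient: data minus conditional mean
        h += lr * err.mean(axis=0)
        dJ = (S.T @ err) / T
        np.fill_diagonal(dJ, 0.0)  # no self-coupling
        J += lr * 0.5 * (dJ + dJ.T)  # keep couplings symmetric
    return h, J
```

Because each conditional is a logistic model, the objective is concave and the per-neuron problems decouple, which is what makes this step cheap enough to embed in a sequential estimator.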
Survey propagation at finite temperature: application to a Sourlas code as a toy model
In this paper we investigate a finite temperature generalization of survey
propagation, by applying it to the problem of finite temperature decoding of a
biased finite connectivity Sourlas code for temperatures lower than the
Nishimori temperature. We observe that the result is a shift of the location of
the dynamical critical channel noise to larger values than the corresponding
dynamical transition for belief propagation, as suggested recently by
Migliorini and Saad for LDPC codes. We show that the finite-temperature 1-RSB
SP algorithm gives accurate results in the regime where competing approaches
fail to converge or fail to recover the retrieval state.
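For intuition about what finite-temperature decoding of a Sourlas code means, a toy instance can be decoded by exhaustive enumeration: the posterior over messages is a Boltzmann distribution at inverse temperature beta, and marginal-posterior (MPM) decoding takes the sign of each thermal average. This brute-force stand-in is our own sketch (survey/belief propagation replaces the enumeration on codes of realistic size), and the function name is hypothetical.

```python
import itertools
import numpy as np

def sourlas_decode(checks, received, beta, N):
    """Exhaustive finite-temperature MPM decoding of a tiny Sourlas code.
    checks: list of index tuples (each check transmits the product of K bits);
    received: noisy channel outputs, one +/-1 value per check.
    Returns sign(<s_i>) under the Boltzmann posterior at inverse temp beta."""
    magnet = np.zeros(N)
    Z = 0.0
    for bits in itertools.product([-1, 1], repeat=N):
        s = np.array(bits, dtype=float)
        # energy: -sum over checks of J_a * (product of that check's bits)
        E = -sum(J * np.prod(s[list(idx)]) for idx, J in zip(checks, received))
        w = np.exp(-beta * E)
        Z += w
        magnet += w * s
    return np.sign(magnet / Z)
```

For a binary symmetric channel with flip probability p, the Nishimori temperature corresponds to beta = (1/2) ln((1 - p)/p); the abstract's regime of interest is temperatures below this value.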
Markovian Dynamics on Complex Reaction Networks
Complex networks, comprised of individual elements that interact with each
other through reaction channels, are ubiquitous across many scientific and
engineering disciplines. Examples include biochemical, pharmacokinetic,
epidemiological, ecological, social, neural, and multi-agent networks. A common
approach to modeling such networks is by a master equation that governs the
dynamic evolution of the joint probability mass function of the underlying
population process and naturally leads to Markovian dynamics for that process.
Due however to the nonlinear nature of most reactions, the computation and
analysis of the resulting stochastic population dynamics is a difficult task.
This review article provides a coherent and comprehensive coverage of recently
developed approaches and methods to tackle this problem. After reviewing a
general framework for modeling Markovian reaction networks and giving specific
examples, the authors present numerical and computational techniques capable of
evaluating or approximating the solution of the master equation, discuss a
recently developed approach for studying the stationary behavior of Markovian
reaction networks using a potential energy landscape perspective, and provide
an introduction to the emerging theory of thermodynamic analysis of such
networks. Three representative problems of opinion formation, transcription
regulation, and neural network dynamics are used as illustrative examples.
Comment: 52 pages, 11 figures; for freely available MATLAB software, see
http://www.cis.jhu.edu/~goutsias/CSS%20lab/software.htm
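The master equation is rarely solvable in closed form, but exact trajectories of the underlying Markov jump process can be sampled with Gillespie's stochastic simulation algorithm, one of the standard numerical techniques in this area. A minimal sketch, with function and argument names of our own choosing:

```python
import numpy as np

def gillespie(x0, stoich, rate_fn, t_max, rng=None):
    """Sample one exact trajectory of the Markovian population process
    whose probability mass function obeys the master equation.
    x0: initial counts; stoich: (R, N) stoichiometry matrix;
    rate_fn(x) -> propensities of the R reaction channels."""
    rng = rng or np.random.default_rng()
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_max:
        a = rate_fn(x)
        a0 = a.sum()
        if a0 <= 0:                          # no channel can fire: absorbed
            break
        t += rng.exponential(1.0 / a0)       # exponential waiting time
        r = rng.choice(len(a), p=a / a0)     # which channel fires
        x += stoich[r]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)
```

For example, a pure-death process X -> X - 1 with propensity proportional to X walks down monotonically from its initial count to the absorbing state at zero.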
Synchronization and Redundancy: Implications for Robustness of Neural Learning and Decision Making
Learning and decision making in the brain are key processes critical to
survival, and yet are processes implemented by non-ideal biological building
blocks which can impose significant error. We explore quantitatively how the
brain might cope with this inherent source of error by taking advantage of two
ubiquitous mechanisms, redundancy and synchronization. In particular we
consider a neural process whose goal is to learn a decision function by
implementing a nonlinear gradient dynamics. The dynamics, however, are assumed
to be corrupted by perturbations modeling the error which might be incurred due
to limitations of the biology, intrinsic neuronal noise, and imperfect
measurements. We show that error, and the associated uncertainty surrounding a
learned solution, can be controlled in large part by trading off
synchronization strength among multiple redundant neural systems against the
noise amplitude. The impact of the coupling between such redundant systems is
quantified by the spectrum of the network Laplacian, and we discuss the role of
network topology in synchronization and in reducing the effect of noise. A
range of situations in which the mechanisms we model arise in brain science are
discussed, and we draw attention to experimental evidence suggesting that
cortical circuits capable of implementing the computations of interest here can
be found on several scales. Finally, simulations comparing theoretical bounds
to the relevant empirical quantities show that the theoretical estimates we
derive can be tight.
Comment: Preprint, accepted for publication in Neural Computation.
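The role of the Laplacian spectrum can be made concrete with a linear stand-in for the coupled redundant systems: noisy consensus dynamics dx = -k L x dt + sigma dW, whose steady-state disagreement decomposes over the nonzero Laplacian eigenvalues (each mode is an Ornstein-Uhlenbeck process with stationary variance sigma^2 / (2 k lambda)). This proxy and its function names are our own simplification, not the paper's nonlinear gradient dynamics.

```python
import numpy as np

def laplacian(A):
    """Graph Laplacian L = D - A of an undirected adjacency matrix A."""
    return np.diag(A.sum(axis=1)) - A

def disagreement_bound(A, k, sigma):
    """Steady-state expected squared disagreement of the noisy consensus
    dynamics dx = -k L x dt + sigma dW: sum over nonzero Laplacian
    eigenvalues of sigma^2 / (2 k lambda).  Stronger coupling k or a
    better-connected spectrum shrinks the effect of the noise."""
    lam = np.linalg.eigvalsh(laplacian(A))
    nonzero = lam[lam > 1e-10]
    return sigma**2 / (2 * k) * np.sum(1.0 / nonzero)
```

A complete graph on n nodes has nonzero eigenvalues all equal to n, so its disagreement is sigma^2 (n - 1) / (2 k n): near-perfect noise averaging, consistent with the trade-off between coupling strength and noise amplitude discussed in the abstract.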
Nonlinear brain dynamics as macroscopic manifestation of underlying many-body field dynamics
Neural activity patterns related to behavior occur at many scales in time and
space from the atomic and molecular to the whole brain. Here we explore the
feasibility of interpreting neurophysiological data in the context of many-body
physics by using tools that physicists have devised to analyze comparable
hierarchies in other fields of science. We focus on a mesoscopic level that
offers a multi-step pathway between the microscopic functions of neurons and
the macroscopic functions of brain systems revealed by hemodynamic imaging. We
use electroencephalographic (EEG) records collected from high-density electrode
arrays fixed on the epidural surfaces of primary sensory and limbic areas in
rabbits and cats trained to discriminate conditioned stimuli (CS) in the
various modalities. High temporal resolution of EEG signals with the Hilbert
transform gives evidence for diverse intermittent spatial patterns of amplitude
(AM) and phase modulations (PM) of carrier waves that repeatedly re-synchronize
in the beta and gamma ranges at near zero time lags over long distances. The
dominant mechanism for neural interactions by axodendritic synaptic
transmission should impose distance-dependent delays on the EEG oscillations
owing to finite propagation velocities. It does not. EEGs instead show evidence
for anomalous dispersion: the existence in neural populations of a low velocity
range of information and energy transfers, and a high velocity range of the
spread of phase transitions. This distinction labels the phenomenon but does
not explain it. In this report we explore the analysis of these phenomena using
concepts of energy dissipation, the maintenance by cortex of multiple ground
states corresponding to AM patterns, and the exclusive selection by spontaneous
breakdown of symmetry (SBS) of single states in sequences.
Comment: 31 pages.
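The AM/PM decomposition described above rests on the analytic signal. A self-contained sketch follows, using an FFT construction equivalent to `scipy.signal.hilbert`; the function names are ours, and real EEG analysis would add band-pass filtering before this step.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT: zero the negative frequencies and
    double the positive ones (the standard Hilbert-transform recipe)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def am_pm(x, fs):
    """Instantaneous amplitude (AM), unwrapped phase (PM), and
    instantaneous frequency (Hz) of a carrier sampled at rate fs."""
    z = analytic_signal(x)
    amplitude = np.abs(z)
    phase = np.unwrap(np.angle(z))
    freq = np.diff(phase) * fs / (2 * np.pi)
    return amplitude, phase, freq
```

On a pure 40 Hz (gamma-range) cosine the amplitude envelope is flat and the instantaneous frequency is constant; on EEG data these quantities vary in time and across electrodes, which is what reveals the intermittent AM and PM patterns.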
Tackling the subsampling problem to infer collective properties from limited data
Complex systems are fascinating because their rich macroscopic properties
emerge from the interaction of many simple parts. Understanding the building
principles of these emergent phenomena in nature requires assessing natural
complex systems experimentally. However, despite the development of large-scale
data-acquisition techniques, experimental observations are often limited to a
tiny fraction of the system. This spatial subsampling is particularly severe in
neuroscience, where only a small subset of the millions or even billions of
neurons can be recorded individually. Spatial subsampling may lead to
significant systematic biases when inferring the collective properties of the
entire system naively from a subsampled part. To overcome such biases, powerful
mathematical tools have been developed in the past. In this perspective, we
overview some issues arising from subsampling and review recently developed
approaches to tackle the subsampling problem. These approaches enable one to
assess, e.g., graph structures, collective dynamics of animals, neural network
activity, or the spread of disease correctly from observing only a tiny
fraction of the system. However, our current approaches are still far from
having solved the subsampling problem in general, and hence we conclude by
outlining what we believe are the main open challenges. Solving these
challenges alongside the development of large-scale recording techniques will
enable further fundamental insights into the working of complex and living
systems.
Comment: 20 pages, 6 figures; review article.
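One of the simplest subsampling biases can be seen in closed form: extrapolating the variance of summed activity from a subsample as if the subsample were a miniature independent copy of the whole system. The toy numbers and function names below are our own illustration, not taken from the article.

```python
import numpy as np

def sum_variance(sigma2, c, n):
    """Variance of the summed activity of n units with per-unit variance
    sigma2 and uniform pairwise correlation c:
    n*sigma2 + n*(n-1)*c*sigma2.  The correlation term dominates at large n."""
    return n * sigma2 + n * (n - 1) * c * sigma2

def naive_extrapolation(sigma2, c, n, N):
    """Naively rescale the subsampled sum variance from n to N units.
    This keeps the (n-1) correlation factor of the subsample instead of
    the (N-1) factor of the full system, so it badly underestimates."""
    return (N / n) * sum_variance(sigma2, c, n)
```

With N = 1000 units, correlation c = 0.1, and a subsample of n = 10, the true sum variance is 100,900 while the naive extrapolation gives only 1,900, an underestimate of more than fifty-fold. Correcting such biases is exactly what the reviewed methods are designed to do.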
Ensemble Inhibition and Excitation in the Human Cortex: an Ising Model Analysis with Uncertainties
The pairwise maximum entropy model, also known as the Ising model, has been
widely used to analyze the collective activity of neurons. However, controversy
persists in the literature about seemingly inconsistent findings, whose
significance is unclear due to lack of reliable error estimates. We therefore
develop a method for accurately estimating parameter uncertainty based on
random walks in parameter space using adaptive Markov Chain Monte Carlo after
the convergence of the main optimization algorithm. We apply our method to the
spiking patterns of excitatory and inhibitory neurons recorded with
multielectrode arrays in the human temporal cortex during the wake-sleep cycle.
Our analysis shows that the Ising model captures neuronal collective behavior
much better than the independent model during wakefulness, light sleep, and
deep sleep when both excitatory (E) and inhibitory (I) neurons are modeled;
ignoring the inhibitory effects of I-neurons dramatically overestimates
synchrony among E-neurons. Furthermore, information-theoretic measures reveal
that the Ising model explains about 80%-95% of the correlations, depending on
sleep state and neuron type. Thermodynamic measures show signatures of
criticality, although we take this with a grain of salt as it may be merely a
reflection of long-range neural correlations.
Comment: 17 pages, 8 figures.
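The uncertainty-estimation step can be illustrated in its plainest form: random-walk Metropolis sampling around the fitted optimum, with the spread of the posterior samples giving parameter error bars. This sketch is our own; the adaptive tuning of the proposal used in the paper is omitted, and all names are hypothetical.

```python
import numpy as np

def metropolis_uncertainty(log_like, theta0, step, n_samples, rng=None):
    """Random-walk Metropolis chain started at a fitted optimum theta0.
    log_like(theta) -> log-likelihood (flat prior assumed).  The standard
    deviation of the returned samples estimates parameter uncertainty."""
    rng = rng or np.random.default_rng()
    theta = np.array(theta0, dtype=float)
    ll = log_like(theta)
    samples = []
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.shape)
        ll_prop = log_like(prop)
        if np.log(rng.random()) < ll_prop - ll:   # Metropolis accept/reject
            theta, ll = prop, ll_prop
        samples.append(theta.copy())
    return np.array(samples)
```

On a Gaussian log-likelihood the sample standard deviation recovers the known posterior width, which is the sanity check to run before trusting the error bars on Ising couplings.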