60 research outputs found
Efficient Network Reconstruction from Dynamical Cascades Identifies Small-World Topology of Neuronal Avalanches
Cascading activity is commonly found in complex systems with directed
interactions such as metabolic networks, neuronal networks, or disease spreading
in social networks. Substantial insight into a system's organization
can be obtained by reconstructing the underlying functional network architecture
from the observed activity cascades. Here we focus on Bayesian approaches and
reduce their computational demands by introducing the Iterative Bayesian (IB)
and Posterior Weighted Averaging (PWA) methods. We introduce a special case of
PWA, cast in nonparametric form, which we call the normalized count (NC)
algorithm. NC efficiently reconstructs random and small-world functional network
topologies and architectures from subcritical, critical, and supercritical
cascading dynamics and yields significant improvements over commonly used
correlation methods. With experimental data, NC identified a functional and
structural small-world topology and its corresponding traffic in cortical
networks with neuronal avalanche dynamics.
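One plausible reading of a normalized-count reconstruction (the paper's exact NC definition may differ) is to score each directed pair by how often the target fires one step after the source, normalized by the source's total activity. A minimal sketch on synthetic cascades, with all network and cascade parameters chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth directed network: 20 nodes, sparse random links.
n = 20
adj = (rng.random((n, n)) < 0.15).astype(float)
np.fill_diagonal(adj, 0.0)

def run_cascade(adj, p=0.6, steps=10):
    """Simple probabilistic cascade: an active node activates each of its
    targets at the next time step with per-link probability p."""
    state = np.zeros((steps, n), dtype=bool)
    state[0, rng.integers(n)] = True  # random seed node
    for t in range(steps - 1):
        drive = state[t].astype(float) @ adj        # active inputs per node
        state[t + 1] = rng.random(n) < 1 - (1 - p) ** drive
    return state

def normalized_count(cascades):
    """Score w[i, j]: how often j follows i, relative to how often
    i was active at all (the normalization step)."""
    co = np.zeros((n, n))
    active = np.zeros(n)
    for s in cascades:
        for t in range(len(s) - 1):
            co += np.outer(s[t], s[t + 1])
            active += s[t]
    return co / np.maximum(active[:, None], 1)

cascades = [run_cascade(adj) for _ in range(500)]
w = normalized_count(cascades)
print(w[adj == 1].mean(), w[adj == 0].mean())  # true edges should score higher
```

The normalization matters because highly active source nodes would otherwise inflate raw co-activation counts for all of their candidate targets, edges and non-edges alike.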
Neuronal avalanches of a self-organized neural network with active-neuron-dominant structure
A neuronal avalanche is a form of spontaneous neuronal activity whose population
event sizes obey a power-law distribution with an exponent of -3/2. It has been
observed in the superficial layers of cortex both \emph{in vivo} and \emph{in
vitro}. In this paper we analyze the information transmission of a novel
self-organized neural network with active-neuron-dominant structure. Neuronal
avalanches can be observed in this network with appropriate input intensity. We
find that the process of network learning via spike-timing dependent plasticity
dramatically increases the complexity of network structure, which is finally
self-organized to be active-neuron-dominant connectivity. Both the entropy of
activity patterns and the complexity of their resulting post-synaptic inputs
are maximized when the network dynamics are propagated as neuronal avalanches.
This emergent topology is beneficial for information transmission with high
efficiency and also could be responsible for the large information capacity of
this network compared with alternative archetypal networks with different
neural connectivity.
Comment: Non-final version submitted to Chaos
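The -3/2 size exponent is the classic signature of a critical branching process, and can be checked with a short simulation. This is a generic toy model, not the paper's self-organized network:

```python
import numpy as np

rng = np.random.default_rng(1)

def avalanche_size(sigma, max_size=10_000):
    """Total activity of a branching process with mean offspring sigma.
    Each active unit triggers Poisson(sigma) units at the next step."""
    size, active = 1, 1
    while active and size < max_size:
        active = int(rng.poisson(sigma * active))
        size += active
    return size

# At the critical point sigma = 1, avalanche sizes follow P(s) ~ s^(-3/2)
# up to the cutoff; e.g. P(size = 1) = e^(-1) for Poisson offspring.
sizes = np.array([avalanche_size(1.0) for _ in range(20_000)])
p1 = np.mean(sizes == 1)
p_tail = np.mean(sizes >= 100)
print(f"P(size=1) = {p1:.3f}, P(size>=100) = {p_tail:.4f}")
```

With a -3/2 density the complementary tail falls only as s^(-1/2), so even sizes of 100 or more remain common, which is why heavy-tailed avalanches are so visually distinctive in data.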
Fractals in the Nervous System: Conceptual Implications for Theoretical Neuroscience
This essay is presented with two principal objectives in mind: first, to
document the prevalence of fractals at all levels of the nervous system, giving
credence to the notion of their functional relevance; and second, to draw
attention to the as yet still unresolved issues of the detailed relationships
among power law scaling, self-similarity, and self-organized criticality. As
regards criticality, I will document that it has become a pivotal reference
point in Neurodynamics. Furthermore, I will emphasize the not yet fully
appreciated significance of allometric control processes. For dynamic fractals,
I will assemble reasons for attributing to them the capacity to adapt task
execution to contextual changes across a range of scales. The final Section
consists of general reflections on the implications of the reviewed data, and
identifies what appear to be issues of fundamental importance for future
research in the rapidly evolving topic of this review.
Self-organization without conservation: Are neuronal avalanches generically critical?
Recent experiments on cortical neural networks have revealed the existence of
well-defined avalanches of electrical activity. Such avalanches have been
claimed to be generically scale-invariant -- i.e. power-law distributed -- with
many exciting implications in Neuroscience. Recently, a self-organized model
has been proposed by Levina, Herrmann and Geisel to justify such an empirical
finding. Given that (i) neural dynamics is dissipative and (ii) there is a
loading mechanism "charging" progressively the background synaptic strength,
this model/dynamics is very similar in spirit to forest-fire and earthquake
models, archetypical examples of non-conserving self-organization, which have
been recently shown to lack true criticality. Here we show that cortical neural
networks obeying (i) and (ii) are not generically critical; unless parameters
are fine tuned, their dynamics is either sub- or super-critical, even if the
pseudo-critical region is relatively broad. This conclusion seems to be in
agreement with the most recent experimental observations. The main implication
of our work is that, if future experimental research on cortical networks were
to support that truly critical avalanches are the norm and not the exception,
then one should look for more elaborate (adaptive/evolutionary) explanations,
beyond simple self-organization, to account for this.
Comment: 28 pages, 11 figures, regular paper
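The fine-tuning argument can be made concrete with a branching-process toy model (not the Levina-Herrmann-Geisel model itself): any mean offspring number sigma below 1 gives finite, exponentially cut-off avalanches, any sigma above 1 gives runaway events, and true scale invariance holds only at sigma = 1 exactly.

```python
import numpy as np

rng = np.random.default_rng(2)

def avalanche_size(sigma, max_size=100_000):
    # Each active unit triggers Poisson(sigma) units at the next step.
    size, active = 1, 1
    while active and size < max_size:
        active = int(rng.poisson(sigma * active))
        size += active
    return size

# Subcritical (sigma < 1): avalanches die out quickly; the mean size
# is finite, 1 / (1 - sigma) for a branching process.
sub = np.mean([avalanche_size(0.8) for _ in range(5_000)])

# Supercritical (sigma > 1): a finite fraction of avalanches run away
# until cut off by system size (here, the max_size cap).
runaway = np.mean([avalanche_size(1.2) >= 100_000 for _ in range(500)])

print(f"mean size at sigma=0.8: {sub:.1f} (theory: 5.0)")
print(f"runaway fraction at sigma=1.2: {runaway:.2f}")
```

Neither regime is scale-invariant: the subcritical side has a characteristic avalanche scale, and the supercritical side is dominated by system-spanning events, which is exactly the dichotomy the abstract describes away from the fine-tuned point.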
A few strong connections: optimizing information retention in neuronal avalanches
Background: How living neural networks retain information is still incompletely understood. Two prominent ideas on this topic have developed in parallel, but have remained somewhat unconnected. The first of these, the "synaptic hypothesis," holds that information can be retained in synaptic connection strengths, or weights, between neurons. Recent work inspired by statistical mechanics has suggested that networks will retain the most information when their weights are distributed in a skewed manner, with many weak weights and only a few strong ones. The second of these ideas is that information can be represented by stable activity patterns. Multineuron recordings have shown that sequences of neural activity distributed over many neurons are repeated above chance levels when animals perform well-learned tasks. Although these two ideas are compelling, no one to our knowledge has yet linked the predicted optimum distribution of weights to stable activity patterns actually observed in living neural networks.
Results: Here, we explore this link by comparing stable activity patterns from cortical slice networks recorded with multielectrode arrays to stable patterns produced by a model with a tunable weight distribution. This model was previously shown to capture central features of the dynamics in these slice networks, including neuronal avalanche cascades. We find that when the model weight distribution is appropriately skewed, it correctly matches the distribution of repeating patterns observed in the data. In addition, this same distribution of weights maximizes the capacity of the network model to retain stable activity patterns. Thus, the distribution that best fits the data is also the distribution that maximizes the number of stable patterns.
Conclusions: We conclude that local cortical networks are very likely to use a highly skewed weight distribution to optimize information retention, as predicted by theory. Fixed distributions impose constraints on learning, however: the network must have mechanisms for preserving the overall weight distribution while allowing individual connection strengths to change with learning.
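The "many weak weights and only a few strong ones" profile can be sketched with a lognormal distribution, a common model of cortical synaptic strengths; the parameters below are illustrative, not fitted to the paper's data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical skewed weight distribution: a lognormal yields many weak
# connections and a heavy tail of a few strong ones.
weights = rng.lognormal(mean=-1.0, sigma=1.0, size=10_000)

median_w = np.median(weights)          # typical (weak) connection
mean_w = weights.mean()                # pulled up by the strong tail
top_share = np.sort(weights)[-1_000:].sum() / weights.sum()

print(f"median {median_w:.3f} << mean {mean_w:.3f}")
print(f"strongest 10% of connections carry {top_share:.0%} of total weight")
```

The gap between median and mean, and the large fraction of total weight carried by the top decile, are the two simplest summary statistics of the skew that the abstract's "few strong connections" refers to.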
Model-free reconstruction of neuronal network connectivity from calcium imaging signals
A systematic assessment of global neural network connectivity through direct
electrophysiological assays has remained technically unfeasible even in
dissociated neuronal cultures. We introduce an improved algorithmic approach
based on Transfer Entropy to reconstruct approximations to network structural
connectivities from network activity monitored through calcium fluorescence
imaging. Based on information theory, our method requires no prior assumptions
on the statistics of neuronal firing and neuronal connections. The performance
of our algorithm is benchmarked on surrogate time-series of calcium
fluorescence generated by the simulated dynamics of a network with known
ground-truth topology. We find that the effective network topology revealed by
Transfer Entropy depends qualitatively on the time-dependent dynamic state of
the network (e.g., bursting or non-bursting). We thus demonstrate how
conditioning with respect to the global mean activity improves the performance
of our method. [...] Compared to other reconstruction strategies such as
cross-correlation or Granger Causality methods, our method based on improved
Transfer Entropy is remarkably more accurate. In particular, it provides a good
reconstruction of the network clustering coefficient, making it possible to
discriminate between weakly and strongly clustered topologies, whereas an
approach based on cross-correlations invariably detects artificially high
levels of clustering. Finally, we present the applicability of our method to
real recordings of in vitro cortical cultures. We demonstrate that these
networks are characterized by an elevated level of clustering compared to a
random graph (although not extreme) and by a markedly non-local connectivity.
Comment: 54 pages, 8 figures (+9 supplementary figures), 1 table; submitted
for publication
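A minimal pairwise transfer entropy for binarized activity, with a single step of history, fits in a few lines. Note this is the generic estimator, not the improved, state-conditioned variant the authors develop:

```python
import numpy as np

rng = np.random.default_rng(4)

def transfer_entropy(x, y):
    """TE(X -> Y) in bits for binary series with one step of history:
    I(y_{t+1} ; x_t | y_t), estimated from empirical triple counts."""
    trip = np.zeros((2, 2, 2))  # indices: y_next, y_now, x_now
    for yn, yc, xc in zip(y[1:], y[:-1], x[:-1]):
        trip[yn, yc, xc] += 1
    p = trip / trip.sum()
    te = 0.0
    for yn in (0, 1):
        for yc in (0, 1):
            for xc in (0, 1):
                pj = p[yn, yc, xc]
                if pj == 0:
                    continue
                p_full = pj / p[:, yc, xc].sum()      # p(y_next | y_now, x_now)
                p_hist = p[yn, yc, :].sum() / p[:, yc, :].sum()  # p(y_next | y_now)
                te += pj * np.log2(p_full / p_hist)
    return te

# Toy check: y copies x with a one-step lag (10% flip noise); z is independent.
x = rng.integers(0, 2, 5_000)
noise = rng.random(5_000) < 0.1
y = np.empty_like(x); y[0] = 0
y[1:] = np.where(noise[1:], 1 - x[:-1], x[:-1])
z = rng.integers(0, 2, 5_000)

te_xy = transfer_entropy(x, y)  # large: x drives y
te_zy = transfer_entropy(z, y)  # near zero: no coupling
print(f"TE(x->y) = {te_xy:.3f} bits, TE(z->y) = {te_zy:.3f} bits")
```

Conditioning on the target's own past is what distinguishes transfer entropy from plain cross-correlation here: a driven series scores high only through the extra predictive power of the putative source, not through shared autocorrelation.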