Information Diversity in Structure and Dynamics of Simulated Neuronal Networks
Neuronal networks exhibit a wide diversity of structures, which contributes to the diversity of the dynamics therein. The presented work applies an information-theoretic framework to simultaneously analyze structure and dynamics in neuronal networks. Information diversity within the structure and dynamics of a neuronal network is studied using the normalized compression distance. To describe the structure, a scheme for generating distance-dependent networks with identical in-degree distributions but variable strength of dependence on distance is presented. The resulting network structure classes possess differing path length and clustering coefficient distributions. In parallel, comparable realistic neuronal networks are generated with the NETMORPH simulator and a similar analysis is performed on them. To describe the dynamics, network spike trains are simulated using different network structures and their bursting behaviors are analyzed. For the simulation of the network activity, the Izhikevich model of spiking neurons is used together with the Tsodyks model of dynamical synapses. We show that the structure of the simulated neuronal networks affects the spontaneous bursting activity when measured with bursting frequency and a set of intraburst measures: the more locally connected networks produce more and longer bursts than the more random networks. The information diversity of the structure of a network is greatest in the most locally connected networks, smallest in random networks, and intermediate in the networks between order and disorder. As for the dynamics, the most locally connected networks and some of the intermediate networks produce the most complex intraburst spike trains. The same result also holds for the sparser of the two considered network densities in the case of full spike trains.
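The normalized compression distance used in this abstract can be approximated with any off-the-shelf compressor; a minimal sketch using Python's zlib (the choice of compressor and the toy byte strings are illustrative assumptions, not the authors' actual setup):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(s) is the compressed length of s."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Similar inputs (e.g. binned spike trains or connectivity rows encoded as
# bytes) compress well together and give a small NCD; unrelated inputs give
# an NCD close to 1.
a = b"0101010101" * 50
print(ncd(a, a))  # near 0: identical strings share all structure
```

In the paper's setting the inputs would be serialized network structure or spike-train data rather than these toy strings; the metric itself is compressor-agnostic.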
Topological exploration of artificial neuronal network dynamics
One of the paramount challenges in neuroscience is to understand the dynamics
of individual neurons and how they give rise to network dynamics when
interconnected. Historically, researchers have resorted to graph theory,
statistics, and statistical mechanics to describe the spatiotemporal structure
of such network dynamics. Our novel approach employs tools from algebraic
topology to characterize the global properties of network structure and
dynamics.
We propose a method based on persistent homology to automatically classify
network dynamics using topological features of spaces built from various
spike-train distances. We investigate the efficacy of our method by simulating
activity in three small artificial neural networks with different sets of
parameters, giving rise to dynamics that can be classified into four regimes.
We then compute three measures of spike train similarity and use persistent
homology to extract topological features that are fundamentally different from
those used in traditional methods. Our results show that a machine learning
classifier trained on these features can accurately predict the regime of the
network it was trained on and also generalize to other networks that were not
presented during training. Moreover, we demonstrate that using features
extracted from multiple spike-train distances systematically improves the
performance of our method.
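One classical spike-train distance of the kind fed into such persistent-homology pipelines is the Victor-Purpura metric; a self-contained dynamic-programming sketch (the cost parameter q and the example trains are assumptions for illustration, not the paper's choices):

```python
def victor_purpura(s, t, q=1.0):
    """Victor-Purpura spike-train distance: minimal cost of transforming
    spike train s into t, where inserting or deleting a spike costs 1 and
    shifting a spike by dt costs q * |dt|."""
    n, m = len(s), len(t)
    # d[i][j] = distance between the first i spikes of s and first j of t
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = float(i)
    for j in range(1, m + 1):
        d[0][j] = float(j)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(
                d[i - 1][j] + 1.0,  # delete spike s[i-1]
                d[i][j - 1] + 1.0,  # insert spike t[j-1]
                d[i - 1][j - 1] + q * abs(s[i - 1] - t[j - 1]),  # shift
            )
    return d[n][m]

print(victor_purpura([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
print(victor_purpura([1.0, 2.0, 3.0], [1.0, 2.0, 3.5]))  # 0.5: one shift
```

A matrix of such pairwise distances between recorded trains is exactly the input a Vietoris-Rips persistent-homology computation expects.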
When do correlations increase with firing rates in recurrent networks?
A central question in neuroscience is to understand how noisy firing patterns are used to transmit information. Because neural spiking is noisy, spiking patterns are often quantified via pairwise correlations, or the probability that two cells will spike coincidentally, above and beyond their baseline firing rate. One observation frequently made in experiments is that correlations can increase systematically with firing rate. Theoretical studies have determined that stimulus-dependent correlations that increase with firing rate can have beneficial effects on information coding; however, we still have an incomplete understanding of which circuit mechanisms do, or do not, produce this correlation-firing rate relationship. Here, we studied the relationship between pairwise correlations and firing rates in recurrently coupled excitatory-inhibitory spiking networks with conductance-based synapses. We found that with stronger excitatory coupling, a positive relationship emerged between pairwise correlations and firing rates. To explain these findings, we used linear response theory to predict the full correlation matrix and to decompose correlations in terms of graph motifs. We then used this decomposition to explain why covariation of correlations with firing rate—a relationship previously explained in feedforward networks driven by correlated input—emerges in some recurrent networks but not in others. Furthermore, when correlations covary with firing rate, this relationship is reflected in low-rank structure in the correlation matrix.
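The correlated-input intuition mentioned in this abstract (the feedforward case) can be illustrated with a toy count model; a sketch in which two cells share a common fluctuating drive, so that a stronger shared drive raises both rates and pairwise count correlation (all parameters here are assumed for illustration, not taken from the recurrent-network study):

```python
import random

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

def count_correlation(shared_gain, n_bins=5000, seed=1):
    """Binned spike counts of two cells receiving a common fluctuating
    drive plus private noise; returns their count correlation."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n_bins):
        common = rng.gauss(0.0, 1.0)  # shared input fluctuation
        xs.append(max(0.0, 2.0 + shared_gain * common + rng.gauss(0.0, 1.0)))
        ys.append(max(0.0, 2.0 + shared_gain * common + rng.gauss(0.0, 1.0)))
    return pearson(xs, ys)

# Increasing the shared gain increases the pairwise count correlation.
print(count_correlation(0.2), count_correlation(1.0))
```

The abstract's point is that in recurrent networks this relationship is not automatic; the linear response decomposition identifies which motifs make the effective shared input grow with rate.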
A Bayesian approach for inferring neuronal connectivity from calcium fluorescent imaging data
Deducing the structure of neural circuits is one of the central problems of
modern neuroscience. Recently introduced calcium fluorescent imaging methods
permit experimentalists to observe network activity in large populations of
neurons, but these techniques provide only indirect observations of neural
spike trains, with limited time resolution and signal quality. In this work we
present a Bayesian approach for inferring neural circuitry given this type of
imaging data. We model the network activity in terms of a collection of coupled
hidden Markov chains, with each chain corresponding to a single neuron in the
network and the coupling between the chains reflecting the network's
connectivity matrix. We derive a Monte Carlo Expectation--Maximization
algorithm for fitting the model parameters; to obtain the sufficient statistics
in a computationally-efficient manner, we introduce a specialized
blockwise-Gibbs algorithm for sampling from the joint activity of all observed
neurons given the observed fluorescence data. We perform large-scale
simulations of randomly connected neuronal networks with biophysically
realistic parameters and find that the proposed methods can accurately infer
the connectivity in these networks given reasonable experimental and
computational constraints. In addition, the estimation accuracy may be improved
significantly by incorporating prior knowledge about the sparseness of
connectivity in the network, via standard L1 penalization methods. Comment: Published at http://dx.doi.org/10.1214/09-AOAS303 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
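The sparseness prior mentioned above corresponds, in its simplest form, to soft-thresholding of estimated coupling weights; a minimal ISTA-style sketch for L1-penalized least squares (the toy design matrix, penalty, and step size are illustrative assumptions and stand in for the paper's full Monte Carlo EM machinery):

```python
def soft_threshold(v, lam):
    """Proximal operator of lam * |v|: shrink v toward zero."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

def ista(X, y, lam=0.1, step=0.01, iters=2000):
    """Iterative soft-thresholding for min_w ||y - Xw||^2 / 2 + lam * ||w||_1."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        # gradient of the squared-error term: X^T (Xw - y)
        resid = [sum(X[i][j] * w[j] for j in range(p)) - y[i] for i in range(n)]
        grad = [sum(X[i][j] * resid[i] for i in range(n)) for j in range(p)]
        w = [soft_threshold(w[j] - step * grad[j], step * lam) for j in range(p)]
    return w

# Toy "connectivity" problem with one true nonzero coupling (column 0).
X = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]]
y = [2.0, 0.0, 0.0, 2.0]  # generated by w_true = [2, 0, 0]
w = ista(X, y, lam=0.1)
# w[0] lands near 1.95 (shrunk from 2 by the penalty); w[1], w[2] at 0.
print([round(v, 2) for v in w])
```

The penalty zeroes out weak spurious couplings exactly, which is why a sparseness prior improves connectivity estimates when true networks are sparse.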
Mammalian Brain As a Network of Networks
Acknowledgements: AZ, SG and AL acknowledge support from the Russian Science Foundation (grant 16-12-00077). The authors thank T. Kuznetsova for Fig. 6.
Emergence of slow-switching assemblies in structured neuronal networks
Unraveling the interplay between connectivity and spatio-temporal dynamics in
neuronal networks is a key step to advance our understanding of neuronal
information processing. Here we investigate how particular features of network
connectivity underpin the propensity of neural networks to generate
slow-switching assembly (SSA) dynamics, i.e., sustained epochs of increased
firing within assemblies of neurons which transition slowly between different
assemblies throughout the network. We show that the emergence of SSA activity
is linked to spectral properties of the asymmetric synaptic weight matrix. In
particular, the leading eigenvalues that dictate the slow dynamics exhibit a
gap with respect to the bulk of the spectrum, and the associated Schur vectors
exhibit a measure of block-localization on groups of neurons, thus resulting in
coherent dynamical activity on those groups. Through simple rate models, we
gain analytical understanding of the origin and importance of the spectral gap,
and use these insights to develop new network topologies with alternative
connectivity paradigms which also display SSA activity. Specifically, SSA
dynamics involving excitatory and inhibitory neurons can be achieved by
modifying the connectivity patterns between both types of neurons. We also show
that SSA activity can occur at multiple timescales reflecting a hierarchy in
the connectivity, and demonstrate the emergence of SSA in small-world like
networks. Our work provides a step towards understanding how network structure
(uncovered through advances in neuroanatomy and connectomics) can shape
spatio-temporal neural activity and constrain the resulting dynamics. Comment: The first two authors contributed equally -- 18 pages, including
supplementary material, 10 figures + 2 SI figures
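The spectral gap described in this abstract can be seen on a toy block-structured weight matrix; a sketch using power iteration in pure Python (the block sizes and weights are assumed for illustration):

```python
def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def power_iteration(M, iters=200):
    """Leading eigenvalue of M by power iteration (assumes it dominates)."""
    v = [1.0] * len(M)
    lam = 0.0
    for _ in range(iters):
        w = matvec(M, v)
        lam = max(abs(x) for x in w)  # max-abs normalization
        v = [x / lam for x in w]
    return lam

# Two assemblies of 10 neurons each: strong within-block coupling (0.3),
# weak between-block coupling (0.05).
n, a, b = 10, 0.3, 0.05
W = [[a if (i < n) == (j < n) else b for j in range(2 * n)]
     for i in range(2 * n)]
# This rank-2 matrix has nonzero eigenvalues n*(a+b) and n*(a-b); all other
# eigenvalues are 0, so the slow modes sit above the bulk with a clear gap.
print(power_iteration(W))  # close to n*(a+b) = 3.5
```

In the paper's asymmetric setting the analysis runs through Schur vectors rather than eigenvectors, but the mechanism is the same: leading modes localized on assemblies, separated from the bulk spectrum.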
Growth-Driven Percolations: The Dynamics of Community Formation in Neuronal Systems
The quintessential property of neuronal systems is their intensive patterns
of selective synaptic connections. The current work describes a physics-based
approach to neuronal shape modeling and synthesis and applies it to the
simulation of neuronal development and the formation of neuronal communities.
Starting from images of real neurons, geometrical measurements are obtained and
used to construct probabilistic models which can be subsequently sampled in
order to produce morphologically realistic neuronal cells. Such cells are
progressively grown while monitoring their connections along time, which are
analysed in terms of percolation concepts. However, unlike traditional
percolation, the critical point is tracked along the growth stages rather than
as a function of cell density, which remains constant throughout the neuronal growth
dynamics. It is shown, through simulations, that growing beta cells tend to
reach percolation sooner than the alpha counterparts with the same diameter.
Also, the percolation becomes more abrupt for higher densities of cells, being
markedly sharper for the beta cells. Comment: 8 pages, 10 figures
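Monitoring connectivity during growth, as described above, reduces in its simplest form to tracking the largest connected component with a union-find structure as links appear; a sketch where fixed-density cells connect once growing arbors bring them within reach (the random geometric growth rule is an assumed stand-in for the morphological simulation):

```python
import random

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def largest_cluster_during_growth(n=200, growth_steps=30, seed=7):
    """Cell density stays fixed; arbors grow, connecting cells whose
    distance falls below the current reach. Returns the largest-cluster
    fraction at each growth stage."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    uf = UnionFind(n)
    fractions = []
    for step in range(1, growth_steps + 1):
        reach = step / growth_steps * 0.15  # arbor radius grows over time
        for i in range(n):
            for j in range(i + 1, n):
                dx = pts[i][0] - pts[j][0]
                dy = pts[i][1] - pts[j][1]
                if dx * dx + dy * dy <= reach * reach:
                    uf.union(i, j)
        fractions.append(max(uf.size[uf.find(i)] for i in range(n)) / n)
    return fractions

frac = largest_cluster_during_growth()
# The giant-cluster fraction rises from near 0 to near 1 as arbors grow.
print(round(frac[0], 2), round(frac[-1], 2))
```

The percolation transition shows up as the growth stage at which the largest-cluster fraction jumps; comparing morphology classes (the alpha and beta cells of the abstract) amounts to comparing where that jump occurs.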