Cortical Spike Synchrony as a Measure of Input Familiarity
J.G.O. was supported by the Ministerio de Economía y Competitividad and FEDER (Spain, project FIS2015-66503-C3-1-P) and the ICREA Academia programme. E.U. acknowledges support from the Scottish Universities Life Sciences Alliance (SULSA) and HPC-Europa2. Peer reviewed. Postprint.
The geometry of spontaneous spiking in neuronal networks
The mathematical theory of pattern formation in electrically coupled networks
of excitable neurons forced by small noise is presented in this work. Using the
Freidlin-Wentzell large deviation theory for randomly perturbed dynamical
systems and the elements of the algebraic graph theory, we identify and analyze
the main regimes in the network dynamics in terms of the key control
parameters: excitability, coupling strength, and network topology. The analysis
reveals the geometry of spontaneous dynamics in electrically coupled networks.
Specifically, we show that the location of the minima of a certain continuous
function on the surface of the unit n-cube encodes the most likely activity
patterns generated by the network. By studying how the minima of this function
evolve under the variation of the coupling strength, we describe the principal
transformations in the network dynamics. The minimization problem is also used
for the quantitative description of the main dynamical regimes and transitions
between them. In particular, for the weak and strong coupling regimes, we
present asymptotic formulas for the network activity rate as a function of the
coupling strength and the degree of the network. The variational analysis is
complemented by the stability analysis of the synchronous state in the strong
coupling regime. The stability estimates reveal the contribution of the network
connectivity and the properties of the cycle subspace associated with the graph
of the network to its synchronization properties. This work is motivated by the
experimental and modeling studies of the ensemble of neurons in the Locus
Coeruleus, a nucleus in the brainstem involved in the regulation of cognitive
performance and behavior.
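The variational picture can be illustrated with a deliberately simplified sketch. The energy function, graph, and parameter values below are invented for illustration and are not the functional derived in the paper; the sketch only shows how minimizers over binary activity patterns can shift from sparse firing to network-wide firing as the coupling strength grows.

```python
import itertools

# Toy "escape energy" (not the paper's functional): each active neuron pays an
# excitability cost, and each edge whose endpoints disagree pays a coupling
# penalty. Minimizers over nonzero binary patterns stand in for the most
# likely spontaneous activity patterns.
def escape_energy(x, edges, excitability=1.0, coupling=0.5):
    active = sum(x)
    disagreement = sum(abs(x[i] - x[j]) for i, j in edges)
    return excitability * active + coupling * disagreement

def most_likely_patterns(n, edges, **kw):
    best, argmin = float("inf"), []
    for x in itertools.product((0, 1), repeat=n):
        if not any(x):
            continue  # skip the rest state; we want the cheapest escape
        e = escape_energy(x, edges, **kw)
        if e < best - 1e-9:
            best, argmin = e, [x]
        elif abs(e - best) <= 1e-9:
            argmin.append(x)
    return best, argmin

# Path graph on 4 neurons: weak coupling favors a single low-degree neuron
# firing alone; strong coupling favors the whole network firing together.
path = [(0, 1), (1, 2), (2, 3)]
_, weak = most_likely_patterns(4, path, coupling=0.1)
_, strong = most_likely_patterns(4, path, coupling=4.0)
```

The transition of the minimizers with the coupling parameter mirrors, in cartoon form, the coupling-dependent transformations of the network dynamics described above.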
Scaling of a large-scale simulation of synchronous slow-wave and asynchronous awake-like activity of a cortical model with long-range interconnections
Cortical synapse organization supports a range of dynamic states on multiple
spatial and temporal scales, from synchronous slow wave activity (SWA),
characteristic of deep sleep or anesthesia, to fluctuating, asynchronous
activity during wakefulness (AW). Such dynamic diversity poses a challenge for
producing efficient large-scale simulations that embody realistic metaphors of
short- and long-range synaptic connectivity. In fact, during SWA and AW
different spatial extents of the cortical tissue are active in a given timespan
and at different firing rates, which implies a wide variety of loads of local
computation and communication. A balanced evaluation of simulation performance
and robustness should therefore include tests of a variety of cortical dynamic
states. Here, we demonstrate performance scaling of our proprietary Distributed
and Plastic Spiking Neural Networks (DPSNN) simulation engine in both SWA and
AW for bidimensional grids of neural populations, which reflects the modular
organization of the cortex. We explored networks up to 192x192 modules, each
composed of 1250 integrate-and-fire neurons with spike-frequency adaptation,
and exponentially decaying inter-modular synaptic connectivity with varying
spatial decay constant. For the largest networks the total number of synapses
was over 70 billion. The execution platform included up to 64 dual-socket
nodes, each socket mounting 8 Intel Xeon Haswell processor cores @ 2.40GHz
clock rates. Network initialization time, memory usage, and execution time
showed good scaling performances from 1 to 1024 processes, implemented using
the standard Message Passing Interface (MPI) protocol. We achieved simulation
speeds of between 2.3x10^9 and 4.1x10^9 synaptic events per second for both
cortical states in the explored range of inter-modular interconnections.
Comment: 22 pages, 9 figures, 4 tables
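A quick back-of-the-envelope check of the quoted scale (the per-neuron synapse count and the slowdown factor below are derived here, not reported in the abstract, and the 3 Hz mean firing rate is an assumed value for illustration):

```python
# Consistency check of the network scale quoted in the abstract.
modules = 192 * 192
neurons = modules * 1250              # integrate-and-fire neurons in total
synapses = 70e9                       # "over 70 billion" (lower bound)

syn_per_neuron = synapses / neurons   # average synapses per neuron (~1.5k)

# Assuming (hypothetically) a 3 Hz mean firing rate, one second of biological
# time generates rate * synapses synaptic events; dividing by the best
# reported throughput gives a rough wall-clock slowdown factor.
events_per_bio_second = 3.0 * synapses
slowdown = events_per_bio_second / 4.1e9
```

Under that assumed rate, one second of biological activity would take on the order of a minute of wall-clock time at the reported peak throughput.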
Nearly extensive sequential memory lifetime achieved by coupled nonlinear neurons
Many cognitive processes rely on the ability of the brain to hold sequences
of events in short-term memory. Recent studies have revealed that such memory
can be read out from the transient dynamics of a network of neurons. However,
the memory performance of such a network in buffering past information has only
been rigorously estimated in networks of linear neurons. When signal gain is
kept low, so that neurons operate primarily in the linear part of their
response nonlinearity, the memory lifetime is bounded by the square root of the
network size. In this work, I demonstrate that it is possible to achieve a
memory lifetime almost proportional to the network size, "an extensive memory
lifetime", when the nonlinearity of neurons is appropriately utilized. The
analysis of neural activity revealed that nonlinear dynamics prevented the
accumulation of noise by partially removing noise in each time step. With this
error-correcting mechanism, I demonstrate that a nearly extensive memory
lifetime can be achieved.
Comment: 21 pages, 5 figures; accepted for publication in Neural Computation
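The error-correcting intuition can be sketched with a toy model (the delay line, noise level, and threshold nonlinearity here are invented for illustration and are far simpler than the recurrent networks analyzed in the paper): a saturating nonlinearity re-thresholds the signal at every step, removing noise before it accumulates, whereas linear propagation lets the noise variance grow with the number of steps.

```python
import random

def relay(bits, steps, sigma, nonlinear, rng):
    """Pass each +/-1 bit through a chain of noisy units."""
    out = []
    for b in bits:
        x = float(b)
        for _ in range(steps):
            x += rng.gauss(0.0, sigma)       # noise injected at every unit
            if nonlinear:
                x = 1.0 if x >= 0 else -1.0  # saturating nonlinearity: re-threshold
        out.append(1 if x >= 0 else -1)
    return out

rng = random.Random(0)
bits = [rng.choice((-1, 1)) for _ in range(500)]
recovered_nl = relay(bits, steps=100, sigma=0.3, nonlinear=True, rng=rng)
recovered_lin = relay(bits, steps=100, sigma=0.3, nonlinear=False, rng=rng)

accuracy = lambda rec: sum(r == b for r, b in zip(rec, bits)) / len(bits)
```

With these (arbitrary) parameters the thresholded chain recovers well over 90% of the bits after 100 steps, while the linear chain degrades markedly toward chance.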
A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems
In this paper we present a methodological framework that meets novel
requirements emerging from upcoming types of accelerated and highly
configurable neuromorphic hardware systems. We describe in detail a device with
45 million programmable and dynamic synapses that is currently under
development, and we sketch the conceptual challenges that arise from taking
this platform into operation. More specifically, we aim at the establishment of
this neuromorphic system as a flexible and neuroscientifically valuable
modeling tool that can be used by non-hardware-experts. We consider various
functional aspects to be crucial for this purpose, and we introduce a
consistent workflow with detailed descriptions of all involved modules that
implement the suggested steps: The integration of the hardware interface into
the simulator-independent model description language PyNN; a fully automated
translation between the PyNN domain and appropriate hardware configurations; an
executable specification of the future neuromorphic system that can be
seamlessly integrated into this biology-to-hardware mapping process as a test
bench for all software layers and possible hardware design modifications; an
evaluation scheme that deploys models from a dedicated benchmark library,
compares the results generated by virtual or prototype hardware devices with
reference software simulations and analyzes the differences. The integration of
these components into one hardware-software workflow provides an ecosystem for
ongoing preparative studies that support the hardware design process and
represents the basis for the maturity of the model-to-hardware mapping
software. The functionality and flexibility of the latter is proven with a
variety of experimental results.
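As a cartoon of the biology-to-hardware mapping step described above (all names, numbers, and the dictionary format are invented for illustration; this is neither the PyNN interface nor the actual mapping software): a simulator-independent model description is placed onto hardware neuron indices and checked against the synapse budget.

```python
# Hypothetical, simplified stand-in for a model-to-hardware mapping step.
model = {
    "populations": [{"name": "exc", "size": 80, "cell": "IF_adapting"},
                    {"name": "inh", "size": 20, "cell": "IF_adapting"}],
    # all-to-all projections in this toy example
    "projections": [{"src": "exc", "dst": "inh", "weight": 0.5, "delay_ms": 1.0},
                    {"src": "inh", "dst": "exc", "weight": -1.0, "delay_ms": 1.0}],
}

def map_to_hardware(model, synapse_budget=45_000_000):
    """Assign contiguous hardware neuron index ranges and check resources."""
    placement, index = {}, 0
    for pop in model["populations"]:
        placement[pop["name"]] = (index, index + pop["size"])
        index += pop["size"]
    sizes = {p["name"]: p["size"] for p in model["populations"]}
    total = sum(sizes[pr["src"]] * sizes[pr["dst"]]
                for pr in model["projections"])
    if total > synapse_budget:
        raise ValueError("model exceeds the hardware synapse budget")
    return {"placement": placement, "total_synapses": total}

config = map_to_hardware(model)
```

A real flow, as the abstract describes, would additionally discretize weights and delays to the hardware's resolution and run the same description in a reference software simulator for comparison.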
Mechanisms of Zero-Lag Synchronization in Cortical Motifs
Zero-lag synchronization between distant cortical areas has been observed in
a diversity of experimental data sets and between many different regions of the
brain. Several computational mechanisms have been proposed to account for such
isochronous synchronization in the presence of long conduction delays: Of
these, the phenomenon of "dynamical relaying" - a mechanism that relies on a
specific network motif - has proven to be the most robust with respect to
parameter mismatch and system noise. Surprisingly, despite a contrary belief in
the community, the common driving motif is an unreliable means of establishing
zero-lag synchrony. Although dynamical relaying has been validated in empirical
and computational studies, the deeper dynamical mechanisms and comparison to
dynamics on other motifs is lacking. By systematically comparing
synchronization on a variety of small motifs, we establish that the presence of
a single reciprocally connected pair - a "resonance pair" - plays a crucial
role in disambiguating those motifs that foster zero-lag synchrony in the
presence of conduction delays (such as dynamical relaying) from those that do
not (such as the common driving triad). Remarkably, minor structural changes to
the common driving motif that incorporate a reciprocal pair recover robust
zero-lag synchrony. The findings are observed in computational models of
spiking neurons, populations of spiking neurons and neural mass models, and
arise whether the oscillatory systems are periodic, chaotic, noise-free or
driven by stochastic inputs. The influence of the resonance pair is also robust
to parameter mismatch and asymmetrical time delays amongst the elements of the
motif. We call this manner of facilitating zero-lag synchrony resonance-induced
synchronization, outline the conditions for its occurrence, and propose that it
may be a general mechanism to promote zero-lag synchrony in the brain.
Comment: 41 pages, 12 figures, and 11 supplementary figures
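A minimal sketch of why a reciprocally connected pair can lock at zero lag despite a conduction delay (delay-coupled phase oscillators with invented parameters, far simpler than the spiking, population, and neural-mass models studied here): each unit is attracted toward its partner's delayed phase, and for small delay the symmetric in-phase state is stable, so the pair converges to zero phase difference.

```python
import math

def delay_coupled_pair(omega=1.0, K=0.8, tau=0.1, dt=0.001, T=50.0,
                       init=(0.0, 2.0)):
    """Two reciprocally delay-coupled phase oscillators (Euler integration).
    Returns the wrapped phase difference at the end of the run."""
    d = int(round(tau / dt))
    # history buffers; phases held at their initial values for t <= 0
    th = [[init[0]] * (d + 1), [init[1]] * (d + 1)]
    for _ in range(int(round(T / dt))):
        x0, x1 = th[0][-1], th[1][-1]                 # current phases
        x0d, x1d = th[0][-(d + 1)], th[1][-(d + 1)]   # phases tau ago
        th[0].append(x0 + dt * (omega + K * math.sin(x1d - x0)))
        th[1].append(x1 + dt * (omega + K * math.sin(x0d - x1)))
    diff = th[0][-1] - th[1][-1]
    return math.atan2(math.sin(diff), math.cos(diff))  # wrap to (-pi, pi]

locked = delay_coupled_pair()          # reciprocal coupling: zero-lag lock
uncoupled = delay_coupled_pair(K=0.0)  # control: initial offset persists
```

The coupled pair ends at essentially zero phase difference, while the uncoupled control retains its initial offset; a unidirectional common drive, by contrast, provides no such mutual correction, consistent with the unreliability of the common driving motif noted above.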
Feed-Forward Propagation of Temporal and Rate Information between Cortical Populations during Coherent Activation in Engineered In Vitro Networks.
Transient propagation of information across neuronal assemblies is thought to underlie many cognitive processes. However, the nature of the neural code embedded within these transmissions remains uncertain. Much of our understanding of how information is transmitted among these assemblies has been derived from computational models. While these models have been instrumental in understanding these processes, they often make simplifying assumptions about the biophysical properties of neurons that may influence the nature of the code expressed. To address this issue we created an in vitro analog of a feed-forward network composed of two small populations (also referred to as assemblies or layers) of living dissociated rat cortical neurons. The populations were separated by, and communicated through, a microelectromechanical systems (MEMS) device containing a strip of microscale tunnels. Culturing one population in the first layer and the second a few days later induced the unidirectional growth of axons through the microtunnels, resulting in primarily feed-forward communication between the two small neural populations. In this study we systematically manipulated the number of tunnels connecting the layers and hence the number of axons providing communication between those populations. We then assessed the effect that reducing the number of tunnels has on between-layer communication capacity and on the fidelity of neural transmission among spike trains transmitted across and within layers. Based on Victor-Purpura's and van Rossum's spike-train similarity metrics, we show evidence for both rate and temporal information embedded within these transmissions, whose fidelity increased during communication both between and within layers as the number of tunnels was increased.
We also provide evidence reinforcing the role of synchronized activity in transmission fidelity during the spontaneous synchronized network burst events that propagated between layers, and highlight the potential of these MEMS devices as a tool for further investigation of structural and functional dynamics among neural populations.
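The Victor-Purpura metric used in this study has a compact dynamic-programming form (a standard implementation sketch: deleting or inserting a spike costs 1, and shifting a spike by dt costs q*|dt|, where the cost parameter q has units of 1/time and spike times are in the corresponding time units):

```python
def victor_purpura(t1, t2, q):
    """Victor-Purpura distance between two sorted spike trains t1 and t2:
    the minimal total cost of transforming one train into the other."""
    n, m = len(t1), len(t2)
    # G[i][j]: distance between the first i spikes of t1 and first j of t2
    G = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        G[i][0] = float(i)            # delete all i spikes
    for j in range(1, m + 1):
        G[0][j] = float(j)            # insert all j spikes
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i][j] = min(G[i - 1][j] + 1,                      # delete spike
                          G[i][j - 1] + 1,                      # insert spike
                          G[i - 1][j - 1]
                          + q * abs(t1[i - 1] - t2[j - 1]))     # shift spike
    return G[n][m]
```

At q = 0 the metric reduces to a pure rate comparison (the difference in spike counts), while large q makes it sensitive to precise spike timing, which is exactly the rate-versus-temporal axis the study probes.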
Model-free reconstruction of neuronal network connectivity from calcium imaging signals
A systematic assessment of global neural network connectivity through direct
electrophysiological assays has remained technically unfeasible even in
dissociated neuronal cultures. We introduce an improved algorithmic approach
based on Transfer Entropy to reconstruct approximations to network structural
connectivities from network activity monitored through calcium fluorescence
imaging. Based on information theory, our method requires no prior assumptions
on the statistics of neuronal firing and neuronal connections. The performance
of our algorithm is benchmarked on surrogate time-series of calcium
fluorescence generated by the simulated dynamics of a network with known
ground-truth topology. We find that the effective network topology revealed by
Transfer Entropy depends qualitatively on the time-dependent dynamic state of
the network (e.g., bursting or non-bursting). We thus demonstrate how
conditioning with respect to the global mean activity improves the performance
of our method. [...] Compared to other reconstruction strategies such as
cross-correlation or Granger Causality methods, our method based on improved
Transfer Entropy is remarkably more accurate. In particular, it provides a good
reconstruction of the network clustering coefficient, allowing us to
discriminate between weakly and strongly clustered topologies, whereas an
approach based on cross-correlations would invariably detect artificially high
levels of clustering. Finally, we show the applicability of our method to
real recordings of in vitro cortical cultures. We demonstrate that these
networks are characterized by an elevated level of clustering compared to a
random graph (although not extreme) and by a markedly non-local connectivity.Comment: 54 pages, 8 figures (+9 supplementary figures), 1 table; submitted
for publicatio
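A minimal sketch of the Transfer Entropy estimator at the core of the approach (binary symbols, history length 1; the paper's actual method adds the generalizations and state-conditioning described above, which this sketch omits):

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    """TE(X -> Y) in bits for equal-length binary sequences, history 1:
    sum over (y_next, y, x) of p(y_next, y, x)
      * log2[ p(y_next | y, x) / p(y_next | y) ]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_next, y_prev, x_prev)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles_y = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (yn, yp, xp), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(yp, xp)]
        p_cond_y = pairs_yy[(yn, yp)] / singles_y[yp]
        te += p_joint * math.log2(p_cond_full / p_cond_y)
    return te

# Synthetic check: x drives y with a one-step lag, so the estimator should
# report strong transfer in the forward direction and almost none backward.
rng = random.Random(1)
x = [rng.randint(0, 1) for _ in range(5000)]
y = [0] + x[:-1]                      # y copies x with lag 1
forward = transfer_entropy(x, y)      # large (close to 1 bit)
backward = transfer_entropy(y, x)     # near zero
```

The asymmetry between the two directions is what lets the method infer directed connectivity where symmetric measures such as cross-correlation cannot.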