Wild oscillations in a nonlinear neuron model with resets: (II) Mixed-mode oscillations
This work continues the analysis of complex dynamics in a class of
bidimensional nonlinear hybrid dynamical systems with resets modeling neuronal
voltage dynamics with adaptation and spike emission. We show that these models
can generically display a form of mixed-mode oscillations (MMOs), which are
trajectories featuring an alternation of small oscillations with spikes or
bursts (multiple consecutive spikes). The mechanism by which these are
generated relies fundamentally on the hybrid structure of the flow: invariant
manifolds of the continuous dynamics govern small oscillations, while discrete
resets govern the emission of spikes or bursts, contrasting with classical MMO
mechanisms in ordinary differential equations involving more than three
dimensions and generally relying on a timescale separation. The decomposition
of mechanisms reveals the geometrical origin of MMOs, allowing a relatively
simple classification of points on the reset manifold associated with specific
numbers of small oscillations. We show that the MMO pattern can be described
through the study of orbits of a discrete adaptation map, which is singular as
it features discrete discontinuities with unbounded left- and
right-derivatives. We study orbits of the map via rotation theory for
discontinuous circle maps and elucidate in detail complex behaviors arising in
the case where MMOs display at most one small oscillation between each
consecutive pair of spikes.
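The role of the adaptation map can be illustrated with a minimal numerical sketch: integrate the subthreshold flow from the reset point to the next spike, apply the reset, and record the successive values of the adaptation variable. The quadratic nonlinearity and all parameter values below are illustrative assumptions, not the paper's model:

```python
# Hybrid quadratic integrate-and-fire neuron with adaptation (illustrative):
#   dv/dt = v^2 + I - w,   dw/dt = a*(b*v - w)
# spike when v >= V_CUT, then reset: v -> V_R, w -> w + D.
# The adaptation map sends the post-reset w to the next post-reset w.

A, B, I = 0.05, 1.0, 0.3      # chosen so b^2 < 4I: no rest state, spiking is certain
V_R, V_CUT, D = 0.0, 50.0, 0.2
DT, T_MAX = 1e-3, 500.0       # T_MAX guards against a numerically stuck trajectory

def adaptation_map(w):
    """One iterate: integrate from the reset point until the next spike."""
    v, t = V_R, 0.0
    while v < V_CUT and t < T_MAX:
        dv = v * v + I - w
        dw = A * (B * v - w)
        v += DT * dv
        w += DT * dw
        t += DT
    return w + D              # spike-triggered adaptation jump

w = 0.0
orbit = [w]
for _ in range(5):
    w = adaptation_map(w)
    orbit.append(w)
print(orbit)                  # successive post-reset adaptation values
```

Iterating this one-dimensional map, rather than the full planar flow, is what makes the classification of MMO patterns tractable in the paper.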
Scaling of a large-scale simulation of synchronous slow-wave and asynchronous awake-like activity of a cortical model with long-range interconnections
Cortical synapse organization supports a range of dynamic states on multiple
spatial and temporal scales, from synchronous slow wave activity (SWA),
characteristic of deep sleep or anesthesia, to fluctuating, asynchronous
activity during wakefulness (AW). Such dynamic diversity poses a challenge for
producing efficient large-scale simulations that embody realistic metaphors of
short- and long-range synaptic connectivity. In fact, during SWA and AW
different spatial extents of the cortical tissue are active in a given timespan
and at different firing rates, which implies a wide variety of loads of local
computation and communication. A balanced evaluation of simulation performance
and robustness should therefore include tests of a variety of cortical dynamic
states. Here, we demonstrate performance scaling of our proprietary Distributed
and Plastic Spiking Neural Networks (DPSNN) simulation engine in both SWA and
AW for bidimensional grids of neural populations, which reflect the modular
organization of the cortex. We explored networks up to 192x192 modules, each
composed of 1250 integrate-and-fire neurons with spike-frequency adaptation,
and exponentially decaying inter-modular synaptic connectivity with varying
spatial decay constant. For the largest networks the total number of synapses
was over 70 billion. The execution platform included up to 64 dual-socket
nodes, each socket mounting 8 Intel Xeon Haswell processor cores at a 2.40 GHz
clock rate. Network initialization time, memory usage, and execution time
showed good scaling performance from 1 to 1024 processes, implemented using
the standard Message Passing Interface (MPI) protocol. We achieved simulation
speeds between 2.3x10^9 and 4.1x10^9 synaptic events per second for both
cortical states in the explored range of inter-modular interconnections.
Comment: 22 pages, 9 figures, 4 tables
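The headline figures of the largest configuration can be checked with a quick back-of-envelope calculation (the synapses-per-neuron value is inferred from the quoted totals, not stated in the abstract):

```python
# Back-of-envelope check of the largest DPSNN configuration quoted above.
modules = 192 * 192                 # bidimensional grid of neural populations
neurons_per_module = 1250           # adaptive integrate-and-fire neurons each
neurons = modules * neurons_per_module
total_synapses = 70e9               # "over 70 billion"
syn_per_neuron = total_synapses / neurons

# Throughput: 2.3e9 .. 4.1e9 synaptic events per second over 1024 processes.
events_per_proc = 2.3e9 / 1024

print(f"{neurons:,} neurons")                    # 46,080,000 neurons
print(f"~{syn_per_neuron:.0f} synapses/neuron")  # ~1519 synapses/neuron
print(f"~{events_per_proc:.2e} events/s/process")
```

The roughly 1500 synapses per neuron is consistent with the sparse, exponentially decaying inter-modular connectivity the abstract describes.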
Sensitivity to the cutoff value in the quadratic adaptive integrate-and-fire model
The quadratic adaptive integrate-and-fire model (Izhikevich 2003, 2007) is
valued for its computational efficiency and its ability
to reproduce many behaviors observed in cortical neurons. For this reason it is
currently widely used, in particular for large scale simulations of neural
networks. This model emulates the dynamics of the membrane potential of a
neuron together with an adaptation variable. The subthreshold dynamics is
governed by a two-parameter differential equation, and a spike is emitted when
the membrane potential variable reaches a given cutoff value. Subsequently the
membrane potential is reset, and a fixed value, called the spike-triggered
adaptation parameter, is added to the adaptation variable. We show in this note that when
the system does not converge to an equilibrium point, both variables of the
subthreshold dynamical system blow up in finite time whatever the parameters of
the dynamics. The cutoff is therefore essential for the model to be well
defined and simulated. The divergence of the adaptation variable makes the
system very sensitive to the cutoff: changing this parameter dramatically
changes the spike patterns produced. Furthermore from a computational
viewpoint, the fact that the adaptation variable blows up and the very sharp
slope it has when the spike is emitted implies that the time step of the
numerical simulation needs to be very small (or adaptive) in order to catch an
accurate value of the adaptation at the time of the spike. This is not the case
for the similar quartic (Touboul 2008) and exponential (Brette and Gerstner
2005) models, whose adaptation variable does not blow up in finite time and
which are therefore very robust to changes in the cutoff value.
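The sensitivity described above is easy to reproduce in a dimensionless quadratic adaptive model: integrate the spike upstroke with an adaptive time step and record the adaptation variable at the instant the cutoff is crossed. Parameters below are illustrative, not taken from any published fit:

```python
# Dimensionless quadratic adaptive model on the spike upstroke:
#   dv/dt = v^2 + I - w,   dw/dt = a*(b*v - w)
# Both variables blow up in finite time, so the adaptation value
# recorded at the spike depends strongly on the chosen cutoff.

def w_at_cutoff(v_cut, a=1.0, b=3.0, i_ext=1.0):
    """Integrate from a suprathreshold state until v crosses v_cut."""
    v, w = 3.0, 0.0                   # start past the unstable rest state
    while v < v_cut:
        dv = v * v + i_ext - w
        dw = a * (b * v - w)
        dt = 1e-3 / (1.0 + abs(dv))   # adaptive step: shrinks as v blows up
        v += dt * dv
        w += dt * dw
    return w

w_30, w_300 = w_at_cutoff(30.0), w_at_cutoff(300.0)
print(w_30, w_300)   # the recorded adaptation keeps growing with the cutoff
```

Raising the cutoff tenfold shifts the recorded adaptation by roughly a*b*ln(10), which is exactly why spike patterns change dramatically with this parameter.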
Threshold Curve for the Excitability of Bidimensional Spiking Neurons
We shed light on the threshold for spike initiation in two-dimensional neuron models. A threshold criterion that depends on both the membrane voltage and the recovery variable is proposed. This approach provides a simple and unified framework that accounts for numerous voltage-threshold properties, including adaptation, variability and time-dependent dynamics. In addition, neural features such as accommodation, inhibition-induced spikes, and post-inhibitory (or post-excitatory) facilitation are direct consequences of the existence of a threshold curve. Implications for neural modeling are also discussed.
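A threshold curve of this kind can be traced numerically by bisecting, for each initial value of the recovery variable, the boundary between spiking and non-spiking initial voltages. The sketch below uses an exponential integrate-and-fire subthreshold flow with a passively decaying recovery variable; the model and all parameters are illustrative assumptions, not the paper's:

```python
import math

# Subthreshold flow (illustrative exponential integrate-and-fire):
#   dv/dt = -v + exp(v - 2) - w,   dw/dt = -a*w
# A trajectory "spikes" if v escapes past V_SPIKE within T_MAX.

A, V_SPIKE, DT, T_MAX = 0.2, 20.0, 0.01, 50.0

def spikes(v0, w0):
    v, w, t = v0, w0, 0.0
    while t < T_MAX:
        if v >= V_SPIKE:              # checked before exp() to avoid overflow
            return True
        v += DT * (-v + math.exp(v - 2.0) - w)
        w += DT * (-A * w)
        t += DT
    return False

def threshold_voltage(w0, lo=0.0, hi=10.0):
    """Bisect the spike/no-spike boundary in v at fixed initial w."""
    for _ in range(30):
        mid = 0.5 * (lo + hi)
        if spikes(mid, w0):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

for w0 in (0.0, 1.0, 2.0):
    print(w0, threshold_voltage(w0))  # threshold rises with the recovery variable
```

Because a larger recovery variable uniformly lowers dv/dt, the bisected threshold increases monotonically with w0, which is the "threshold curve" picture rather than a single threshold voltage.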
A Markovian event-based framework for stochastic spiking neural networks
In spiking neural networks, information is conveyed by the spike times, which
depend on the intrinsic dynamics of each neuron, the input it receives,
and the connections between neurons. In this article we study the Markovian
nature of the sequence of spike times in stochastic neural networks, and in
particular the ability to deduce from a spike train the next spike time, and
therefore to produce a description of the network activity based only on the spike
times, regardless of the membrane potential process.
To study this question in a rigorous manner, we introduce and study an
event-based description of networks of noisy integrate-and-fire neurons, i.e.
one based on the computation of the spike times. We show that the firing
times of the neurons in the networks constitute a Markov chain, whose
transition probability is related to the probability distribution of the
interspike interval of the neurons in the network. Where the Markovian model
can be developed, the transition probability is derived explicitly for
classical neural-network settings: linear integrate-and-fire neuron models
with excitatory and inhibitory interactions and different types of synapses,
possibly featuring noisy synaptic integration, transmission delays, and
absolute and relative refractory periods. This covers most of the cases that
have been investigated in the event-based description of deterministic
spiking neural networks.
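The Markov-chain structure can be illustrated with a hypothetical toy network in which each neuron's time to its next spike is exponentially distributed with a state-dependent rate (the paper treats integrate-and-fire dynamics; the Poisson-like rates and all numbers here are simplifying assumptions):

```python
import math, random

# Event-based simulation of a small stochastic network.  The state after
# each spike (the vector of membrane variables) determines the law of the
# next spike time, so the sequence of (time, neuron) events is Markov.

random.seed(1)
W = [[0.0, 0.4, -0.2],
     [0.3, 0.0, 0.4],
     [-0.1, 0.2, 0.0]]            # synaptic weights (hypothetical)
v = [0.5, 0.5, 0.5]               # membrane variables after the last spike
LEAK = 0.5                        # relaxation of v between events

def rate(x):
    return math.exp(x)            # instantaneous firing rate vs. membrane state

t, events = 0.0, []
for _ in range(20):
    # sample each neuron's candidate next spike time; the minimum wins
    waits = [random.expovariate(rate(x)) for x in v]
    j = min(range(3), key=lambda i: waits[i])
    t += waits[j]
    # transition: relax all v toward 0, then apply neuron j's synapses
    decay = math.exp(-LEAK * waits[j])
    v = [x * decay + W[j][i] for i, x in enumerate(v)]
    v[j] = 0.0                    # reset the spiking neuron
    events.append((t, j))
print(events[:5])
```

The loop never discretizes time: it jumps directly from spike to spike, which is the computational payoff of the event-based, Markovian description.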
Synchronization of electrically coupled resonate-and-fire neurons
Electrical coupling between neurons is broadly present across brain areas and
is typically assumed to synchronize network activity. However, intrinsic
properties of the coupled cells can complicate this simple picture. Many cell
types with strong electrical coupling have been shown to exhibit resonant
properties, and the subthreshold fluctuations arising from resonance are
transmitted through electrical synapses in addition to action potentials. Using
the theory of weakly coupled oscillators, we explore the effect of both
subthreshold and spike-mediated coupling on synchrony in small networks of
electrically coupled resonate-and-fire neurons, a hybrid neuron model with
linear subthreshold dynamics and discrete post-spike reset. We calculate the
phase response curve using an extension of the adjoint method that accounts for
the discontinuity in the dynamics. We find that both spikes and resonant
subthreshold fluctuations can jointly promote synchronization. The subthreshold
contribution is strongest when the voltage exhibits a significant post-spike
elevation, or plateau. Additionally, we show that the geometry of
trajectories approaching the spiking threshold causes a "reset-induced shear"
effect that can oppose synchrony in the presence of network asymmetry, despite
having no effect on the phase-locking of symmetrically coupled pairs.
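The phase response curve can also be estimated by direct perturbation, without the adjoint machinery: kick a periodically firing resonate-and-fire neuron at different phases and measure the shift of the next spike time. All parameters below are illustrative:

```python
# Resonate-and-fire neuron: linear focus dynamics
#   dx/dt = b*x - om*y,   dy/dt = om*x + b*y
# spike when y >= Y_TH, then reset (x, y) -> (X_R, Y_R).
# With b > 0 the subthreshold spiral grows, giving periodic firing.

B, OM, DT = 0.1, 1.0, 1e-3
X_R, Y_R, Y_TH = 0.0, -0.5, 1.0

def time_to_spike(x, y, kick_time=None, kick=0.0):
    """Integrate from a given state; optionally kick x once at kick_time."""
    t = 0.0
    while y < Y_TH:
        if kick_time is not None and kick_time <= t < kick_time + DT:
            x += kick
        dx = B * x - OM * y
        dy = OM * x + B * y
        x += DT * dx
        y += DT * dy
        t += DT
    return t

T0 = time_to_spike(X_R, Y_R)                      # unperturbed period
phases = [0.1 * T0 * k for k in range(1, 10)]
prc = [T0 - time_to_spike(X_R, Y_R, kick_time=p, kick=0.05) for p in phases]
print(T0, prc)   # advances and delays alternate: a type II (resonator) PRC
```

Because the subthreshold spiral rotates, the same kick advances the spike at some phases and delays it at others, the hallmark of resonator phase response that the weakly-coupled-oscillator analysis exploits.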
The Brain on Low Power Architectures - Efficient Simulation of Cortical Slow Waves and Asynchronous States
Efficient brain simulation is a scientific grand challenge, a
parallel/distributed coding challenge and a source of requirements and
suggestions for future computing architectures. Indeed, the human brain
includes about 10^15 synapses and 10^11 neurons activated at a mean rate of
several Hz. Full brain simulation poses Exascale challenges even if simulated
at the highest abstraction level. The WaveScalES experiment in the Human Brain
Project (HBP) has the goal of matching experimental measures and simulations of
slow waves during deep-sleep and anesthesia and the transition to other brain
states. The focus is the development of dedicated large-scale
parallel/distributed simulation technologies. The ExaNeSt project designs an
ARM-based, low-power HPC architecture scalable to millions of cores, developing
a dedicated scalable interconnect system, and SWA/AW simulations are included
among the driving benchmarks. At the junction of the two projects is the INFN
proprietary Distributed and Plastic Spiking Neural Networks (DPSNN) simulation
engine. DPSNN can be configured to stress either the networking or the
computation features available on the execution platforms. The simulation
stresses the networking component when the neural net - composed of a
relatively small number of neurons, each projecting thousands of synapses -
is distributed over a large number of hardware cores. As the number of
neurons per core grows, computation becomes the dominant component for
short-range connections. This paper reports preliminary performance
results obtained on an ARM-based HPC prototype developed in the framework of
the ExaNeSt project. Furthermore, a comparison is given of instantaneous power,
total energy consumption, execution time and energetic cost per synaptic event
of SWA/AW DPSNN simulations when executed on either ARM- or Intel-based server
platforms.