    On the simulation of nonlinear bidimensional spiking neuron models

    Bidimensional spiking models currently attract considerable attention for their simplicity and their ability to reproduce various spiking patterns of cortical neurons, and are widely used for large network simulations. These models describe the dynamics of the membrane potential by a nonlinear differential equation that blows up in finite time, coupled to a second equation for adaptation. Spikes are emitted when the membrane potential blows up or reaches a cutoff value. The precise simulation of the spike times and of the adaptation variable is critical, for it governs the spike pattern produced, and is hard to compute accurately because of the exploding nature of the system at the spike times. We thoroughly study the precision of fixed time-step integration schemes for this type of model and demonstrate that these methods produce systematic errors in the evaluation of two crucial quantities, the spike time and the value of the adaptation variable at that time, errors that grow unboundedly as the cutoff value is increased. Precise evaluation of these quantities therefore requires very small time steps and long simulation times. In order to achieve a fixed absolute precision in a reasonable computational time, we propose here a new algorithm to simulate these systems, based on a variable integration step method that either integrates the original ordinary differential equation or the equation of the orbits in the phase plane, and we compare this algorithm with the fixed time-step Euler scheme and other more accurate simulation algorithms.
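
    As a concrete illustration of the fixed-step error, the sketch below integrates one member of this model class, the quadratic adaptive (Izhikevich) model, with forward Euler; the parameters, drive, and cutoff are illustrative choices, not taken from the paper. Shrinking the step shifts the estimated spike time, showing that the fixed-step result has not converged near the blow-up.

```python
def izhikevich_spike_time(dt, v_cut=30.0):
    """Forward-Euler integration of the quadratic adaptive model until the
    membrane potential v first reaches the cutoff; returns that time (ms)."""
    a, b, I = 0.02, 0.2, 10.0          # regular-spiking parameters, constant drive
    v, u, t = -65.0, -13.0, 0.0        # u = b * v at rest
    while v < v_cut:
        dv = 0.04 * v * v + 5.0 * v + 140.0 - u + I
        du = a * (b * v - u)
        v, u, t = v + dt * dv, u + dt * du, t + dt
    return t

# The estimated spike time moves as the step shrinks: the fixed-step
# scheme has a systematic error near the blow-up at the spike.
t_coarse = izhikevich_spike_time(0.1)
t_fine = izhikevich_spike_time(0.001)
```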

    Scaling of a large-scale simulation of synchronous slow-wave and asynchronous awake-like activity of a cortical model with long-range interconnections

    Cortical synapse organization supports a range of dynamic states on multiple spatial and temporal scales, from synchronous slow wave activity (SWA), characteristic of deep sleep or anesthesia, to fluctuating, asynchronous activity during wakefulness (AW). Such dynamic diversity poses a challenge for producing efficient large-scale simulations that embody realistic metaphors of short- and long-range synaptic connectivity. Indeed, during SWA and AW, different spatial extents of the cortical tissue are active in a given timespan and at different firing rates, which implies a wide variety of local computation and communication loads. A balanced evaluation of simulation performance and robustness should therefore include tests over a variety of cortical dynamic states. Here, we demonstrate performance scaling of our proprietary Distributed and Plastic Spiking Neural Networks (DPSNN) simulation engine in both SWA and AW for bidimensional grids of neural populations, reflecting the modular organization of the cortex. We explored networks of up to 192x192 modules, each composed of 1250 integrate-and-fire neurons with spike-frequency adaptation, and exponentially decaying inter-modular synaptic connectivity with varying spatial decay constant. For the largest networks the total number of synapses was over 70 billion. The execution platform included up to 64 dual-socket nodes, each socket mounting 8 Intel Xeon Haswell processor cores at a 2.40 GHz clock rate. Network initialization time, memory usage, and execution time showed good scaling performance from 1 to 1024 processes, implemented using the standard Message Passing Interface (MPI) protocol. We achieved simulation speeds between 2.3x10^9 and 4.1x10^9 synaptic events per second for both cortical states in the explored range of inter-modular interconnections. Comment: 22 pages, 9 figures, 4 tables.
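
    The headline figures above can be cross-checked with simple arithmetic; the per-neuron fan-in and per-process throughput below are derived quantities, not numbers reported in the abstract.

```python
# Back-of-envelope check of the figures quoted in the abstract.
modules = 192 * 192                     # largest bidimensional grid explored
neurons_per_module = 1250
neurons = modules * neurons_per_module  # ~46.1 million neurons
total_synapses = 70e9                   # "over 70 billion" (a lower bound)

fan_in = total_synapses / neurons       # average synapses per neuron (derived)
events_per_proc = 4.1e9 / 1024          # peak synaptic events/s per MPI process
```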

    Sensitivity to the cutoff value in the quadratic adaptive integrate-and-fire model

    The quadratic adaptive integrate-and-fire model (Izhikevich 2003, 2007) is recognized as very interesting for its computational efficiency and its ability to reproduce many behaviors observed in cortical neurons. For this reason it is currently widely used, in particular for large-scale simulations of neural networks. This model emulates the dynamics of the membrane potential of a neuron together with an adaptation variable. The subthreshold dynamics is governed by a two-parameter differential equation, and a spike is emitted when the membrane potential variable reaches a given cutoff value. Subsequently the membrane potential is reset, and a fixed value, called the spike-triggered adaptation parameter, is added to the adaptation variable. We show in this note that when the system does not converge to an equilibrium point, both variables of the subthreshold dynamical system blow up in finite time, regardless of the parameters of the dynamics. The cutoff is therefore essential for the model to be well defined and simulated. The divergence of the adaptation variable makes the system very sensitive to the cutoff: changing this parameter dramatically changes the spike patterns produced. Furthermore, from a computational viewpoint, the fact that the adaptation variable blows up, and the very sharp slope it has when the spike is emitted, implies that the time step of the numerical simulation needs to be very small (or adaptive) in order to catch an accurate value of the adaptation at the time of the spike. This is not the case for the similar quartic (Touboul 2008) and exponential (Brette and Gerstner 2005) models, whose adaptation variable does not blow up in finite time and which are therefore very robust to changes in the cutoff value.
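
    The cutoff sensitivity can be seen directly by recording the adaptation variable at the moment the membrane potential crosses different cutoffs; the sketch below uses the standard Izhikevich parameterization with illustrative values and a small fixed Euler step.

```python
def adaptation_at_cutoff(v_cut, dt=0.001):
    """Integrate the quadratic adaptive (Izhikevich) model with a small
    Euler step and return the adaptation value u when v first reaches
    the cutoff (illustrative parameters, not the paper's)."""
    a, b, I = 0.02, 0.2, 10.0
    v, u = -65.0, -13.0
    while v < v_cut:
        dv = 0.04 * v * v + 5.0 * v + 140.0 - u + I
        du = a * (b * v - u)
        v, u = v + dt * dv, u + dt * du
    return u

# Raising the cutoff changes the recorded adaptation value: u keeps
# growing as v blows up, which is the sensitivity discussed above.
u_30 = adaptation_at_cutoff(30.0)
u_300 = adaptation_at_cutoff(300.0)
```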

    Wild oscillations in a nonlinear neuron model with resets: (II) Mixed-mode oscillations

    This work continues the analysis of complex dynamics in a class of bidimensional nonlinear hybrid dynamical systems with resets, modeling neuronal voltage dynamics with adaptation and spike emission. We show that these models can generically display a form of mixed-mode oscillations (MMOs), which are trajectories featuring an alternation of small oscillations with spikes or bursts (multiple consecutive spikes). The mechanism by which these are generated relies fundamentally on the hybrid structure of the flow: invariant manifolds of the continuous dynamics govern small oscillations, while discrete resets govern the emission of spikes or bursts, contrasting with classical MMO mechanisms in ordinary differential equations, which involve more than three dimensions and generally rely on a timescale separation. The decomposition of mechanisms reveals the geometrical origin of MMOs, allowing a relatively simple classification of points on the reset manifold associated with specific numbers of small oscillations. We show that the MMO pattern can be described through the study of orbits of a discrete adaptation map, which is singular in that it features discontinuities with unbounded left- and right-derivatives. We study orbits of the map via rotation theory for discontinuous circle maps and elucidate in detail the complex behaviors arising in the case where MMOs display at most one small oscillation between each consecutive pair of spikes.
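
    The rotation-theory machinery invoked above rests on the rotation number of a circle map; the sketch below estimates it numerically for the classical Arnold standard family, used here only as a stand-in for the paper's (discontinuous) adaptation map.

```python
import math

def rotation_number(lift, x0=0.0, n=20000):
    """Estimate rho = lim (F^n(x0) - x0) / n for a lift F of a circle map."""
    x = x0
    for _ in range(n):
        x = lift(x)
    return (x - x0) / n

# Arnold standard family with k < 1: an orientation-preserving circle
# homeomorphism, the classical setting where the rotation number exists.
omega, k = 0.3, 0.9
F = lambda x: x + omega + (k / (2 * math.pi)) * math.sin(2 * math.pi * x)
rho = rotation_number(F)
```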

    Threshold Curve for the Excitability of Bidimensional Spiking Neurons

    We shed light on the threshold for spike initiation in two-dimensional neuron models. A threshold criterion that depends on both the membrane voltage and the recovery variable is proposed. This approach provides a simple and unified framework that accounts for numerous voltage-threshold properties, including adaptation, variability, and time-dependent dynamics. In addition, neural features such as accommodation, inhibition-induced spiking, and post-inhibitory (or post-excitatory) facilitation are direct consequences of the existence of a threshold curve. Implications for neural modeling are also discussed.
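
    A threshold curve of this kind can be computed numerically in any two-dimensional model; the sketch below uses FitzHugh-Nagumo as a generic stand-in (not a model from the paper) and bisects on the initial voltage separately for each value of the recovery variable, so the resulting threshold is a curve in the phase plane rather than a single voltage.

```python
def spikes(v0, w0, eps=0.08, a=0.7, b=0.8, dt=0.01, T=100.0):
    """Integrate FitzHugh-Nagumo from (v0, w0) and report whether the
    trajectory makes a large excursion (here: v exceeding 1.0)."""
    v, w = v0, w0
    for _ in range(int(T / dt)):
        dv = v - v**3 / 3 - w
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
        if v > 1.0:
            return True
    return False

def threshold_v(w0, lo=-1.5, hi=1.0, iters=24):
    """Bisect on the initial voltage: below the returned value the
    trajectory relaxes to rest, above it the cell spikes."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if spikes(mid, w0):
            hi = mid
        else:
            lo = mid
    return hi

# The threshold voltage depends on the recovery variable w0.
curve = [(w0, threshold_v(w0)) for w0 in (-0.6, -0.5, -0.4)]
```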

    A Markovian event-based framework for stochastic spiking neural networks

    In spiking neural networks, the information is conveyed by the spike times, which depend on the intrinsic dynamics of each neuron, the input it receives, and the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks, and in particular the ability to deduce the next spike time from a spike train, and therefore to produce a description of the network activity based only on the spike times, regardless of the membrane potential process. To study this question in a rigorous manner, we introduce and study an event-based description of networks of noisy integrate-and-fire neurons, i.e., one based on the computation of the spike times. We show that the firing times of the neurons in the network constitute a Markov chain, whose transition probability is related to the probability distribution of the interspike interval of the neurons in the network. In the cases where the Markovian model can be developed, the transition probability is explicitly derived for such classical neural network models as the linear integrate-and-fire neuron with excitatory and inhibitory interactions, for different types of synapses, possibly featuring noisy synaptic integration, transmission delays, and absolute and relative refractory periods. This covers most of the cases that have been investigated in the event-based description of spiking deterministic neural networks.
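
    One case where such a transition kernel is known in closed form: for a perfect integrator driven by white noise with positive drift, the first-passage time to threshold is inverse Gaussian (Wald) distributed, so successive spike times can be generated directly from the ISI density without simulating the membrane potential. A minimal sketch under these assumptions (parameters illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Perfect integrate-and-fire neuron driven by white noise with drift mu
# and noise sigma: the first-passage time to threshold theta is inverse
# Gaussian, with mean theta/mu and shape parameter (theta/sigma)^2.
mu, sigma, theta = 1.0, 0.5, 10.0
mean_isi = theta / mu                  # 10.0
shape = (theta / sigma) ** 2           # 400.0

isis = rng.wald(mean_isi, shape, size=100_000)
spike_times = np.cumsum(isis)          # event-based spike train, no voltage trace
```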

    Bayesian Integration in a Spiking Neural System for Sensorimotor Control

    The brain continuously estimates the state of body and environment, with specific regions thought to act as a Bayesian estimator, optimally integrating noisy and delayed sensory feedback with sensory predictions generated by the cerebellum. In control theory, Bayesian estimators are usually implemented using high-level representations. In this work, we designed a new spike-based computational model of a Bayesian estimator. The state estimator receives spiking activity from two neural populations encoding the sensory feedback and the cerebellar prediction, and it continuously computes the spike variability within each population as a reliability index of the signal these populations encode. The state estimator output encodes the current state estimate. We simulated a reaching task at different stages of cerebellar learning. The activity of the sensory feedback neurons encoded a noisy version of the trajectory after actual movement, with an almost constant intrapopulation spiking variability. Conversely, the activity of the cerebellar output neurons depended on the phase of the learning process. Before learning, they fired at their baseline, not encoding any relevant information, and the variability was set to be higher than that of the sensory feedback (more reliable, albeit delayed). When learning was complete, their activity encoded the trajectory before the actual execution, providing an accurate sensory prediction; in this case, the variability was set to be lower than that of the sensory feedback. The state estimator model optimally integrated the neural activities of the afferent populations, so that the output state estimate was primarily driven by sensory feedback in prelearning and by the cerebellar prediction in postlearning. It was even able to deal with more complex scenarios, for example, by shifting the dominant source during the movement execution if information availability suddenly changed. The proposed tool will be a critical block within integrated spiking, brain-inspired control systems for simulations of sensorimotor tasks.
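
    The integration rule at the heart of such an estimator is precision weighting, sketched below at the level of abstract scalar estimates rather than spiking populations; the variances stand in for the intrapopulation spike variability used as a reliability index, and all numbers are illustrative.

```python
def fuse(x_fb, var_fb, x_pred, var_pred):
    """Precision-weighted (Bayes-optimal for Gaussians) combination of a
    sensory feedback estimate and a cerebellar prediction."""
    w_fb, w_pred = 1.0 / var_fb, 1.0 / var_pred
    return (w_fb * x_fb + w_pred * x_pred) / (w_fb + w_pred)

# Pre-learning: the prediction is unreliable (high variance), so the
# state estimate tracks the sensory feedback.
pre = fuse(x_fb=1.0, var_fb=0.1, x_pred=0.0, var_pred=10.0)
# Post-learning: the prediction is now the more reliable cue, and the
# estimate shifts toward it.
post = fuse(x_fb=1.0, var_fb=0.1, x_pred=0.0, var_pred=0.01)
```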

    Transient dynamics and rhythm coordination of inferior olive spatio-temporal patterns

    This document is protected by copyright and was first published by Frontiers. All rights reserved. It is reproduced with permission. The inferior olive (IO) is a neural network belonging to the olivo-cerebellar system whose neurons are coupled with electrical synapses and display subthreshold oscillations and spiking activity. The IO is frequently proposed as the generator of timing signals to the cerebellum. Electrophysiological and imaging recordings show that the IO network generates complex spatio-temporal patterns. The generation and modulation of coherent spiking activity in the IO is one key issue in cerebellar research. In this work, we build a large-scale IO network model of electrically coupled conductance-based neurons to study the emerging spatio-temporal patterns of its transient neuronal activity. Our modeling reproduces and helps to understand important phenomena observed in IO in vitro and in vivo experiments, and draws new predictions regarding the computational properties of this network and the associated cerebellar circuits. The main factors studied governing the collective dynamics of the IO network were: the degree of electrical coupling, the extent of the electrotonic connections, the presence of stimuli or regions with different excitability levels, and the modulatory effect of an inhibitory loop (IL). The spatio-temporal patterns were analyzed using a discrete wavelet transform to provide a quantitative characterization. Our results show that the electrotonic coupling produces quasi-synchronized subthreshold oscillations over a wide dynamical range. The synchronized oscillatory activity plays the role of a timer for a coordinated representation of spiking rhythms with different frequencies.
The encoding and coexistence of several coordinated rhythms is related to the different clustering and coherence of transient spatio-temporal patterns in the network, where the spiking activity is commensurate with the quasi-synchronized subthreshold oscillations. In the presence of stimuli, different rhythms are encoded in the spiking activity of the IO neurons, which nevertheless remains constrained to a commensurate value of the subthreshold frequency. The stimulus-induced spatio-temporal patterns can reverberate for long periods, which contributes to the computational properties of the IO. We also show that the presence of regions with different excitability levels creates sinks and sources of coordinated activity which shape the propagation of spike wave fronts. These results can be generalized beyond IO studies, as the control of wave pattern propagation is a highly relevant problem in the context of normal and pathological states in neural systems (e.g., related to tremor, migraine, epilepsy), where the study of the modulation of activity sinks and sources can have a potentially large impact. Roberto Latorre, Carlos Aguirre, and Pablo Varona were supported by MINECO TIN2012-30883, and Mikhail I. Rabinovich by ONR Grant N00014310205.
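
    The role of electrical coupling in producing quasi-synchronized oscillations can be illustrated with two diffusively coupled oscillators; the sketch below uses identical FitzHugh-Nagumo units as a simplified stand-in for the conductance-based IO cells, with illustrative parameters. The gap-junction-like term g*(v_other - v) pulls the voltages together while the common limit cycle keeps both cells oscillating.

```python
import numpy as np

def coupled_fhn(g, T=300.0, dt=0.01):
    """Two identical FitzHugh-Nagumo oscillators with diffusive
    (gap-junction-like) voltage coupling of strength g, started apart;
    returns the voltage traces of both cells."""
    eps, a, b, I = 0.08, 0.7, 0.8, 0.5
    v = np.array([0.0, -1.0])
    w = np.array([0.0, 0.3])
    trace = np.empty((int(T / dt), 2))
    for i in range(trace.shape[0]):
        dv = v - v**3 / 3 - w + I + g * (v[::-1] - v)
        dw = eps * (v + a - b * w)
        v = v + dt * dv
        w = w + dt * dw
        trace[i] = v
    return trace

trace = coupled_fhn(g=0.5)
tail = trace[-5000:]                          # last 50 time units
sync_error = np.abs(tail[:, 0] - tail[:, 1]).max()
amplitude = np.ptp(tail[:, 0])                # the cells keep oscillating
```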

    Sparse Gamma Rhythms Arising through Clustering in Adapting Neuronal Networks

    Gamma rhythms (30–100 Hz) are an extensively studied synchronous brain state responsible for a number of sensory, memory, and motor processes. Experimental evidence suggests that fast-spiking interneurons are responsible for carrying the high-frequency components of the rhythm, while regular-spiking pyramidal neurons fire sparsely. We propose that a combination of spike-frequency adaptation and global inhibition may be responsible for this behavior. Excitatory neurons form several clusters that fire every few cycles of the fast oscillation. This is first shown in a detailed biophysical network model and then analyzed thoroughly in an idealized model. We exploit the fact that the timescale of adaptation is much slower than that of the other variables. Singular perturbation theory is used to derive an approximate periodic solution for a single spiking unit. This is then used to predict how the number of clusters arising spontaneously in the network relates to the adaptation time constant. We compare this to a complementary analysis that employs a weak coupling assumption to predict the first Fourier mode to destabilize from the incoherent state of an associated phase model as the external noise is reduced. Both approaches predict the same scaling of cluster number with respect to the adaptation time constant, which is corroborated in numerical simulations of the full system. Thus, we develop several testable predictions regarding the formation and characteristics of gamma rhythms with sparsely firing excitatory neurons.
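
    The single-unit mechanism, a cell with slow adaptation skipping cycles of a fast rhythm, can be sketched with a leaky integrate-and-fire neuron under 50 Hz sinusoidal drive; all parameters are illustrative, not taken from the paper's models.

```python
import math

def spike_count(b_adapt, T=1000.0, dt=0.05):
    """Leaky IF neuron (tau_m = 10 ms) with spike-frequency adaptation
    (tau_a = 100 ms, spike-triggered jump b_adapt), driven at 50 Hz;
    returns the number of spikes in T ms."""
    tau_m, tau_a, f = 10.0, 100.0, 0.05   # ms, ms, cycles per ms (50 Hz)
    v, a_w, t, n = 0.0, 0.0, 0.0, 0
    for _ in range(int(T / dt)):
        I = 1.2 + 0.5 * math.sin(2 * math.pi * f * t)
        v += dt * (-v + I - a_w) / tau_m
        a_w += dt * (-a_w / tau_a)
        if v >= 1.0:                      # threshold crossing: spike and reset
            v = 0.0
            a_w += b_adapt
            n += 1
        t += dt
    return n

n_adapt = spike_count(b_adapt=0.3)        # fires on a subset of cycles
n_tonic = spike_count(b_adapt=0.0)        # fires on (nearly) every cycle
```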