
    Event-driven simulation of spiking neurons with stochastic dynamics

    We present a new technique, based on the event-driven strategy proposed by Mattia and Del Giudice (2000), for efficiently simulating large networks of simple model neurons. That strategy exploits the fact that interactions among neurons occur by means of events (the action potentials) that are well localized in time and relatively rare. In the interval between two such events, the state variables associated with a model neuron or a synapse evolve deterministically and predictably. Here, we extend the event-driven simulation strategy to the case in which the dynamics of the state variables in the inter-event intervals are stochastic. This extension captures both the situation in which the simulated neurons are inherently noisy and the case in which they are embedded in a very large network and receive a huge number of random synaptic inputs. We show how to effectively include the impact of large background populations in the neuronal dynamics by numerically evaluating the statistical properties of single model neurons under random current injection. The new simulation strategy allows the study of networks of interacting neurons with an arbitrary number of external afferents and inherent stochastic dynamics.
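    The essence of the event-driven scheme is that a neuron's state is updated only when an event reaches it, by sampling the exact solution over the elapsed interval. Below is a minimal sketch under stated assumptions: an Ornstein-Uhlenbeck membrane, whose conditional law over any interval is Gaussian, with threshold crossings checked only at event times (the first-passage statistics the paper tabulates numerically are omitted). All names and values are illustrative, not the paper's:

```python
import heapq
import numpy as np

# Minimal sketch of an event-driven update for a leaky neuron with
# stochastic (Ornstein-Uhlenbeck) membrane dynamics between events.
# All parameter names and values are illustrative, not from the paper.

TAU = 20.0    # membrane time constant (ms)
MU = 0.5      # stationary mean of the free membrane potential
SIGMA = 0.2   # stationary std of the free membrane potential
V_TH = 1.0    # firing threshold
W_SYN = 0.1   # synaptic jump per incoming spike

rng = np.random.default_rng(0)

def advance(v, dt):
    """Sample the membrane potential dt ms after its last update.

    For an OU process the conditional law is Gaussian, so the state is
    advanced in a single draw regardless of dt. A faithful event-driven
    scheme must also handle threshold crossings *inside* the interval
    via first-passage-time statistics; that part is omitted here.
    """
    decay = np.exp(-dt / TAU)
    mean = v * decay + MU * (1.0 - decay)
    std = SIGMA * np.sqrt(1.0 - decay ** 2)
    return rng.normal(mean, std)

# Event loop: process incoming spikes in time order.
events = [(1.0, 0), (5.0, 0), (5.5, 0)]   # (time in ms, target neuron)
heapq.heapify(events)
v, t_last = 0.0, 0.0
while events:
    t, _ = heapq.heappop(events)
    v = advance(v, t - t_last)   # stochastic inter-event evolution
    v += W_SYN                   # apply the synaptic jump
    if v >= V_TH:
        print(f"spike at t={t:.1f} ms")
        v = 0.0                  # reset
    t_last = t
```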

    Computationally efficient simulation of extracellular recordings with multielectrode arrays

    In this paper we present a novel, computationally and memory-efficient way of modeling the spatial dependency of measured spike waveforms in extracellular recordings of neuronal activity. We use compartment models to simulate action potentials in neurons and then apply linear source approximation to calculate the resulting extracellular spike waveform on a three-dimensional grid of measurement points surrounding the neurons. We then apply traditional compression techniques and polynomial fitting to obtain a compact mathematical description of the spatial dependency of the spike waveform. We show how the compressed models can be used to efficiently calculate the spike waveform from a neuron at a large set of measurement points simultaneously, and how the same procedure can be inverted to calculate the spike waveforms from a large set of neurons at a single electrode position. The compressed models have been implemented in an object-oriented simulation tool that allows the simulation of multielectrode recordings capturing the variations in spike waveforms that are expected to arise between the different recording channels. The computational simplicity of our approach allows the simulation of a multi-channel recording of signals from large populations of neurons while simulating the activity of every neuron with a high level of detail. We have validated our compressed models against the original data obtained from the compartment models, and we have shown, by example, how the simulation approach presented here can be used to quantify spike-sorting performance as a function of electrode position.
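    To make the compression idea concrete, here is a hypothetical sketch: the spatial decay of a toy spike template is fitted with a low-order polynomial in log-amplitude, which can then be evaluated cheaply at many electrode positions at once. The template, distances, and decay law are invented for the demo and are not the paper's compartment-model data:

```python
import numpy as np

# Hypothetical demo of the compression idea: fit the distance-dependence
# of a spike waveform's amplitude with a low-order polynomial, then
# evaluate the waveform cheaply at many electrode positions at once.

t = np.linspace(0.0, 2.0, 64)                   # ms, waveform time base
template = -np.exp(-((t - 0.5) ** 2) / 0.01)    # toy spike template

# "Measured" peak amplitudes on a grid of distances (in the real
# pipeline these would come from compartment models + line sources).
r = np.linspace(10.0, 100.0, 30)                # distance, micrometres
amp = 1.0 / r ** 1.5                            # toy decay law

# Compress: a cubic fit to log-amplitude as a function of distance.
coeffs = np.polyfit(r, np.log(amp), deg=3)

# Decompress: reconstruct waveforms at arbitrary electrode distances.
electrodes = np.array([15.0, 40.0, 80.0])
scales = np.exp(np.polyval(coeffs, electrodes))
waveforms = scales[:, None] * template[None, :]
print(waveforms.shape)                          # (3, 64): one per electrode
```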

    Modeling and Computational Framework for the Specification and Simulation of Large-scale Spiking Neural Networks

    Recurrently connected neural networks, in which synaptic connections between neurons can form directed cycles, have been used extensively in the literature to describe various neurophysiological phenomena, such as coordinate transformations during sensorimotor integration. Due to the directed cycles that can exist in recurrent networks, there is no well-known way to a priori specify synaptic weights to elicit neuron spiking responses to stimuli based on available neurophysiology. Using a common mean-field assumption, that synaptic inputs are uncorrelated for sufficiently large populations of neurons, we show that the connection topology and a neuron's response characteristics can be decoupled. This assumption allows specification of neuron steady-state responses independent of the connection topology. Specification of neuron responses necessitates the creation of a novel simulator (computational framework) that allows modeling of large populations of connected spiking neurons. We describe the implementation of a spike-based computational framework designed to take advantage of high-performance computing architectures when available. We show that the performance of the computational framework improves when multiple message-passing processes are used for large populations of neurons, resulting in a worst-case linear relationship between the number of neurons and the time required to complete a simulation. Using the computational framework and the ability to specify neuron response characteristics independent of synaptic weights, we systematically investigate the effects of Hebbian learning on the hemodynamic response. Changes in the magnitude of the hemodynamic responses of neural populations are assessed using a forward model that relates population synaptic currents to the blood oxygen level-dependent (BOLD) response via local field potentials. We show that the magnitude of the hemodynamic response is not an accurate indicator of underlying spiking activity for all network topologies. Instead, we note that large changes in the aggregate response of the population (>50%) can result in a decrease in the overall magnitude of the BOLD signal. We hypothesize that the hemodynamic response magnitude changes due to fluctuations in the balance of excitatory and inhibitory inputs in neural subpopulations. These results have important implications for mean-field models, suggesting that the underlying excitatory/inhibitory neural dynamics within a population may need to be taken into account to accurately predict hemodynamic responses.
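    The decoupling can be illustrated with a rate-based caricature (an assumption of this sketch; the paper works with spiking neurons): at a mean-field fixed point r = f(W r + I_ext), the external drive realizing any target rates on any topology is I_ext = f^{-1}(r) - W r. The gain function and all numbers below are invented for the demonstration:

```python
import numpy as np

# Caricature of the decoupling idea: pick the desired steady-state
# rates first, then solve for the external drive that realizes them
# on an arbitrary recurrent topology. Sigmoid and values are assumed.

def f(x):      # population gain function (assumed logistic)
    return 1.0 / (1.0 + np.exp(-x))

def f_inv(r):  # its inverse
    return np.log(r / (1.0 - r))

rng = np.random.default_rng(1)
n = 5
W = rng.normal(0, 0.3, size=(n, n))        # arbitrary recurrent topology
r_target = rng.uniform(0.2, 0.8, size=n)   # desired steady-state rates

# At a fixed point: r = f(W r + I_ext)  =>  I_ext = f_inv(r) - W r
I_ext = f_inv(r_target) - W @ r_target

# Verify the target rates are indeed a fixed point of the dynamics.
r = np.full(n, 0.5)
for _ in range(2000):                      # relax tau*dr/dt = -r + f(...)
    r += 0.05 * (-r + f(W @ r + I_ext))
print(np.allclose(r, r_target, atol=1e-3))
```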

    Neural Field Models: A mathematical overview and unifying framework

    Rhythmic electrical activity in the brain emerges from regular, non-trivial interactions between millions of neurons. Neurons are intricate cellular structures that transmit excitatory (or inhibitory) signals to other neurons, often non-locally, depending on the graded input from other neurons. Modelling this mathematically often requires extensive detail, which poses several issues for modelling large systems beyond clusters of neurons, such as the whole brain. Approaching large populations of neurons with interconnected single-neuron models results in an accumulation of exponentially many complexities, rendering simulations that preclude mathematical tractability and obscure the primary interactions required for emergent electrodynamical patterns in brain rhythms. A statistical-mechanics approach with non-local interactions may circumvent these issues while maintaining mathematical tractability. Neural field theory is a population-level approach to modelling large sections of neural tissue based on these principles. Herein we provide a review of key stages of the history and development of neural field theory and contemporary uses of this branch of mathematical neuroscience. We elucidate a mathematical framework in which neural field models can be derived, highlighting the many significant inherited assumptions that exist in the current literature, so that their validity may be considered in light of further developments in both mathematical and experimental neuroscience.
    Comment: 55 pages, 10 figures, 2 tables
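    For orientation, one standard form of such a population-level model is the Amari-type neural field equation; this is one common convention among the many variants the review covers:

```latex
\tau \frac{\partial u(\mathbf{x},t)}{\partial t}
  = -u(\mathbf{x},t)
  + \int_{\Omega} w(\mathbf{x},\mathbf{y})\, f\bigl(u(\mathbf{y},t)\bigr)\,\mathrm{d}\mathbf{y}
  + I(\mathbf{x},t)
```

    where u(x, t) is the mean activity at tissue position x, w the non-local connectivity kernel, f a firing-rate (gain) function, and I(x, t) external input.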

    pMIIND: an MPI-based population density simulation framework

    MIIND [1] is the first publicly available implementation of population density algorithms. Like neural mass models, these model at the population level rather than at the level of individual neurons, but unlike neural mass models, they consider the full neuronal state space. The central concept is a population density: a probability distribution function that represents the probability of a neuron being in a certain part of state space. Neurons move through state space under their own intrinsic dynamics or driven by synaptic input. When individual spikes do not matter and only population-averaged quantities are considered, these methods outperform direct simulations using neuron point models by a factor of 10 or more, whilst (at the population level) producing identical results to simulations of spiking neurons. This is in general not true for neural mass models. Population density methods also relate closely to analytic evaluations of population dynamics. The evolution of the density function is given by a partial differential equation (PDE). In [3] a generic method was presented for solving this equation efficiently, both for small synaptic efficacies (the diffusion limit, where the PDE becomes a Fokker-Planck equation) and for large ones (finite jumps). We demonstrated that for leaky-integrate-and-fire (LIF) neurons this method reproduces analytic results [1] and needs of the order of 0.2 s to model 1 s of simulation time for an infinitely large population of spiking LIF neurons. We have now extended this method to apply to any 1D neuron point model [3], not just LIF neurons, and have demonstrated the technique on quadratic-integrate-and-fire neurons. We are therefore in a position to model large heterogeneous networks of spiking neurons very efficiently. A potential bottleneck is MIIND's serial simulation loop. We developed an MPI implementation of MIIND's central simulation loop starting from a fresh code base and addressed serialization, which is now done at the level of individual cores. A central assumption in this setup is that firing rates, not individual spikes, are communicated, so bandwidth requirements are low. Latency is potentially a problem, but with the use of latency-hiding techniques good scalability has been achieved for up to 64 cores on dedicated clusters. The scalability was verified with a simple model of cortical waves in a hexagonal network of populations with balanced excitation-inhibition. pMIIND is available on SourceForge through its git repository: git://http://miind.sourceforge.net. A CMake-based install procedure is provided. Since pMIIND is set up as a C++ framework, it is possible to define one's own algorithms and still take advantage of the MPI-based simulation loop.
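    In the diffusion limit mentioned above, the PDE governing the membrane-potential density of an LIF population takes the standard Fokker-Planck form. The notation below is the generic textbook one, not necessarily MIIND's:

```latex
\tau \frac{\partial \rho(v,t)}{\partial t}
  = \frac{\partial}{\partial v}\Bigl[\bigl(v-\mu(t)\bigr)\,\rho(v,t)\Bigr]
  + \frac{\sigma^{2}(t)}{2}\,\frac{\partial^{2}\rho(v,t)}{\partial v^{2}}
```

    Here \mu(t) and \sigma^{2}(t) summarize the mean and variance of the synaptic input; an absorbing boundary at the firing threshold, re-injection of the escaping probability flux at the reset potential, and identification of the population firing rate with the flux through threshold complete the problem.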

    Biophysically grounded mean-field models of neural populations under electrical stimulation

    Electrical stimulation of neural systems is a key tool for understanding neural dynamics and ultimately for developing clinical treatments. Many applications of electrical stimulation affect large populations of neurons. However, computational models of large networks of spiking neurons are inherently hard to simulate and analyze. We evaluate a reduced mean-field model of excitatory and inhibitory adaptive exponential integrate-and-fire (AdEx) neurons which can be used to efficiently study the effects of electrical stimulation on large neural populations. The rich dynamical properties of this basic cortical model are described in detail and validated using large network simulations. Bifurcation diagrams reflecting the network's state reveal asynchronous up- and down-states, bistable regimes, and oscillatory regions corresponding to fast excitation-inhibition and slow excitation-adaptation feedback loops. The biophysical parameters of the AdEx neuron can be coupled to an electric field with realistic field strengths, which can then be propagated up to the population description. We show how, on the edge of a bifurcation, direct electrical inputs cause network state transitions, such as turning oscillations of the population rate on and off. Oscillatory input can frequency-entrain and phase-lock endogenous oscillations. Relatively weak electric field strengths on the order of 1 V/m are able to produce these effects, indicating that field effects are strongly amplified in the network. The effects of time-varying external stimulation are well predicted by the mean-field model, further underpinning the utility of low-dimensional neural mass models.
    Comment: A Python package with an implementation of the AdEx mean-field model can be found at https://github.com/neurolib-dev/neurolib - code for simulation and data analysis can be found at https://github.com/caglarcakan/stimulus_neural_population
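    For reference, the single-neuron AdEx equations underlying the mean-field reduction are well known (Brette & Gerstner, 2005). Below is a minimal Euler sketch of one AdEx neuron; the parameter values are generic textbook ones, not the paper's, and the electric-field coupling is omitted:

```python
import numpy as np

# Minimal Euler integration of a single AdEx neuron:
#   C dV/dt = -gL(V-EL) + gL*DT*exp((V-VT)/DT) - w + I
#   tau_w dw/dt = a(V-EL) - w,  with reset V -> V_reset, w -> w + b.

C, gL, EL = 281.0, 30.0, -70.6        # pF, nS, mV
VT, DT = -50.4, 2.0                   # mV (threshold, slope factor)
a, tau_w, b = 4.0, 144.0, 80.5        # nS, ms, pA (adaptation)
V_reset, V_peak = -70.6, 0.0          # mV
dt, T, I_ext = 0.1, 500.0, 700.0      # ms, ms, pA

V, w, spikes = EL, 0.0, []
for step in range(int(T / dt)):
    dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I_ext) / C
    dw = (a * (V - EL) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V >= V_peak:                   # spike: reset and bump adaptation
        V = V_reset
        w += b
        spikes.append(step * dt)

print(f"{len(spikes)} spikes; first ISIs (ms):",
      np.round(np.diff(spikes)[:5], 1))
```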

    The state of MIIND

    MIIND (Multiple Interacting Instantiations of Neural Dynamics) is a highly modular, multi-level C++ framework that aims to shorten the development time for models in Cognitive Neuroscience (CNS). It offers reusable code modules (libraries of classes and functions) aimed at solving problems that occur repeatedly in modelling, but tries not to impose a specific modelling philosophy or methodology. At the lowest level, it offers support for the implementation of sparse networks. For example, the library SparseImplementationLib supports sparse random networks, and the library LayerMappingLib can be used for sparse regular networks of filter-like operators. The library DynamicLib, which builds on top of SparseImplementationLib, offers a generic framework for simulating network processes. Presently, several specific network process implementations are provided in MIIND: the Wilson–Cowan and Ornstein–Uhlenbeck types, and population density techniques for leaky-integrate-and-fire neurons driven by Poisson input. A design principle of MIIND is to support detailing: the refinement of an originally simple model into a form that includes more biological detail. Another design principle is extensibility: the reuse of an existing model in a larger, more extended one. One of the main uses of MIIND so far has been the instantiation of neural models of visual attention. Recently, we have added a library for implementing biologically inspired models of artificial vision, such as HMAX and its recent successors. In the long run we hope to be able to apply suitably adapted neuronal mechanisms of attention to these artificial models.
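    As a flavour of the simplest kind of network process listed above, here is a standalone Wilson–Cowan excitatory/inhibitory rate pair. This is not MIIND code, and the coupling constants are generic illustrative values:

```python
import numpy as np

# Standalone Wilson-Cowan E/I rate pair, integrated with forward Euler.
# Parameters are generic illustrative values, not MIIND defaults.

def S(x):                                   # sigmoidal activation
    return 1.0 / (1.0 + np.exp(-x))

wEE, wEI, wIE, wII = 16.0, 12.0, 15.0, 3.0  # coupling constants
tau, dt, P = 10.0, 0.1, 1.25                # ms, ms, external drive to E

E, I = 0.1, 0.1
trace = []
for _ in range(int(1000 / dt)):             # simulate 1000 ms
    dE = (-E + S(wEE * E - wEI * I + P)) / tau
    dI = (-I + S(wIE * E - wII * I)) / tau
    E += dt * dE
    I += dt * dI
    trace.append(E)

print(f"E activity over the last 500 ms: "
      f"[{min(trace[-5000:]):.3f}, {max(trace[-5000:]):.3f}]")
```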

    Integration of continuous-time dynamics in a spiking neural network simulator

    Contemporary modeling approaches to the dynamics of neural networks consider two main classes of models: biologically grounded spiking neurons and functionally inspired rate-based units. The unified simulation framework presented here supports the combination of the two for multi-scale modeling approaches, the quantitative validation of mean-field approaches by spiking network simulations, and increased reliability through use of the same simulation code and the same network model specifications for both model classes. While most efficient spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures the flexibility of the framework. We further demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.
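    The time-continuous, delayed rate interaction can be caricatured as Euler-Maruyama integration of dx_i/dt = (-x_i + sum_j w_ij psi(x_j(t - d)))/tau + sigma xi_i(t), with a ring buffer standing in for the simulator's delayed communication. All parameters below are illustrative, not the paper's reference implementation:

```python
import numpy as np

# Euler-Maruyama integration of noisy rate units with a fixed
# interaction delay, implemented with a ring buffer of past activity.

rng = np.random.default_rng(2)
n = 50                        # number of rate units
tau, dt = 10.0, 0.1           # time constant and step (ms)
delay_steps = int(2.0 / dt)   # 2 ms interaction delay
sigma = 0.05                  # noise amplitude
W = rng.normal(0, 1.0 / np.sqrt(n), size=(n, n))
psi = np.tanh                 # nonlinearity on presynaptic rates

x = np.zeros(n)
buf = np.zeros((delay_steps, n))            # ring buffer
for step in range(int(500 / dt)):
    x_delayed = buf[step % delay_steps]     # activity from d ms ago
    drift = (-x + W @ psi(x_delayed)) * (dt / tau)
    x = x + drift + sigma * np.sqrt(dt) * rng.normal(size=n)
    buf[step % delay_steps] = x             # overwrite oldest slot
print(f"mean activity {x.mean():.3f}, std {x.std():.3f}")
```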