
    Stability Analysis of Asynchronous States in Neuronal Networks with Conductance-Based Inhibition

    Oscillations in networks of inhibitory interneurons have been reported at various sites of the brain and are thought to play a fundamental role in neuronal processing. This Letter provides a self-contained analytical framework that allows numerically efficient calculation of the population activity of a network of conductance-based integrate-and-fire neurons coupled through inhibitory synapses. Based on a normalization equation, this Letter introduces a novel stability criterion for a network state of asynchronous activity and discusses its perturbations. The analysis shows that, although often neglected, the reversal potential of synaptic inhibition has a strong influence on the stability as well as the frequency of network oscillations.
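    The role of the inhibitory reversal potential can be illustrated with a minimal conductance-based leaky integrate-and-fire neuron. This is a toy sketch with assumed parameter values, not the Letter's analytical framework: the same inhibitory conductance suppresses firing differently depending on whether E_inh sits at the resting potential (shunting inhibition) or below it (hyperpolarizing inhibition).

```python
def simulate(E_inh, g_inh=0.02, T=2000.0, dt=0.1):
    """Euler integration of C dV/dt = g_L*(E_L - V) + g_inh*(E_inh - V) + I_ext."""
    C, g_L, E_L = 1.0, 0.1, -65.0      # capacitance, leak conductance, leak reversal
    V_th, V_reset = -50.0, -65.0       # spike threshold and reset potential
    I_ext = 2.0                        # constant external drive
    V, spikes = E_L, 0
    for _ in range(int(T / dt)):
        V += dt * (g_L * (E_L - V) + g_inh * (E_inh - V) + I_ext) / C
        if V >= V_th:
            V, spikes = V_reset, spikes + 1
    return spikes

r_shunt = simulate(E_inh=-65.0)   # inhibition reversing at rest: pure shunting
r_hyper = simulate(E_inh=-80.0)   # hyperpolarizing inhibition
```

    With these illustrative parameters the hyperpolarizing setting silences the neuron entirely, while the shunting setting only slows it, because the synaptic current g_inh*(E_inh - V) vanishes as V approaches E_inh.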

    How adaptation currents change threshold, gain and variability of neuronal spiking

    Many types of neurons exhibit spike rate adaptation, mediated by intrinsic slow K+ currents, which effectively inhibit neuronal responses. How these adaptation currents change the relationship between in vivo-like fluctuating synaptic input, spike rate output and spike train statistics, however, is not well understood. In this computational study we show that an adaptation current which primarily depends on the subthreshold membrane voltage changes the neuronal input-output relationship (I-O curve) subtractively, thereby increasing the response threshold. A spike-dependent adaptation current alters the I-O curve divisively, thus reducing the response gain. Both types of adaptation currents naturally increase the mean inter-spike interval (ISI), but they can affect ISI variability in opposite ways. A subthreshold current always causes an increase of variability, while a spike-triggered current decreases high variability caused by fluctuation-dominated inputs and increases low variability when the average input is large. The effects on I-O curves match those caused by synaptic inhibition in networks with asynchronous irregular activity, for which we find subtractive and divisive changes caused by external and recurrent inhibition, respectively. Synaptic inhibition, however, always increases the ISI variability. We analytically derive expressions for the I-O curve and ISI variability, which demonstrate the robustness of our results. Furthermore, we show how the biophysical parameters of slow K+ conductances contribute to the two different types of adaptation currents and find that Ca2+-activated K+ currents are effectively captured by a simple spike-dependent description, while muscarine-sensitive or Na+-activated K+ currents show a dominant subthreshold component. Comment: 20 pages, 8 figures; Journal of Neurophysiology (in press)
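    The two mechanisms can be sketched with a leaky integrate-and-fire neuron carrying an adaptation current w (a toy model with assumed parameters, not the study's analysis): a subthreshold current couples w to the voltage through a*(V - E_L), while a spike-triggered current increments w by b at each spike. Both lengthen the mean ISI relative to the non-adapting neuron.

```python
def spike_count(a=0.0, b=0.0, T=2000.0, dt=0.1):
    """LIF neuron with adaptation current w; a: subthreshold coupling, b: spike-triggered jump."""
    C, g_L, E_L = 1.0, 0.1, -65.0
    V_th, V_reset, tau_w, I_ext = -50.0, -65.0, 200.0, 2.5
    V, w, n = E_L, 0.0, 0
    for _ in range(int(T / dt)):
        dV = (g_L * (E_L - V) - w + I_ext) / C
        dw = (a * (V - E_L) - w) / tau_w    # subthreshold voltage coupling
        V += dt * dV
        w += dt * dw
        if V >= V_th:
            V, n = V_reset, n + 1
            w += b                          # spike-triggered increment
    return n

n_none = spike_count()          # no adaptation
n_sub = spike_count(a=0.05)     # subthreshold adaptation slows firing
n_spk = spike_count(b=0.2)      # spike-triggered adaptation slows firing
```

    Either current alone reduces the spike count over the simulated interval; the subtractive versus divisive distinction of the study concerns how this reduction scales across the whole I-O curve.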

    Conductance-Based Refractory Density Approach for a Population of Bursting Neurons

    The conductance-based refractory density (CBRD) approach is a parsimonious mathematical-computational framework for modeling interacting populations of regular-spiking neurons which, however, has not yet been extended to a population of bursting neurons. The canonical CBRD method makes it possible to describe the firing activity of a statistical ensemble of uncoupled Hodgkin-Huxley-like neurons (differentiated by noise) and has demonstrated its validity against experimental data. The present manuscript generalises the CBRD to a population of bursting neurons; in this pilot computational study, however, we consider the simplest setting, in which each individual neuron is governed by piecewise-linear bursting dynamics. The resulting population model makes use of slow-fast analysis, which leads to a novel methodology that combines CBRD with the theory of multiple-timescale dynamics. The main prospect is that it opens novel avenues for mathematical explorations, as well as the derivation of more sophisticated population activity from Hodgkin-Huxley-like bursting neurons, which will make it possible to capture synchronised bursting activity in hyper-excitable brain states (e.g. the onset of epilepsy). Russian Science Foundation grant (project 16-15-10201), Spanish grant MINECO-FEDER-UE MTM-2015-71509-C2-2-R, Catalan Grant number 2017SGR104

    Dynamics and spike trains statistics in conductance-based Integrate-and-Fire neural networks with chemical and electric synapses

    We investigate the effect of electric synapses (gap junctions) on collective neuronal dynamics and spike statistics in a conductance-based integrate-and-fire neural network, driven by Brownian noise, where conductances depend upon spike history. We compute explicitly the time evolution operator and show that, given the spike history of the network and the membrane potentials at a given time, the further dynamical evolution can be written in closed form. We show that spike train statistics are described by a Gibbs distribution whose potential can be approximated with an explicit formula when the noise is weak. This potential form encompasses existing models for spike train statistics analysis such as maximum entropy models or Generalized Linear Models (GLMs). We also discuss the different types of correlations: those induced by a shared stimulus and those induced by neuronal interactions. Comment: 42 pages, 1 figure, submitted
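    One member of the model class the Gibbs potential encompasses is a discrete-time GLM, in which each neuron's spike probability in a time bin is a logistic function of the network's recent spike history. The weights and biases below are illustrative assumptions, not values from the paper.

```python
import math

def glm_step(history, weights, bias):
    """One time step: P(spike_i) = sigmoid(bias[i] + sum_j weights[i][j] * history[j])."""
    probs = []
    for i, b in enumerate(bias):
        drive = b + sum(w * s for w, s in zip(weights[i], history))
        probs.append(1.0 / (1.0 + math.exp(-drive)))
    return probs

bias = [-2.0, -2.0]
weights = [[0.0, 1.5],   # neuron 0 is excited by neuron 1's previous spike
           [1.5, 0.0]]   # neuron 1 is excited by neuron 0's previous spike
history = [1, 0]         # only neuron 0 spiked in the previous bin
p = glm_step(history, weights, bias)
```

    Neuron 1, which just received a spike from neuron 0, is more likely to fire in the next bin than neuron 0 itself; interaction-induced correlations of this kind are exactly what the Gibbs potential separates from stimulus-induced ones.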

    A mean-field model for conductance-based networks of adaptive exponential integrate-and-fire neurons

    Voltage-sensitive dye imaging (VSDi) has revealed fundamental properties of neocortical processing at mesoscopic scales. Since VSDi signals report the average membrane potential, it seems natural to use a mean-field formalism to model such signals. Here, we investigate a mean-field model of networks of Adaptive Exponential (AdEx) integrate-and-fire neurons with conductance-based synaptic interactions. The AdEx model can capture the spiking response of different cell types, such as regular-spiking (RS) excitatory neurons and fast-spiking (FS) inhibitory neurons. We use a Master Equation formalism together with a semi-analytic approach to the transfer function of AdEx neurons. We compare the predictions of this mean-field model to simulated networks of RS-FS cells, first at the level of the spontaneous activity of the network, which is well predicted by the mean-field model. Second, we investigate the response of the network to time-varying external input and show that the mean-field model accurately predicts the response time course of the population. One notable exception was that the "tail" of the response at long times was not well predicted, because the mean-field model does not include adaptation mechanisms. We conclude that the Master Equation formalism can yield mean-field models that predict the behavior of nonlinear networks with conductance-based interactions and various electrophysiological properties well, and should be a good candidate to model VSDi signals to which both excitatory and inhibitory neurons contribute. Comment: 21 pages, 7 figures
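    The backbone of such a mean-field description is a self-consistency condition on the stationary population rate: the rate must reproduce itself when fed back through the population transfer function. The sketch below uses a toy sigmoidal transfer function in place of the semi-analytic AdEx one, with made-up parameters, and solves the scalar fixed-point equation nu = F(nu) by bisection.

```python
import math

def transfer(nu, gain=20.0, w=0.4, theta=3.0):
    """Toy sigmoidal transfer function: output rate given recurrent input w * nu."""
    return gain / (1.0 + math.exp(-(w * nu - theta)))

def self_consistent_rate(lo=0.0, hi=20.0, n_iter=60):
    """Bisection on g(nu) = transfer(nu) - nu; the sign change brackets a stationary rate."""
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if transfer(mid) - mid > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

nu = self_consistent_rate()   # stationary rate of the toy population
```

    In the paper's setting this scalar equation becomes a pair of coupled conditions for the excitatory (RS) and inhibitory (FS) rates, with the transfer functions obtained semi-analytically from the AdEx dynamics.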

    Simulation of networks of spiking neurons: A review of tools and strategies

    We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We give an overview of the different simulators and simulation environments presently available (restricted to those that are freely available, open source and documented). For each simulation tool, its advantages and pitfalls are reviewed, with the aim of allowing the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin-Huxley-type and integrate-and-fire models, interacting with current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models is implemented on the different simulators, and the codes are made available. The ultimate goal of this review is to provide a resource to facilitate identifying the appropriate integration strategy and simulation tool for a given modeling problem related to spiking neural networks. Comment: 49 pages, 24 figures, 1 table; review article, Journal of Computational Neuroscience, in press (2007)
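    A clock-driven strategy of the kind benchmarked in such reviews reduces to a few lines: current-based integrate-and-fire neurons updated on a fixed time grid, with spikes delivered one step after they occur. The two-neuron network and all parameters below are illustrative assumptions.

```python
def simulate_lif_net(W, I_ext, T=100.0, dt=0.1):
    """Clock-driven LIF network; W[i][j] is the synaptic weight from neuron j to neuron i."""
    N = len(W)
    tau, V_th, V_reset = 10.0, 1.0, 0.0
    V = [0.0] * N
    counts = [0] * N
    spiked_last = [False] * N
    for _ in range(int(T / dt)):
        # deliver spikes emitted in the previous time step
        syn = [sum(W[i][j] for j in range(N) if spiked_last[j]) for i in range(N)]
        spiked_now = [False] * N
        for i in range(N):
            V[i] += dt * (-V[i] / tau + I_ext[i]) + syn[i]
            if V[i] >= V_th:
                V[i] = V_reset
                spiked_now[i] = True
                counts[i] += 1
        spiked_last = spiked_now
    return counts

# Neuron 0 fires under constant drive; neuron 1 fires only via the synapse 0 -> 1.
counts = simulate_lif_net(W=[[0.0, 0.0], [1.2, 0.0]], I_ext=[0.15, 0.0])
```

    An event-driven strategy would instead advance each neuron analytically between spike events; the review's benchmarks compare the precision and cost of the two approaches.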

    Stochastic dynamics of a finite-size spiking neural network

    We present a simple Markov model of spiking neural dynamics that can be analytically solved to characterize the stochastic dynamics of a finite-size spiking neural network. We give closed-form estimates for the equilibrium distribution, mean rate, variance and autocorrelation function of the network activity. The model is applicable to any network where the probability of firing of a neuron in the network only depends on the number of neurons that fired in a previous temporal epoch. Networks with statistically homogeneous connectivity and membrane and synaptic time constants that are not excessively long could satisfy these conditions. Our model completely accounts for the size of the network and correlations in the firing activity. It also allows us to examine how the network dynamics can deviate from mean-field theory. We show that the model and solutions are applicable to spiking neural networks in biophysically plausible parameter regimes.
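    The model class can be written down directly: with N statistically identical neurons, the population activity n (the number that fired in the last epoch) is a Markov chain with binomial transitions, each neuron firing independently with probability f(n). The gain function f below is an illustrative assumption; for this linear choice the mean-field fixed point N*f(n*) = n* (here n* = 5) coincides with the equilibrium mean, while the full equilibrium distribution also captures the finite-size fluctuations around it.

```python
import math

def binom_pmf(N, k, p):
    """Probability that exactly k of N independent neurons fire, each with probability p."""
    return math.comb(N, k) * p**k * (1 - p)**(N - k)

def equilibrium(N, f, n_iter=500):
    """Power-iterate the Markov chain with transitions P(n -> k) = Binom(k; N, f(n))."""
    dist = [1.0 / (N + 1)] * (N + 1)
    for _ in range(n_iter):
        new = [0.0] * (N + 1)
        for n, pn in enumerate(dist):
            p = f(n)
            for k in range(N + 1):
                new[k] += pn * binom_pmf(N, k, p)
        dist = new
    return dist

N = 20
f = lambda n: 0.1 + 0.6 * n / N   # firing probability given n spikes in the last epoch
dist = equilibrium(N, f)
mean_activity = sum(k * p for k, p in zip(range(N + 1), dist))
```

    The variance and autocorrelation of the activity can be read off the same chain, which is where the finite-size deviations from mean-field theory appear.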

    The effect of heterogeneity on decorrelation mechanisms in spiking neural networks: a neuromorphic-hardware study

    High-level brain function such as memory, classification or reasoning can be realized by means of recurrent networks of simplified model neurons. Analog neuromorphic hardware constitutes a fast and energy-efficient substrate for the implementation of such neural computing architectures in technical applications and neuroscientific research. The functional performance of neural networks is often critically dependent on the level of correlations in the neural activity. In finite networks, correlations are typically inevitable due to shared presynaptic input. Recent theoretical studies have shown that inhibitory feedback, abundant in biological neural networks, can actively suppress these shared-input correlations and thereby enable neurons to fire nearly independently. For networks of spiking neurons, the decorrelating effect of inhibitory feedback has so far been explicitly demonstrated only for homogeneous networks of neurons with linear sub-threshold dynamics. Theory, however, suggests that the effect is a general phenomenon, present in any system with sufficient inhibitory feedback, irrespective of the details of the network structure or the neuronal and synaptic properties. Here, we investigate the effect of network heterogeneity on correlations in sparse, random networks of inhibitory neurons with non-linear, conductance-based synapses. Emulations of these networks on the analog neuromorphic hardware system Spikey allow us to test the efficiency of decorrelation by inhibitory feedback in the presence of hardware-specific heterogeneities. The configurability of the hardware substrate enables us to modulate the extent of heterogeneity in a systematic manner. We selectively study the effects of shared input and recurrent connections on correlations in membrane potentials and spike trains. Our results confirm ... Comment: 20 pages, 10 figures, supplement