    Biologically realistic mean-field models of conductance-based networks of spiking neurons with adaptation

    Accurate population models are needed to build very large scale neural models, but their derivation is difficult for realistic networks of neurons, in particular when nonlinear properties are involved, such as conductance-based interactions and spike-frequency adaptation. Here, we consider such models based on networks of Adaptive Exponential Integrate-and-Fire excitatory and inhibitory neurons. Using a Master Equation formalism, we derive a mean-field model of such networks and compare it to the full network dynamics. The mean-field model correctly predicts the average spontaneous activity levels in asynchronous irregular regimes similar to in vivo activity. It also captures the transient temporal response of the network to complex external inputs. Finally, the mean-field model quantitatively describes regimes where high and low activity states alternate (UP-DOWN state dynamics), leading to slow oscillations. We conclude that such mean-field models are "biologically realistic" in the sense that they can capture both spontaneous and evoked activity, and they naturally appear as candidates to build very large scale models involving multiple brain areas.
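    The mean-field idea in the abstract above can be illustrated with a minimal sketch. This is not the paper's Master Equation derivation: it is a generic two-population rate model with spike-frequency adaptation on the excitatory population, and every parameter value and the threshold-linear transfer function below are invented for illustration.

    ```python
    import numpy as np

    # Minimal sketch (NOT the paper's derivation): first-order mean-field
    # equations for excitatory (E) and inhibitory (I) rates, with a slow
    # adaptation variable w acting on the E population. All values illustrative.

    def transfer(inp):
        """Illustrative threshold-linear transfer function (Hz)."""
        return np.maximum(inp, 0.0)

    def simulate(T=2.0, dt=1e-3, ext=4.0):
        tau, tau_w = 0.02, 0.5                        # rate / adaptation time constants (s)
        w_ee, w_ei, w_ie, w_ii = 1.2, 1.6, 1.0, 0.8   # illustrative coupling strengths
        b = 0.5                                       # adaptation increment per unit E rate
        re, ri, w = 1.0, 1.0, 0.0                     # initial E rate, I rate, adaptation
        for _ in range(int(T / dt)):
            # Euler steps: rates relax toward the transfer of their net input;
            # adaptation tracks the E rate on a slower timescale and feeds back
            # subtractively, which is what can produce UP-DOWN alternation in
            # richer versions of such models.
            re += dt / tau * (-re + transfer(ext + w_ee * re - w_ei * ri - w))
            ri += dt / tau * (-ri + transfer(ext + w_ie * re - w_ii * ri))
            w  += dt / tau_w * (-w + b * re)
        return re, ri, w
    ```

    With these illustrative parameters the system settles to a low, stable spontaneous rate; the point of the sketch is only the structure (fast rate equations plus slow adaptation), not the quantitative predictions of the paper's model.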

    A State Space Approach for Piecewise-Linear Recurrent Neural Networks for Reconstructing Nonlinear Dynamics from Neural Measurements

    The computational properties of neural systems are often thought to be implemented in terms of their network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit (MSU) recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a state space representation of the dynamics, but would also wish to have access to its governing equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework that has been extensively studied from both the computational and the dynamical systems perspectives. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, the approach is applied to MSU recordings from the rodent anterior cingulate cortex obtained during performance of a classical working memory task, delayed alternation. A model with five states turned out to be sufficient to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multistable, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may make it possible to recover the relevant dynamics underlying observed neuronal time series and directly link them to computational properties.
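    The generative side of such a PLRNN state space model can be sketched in a few lines. The EM/Laplace inference scheme is the paper's contribution and is not reproduced here; all dimensions and parameter values below are invented for illustration.

    ```python
    import numpy as np

    # Sketch of a PLRNN state space model as a GENERATIVE process only:
    # a piecewise-linear latent update with Gaussian state noise, and a
    # linear-Gaussian observation model. All parameters are invented.

    rng = np.random.default_rng(0)
    M, N, T = 5, 10, 200                 # latent dim, observed dim, time steps

    A = 0.9 * np.eye(M)                  # diagonal (linear) self-connections
    W = 0.1 * rng.standard_normal((M, M))
    np.fill_diagonal(W, 0.0)             # off-diagonal piecewise-linear coupling
    h = 0.1 * rng.standard_normal(M)     # constant input / bias
    B = rng.standard_normal((N, M))      # observation (loading) matrix

    z = np.zeros(M)
    X = np.empty((T, N))
    for t in range(T):
        # latent update: linear part + ReLU-coupled part + state noise
        z = A @ z + W @ np.maximum(z, 0.0) + h + 0.01 * rng.standard_normal(M)
        # noisy linear readout, e.g. standing in for observed spike counts
        X[t] = B @ z + 0.1 * rng.standard_normal(N)
    ```

    Inference in the paper runs in the opposite direction: given only `X`, the EM algorithm with a global Laplace approximation estimates the latent states and the parameters `A`, `W`, `h`, `B` jointly.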

    Nonsmooth Bifurcations of Mean Field Systems of Two-Dimensional Integrate and Fire Neurons

    First published in SIAM Journal on Applied Dynamical Systems 15(1), 2016, by the Society for Industrial and Applied Mathematics (SIAM). Copyright © by SIAM. Unauthorized reproduction of this article is prohibited. Mean field systems have recently been derived that adequately predict the behaviors of large networks of coupled integrate-and-fire neurons [W. Nicola and S.A. Campbell, J. Comput. Neurosci., 35 (2013), pp. 87-108]. The mean field system for a network of neurons with spike frequency adaptation is typically a pair of differential equations for the mean adaptation and mean synaptic gating variable of the network. These differential equations are nonsmooth and, in particular, are piecewise smooth continuous (PWSC). Here, we analyze the smooth and nonsmooth bifurcation structure of these equations and show that the system is organized around a pair of codimension-two bifurcations that involve, respectively, the collision of a Hopf equilibrium point with a switching manifold and the collision of a saddle-node equilibrium point with a switching manifold. These two codimension-two bifurcations can coalesce into a codimension-three nonsmooth bifurcation. As the mean field system we study is a nongeneric piecewise smooth continuous system, we discuss possible regularizations of this system and how the bifurcations that occur are related to nonsmooth bifurcations displayed by generic PWSC systems.
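    The notion of a piecewise smooth continuous (PWSC) system can be illustrated with a toy planar example (not the paper's mean-field equations): the vector field is continuous across the switching manifold {x = 0}, but its Jacobian jumps there, which is exactly the setting in which nonsmooth bifurcations arise.

    ```python
    import numpy as np

    # Toy planar PWSC system, invented for illustration. On the switching
    # manifold x = 0 both branches give dx/dt = -y, so the vector field is
    # continuous, but d(dx/dt)/dx jumps from mu to -1 across the manifold.

    def f(state, mu=0.5):
        x, y = state
        if x >= 0:
            dx = mu * x - y      # unstable focus-like branch for mu > 0
        else:
            dx = -x - y          # contracting branch
        dy = x
        return np.array([dx, dy])

    def integrate(state, T=10.0, dt=1e-3, mu=0.5):
        """Forward-Euler trajectory of the PWSC system."""
        traj = [np.array(state, dtype=float)]
        for _ in range(int(T / dt)):
            traj.append(traj[-1] + dt * f(traj[-1], mu))
        return np.array(traj)
    ```

    Varying `mu` moves the linearization on one side of the manifold and is a crude analogue of how, in generic PWSC systems, equilibria can collide with the switching manifold and produce the nonsmooth bifurcations the paper classifies.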

    Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size

    Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale, starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50 to 2000 neurons of the same type, but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics, such as refractoriness and adaptation, interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly simulate a model of a local cortical microcircuit consisting of eight neuron types. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations.
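    The finite-size fluctuations that the mesoscopic theory captures can be illustrated with a much simpler sketch, assuming independent Poisson-like neurons with no refractoriness or adaptation (unlike the paper's theory, which handles spike history): the empirical population activity fluctuates around the underlying rate with noise that shrinks as 1/sqrt(N).

    ```python
    import numpy as np

    # Finite-size population activity, simplest possible version: each of N
    # independent neurons fires in a bin of width dt with probability
    # p = 1 - exp(-rate * dt). The empirical activity (spikes per neuron per
    # second) then fluctuates around the rate with O(1/sqrt(N)) noise.

    rng = np.random.default_rng(1)

    def population_activity(rate_hz, N, dt=1e-3, steps=10_000):
        p = 1.0 - np.exp(-rate_hz * dt)            # per-neuron spike probability
        counts = rng.binomial(N, p, size=steps)    # spikes per bin across the pool
        return counts / (N * dt)                   # empirical activity in Hz

    # population sizes chosen from the 50--2000 range discussed in the abstract
    A_small = population_activity(20.0, N=50)
    A_large = population_activity(20.0, N=2000)
    ```

    Both traces average to roughly 20 Hz, but the N = 50 population is far noisier than the N = 2000 one; the paper's mesoscopic equations track exactly this kind of finite-size noise while also accounting for refractoriness, adaptation, and coupling between populations.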

    Bifurcation Analysis of Large Networks of Neurons

    The human brain contains on the order of a hundred billion neurons, each with several thousand synaptic connections. Computational neuroscience has successfully modeled both the individual neurons, as various types of oscillators, and the synaptic coupling between them. However, employing the individual neuronal models as a large coupled network on the scale of the human brain would require massive computational and financial resources, yet this is the current undertaking of several research groups. Even if one were to successfully model such a complicated system of coupled differential equations, little insight beyond brute-force numerical simulation may be gained into how the human brain solves problems or performs tasks. Here, we introduce a tool that reduces large networks of coupled neurons to a much smaller set of differential equations governing key statistics for the network as a whole, as opposed to tracking the individual dynamics of neurons and their connections. This approach is typically referred to as a mean-field system. As the mean-field system is derived from the original network of neurons, it is predictive of the behavior of the network as a whole, and the parameters or distributions of parameters that appear in the mean-field system are identical to those of the original network. As such, bifurcation analysis of the mean-field system is predictive of the behavior of the original network and indicates where in the parameter space the network transitions from one behavior to another. Additionally, we show how networks of neurons can be constructed with a prescribed mean-field or macroscopic behavior, through an analytic extension of the Neural Engineering Framework (NEF). This can be thought of as an inverse mean-field approach, where networks are constructed to obey prescribed dynamics rather than deriving the macroscopic dynamics from an underlying network. Thus, the work done here analyzes neuronal networks through both top-down and bottom-up approaches.
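    The NEF construction mentioned above can be sketched in its simplest, static form: choose linear decoders by regularized least squares so that a population of tuning curves approximates a prescribed function of the represented variable. The rectified-linear tuning curves and all parameter values below are invented for illustration; the thesis's analytic extension goes well beyond this.

    ```python
    import numpy as np

    # NEF-style decoder solving, minimal static sketch. A population of
    # rectified-linear "neurons" with random gains, biases, and +/-1 encoders
    # represents a scalar x in [-1, 1]; regularized least squares finds
    # decoders d so that A @ d approximates a prescribed target function.

    rng = np.random.default_rng(2)
    n_neurons = 200
    x = np.linspace(-1, 1, 101)                  # samples of the represented variable

    gains = rng.uniform(0.5, 2.0, n_neurons)
    biases = rng.uniform(-1.0, 1.0, n_neurons)
    encoders = rng.choice([-1.0, 1.0], n_neurons)

    # tuning curves, shape (samples, neurons)
    A = np.maximum(gains * np.outer(x, encoders) + biases, 0.0)

    target = 0.5 * x                             # prescribed function to implement
    reg = 0.01 * n_neurons                       # Tikhonov regularization
    d = np.linalg.solve(A.T @ A + reg * np.eye(n_neurons), A.T @ target)
    approx = A @ d                               # population readout of 0.5 * x
    ```

    Feeding such decoded readouts back as recurrent input is what lets NEF-built networks realize prescribed dynamics, i.e. the "inverse mean-field" direction described in the abstract.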