14 research outputs found

    Self-organized Criticality in Neural Networks by Inhibitory and Excitatory Synaptic Plasticity

    Neural networks show intrinsic ongoing activity even in the absence of information processing and task-driven activity. This spontaneous activity has been reported to have specific characteristics, ranging from scale-free avalanches in microcircuits to the power-law decay of the power spectrum of oscillations in coarse-grained recordings of large populations of neurons. The emergence of scale-free activity and power-law distributions of observables has encouraged researchers to postulate that the neural system operates near a continuous phase transition. At such a phase transition, changes in the control parameters or in the strength of the external input lead to a change in the macroscopic behavior of the system. Moreover, at a critical point, critical slowing down makes a phenomenological mesoscopic description of the system feasible. Two distinct types of phase transitions have been suggested as the operating point of the neural system, namely active-inactive and synchronous-asynchronous phase transitions. In contrast to ordinary phase transitions, in which fine-tuning of the control parameter(s) is required to bring the system to the critical point, neural systems must be supplemented with self-tuning mechanisms that adaptively keep the system near the critical point (or critical region) in phase space. In this work, we introduce a self-organized critical model of the neural network. We consider the dynamics of sparsely connected excitatory and inhibitory (EI) populations of spiking leaky integrate-and-fire neurons with conductance-based synapses. Ignoring inhomogeneities and internal fluctuations, we first analyze the mean-field model. We choose the strength of the external excitatory input and the average strength of excitatory-to-excitatory synapses as control parameters of the model and analyze the bifurcation diagram of the mean-field equations.
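As a loose illustration of such a mean-field analysis (not the paper's conductance-based model), a minimal Wilson-Cowan-style rate model with logistic gain functions can be integrated to locate a low-firing-rate fixed point; all parameter values below are hypothetical:

```python
import numpy as np

def f(x, theta=1.0, beta=0.2):
    """Logistic gain function (hypothetical threshold and slope)."""
    return 1.0 / (1.0 + np.exp(-(x - theta) / beta))

# Hypothetical coupling strengths and external drives
w_EE, w_EI, w_IE, w_II = 1.5, 1.0, 1.0, 0.5
h_E, h_I = 0.0, 0.0
tau, dt = 1.0, 0.01

E, I = 0.2, 0.2  # initial population rates
for _ in range(5000):
    dE = (-E + f(w_EE * E - w_EI * I + h_E)) / tau
    dI = (-I + f(w_IE * E - w_II * I + h_I)) / tau
    E, I = E + dt * dE, I + dt * dI
# E and I settle near a quiescent (low-rate) fixed point
```

Sweeping h_E or w_EE in such a sketch is how one would trace out the bifurcation diagram numerically.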
We focus on bifurcations in the low-firing-rate regime, in which the quiescent state loses stability through saddle-node or Hopf bifurcations. In particular, at the Bogdanov-Takens (BT) bifurcation point, the intersection of the Hopf and saddle-node bifurcation lines of the 2D dynamical system, the network shows avalanche dynamics with power-law avalanche size and duration distributions. This matches the characteristics of low-firing-rate spontaneous activity in the cortex. By linearizing the gain functions and the excitatory and inhibitory nullclines, we can approximate the location of the BT bifurcation point. In the control-parameter space, this point corresponds to an internal balance of excitation and inhibition and a slight excess of external excitatory input to the excitatory population. Because of the tight balance of the average excitatory and inhibitory currents, the firing of individual cells is fluctuation-driven. Around the BT point, the spiking of neurons is approximately a Poisson process, and the population-average membrane potential lies near the middle of the operating interval [V_Rest, V_th]. Moreover, the EI network is close to both the oscillatory and the active-inactive phase transition regimes. Next, we consider self-tuning of the system to this critical point. The self-organizing parameter in our network is the balance between the opposing forces of the inhibitory and excitatory populations' activities, and the self-organizing mechanisms are long-term synaptic plasticity and short-term depression of the synapses. The former tunes the overall strength of the excitatory and inhibitory pathways close to the balanced regime of these currents; the latter, which rests on the finite amount of synaptic resources in brain areas, acts as an adaptive mechanism that tunes micro-populations of neurons subject to fluctuating external inputs so that balance is attained over a wider range of external input strengths.
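The power-law avalanche statistics described here can be illustrated in a much-simplified setting (not the EI spiking network itself) by a critical branching process, which shares the mean-field directed-percolation exponents:

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(m=1.0, cap=100_000):
    """Total activations in one avalanche of a Galton-Watson process
    with Poisson(m) offspring; m = 1 is the critical branching ratio."""
    active, size = 1, 1
    while active > 0 and size < cap:
        # total offspring of the current generation is Poisson(m * active)
        active = int(rng.poisson(m * active))
        size += active
    return size

sizes = [avalanche_size() for _ in range(2000)]
# at criticality the size distribution is scale-free, P(S) ~ S^(-3/2)
```

Subcritical (m < 1) branching gives only small avalanches; supercritical (m > 1) gives runaway events, so only m = 1 yields scale-free statistics.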
Using the Poisson firing assumption, we propose a microscopic Markovian model that captures the internal fluctuations in the network due to its finite size and that matches the macroscopic mean-field equations under coarse-graining. Near the critical point, a phenomenological mesoscopic model for the excitatory and inhibitory fields of activity is possible thanks to the separation of time scales between slowly changing variables and fast degrees of freedom. We show that the mesoscopic model corresponding to the neural field model near the local Bogdanov-Takens bifurcation point matches the Langevin description of the directed percolation process. Tuning the system to the critical point can be achieved by coupling the fast population dynamics with slow adaptive gain and synaptic weight dynamics, which makes the system wander around the phase transition point. Therefore, by introducing short-term and long-term synaptic plasticity, we have proposed a self-organized critical stochastic neural field model.
Contents:
1. Introduction
 1.1. Scale-free Spontaneous Activity
  1.1.1. Nested Oscillations in the Macro-scale Collective Activity
  1.1.2. Up and Down States Transitions
  1.1.3. Avalanches in Local Neuronal Populations
 1.2. Criticality and Self-organized Criticality in Systems out of Equilibrium
  1.2.1. Sandpile Models
  1.2.2. Directed Percolation
 1.3. Critical Neural Models
  1.3.1. Self-Organizing Neural Automata
  1.3.2. Criticality in the Mesoscopic Models of Cortical Activity
 1.4. Balance of Inhibition and Excitation
 1.5. Functional Benefits of Being in the Critical State
 1.6. Arguments Against the Critical State of the Brain
 1.7. Organization of the Current Work
2. Single Neuron Model
 2.1. Impulse Response of the Neuron
 2.2. Response of the Neuron to the Constant Input
 2.3. Response of the Neuron to the Poisson Input
  2.3.1. Potential Distribution of a Neuron Receiving Poisson Input
  2.3.2. Firing Rate and Interspike Intervals' CV Near the Threshold
  2.3.3. Linear Poisson Neuron Approximation
3. Interconnected Homogeneous Population of Excitatory and Inhibitory Neurons
 3.1. Linearized Nullclines and Different Dynamic Regimes
 3.2. Logistic Function Approximation of Gain Functions
 3.3. Dynamics Near the BT Bifurcation Point
 3.4. Avalanches in the Region Close to the BT Point
 3.5. Stability Analysis of the Fixed Points in the Linear Regime
 3.6. Characteristics of Avalanches
4. Long-Term and Short-Term Synaptic Plasticity Rules Tune the EI Population Close to the BT Bifurcation Point
 4.1. Long-Term Synaptic Plasticity by STDP Tunes Synaptic Weights Close to the Balanced State
 4.2. Short-Term Plasticity and Up-Down States Transition
5. Interconnected Network of EI Populations: Wilson-Cowan Neural Field Model
6. Stochastic Neural Field
 6.1. Finite Size Fluctuations in a Single EI Population
 6.2. Stochastic Neural Field with a Tuning Mechanism to the Critical State
7. Conclusion
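The directed-percolation Langevin description invoked in this abstract can be sketched in zero dimensions as drho/dt = a*rho - b*rho**2 + sqrt(gamma*rho)*xi(t), integrated by Euler-Maruyama; the coefficients below are illustrative, not fitted to the model:

```python
import numpy as np

rng = np.random.default_rng(1)

def dp_langevin(a, b=1.0, gamma=0.05, rho0=0.5, dt=1e-3, steps=200_000):
    """Euler-Maruyama integration of the 0-d DP Langevin equation;
    rho = 0 is an absorbing state (multiplicative sqrt(rho) noise)."""
    rho = rho0
    for _ in range(steps):
        noise = np.sqrt(gamma * max(rho, 0.0) * dt) * rng.standard_normal()
        rho = max(rho + dt * (a * rho - b * rho**2) + noise, 0.0)
        if rho == 0.0:
            break  # absorbed: no spontaneous reactivation in DP
    return rho

rho_sub = dp_langevin(a=-0.5)  # subcritical: decays into the absorbing state
rho_sup = dp_langevin(a=0.5)   # supercritical: fluctuates near rho* = a/b
```

The sign of the linear coefficient a plays the role of the distance to the active-inactive transition.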

    25th Annual Computational Neuroscience Meeting: CNS-2016

    Abstracts of the 25th Annual Computational Neuroscience Meeting: CNS-2016, Seogwipo City, Jeju-do, South Korea, 2–7 July 2016.

    Topology Reconstruction of Dynamical Networks via Constrained Lyapunov Equations

    The network structure (or topology) of a dynamical network is often unavailable or uncertain. Hence, we consider the problem of network reconstruction, which aims at inferring the topology of a dynamical network from measurements obtained from the network. In this technical note, we define the notion of solvability of the network reconstruction problem. Subsequently, we provide necessary and sufficient conditions under which the network reconstruction problem is solvable. Finally, using constrained Lyapunov equations, we establish novel network reconstruction algorithms applicable to general dynamical networks. We also provide specialized algorithms for specific network dynamics, such as the well-known consensus and adjacency dynamics. Comment: 8 pages.
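As a toy illustration of the idea (not the note's actual algorithm), the Lyapunov equation AP + PA^T = -Q is linear in either P or A, so a symmetric consensus-type state matrix can be recovered from the Gramian-like solution P by vectorizing with Kronecker products; the network and parameters below are hypothetical:

```python
import numpy as np

n = 4
# Laplacian of a path graph on 4 nodes (hypothetical example network)
Lap = np.array([[ 1., -1.,  0.,  0.],
                [-1.,  2., -1.,  0.],
                [ 0., -1.,  2., -1.],
                [ 0.,  0., -1.,  1.]])
A = -Lap - 0.5 * np.eye(n)  # leaky consensus x' = A x (Hurwitz, symmetric)
Q = np.eye(n)
I = np.eye(n)

# Forward problem: solve A P + P A^T = -Q for P (vectorized Lyapunov equation)
P = np.linalg.solve(np.kron(A, I) + np.kron(I, A), -Q.ravel()).reshape(n, n)

# Inverse problem: treat A as the unknown in A P + P A = -Q (A assumed symmetric)
M = np.kron(I, P) + np.kron(P, I)
A_rec = np.linalg.solve(M, -Q.ravel()).reshape(n, n)
```

Since P is positive definite here, M has eigenvalues p_i + p_j > 0 and the symmetric reconstruction is unique.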

    MS FT-2-2 7 Orthogonal polynomials and quadrature: Theory, computation, and applications

    Quadrature rules find many applications in science and engineering. Their analysis is a classical area of applied mathematics and continues to attract considerable attention. This seminar brings together speakers with expertise in a wide variety of quadrature rules. The aim of the seminar is to provide an overview of recent developments in the analysis of quadrature rules. The computation of error estimates and novel applications are also described.
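For instance, a classical n-point Gauss-Legendre rule integrates polynomials of degree up to 2n-1 exactly, and comparing two rules of different order gives a simple a posteriori error estimate; a minimal numerical check:

```python
import numpy as np

# 5-point Gauss-Legendre rule on [-1, 1]; exact for polynomials of degree <= 9
nodes, weights = np.polynomial.legendre.leggauss(5)

def quad(f):
    """Approximate the integral of f over [-1, 1] with the 5-point rule."""
    return float(np.dot(weights, f(nodes)))

approx = quad(lambda x: x**8)  # exact value is 2/9

# crude error estimate: compare against a finer (10-point) rule
n10, w10 = np.polynomial.legendre.leggauss(10)
est = abs(approx - float(np.dot(w10, n10**8)))
```

Both rules integrate the degree-8 integrand exactly, so the estimated error is at the level of rounding.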