
    Detecting and Estimating Signals in Noisy Cable Structures, II: Information Theoretical Analysis

    This is the second in a series of articles that seek to recast classical single-neuron biophysics in information-theoretical terms. Classical cable theory focuses on analyzing the voltage or current attenuation of a synaptic signal as it propagates from its dendritic input location to the spike initiation zone. We are instead interested in analyzing how much information about the signal is lost in this process due to the various noise sources distributed throughout the neuronal membrane. We use a stochastic version of the linear one-dimensional cable equation to derive closed-form expressions for the second-order moments of the fluctuations of the membrane potential associated with different membrane current noise sources: thermal noise, noise due to the random opening and closing of sodium and potassium channels, and noise due to the presence of “spontaneous” synaptic input. We consider two different scenarios. In the signal estimation paradigm, the time course of the membrane potential at a location on the cable is used to reconstruct the detailed time course of a random, band-limited current injected some distance away. Estimation performance is characterized in terms of the coding fraction and the mutual information. In the signal detection paradigm, the membrane potential is used to determine whether a distant synaptic event occurred within a given observation interval. In the light of our analytical results, we speculate that the length of weakly active apical dendrites might be limited by the information loss due to the accumulated noise between distal synaptic input sites and the soma, and that the presence of dendritic nonlinearities probably serves to increase dendritic information transfer.
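The coding fraction mentioned in this abstract can be illustrated with a toy estimation problem. The sketch below is a simplifying assumption on our part, not the paper's actual reconstruction: a band-limited signal is corrupted by additive noise, a scalar linear estimator stands in for the full reconstruction filter, and the coding fraction is computed as one minus the normalized root-mean-square reconstruction error, together with the Gaussian-channel lower bound on the mutual information.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# band-limited "injected current": white noise through a crude low-pass filter
s = np.convolve(rng.standard_normal(n), np.ones(20) / 20, mode="same")
r = s + 0.1 * rng.standard_normal(n)  # measurement corrupted by membrane noise

# optimal scalar linear estimate of s given r
a = np.cov(s, r)[0, 1] / np.var(r)
s_hat = a * r

# coding fraction: 1 minus the normalized RMS reconstruction error
rms_err = np.sqrt(np.mean((s - s_hat) ** 2))
coding_fraction = 1.0 - rms_err / np.std(s)

# Gaussian-channel lower bound on the mutual information (bits per sample)
mse = np.mean((s - s_hat) ** 2)
info_lb = -0.5 * np.log2(mse / np.var(s))
```

With more measurement noise the coding fraction drops toward zero and the information bound shrinks accordingly, which is the quantity the paper tracks as a function of distance along the cable.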

    Self-organized Criticality in Neural Networks by Inhibitory and Excitatory Synaptic Plasticity

    Neural networks show intrinsic ongoing activity even in the absence of information processing and task-driven activities. This spontaneous activity has been reported to have specific characteristics, ranging from scale-free avalanches in microcircuits to the power-law decay of the power spectrum of oscillations in coarse-grained recordings of large populations of neurons. The emergence of scale-free activity and power-law distributions of observables has encouraged researchers to postulate that the neural system is operating near a continuous phase transition. At such a phase transition, changes in control parameters or in the strength of the external input lead to a change in the macroscopic behavior of the system. Moreover, at a critical point, critical slowing down makes a phenomenological mesoscopic description of the system feasible. Two distinct types of phase transitions have been suggested as the operating point of the neural system, namely active-inactive and synchronous-asynchronous phase transitions. In contrast to normal phase transitions, in which fine-tuning of the control parameter(s) is required to bring the system to the critical point, neural systems should be supplemented with self-tuning mechanisms that adaptively keep the system near the critical point (or critical region) in phase space. In this work, we introduce a self-organized critical model of the neural network. We consider the dynamics of excitatory and inhibitory (EI) sparsely connected populations of spiking leaky integrate-and-fire neurons with conductance-based synapses. Ignoring inhomogeneities and internal fluctuations, we first analyze the mean-field model. We choose the strength of the external excitatory input and the average strength of excitatory-to-excitatory synapses as control parameters of the model and analyze the bifurcation diagram of the mean-field equations.
We focus on bifurcations in the low firing rate regime, in which the quiescent state loses stability through saddle-node or Hopf bifurcations. In particular, at the Bogdanov-Takens (BT) bifurcation point, which is the intersection of the Hopf and saddle-node bifurcation lines of the 2D dynamical system, the network shows avalanche dynamics with power-law avalanche size and duration distributions. This matches the characteristics of low-firing spontaneous activity in the cortex. By linearizing the gain functions and the excitatory and inhibitory nullclines, we can approximate the location of the BT bifurcation point. This point in the control parameter space corresponds to an internal balance of excitation and inhibition and a slight excess of external excitatory input to the excitatory population. Due to the tight balance of average excitation and inhibition currents, the firing of individual cells is fluctuation-driven. Around the BT point, the spiking of neurons is a Poisson process and the population-averaged membrane potential of neurons lies approximately at the middle of the operating interval [V_Rest, V_th]. Moreover, the EI network is close to both the oscillatory and the active-inactive phase transition regimes. Next, we consider self-tuning of the system at this critical point. The self-organizing parameter in our network is the balance of the opposing forces of the inhibitory and excitatory populations' activities, and the self-organizing mechanisms are long-term synaptic plasticity and short-term depression of the synapses. The former tunes the overall strength of the excitatory and inhibitory pathways to be close to a balanced regime of these currents; the latter, which is based on the finite amount of resources in brain areas, acts as an adaptive mechanism that tunes micro-populations of neurons subjected to fluctuating external inputs, attaining the balance over a wider range of external input strengths.
Using the Poisson firing assumption, we propose a microscopic Markovian model which captures the internal fluctuations in the network due to the finite size and which matches the macroscopic mean-field equation under coarse-graining. Near the critical point, a phenomenological mesoscopic model for the excitatory and inhibitory fields of activity is possible due to the time scale separation between slowly changing variables and fast degrees of freedom. We show that the mesoscopic model corresponding to the neural field model near the local Bogdanov-Takens bifurcation point matches the Langevin description of the directed percolation process. Tuning the system to the critical point can be achieved by coupling fast population dynamics with slow adaptive gain and synaptic weight dynamics, which make the system wander around the phase transition point. Therefore, by introducing short-term and long-term synaptic plasticity, we have proposed a self-organized critical stochastic neural field model.

Contents:
1. Introduction
  1.1. Scale-free Spontaneous Activity
    1.1.1. Nested Oscillations in the Macro-scale Collective Activity
    1.1.2. Up and Down States Transitions
    1.1.3. Avalanches in Local Neuronal Populations
  1.2. Criticality and Self-organized Criticality in Systems out of Equilibrium
    1.2.1. Sandpile Models
    1.2.2. Directed Percolation
  1.3. Critical Neural Models
    1.3.1. Self-Organizing Neural Automata
    1.3.2. Criticality in the Mesoscopic Models of Cortical Activity
  1.4. Balance of Inhibition and Excitation
  1.5. Functional Benefits of Being in the Critical State
  1.6. Arguments Against the Critical State of the Brain
  1.7. Organization of the Current Work
2. Single Neuron Model
  2.1. Impulse Response of the Neuron
  2.2. Response of the Neuron to the Constant Input
  2.3. Response of the Neuron to the Poisson Input
    2.3.1. Potential Distribution of a Neuron Receiving Poisson Input
    2.3.2. Firing Rate and Interspike Intervals' CV Near the Threshold
    2.3.3. Linear Poisson Neuron Approximation
3. Interconnected Homogeneous Population of Excitatory and Inhibitory Neurons
  3.1. Linearized Nullclines and Different Dynamic Regimes
  3.2. Logistic Function Approximation of Gain Functions
  3.3. Dynamics Near the BT Bifurcation Point
  3.4. Avalanches in the Region Close to the BT Point
  3.5. Stability Analysis of the Fixed Points in the Linear Regime
  3.6. Characteristics of Avalanches
4. Long-Term and Short-Term Synaptic Plasticity Rules Tune the EI Population Close to the BT Bifurcation Point
  4.1. Long-Term Synaptic Plasticity by STDP Tunes Synaptic Weights Close to the Balanced State
  4.2. Short-Term Plasticity and Up-Down States Transition
5. Interconnected Network of EI Populations: Wilson-Cowan Neural Field Model
6. Stochastic Neural Field
  6.1. Finite-size Fluctuations in a Single EI Population
  6.2. Stochastic Neural Field with a Tuning Mechanism to the Critical State
7. Conclusion
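The kind of EI mean-field model this abstract analyzes can be sketched numerically: two rate equations with logistic gain functions, integrated to a low-activity fixed point. All parameter values below are illustrative assumptions on our part, not values from the thesis, and this sketch omits the noise, plasticity, and bifurcation analysis that are the work's actual contributions.

```python
import numpy as np

def f(x, gain=4.0, thresh=1.0):
    """Logistic gain function, as in the abstract's approximation."""
    return 1.0 / (1.0 + np.exp(-gain * (x - thresh)))

# illustrative parameters (assumed, not from the thesis)
w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 10.0, 8.0   # synaptic coupling strengths
h_e, h_i = 0.8, 0.2        # external drives; h_e is a control parameter
tau_e, tau_i = 10.0, 20.0  # membrane time constants (ms)
dt, steps = 0.1, 20_000    # Euler step (ms) and number of steps

E, I = 0.05, 0.05          # initial excitatory / inhibitory activities
for _ in range(steps):
    dE = (-E + f(w_ee * E - w_ei * I + h_e)) / tau_e
    dI = (-I + f(w_ie * E - w_ii * I + h_i)) / tau_i
    E += dt * dE
    I += dt * dI
```

Sweeping h_e and w_ee in a model of this form and tracking where the quiescent state loses stability is how one would trace out the saddle-node and Hopf lines whose intersection is the BT point discussed above.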

    Simulation of networks of spiking neurons: A review of tools and strategies

    We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We overview different simulators and simulation environments presently available (restricted to those freely available, open source and documented). For each simulation tool, its advantages and pitfalls are reviewed, with an aim to allow the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin-Huxley-type and integrate-and-fire models, interacting through current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models is implemented on the different simulators, and the codes are made available. The ultimate goal of this review is to provide a resource to facilitate identifying the appropriate integration strategy and simulation tool to use for a given modeling problem related to spiking neural networks. (Comment: 49 pages, 24 figures, 1 table; review article, Journal of Computational Neuroscience, in press, 2007.)
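The clock-driven strategy the review compares against event-driven integration can be conveyed in a few lines: the state of every neuron is advanced on a fixed time grid and thresholds are checked once per step. This is a minimal current-based integrate-and-fire sketch with assumed parameters, not one of the paper's benchmark implementations.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt, T = 100, 0.1, 200.0            # neurons, time step (ms), duration (ms)
tau, v_th, v_reset = 20.0, 1.0, 0.0   # membrane time constant and thresholds
W = rng.normal(0.0, 0.02, (N, N))     # random current-based synaptic weights
I_ext = 1.2                           # constant suprathreshold drive

v = np.zeros(N)
spike_count = 0
for _ in range(int(T / dt)):
    spiked = v >= v_th                # threshold test, once per grid step
    spike_count += int(spiked.sum())
    v[spiked] = v_reset               # reset the neurons that fired
    # Euler step of the leaky membrane plus instantaneous synaptic input
    v += dt / tau * (-v + I_ext) + W @ spiked.astype(float)

rate = spike_count / N / (T / 1000.0)  # mean firing rate (Hz)
```

Note the trade-off the review analyzes: spike times here are quantized to the grid (accuracy degrades for plasticity rules sensitive to exact timing), whereas an event-driven scheme would advance each neuron analytically between threshold crossings.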

    Information efficacy of a dynamic synapse


    Limits and dynamics of randomly connected neuronal networks

    Networks of the brain are composed of a very large number of neurons connected through a random graph and interacting after random delays, both of which depend on the anatomical distance between cells. In order to understand the role of these random architectures in the dynamics of such networks, we analyze the mesoscopic and macroscopic limits of networks with random correlated connectivity weights and delays. We address both averaged and quenched limits, and show propagation of chaos and convergence to complex integral McKean-Vlasov equations with distributed delays. We then instantiate a completely solvable model illustrating the role of such random architectures in the emerging macroscopic activity. We particularly focus on the role of connectivity levels in the emergence of periodic solutions.
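The flavor of the mean-field (McKean-Vlasov) limit described above can be conveyed with a toy rate model: units coupled through i.i.d. random O(1/N) weights, whose population average concentrates around a deterministic fixed point as the network grows. This is an illustrative sketch under assumed dynamics, not the paper's model, and it omits the delays and correlated weights the paper actually treats.

```python
import numpy as np

rng = np.random.default_rng(2)

def population_mean(N, T=50.0, dt=0.1, drive=0.5):
    """Simulate N all-to-all rate units with random O(1/N) weights."""
    J = rng.normal(1.0, 1.0, (N, N)) / N   # random connectivity weights
    x = rng.standard_normal(N)             # random initial conditions
    for _ in range(int(T / dt)):
        x += dt * (-x + J @ np.tanh(x) + drive)
    return float(x.mean())

# as N grows, the empirical population mean settles near the
# deterministic mean-field fixed point x* solving x = tanh(x) + drive
means = [population_mean(N) for N in (50, 200, 800)]
spread = max(means) - min(means)
```

The shrinking spread across network sizes is a finite-size echo of propagation of chaos: in the limit, each unit behaves as an independent sample of the same self-consistent law.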