Jensen's force and the statistical mechanics of cortical asynchronous states
Cortical networks are shaped by the combined action of excitatory and inhibitory interactions. Among
other important functions, inhibition solves the problem of the all-or-none type of response that comes
about in purely excitatory networks, allowing the network to operate in regimes of moderate or low
activity, between quiescent and saturated regimes. Here, we elucidate a noise-induced effect that we
call "Jensen's force" (stemming from the combined effect of excitation/inhibition balance and network
sparsity) which is responsible for generating a phase of self-sustained low activity in
excitation-inhibition networks. The uncovered phase reproduces the main empirically observed features
of cortical networks in the so-called asynchronous state, characterized by low, uncorrelated, and highly irregular
networks in the so-called asynchronous state, characterized by low, un-correlated and highly-irregular
activity. The parsimonious model analyzed here allows us to resolve a number of long-standing issues,
such as proving that activity can be self-sustained even in the complete absence of external stimuli or
driving. The simplicity of our approach allows for a deep understanding of asynchronous states and of
the phase transitions to other standard phases the model exhibits, opening the door to reconciling the
asynchronous-state and critical-state hypotheses within a unified framework. We argue that Jensen's
forces are measurable experimentally and might be relevant in contexts beyond neuroscience.
The study is supported by Fondazione Cariparma, under the TeachInParma Project. MAM thanks the Spanish
Ministry of Science and the Agencia Española de Investigación (AEI) for financial support under grant
FIS2017-84256-P (European Regional Development Fund, ERDF), as well as the Consejería de Conocimiento,
Investigación y Universidad, Junta de Andalucía, and the European Regional Development Fund (ERDF), ref.
SOMM17/6105/UGR. V.B. and R.B. acknowledge funding from the INFN BIOPHYS project.
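The effect named above rests on Jensen's inequality: for a convex transfer function f, E[f(x)] > f(E[x]), so input fluctuations alone shift the mean response even at fixed mean input. A minimal, self-contained sketch (our illustration with an assumed softplus gain, not the paper's transfer function):

```python
import numpy as np

# Illustrative Jensen gap (assumed softplus gain, not the paper's model):
# with fluctuating input, the mean response of a convex nonlinearity
# exceeds the response to the mean input.

def transfer(x):
    return np.log1p(np.exp(x))  # softplus: strictly convex

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                     # mean input and fluctuation size
x = rng.normal(mu, sigma, size=100_000)

mean_of_f = transfer(x).mean()           # E[f(x)]
f_of_mean = transfer(np.array([mu]))[0]  # f(E[x]) = log(2) here
jensen_gap = mean_of_f - f_of_mean       # > 0 by Jensen's inequality

print(jensen_gap)
```

In sparse excitation-inhibition networks the fluctuation size sigma is set by the balance and the sparsity, which is why the resulting gap acts as an effective "force" on the population activity.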
A Multiple-Plasticity Spiking Neural Network Embedded in a Closed-Loop Control System to Model Cerebellar Pathologies
The cerebellum plays a crucial role in sensorimotor control, and cerebellar disorders compromise adaptation and learning of motor responses. However, the link between alterations at the network level and cerebellar dysfunction is still unclear. In principle, this understanding would benefit from the development of an artificial system embedding the salient neuronal and plastic properties of the cerebellum and operating in closed loop. To this aim, we have exploited a realistic spiking computational model of the cerebellum to analyze the network correlates of cerebellar impairment. The model was modified to reproduce three different types of damage to the cerebellar cortex: (i) a loss of the main output neurons (Purkinje Cells), (ii) a lesion to the main cerebellar afferents (Mossy Fibers), and (iii) damage to a major mechanism of synaptic plasticity (Long Term Depression). The modified network models were challenged with an Eye-Blink Classical Conditioning test, a standard learning paradigm used to evaluate cerebellar impairment, in which the outcome was compared to reference results obtained in human or animal experiments. In all cases, the model reproduced the partial and delayed conditioning typical of the pathologies, indicating that intact cerebellar cortical function is required to accelerate learning by transferring acquired information to the cerebellar nuclei. Interestingly, depending on the type of lesion, the redistribution of synaptic plasticity and response timing varied greatly, generating specific adaptation patterns. Thus, the present work not only extends the generalization capabilities of the cerebellar spiking model to pathological cases, but also predicts how changes at the neuronal level are distributed across the network, making it usable to infer cerebellar circuit alterations occurring in cerebellar pathologies.
Integration of continuous-time dynamics in a spiking neural network simulator
Contemporary modeling approaches to the dynamics of neural networks consider
two main classes of models: biologically grounded spiking neurons and
functionally inspired rate-based units. The unified simulation framework
presented here supports the combination of the two for multi-scale modeling
approaches, the quantitative validation of mean-field approaches by spiking
network simulations, and an increase in reliability by usage of the same
simulation code and the same network model specifications for both model
classes. While most efficient spiking simulations rely on the communication of
discrete events, rate models require time-continuous interactions between
neurons. Exploiting the conceptual similarity to the inclusion of gap junctions
in spiking network simulations, we arrive at a reference implementation of
instantaneous and delayed interactions between rate-based models in a spiking
network simulator. The separation of rate dynamics from the general connection
and communication infrastructure ensures flexibility of the framework. We
further demonstrate the broad applicability of the framework by considering
various examples from the literature ranging from random networks to neural
field models. The study provides the prerequisite for interactions between
rate-based and spiking models in a joint simulation
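As a toy illustration of the delayed, time-continuous interactions described above (our sketch, not the reference implementation in the simulator), two rate units can exchange delayed rates through a ring buffer of past values inside a fixed-step integration loop:

```python
import numpy as np

# Toy sketch (not the simulator's reference implementation): two rate units
# with delayed, time-continuous mutual coupling, handled by a ring buffer
# of past rates inside a fixed-step integration loop.

dt = 0.1                       # integration step (ms)
tau = 10.0                     # rate-unit time constant (ms)
delay_steps = int(5.0 / dt)    # 5 ms interaction delay
w = 0.5                        # coupling weight
ext = np.array([1.0, 0.0])     # constant external drive to unit 0

r = np.zeros(2)                      # current rates
buf = np.zeros((delay_steps, 2))     # ring buffer of past rates

for t in range(2000):                       # 200 ms of simulated time
    delayed = buf[t % delay_steps].copy()   # rates from one delay ago
    inp = ext + w * delayed[::-1]           # each unit sees the other's delayed rate
    r += dt / tau * (-r + inp)              # linear rate dynamics
    buf[t % delay_steps] = r                # overwrite the oldest entry

# expected fixed point: r0 = 1/(1 - w**2), r1 = w/(1 - w**2)
print(r)
```

Keeping this buffering and communication machinery separate from the rate dynamics themselves mirrors the separation that gives the described framework its flexibility.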
Dynamics of self-sustained asynchronous-irregular activity in random networks of spiking neurons with strong synapses
Random networks of integrate-and-fire neurons with strong current-based synapses can, unlike previously believed, assume stable states of sustained asynchronous and irregular firing, even without external random background or pacemaker neurons. We analyze the mechanisms underlying the emergence, lifetime and irregularity of such self-sustained activity states. We first demonstrate how the competition between the mean and the variance of the synaptic input leads to a non-monotonic firing-rate transfer in the network. Thus, by increasing the synaptic coupling strength, the system can become bistable: In addition to the quiescent state, a second stable fixed-point at moderate firing rates can emerge by a saddle-node bifurcation. Inherently generated fluctuations of the population firing rate around this non-trivial fixed-point can trigger transitions into the quiescent state. Hence, the trade-off between the magnitude of the population-rate fluctuations and the size of the basin of attraction of the non-trivial rate fixed-point determines the onset and the lifetime of self-sustained activity states. During self-sustained activity, individual neuronal activity is moreover highly irregular, switching between long periods of low firing rate and short burst-like states. We show that this is an effect of the strong synaptic weights and the finite time constant of synaptic and neuronal integration, and can actually serve to stabilize the self-sustained state
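The bistability scenario described above can be caricatured in one dimension (a toy transfer function of our choosing, not the paper's spiking network): a rate equation whose gain is flat at the origin has a stable quiescent state coexisting with a nontrivial stable fixed point born in a saddle-node bifurcation:

```python
# Toy 1-D caricature (not the paper's spiking network): rate dynamics
# tau * dr/dt = -r + f(r) with a transfer function that has zero slope at
# the origin. Fixed points: r = 0 (quiescent, stable), plus a saddle-node
# pair at r = (rmax -/+ sqrt(rmax**2 - 4*k**2)) / 2 once rmax > 2*k.

def f(r, rmax=10.0, k=2.0):
    return rmax * r * r / (k * k + r * r)

def relax(r0, tau=1.0, dt=0.01, steps=5000):
    r = r0
    for _ in range(steps):
        r += dt / tau * (-r + f(r))
    return r

low = relax(0.3)   # starts below the unstable fixed point -> quiescence
high = relax(1.0)  # starts above it -> self-sustained moderate rate
print(low, high)
```

In the full network, fluctuations of the population rate around the upper fixed point play the role of perturbations that can kick the state across the unstable point into quiescence, which is what limits the lifetime of the self-sustained state.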
Biophysically grounded mean-field models of neural populations under electrical stimulation
Electrical stimulation of neural systems is a key tool for understanding
neural dynamics and ultimately for developing clinical treatments. Many
applications of electrical stimulation affect large populations of neurons.
However, computational models of large networks of spiking neurons are
inherently hard to simulate and analyze. We evaluate a reduced mean-field model
of excitatory and inhibitory adaptive exponential integrate-and-fire (AdEx)
neurons which can be used to efficiently study the effects of electrical
stimulation on large neural populations. The rich dynamical properties of this
basic cortical model are described in detail and validated using large network
simulations. Bifurcation diagrams reflecting the network's state reveal
asynchronous up- and down-states, bistable regimes, and oscillatory regions
corresponding to fast excitation-inhibition and slow excitation-adaptation
feedback loops. The biophysical parameters of the AdEx neuron can be coupled to
an electric field with realistic field strengths which then can be propagated
up to the population description. We show how, on the edge of bifurcation, direct
electrical inputs cause network state transitions, such as turning on and off
oscillations of the population rate. Oscillatory input can frequency-entrain
and phase-lock endogenous oscillations. Relatively weak electric field
strengths on the order of 1 V/m are able to produce these effects, indicating
that field effects are strongly amplified in the network. The effects of
time-varying external stimulation are well-predicted by the mean-field model,
further underpinning the utility of low-dimensional neural mass models.
Comment: A Python package with an implementation of the AdEx mean-field model
can be found at https://github.com/neurolib-dev/neurolib - code for
simulation and data analysis can be found at
https://github.com/caglarcakan/stimulus_neural_population
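For readers who want to see what an AdEx unit looks like before any mean-field reduction, here is a minimal single-neuron forward-Euler sketch (standard Brette-Gerstner parameter values; the constant 800 pA drive is our choice for illustration, not a value from the paper):

```python
import numpy as np

# Minimal single-neuron AdEx sketch, forward-Euler integrated. Parameters
# are the standard Brette & Gerstner values; the constant input current is
# an assumed value for illustration only.
C, gL, EL = 281.0, 30.0, -70.6      # pF, nS, mV
VT, dT = -50.4, 2.0                 # mV
tau_w, a, b = 144.0, 4.0, 80.5      # ms, nS, pA
V_reset, V_cut = -70.6, -30.0       # mV

dt, T, I_ext = 0.01, 500.0, 800.0   # ms, ms, pA

V, w = EL, 0.0
spikes = []
for step in range(int(T / dt)):
    dV = (-gL * (V - EL) + gL * dT * np.exp((V - VT) / dT) - w + I_ext) / C
    dw = (a * (V - EL) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V >= V_cut:          # spike: reset voltage, increment adaptation
        V = V_reset
        w += b
        spikes.append(step * dt)

print(len(spikes))  # tonic firing; the adaptation current w shapes the intervals
```

The linked neurolib package implements the corresponding mean-field reduction for populations of such units, including the coupling to an external electric field.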
Firing rate homeostasis counteracts changes in stability of recurrent neural networks caused by synapse loss in Alzheimer's disease
The impairment of cognitive function in Alzheimer's is clearly correlated to
synapse loss. However, the mechanisms underlying this correlation are only
poorly understood. Here, we investigate how the loss of excitatory synapses in
sparsely connected random networks of spiking excitatory and inhibitory neurons
alters their dynamical characteristics. Beyond the effects on the network's
activity statistics, we find that the loss of excitatory synapses on excitatory
neurons shifts the network dynamics towards the stable regime. The decreased
sensitivity to small perturbations of time-varying input can be considered
an indication of reduced computational capacity. A full recovery of the
network performance can be achieved by firing rate homeostasis, here
implemented by an up-scaling of the remaining excitatory-excitatory synapses.
By analysing the stability of the linearized network dynamics, we explain how
homeostasis can simultaneously maintain the network's firing rate and
sensitivity to small perturbations
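The up-scaling mechanism described above has a simple mean-field core, sketched here with an assumed weight distribution (our illustration, not the paper's network): deleting a fraction p of synapses scales the total excitatory drive by roughly 1 - p, and dividing the surviving weights by 1 - p restores it:

```python
import numpy as np

# Sketch of firing-rate homeostasis by synaptic up-scaling (illustrative
# weight distribution assumed, not the paper's network): deleting a
# fraction p of excitatory synapses scales the total drive by ~(1 - p);
# rescaling survivors by 1 / (1 - p) restores the original mean drive.

rng = np.random.default_rng(42)
n = 10_000
weights = rng.exponential(scale=0.1, size=n)   # E->E synaptic weights
baseline = weights.sum()

p = 0.3                                        # fraction of synapses lost
survived = rng.random(n) >= p                  # random deletion mask
lesioned = weights * survived
homeostatic = lesioned / (1.0 - p)             # up-scale the survivors

print(lesioned.sum() / baseline)     # ~ 1 - p: reduced drive
print(homeostatic.sum() / baseline)  # ~ 1: mean drive recovered
```

Note that this rescaling restores the mean input but not its variance (fewer, stronger synapses produce larger fluctuations), which is why a stability analysis of the linearized dynamics is needed to show that both the firing rate and the sensitivity can be maintained.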
Criteria on Balance, Stability, and Excitability in Cortical Networks for Constraining Computational Models
During ongoing and Up state activity, cortical circuits manifest a set of dynamical features that are conserved across these states. The present work systematizes these phenomena by three notions: excitability, the ability to sustain activity without external input; balance, precise coordination of excitatory and inhibitory neuronal inputs; and stability, maintenance of activity at a steady level. Slice preparations exhibiting Up states demonstrate that balanced activity can be maintained by small local circuits. While computational models of cortical circuits have included different combinations of excitability, balance, and stability, they have done so without a systematic quantitative comparison with experimental data. Our study provides quantitative criteria for this purpose, by analyzing in-vitro and in-vivo neuronal activity and characterizing the dynamics on the neuronal and population levels. The criteria are defined with a tolerance that allows for differences between experiments, yet are sufficient to capture commonalities between persistently depolarized cortical network states and to help validate computational models of cortex. As test cases for the derived set of criteria, we analyze three widely used models of cortical circuits and find that each model possesses some of the experimentally observed features, but none satisfies all criteria simultaneously, showing that the criteria are able to identify weak spots in computational models. The criteria described here form a starting point for the systematic validation of cortical neuronal network models, which will help improve the reliability of future models, and render them better building blocks for larger models of the brain
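Two of the single-number statistics such criteria typically build on can be computed in a few lines (our illustration with Poisson surrogate trains and an assumed bin size, not the paper's exact criteria): the coefficient of variation (CV) of inter-spike intervals for irregularity, and the pairwise spike-count correlation for asynchrony:

```python
import numpy as np

# Illustrative statistics (surrogate data and bin size assumed, not the
# paper's exact criteria): CV of inter-spike intervals quantifies
# irregularity; pairwise spike-count correlation quantifies (a)synchrony.

rng = np.random.default_rng(1)

def cv_isi(spike_times):
    isi = np.diff(np.sort(spike_times))
    return isi.std() / isi.mean()

def count_corr(t1, t2, t_max, bin_ms=50.0):
    bins = np.arange(0.0, t_max + bin_ms, bin_ms)
    c1, _ = np.histogram(t1, bins)
    c2, _ = np.histogram(t2, bins)
    return np.corrcoef(c1, c2)[0, 1]

# Poisson surrogate train: maximally irregular (CV ~ 1) ...
train = np.cumsum(rng.exponential(scale=50.0, size=2000))  # spike times, ms
cv = cv_isi(train)

# ... and two independent trains: asynchronous (count correlation ~ 0)
t_max = 100_000.0
t_a = np.cumsum(rng.exponential(50.0, size=2500))
t_b = np.cumsum(rng.exponential(50.0, size=2500))
rho = count_corr(t_a[t_a < t_max], t_b[t_b < t_max], t_max)

print(cv, rho)
```

Criteria of the kind the abstract describes would then bound such statistics within experimentally derived tolerance ranges and check a candidate model against each bound.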