1,765 research outputs found

    The general anaesthetic etomidate inhibits the excitability of mouse thalamocortical relay neurons by modulating multiple modes of GABA(A) receptor-mediated inhibition

    Modulation of thalamocortical (TC) relay neuron function has been implicated in the sedative and hypnotic effects of general anaesthetics. Inhibition of TC neurons is mediated predominantly by a combination of phasic and tonic inhibition, together with a recently described ‘spillover’ mode of inhibition, generated by the dynamic recruitment of extrasynaptic γ-aminobutyric acid type A (GABA(A)) receptors (GABA(A)Rs). Previous studies demonstrated that the intravenous anaesthetic etomidate enhances tonic and phasic inhibition in TC relay neurons, but it is not known how etomidate may influence spillover inhibition. Moreover, it is unclear how etomidate influences the excitability of TC neurons. Thus, to investigate the relative contribution of synaptic (α1β2γ2) and extrasynaptic (α4β2δ) GABA(A)Rs to the thalamic effects of etomidate, we performed whole-cell recordings from mouse TC neurons lacking synaptic (α1(0/0)) or extrasynaptic (δ(0/0)) GABA(A)Rs. Etomidate (3 μM) significantly inhibited action-potential discharge in a manner that was dependent on facilitation of both synaptic and extrasynaptic GABA(A)Rs, although enhanced tonic inhibition was dominant in this respect. Additionally, phasic inhibition evoked by stimulation of the nucleus reticularis exhibited a spillover component mediated by δ-GABA(A)Rs, which was significantly prolonged in the presence of etomidate. Thus, etomidate greatly enhanced the transient suppression of TC spike trains by evoked inhibitory postsynaptic potentials. Collectively, these results suggest that the deactivation of the thalamus observed during etomidate-induced anaesthesia involves potentiation of tonic and phasic inhibition, and implicate amplification of spillover inhibition as a novel mechanism to regulate the gating of sensory information through the thalamus during anaesthetic states.
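    The interplay of phasic and tonic inhibition described above can be illustrated with a toy conductance-based integrate-and-fire neuron: phasic inhibition as transient, exponentially decaying synaptic conductances, tonic inhibition as a constant extrasynaptic conductance, both pulling the membrane toward the GABA(A) reversal potential. This is a minimal sketch with made-up parameters, not a model of the recorded TC neurons; spillover inhibition and anaesthetic modulation are omitted.

```python
# Toy conductance-based integrate-and-fire neuron; all parameters are
# generic placeholders, not values from the study.
import numpy as np

dt, T = 0.1, 500.0                       # ms
C, g_L, E_L = 200.0, 10.0, -70.0         # pF, nS, mV
E_GABA, V_th, V_reset = -80.0, -50.0, -60.0   # mV
I_drive = 300.0                          # pA, generic depolarizing drive

g_tonic = 2.0                            # nS, constant extrasynaptic conductance
ipsp_onsets = np.arange(50.0, T, 100.0)  # ms, evoked phasic events
tau_phasic = 8.0                         # ms, synaptic decay time

def g_phasic(t):
    """Sum of exponentially decaying synaptic conductances active at time t."""
    dt_sp = t - ipsp_onsets[ipsp_onsets <= t]
    return 5.0 * np.exp(-dt_sp / tau_phasic).sum()   # 5 nS per event

V, spikes = E_L, []
for t in np.arange(0.0, T, dt):
    g_inh = g_tonic + g_phasic(t)        # tonic + phasic inhibition
    V += dt * (-g_L * (V - E_L) - g_inh * (V - E_GABA) + I_drive) / C
    if V >= V_th:                        # threshold crossing -> spike and reset
        spikes.append(t)
        V = V_reset

print(f"{len(spikes)} spikes in {T:.0f} ms (rate rises without g_tonic or the IPSPs)")
```

    With these placeholder values the neuron fires tonically between evoked events, and each phasic conductance transiently pulls the steady-state voltage below threshold, pausing the spike train, which is the qualitative effect the abstract describes.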

    Homeostatic plasticity and external input shape neural network dynamics

    In vitro and in vivo spiking activity clearly differ. Whereas networks in vitro develop strong bursts separated by periods of very little spiking activity, in vivo cortical networks show continuous activity. This is puzzling considering that both networks presumably share similar single-neuron dynamics and plasticity rules. We propose that the defining difference between in vitro and in vivo dynamics is the strength of external input. In vitro, networks are virtually isolated, whereas in vivo every brain area receives continuous input. We analyze a model of spiking neurons in which the input strength, mediated by spike-rate homeostasis, determines the characteristics of the dynamical state. In more detail, our analytical and numerical results on various network topologies consistently show that under increasing input, homeostatic plasticity generates distinct dynamic states, from bursting to close-to-critical, reverberating, and irregular states. This implies that the dynamic state of a neural network is not fixed but can readily adapt to the input strength. Indeed, our results match experimental spike recordings in vitro and in vivo: the in vitro bursting behavior is consistent with a state generated by very low network input (< 0.1%), whereas in vivo activity suggests that on the order of 1% of recorded spikes are input-driven, resulting in reverberating dynamics. Importantly, this predicts that one can abolish the ubiquitous bursts of in vitro preparations, and instead impose dynamics comparable to in vivo activity, by exposing the system to weak long-term stimulation, thereby opening new paths to establish an in vivo-like assay in vitro for basic as well as neurological studies.
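    The proposed mechanism can be sketched with a minimal stochastic model (not the paper's full spiking network): a branching-type population whose recurrent coupling m is slowly tuned by a homeostatic rule so that the mean activity matches a target rate. The input rate h, the target, and the learning rate eta below are arbitrary placeholders.

```python
# Minimal sketch: homeostasis drives the stationary coupling toward
# m ~ 1 - h/target, so weak input pushes the network close to the critical
# value m = 1 (bursty), while strong input keeps it safely subcritical.
import numpy as np

rng = np.random.default_rng(0)

def simulate(h, target=100.0, eta=1e-5, steps=200_000):
    """Return the adapted coupling m and the mean activity for input rate h."""
    a, m = target, 0.5
    activity = np.empty(steps)
    for s in range(steps):
        a = rng.poisson(m * a + h)                # recurrent + external drive
        m = max(0.0, m + eta * (target - a))      # spike-rate homeostasis
        activity[s] = a
    return m, activity[steps // 2:].mean()

for h in (1.0, 10.0, 90.0):                       # external input strength
    m, r = simulate(h)
    print(f"h = {h:5.1f}  ->  m = {m:.3f}, mean activity = {r:.1f}")
```

    Setting the stationarity condition a = m*a + h at the target rate gives m = 1 - h/target, which is the sense in which the input strength, rather than the plasticity rule itself, selects the dynamic state.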

    A unified view on weakly correlated recurrent networks

    The diversity of neuron models used in contemporary theoretical neuroscience to investigate specific properties of covariances raises the question of how these models relate to each other. In particular, it is hard to distinguish between generic properties and peculiarities due to the abstracted model. Here we present a unified view on pairwise covariances in recurrent networks in the irregular regime. We consider the binary neuron model, the leaky integrate-and-fire model, and the Hawkes process. We show that linear approximation maps each of these models to one of two classes of linear rate models, including the Ornstein-Uhlenbeck process as a special case. The classes differ in the location of additive noise in the rate dynamics, which is on the output side for spiking models and on the input side for the binary model. Both classes allow closed-form solutions for the covariance. For output noise, it separates into an echo term and a term due to correlated input. The unified framework enables us to transfer results between models. For example, we generalize the binary model and the Hawkes process to the presence of conduction delays and simplify derivations of established results. Our approach is applicable to general network structures and suitable for population averages. The derived averages are exact for fixed out-degree network architectures and approximate for fixed in-degree. We demonstrate how taking fluctuations into account in the linearization procedure increases the accuracy of the effective theory, and we explain the class-dependent differences between covariances in the time and frequency domains. Finally, we show that the oscillatory instability emerging in networks of integrate-and-fire models with delayed inhibitory feedback is a model-invariant feature: the same structure of poles in the complex frequency plane determines the population power spectra.
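    To make the Ornstein-Uhlenbeck special case concrete, here is a hedged sketch of the closed-form stationary covariance for input-side noise, tau dy = (-y + W y) dt + sqrt(2D) dW: it solves the Lyapunov equation A C + C A^T = -(2D/tau^2) I with A = (W - 1)/tau. The random coupling matrix W and all constants are arbitrary placeholders, not one of the paper's network models.

```python
# Closed-form stationary covariance of a linear OU network via the
# Lyapunov equation; illustrative parameters only.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
N, tau, D = 50, 10.0, 1.0                       # neurons, time constant (ms), noise
W = rng.normal(0.0, 0.1 / np.sqrt(N), (N, N))   # weak random coupling
A = (W - np.eye(N)) / tau
assert np.linalg.eigvals(A).real.max() < 0      # stability of the linear dynamics

Q = (2.0 * D / tau**2) * np.eye(N)              # noise intensity B B^T
C = solve_continuous_lyapunov(A, -Q)            # solves A C + C A^T + Q = 0

off_diag = C[~np.eye(N, dtype=bool)]
print("mean variance:        ", C.diagonal().mean())
print("mean cross-covariance:", off_diag.mean())
```

    For the weak coupling chosen here, the cross-covariances come out much smaller than the variances, which is the weakly correlated regime the abstract refers to.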

    Adaptation Reduces Variability of the Neuronal Population Code

    Sequences of events in noise-driven excitable systems with slow variables often show serial correlations among their intervals of events. Here, we employ a master equation for general non-renewal processes to calculate the interval and count statistics of superimposed processes governed by a slow adaptation variable. For an ensemble of spike-frequency adapting neurons, this results in a regularization of the population activity and enhanced postsynaptic signal decoding. We confirm our theoretical results in a population of cortical neurons.
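    The serial interval correlations at the heart of this result can be reproduced with a generic adapting integrate-and-fire sketch (not the paper's master-equation formalism; all parameters are made up): a short interspike interval leaves the slow adaptation current elevated, so the next interval tends to be longer, giving a negative lag-1 correlation.

```python
# Adapting integrate-and-fire neuron: negative serial ISI correlations
# from a slow, spike-triggered adaptation variable.
import numpy as np

rng = np.random.default_rng(2)
dt, mu, sigma = 0.05, 2.0, 0.3        # time step, drive, and noise of the voltage
tau_a, delta_a = 20.0, 1.0            # adaptation: slow decay, spike-triggered jump

v, a, t, t_last, isis = 0.0, 0.0, 0.0, 0.0, []
while len(isis) < 5_000:
    t += dt
    a -= dt * a / tau_a                             # slow decay of adaptation
    v += dt * (mu - a) + sigma * np.sqrt(dt) * rng.normal()
    if v >= 1.0:                                    # threshold crossing -> spike
        v = 0.0
        a += delta_a                                # spike-triggered adaptation
        isis.append(t - t_last)
        t_last = t

x = np.asarray(isis)
rho1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(f"CV = {x.std() / x.mean():.2f}, lag-1 serial correlation = {rho1:.2f}")
```

    Because successive intervals anticorrelate, the spike count over windows spanning several intervals fluctuates less than for a renewal process with the same interval distribution, which is the population-level regularization the abstract exploits.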

    Conditions for wave trains in spiking neural networks

    Spatiotemporal patterns such as traveling waves are frequently observed in recordings of neural activity. The mechanisms underlying the generation of such patterns are largely unknown. Previous studies have investigated the existence and uniqueness of different types of waves or bumps of activity using neural-field models, phenomenological coarse-grained descriptions of neural-network dynamics. But it remains unclear how these insights can be transferred to more biologically realistic networks of spiking neurons, where individual neurons fire irregularly. Here, we employ mean-field theory to reduce a microscopic model of leaky integrate-and-fire (LIF) neurons with distance-dependent connectivity to an effective neural-field model. In contrast to existing phenomenological descriptions, the dynamics in this neural-field model depends on the mean and the variance of the synaptic input, both of which determine the amplitude and the temporal structure of the resulting effective coupling kernel. For the neural-field model, we employ linear stability analysis to derive conditions for the existence of spatial and temporal oscillations and wave trains, that is, temporally and spatially periodic traveling waves. We first prove that wave trains cannot occur in a single homogeneous population of neurons, irrespective of the form of distance dependence of the connection probability. Compatible with the architecture of cortical neural networks, wave trains emerge in two-population networks of excitatory and inhibitory neurons as a combination of delay-induced temporal oscillations and spatial oscillations due to distance-dependent connectivity profiles. Finally, we demonstrate quantitative agreement between predictions of the analytically tractable neural-field model and numerical simulations of both networks of nonlinear rate-based units and networks of LIF neurons.
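    The linear stability step can be sketched for a generic delayed neural-field model (a stand-in for the effective model derived in the paper, with a made-up difference-of-Gaussians kernel mimicking the combination of excitation and inhibition): linearizing tau du/dt = -u + w * u(t - d) about a homogeneous state gives the characteristic equation tau*lambda + 1 = w_hat(k) exp(-lambda*d), which substituting mu = lambda + 1/tau turns into mu*d*exp(mu*d) = (w_hat(k)*d/tau) exp(d/tau), solvable per wavenumber with the Lambert W function.

```python
# Branch-wise solution of the characteristic equation of a delayed
# neural-field model; kernel and parameters are illustrative placeholders.
import numpy as np
from scipy.special import lambertw

tau, d = 10.0, 3.0                 # ms: rate time constant, transmission delay
J, sig_e, sig_i = 10.0, 0.6, 0.2   # coupling strength and spatial widths (mm)

def w_hat(k):
    """Fourier transform of an effective difference-of-Gaussians kernel
    (broad excitation, narrow inhibition), standing in for the E-I network."""
    return J * (np.exp(-0.5 * (k * sig_e) ** 2) - np.exp(-0.5 * (k * sig_i) ** 2))

def eigenvalue(k, branch=0):
    """Solve tau*lam + 1 = w_hat(k) * exp(-lam * d) via the Lambert W function
    (principal branch only; sufficient to locate the instability here)."""
    z = (w_hat(k) * d / tau) * np.exp(d / tau)
    return lambertw(z, branch) / d - 1.0 / tau

ks = np.linspace(0.0, 20.0, 1000)            # wavenumbers (1/mm)
lam = np.array([eigenvalue(k) for k in ks])
i = np.argmax(lam.real)
print(f"fastest-growing mode: k = {ks[i]:.2f}/mm, Re(lambda) = {lam.real[i]:.4f}/ms, "
      f"Im(lambda) = {lam.imag[i]:.4f} rad/ms")
# Re(lambda) > 0 at k != 0 together with Im(lambda) != 0 signals a wave train.
```

    Consistent with the abstract's argument, the kernel here vanishes at k = 0 and is most strongly (and negatively) coupled at a finite wavenumber, so the delay-induced oscillation grows fastest at k != 0, producing a wave train rather than a homogeneous oscillation.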

    Fast and deep: energy-efficient neuromorphic learning with first-spike times

    For a biological agent operating under environmental pressure, energy consumption and reaction times are of critical importance. Similarly, engineered systems strive for short time-to-solution and low energy-to-solution characteristics. At the level of neuronal implementation, this implies achieving the desired results with as few and as early spikes as possible. In the time-to-first-spike coding framework, both of these goals are inherently emerging features of learning. Here, we describe a rigorous derivation of learning such first-spike times in networks of leaky integrate-and-fire neurons, relying solely on input and output spike times, and show how it can implement error backpropagation in hierarchical spiking networks. Furthermore, we emulate our framework on the BrainScaleS-2 neuromorphic system and demonstrate its capability of harnessing the chip's speed and energy characteristics. Finally, we examine how our approach generalizes to other neuromorphic platforms by studying how its performance is affected by typical distortive effects induced by neuromorphic substrates.
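    The coding scheme can be made concrete with a small numeric sketch (made-up weights and time constants; the paper itself derives the first-spike times and their gradients in closed form rather than by simulation): a current-based LIF neuron receives weighted input spikes through exponential synapses, and its output value is the time of its first threshold crossing, so stronger or earlier inputs produce earlier output spikes.

```python
# Numeric first-spike time of a current-based LIF neuron with
# exponential synapses; illustrative parameters only.
import numpy as np

tau_m, tau_s, v_th = 10.0, 5.0, 1.0              # membrane/synapse (ms), threshold

def first_spike_time(in_times, weights, T=50.0, dt=0.01):
    """Return the first threshold-crossing time in ms, or None if silent."""
    in_times, weights = np.asarray(in_times), np.asarray(weights)
    v = i_syn = 0.0
    for t in np.arange(0.0, T, dt):
        i_syn += weights[np.isclose(in_times, t, atol=dt / 2)].sum()
        v += dt * (i_syn - v / tau_m)
        i_syn -= dt * i_syn / tau_s
        if v >= v_th:
            return t
    return None

# Stronger inputs yield an earlier first spike -- the quantity that
# learning shapes in this framework.
print(first_spike_time([1.0, 2.0], [0.8, 0.6]))   # earlier output spike
print(first_spike_time([1.0, 2.0], [0.4, 0.3]))   # later output spike
```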