How adaptation currents change threshold, gain and variability of neuronal spiking
Many types of neurons exhibit spike rate adaptation, mediated by intrinsic
slow K+ currents, which effectively inhibit neuronal responses. How
these adaptation currents change the relationship between in vivo-like
fluctuating synaptic input, spike rate output, and spike train statistics,
however, is not well understood. In this computational study we show that an
adaptation current which primarily depends on the subthreshold membrane voltage
changes the neuronal input-output relationship (I-O curve) subtractively,
thereby increasing the response threshold. A spike-dependent adaptation current
alters the I-O curve divisively, thus reducing the response gain. Both types of
adaptation currents naturally increase the mean inter-spike interval (ISI), but
they can affect ISI variability in opposite ways. A subthreshold current always
causes an increase of variability while a spike-triggered current decreases
high variability caused by fluctuation-dominated inputs and increases low
variability when the average input is large. The effects on I-O curves match
those caused by synaptic inhibition in networks with asynchronous irregular
activity, for which we find subtractive and divisive changes caused by external
and recurrent inhibition, respectively. Synaptic inhibition, however, always
increases the ISI variability. We analytically derive expressions for the I-O
curve and ISI variability, which demonstrate the robustness of our results.
Furthermore, we show how the biophysical parameters of slow
K+ conductances contribute to the two different types of adaptation
currents and find that Ca2+-activated K+ currents are
effectively captured by a simple spike-dependent description, while
muscarine-sensitive or Na+-activated K+ currents show a
dominant subthreshold component.
Comment: 20 pages, 8 figures; Journal of Neurophysiology (in press)
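To make the two adaptation mechanisms concrete, here is a minimal Python sketch of an adaptive exponential integrate-and-fire neuron, the model class studied above, with a subthreshold adaptation conductance a and a spike-triggered increment b. All parameter values and the input statistics are illustrative assumptions, not taken from the paper; sweeping the mean input should show a shifting the I-O curve (threshold) and b compressing it (gain).

```python
import numpy as np

def adex_rate(mu, sigma=60.0, a=0.0, b=0.0, T=5000.0, dt=0.05, seed=0):
    """Mean spike rate (Hz) of one adaptive exponential integrate-and-fire
    neuron driven by a fluctuating input current (mean mu, noise sigma).
    a (nS): subthreshold, voltage-dependent adaptation -> subtractive shift
    b (pA): spike-triggered adaptation increment       -> divisive gain change
    All parameter values here are illustrative, not taken from the paper."""
    C, gL, EL = 200.0, 10.0, -70.0                # pF, nS, mV
    VT, DT, Vr, Vcut = -50.0, 2.0, -60.0, 0.0     # mV
    tau_w = 200.0                                 # ms, adaptation timescale
    n = int(T / dt)
    noise = np.random.default_rng(seed).standard_normal(n) / np.sqrt(dt)
    V, w, spikes = EL, 0.0, 0
    for k in range(n):
        I = mu + sigma * noise[k]                 # fluctuating input (pA)
        V += dt * (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
        if V >= Vcut:                             # spike: reset V, jump w
            V = Vr
            w += b
            spikes += 1
        w += dt * (a * (V - EL) - w) / tau_w      # subthreshold adaptation
    return spikes / (T * 1e-3)

# Sweep the mean input: with a > 0 the I-O curve shifts to the right
# (higher threshold); with b > 0 it flattens (lower gain).
for mu in (150.0, 250.0, 350.0):
    print(mu, adex_rate(mu), adex_rate(mu, a=15.0), adex_rate(mu, b=60.0))
```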
The Impact Of Spike-Frequency Adaptation On Balanced Network Dynamics
A dynamic balance between strong excitatory and inhibitory neuronal inputs is hypothesized to play a pivotal role in information processing in the brain. While there is evidence of the existence of a balanced operating regime in several cortical areas and idealized neuronal network models, it is important for the theory of balanced networks to be reconciled with more physiological neuronal modeling assumptions. In this work, we examine the impact of spike-frequency adaptation, observed widely across neurons in the brain, on balanced dynamics. We incorporate adaptation into binary and integrate-and-fire neuronal network models, analyzing the theoretical effect of adaptation in the large network limit and performing an extensive numerical investigation of the model adaptation parameter space. Our analysis demonstrates that balance is well preserved for moderate adaptation strength even if the entire network exhibits adaptation. In the common physiological case in which only excitatory neurons undergo adaptation, we show that the balanced operating regime in fact widens relative to the non-adaptive case. We hypothesize that spike-frequency adaptation may have been selected through evolution to robustly facilitate balanced dynamics across diverse cognitive operating states.
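A minimal sketch of the kind of model described above: a sparsely connected excitatory-inhibitory network of leaky integrate-and-fire neurons with balanced 1/sqrt(K) synaptic scaling, in which only the excitatory population carries spike-frequency adaptation. All parameters are illustrative assumptions rather than the paper's; the point is only to show where the adaptation variable enters a balanced-network simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

NE, NI, p = 400, 100, 0.1
N = NE + NI
KE, KI = int(p * NE), int(p * NI)               # mean in-degrees
jE, jI = 1.0 / np.sqrt(KE), -5.0 / np.sqrt(KI)  # balanced 1/sqrt(K) weights
W = np.zeros((N, N))
W[:, :NE] = (rng.random((N, NE)) < p) * jE      # excitatory synapses
W[:, NE:] = (rng.random((N, NI)) < p) * jI      # inhibitory synapses

dt, T = 0.1, 1000.0                             # ms
tau_m, tau_w, b = 20.0, 300.0, 0.05             # b: adaptation jump (E only)
I_ext = 1.2 * np.sqrt(KE)                       # strong O(sqrt(K)) external drive
is_E = np.arange(N) < NE

V = rng.random(N)                               # threshold = 1, reset = 0
w = np.zeros(N)
count = np.zeros(N)
for _ in range(int(T / dt)):
    spiked = V >= 1.0
    V[spiked] = 0.0
    w[spiked & is_E] += b                       # spike-frequency adaptation, E cells
    count += spiked
    V += dt / tau_m * (I_ext - V - w) + W @ spiked
    w -= dt / tau_w * w                         # slow decay of adaptation
print("E rate (Hz):", count[:NE].mean() / (T * 1e-3))
print("I rate (Hz):", count[NE:].mean() / (T * 1e-3))
```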
Fractals in the Nervous System: Conceptual Implications for Theoretical Neuroscience
This essay is presented with two principal objectives in mind: first, to
document the prevalence of fractals at all levels of the nervous system, giving
credence to the notion of their functional relevance; and second, to draw
attention to the as yet unresolved issues of the detailed relationships
among power law scaling, self-similarity, and self-organized criticality. As
regards criticality, I will document that it has become a pivotal reference
point in Neurodynamics. Furthermore, I will emphasize the not yet fully
appreciated significance of allometric control processes. For dynamic fractals,
I will assemble reasons for attributing to them the capacity to adapt task
execution to contextual changes across a range of scales. The final section
consists of general reflections on the implications of the reviewed data and
identifies what appear to be issues of fundamental importance for future
research in the rapidly evolving topic of this review.
Dynamic Control of Network Level Information Processing through Cholinergic Modulation
Acetylcholine (ACh) release is a prominent neurochemical marker of arousal state
within the brain. Changes in ACh are associated with changes in neural activity and
information processing, though its exact role and the mechanisms through which it
acts are unknown. Here I show that the dynamic changes in ACh levels
associated with arousal state control the information processing functions of
networks through their effects on the degree of Spike-Frequency Adaptation
(SFA), an activity-dependent decrease in excitability, synchronizability, and
neuronal resonance displayed by single cells. Using numerical modeling, I
develop mechanistic explanations for how control of these properties shifts
network activity from a stable high-frequency
spiking pattern to a traveling wave of activity. This transition mimics the change
in brain dynamics seen between high ACh states, such as waking and Rapid Eye
Movement (REM) sleep, and low ACh states such as Non-REM (NREM) sleep. A
corresponding, and related, transition in network-level memory recall also
occurs as ACh modulates neuronal SFA. When ACh is at its highest levels
(waking), all memories are stably recalled; as ACh is decreased (REM), weakly
encoded memories in the model destabilize while strongly encoded memories
remain stable. At ACh levels that match Slow Wave Sleep (SWS), no encoded
memories are stably recalled. This
results from a competition between SFA and excitatory input strength and provides
a mechanism for neural networks to control the representation of underlying synaptic
information. I further show that during low ACh conditions, oscillatory
dynamics allow external inputs to be properly stored in, and recalled from,
synaptic weights. Taken together, this work demonstrates that dynamic neuromodulation is
critical for the regulation of information processing tasks in neural networks. These
results suggest that ACh is capable of switching networks between two distinct information
processing modes. Rate coding of information is facilitated during high
ACh conditions and phase coding of information is facilitated during low ACh conditions.
Finally, I propose that ACh levels control whether a network occupies one of
three functional states: a high-ACh state (active waking) optimized for
encoding new information or stably representing relevant memories; a mid-ACh
state (resting or REM) optimized for encoding connections between currently
stored memories or searching the catalog of stored memories; and a low-ACh
state (NREM) optimized for renormalization of synaptic strength and memory
consolidation. This work provides mechanistic insight into the role of dynamic
changes in ACh levels in the encoding, consolidation, and maintenance of
memories within the brain.
PhD, Neuroscience, University of Michigan, Horace H. Rackham School of Graduate Studies
https://deepblue.lib.umich.edu/bitstream/2027.42/147503/1/roachjp_1.pd
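As an illustration of the proposed mechanism, the sketch below maps a normalized ACh level onto the strength of a slow adaptation conductance in a leaky integrate-and-fire neuron. The linear mapping g_adapt = g_max * (1 - ACh) and all parameters are hypothetical stand-ins, not the thesis model; lowering ACh strengthens SFA, visible as inter-spike intervals that lengthen over the spike train.

```python
import numpy as np

def spike_times(ach, I=1.6, T=1000.0, dt=0.05):
    """LIF neuron (threshold 1, reset 0) whose adaptation conductance is set
    by a normalized ACh level in [0, 1]. Hypothetical mapping for
    illustration: g_adapt = g_max * (1 - ach), i.e. high ACh suppresses the
    slow K+ conductance and with it spike-frequency adaptation."""
    tau_m, tau_w, g_max = 20.0, 400.0, 0.1        # ms, ms, (illustrative)
    g_adapt = g_max * (1.0 - ach)
    V, w, times = 0.0, 0.0, []
    for step in range(int(T / dt)):
        V += dt / tau_m * (-V - g_adapt * w + I)  # adaptation current: g*w
        w -= dt / tau_w * w                       # slow decay of adaptation
        if V >= 1.0:                              # spike: reset V, increment w
            V = 0.0
            w += 1.0
            times.append(step * dt)
    return np.array(times)

# High ACh (waking): little SFA, regular firing. Low ACh (NREM): strong SFA,
# inter-spike intervals lengthen over the train.
for ach in (1.0, 0.5, 0.0):
    isi = np.diff(spike_times(ach))
    print(f"ACh={ach:.1f}: first ISI {isi[0]:5.1f} ms, last ISI {isi[-1]:5.1f} ms")
```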
Homeostatic plasticity and external input shape neural network dynamics
In vitro and in vivo spiking activity clearly differ. Whereas networks in
vitro develop strong bursts separated by periods of very little spiking
activity, in vivo cortical networks show continuous activity. This is puzzling
considering that both networks presumably share similar single-neuron dynamics
and plasticity rules. We propose that the defining difference between in vitro
and in vivo dynamics is the strength of external input. In vitro, networks are
virtually isolated, whereas in vivo every brain area receives continuous input.
We analyze a model of spiking neurons in which the input strength, mediated by
spike rate homeostasis, determines the characteristics of the dynamical state.
In more detail, our analytical and numerical results on various network
topologies show consistently that under increasing input, homeostatic
plasticity generates distinct dynamic states, from bursting to
close-to-critical, reverberating, and irregular states. This implies that the
dynamic state of a neural network is not fixed but can readily adapt to the
input strength. Indeed, our results match experimental spike recordings in
vitro and in vivo: the in vitro bursting behavior is consistent with a state
generated by very low network input (< 0.1%), whereas in vivo activity suggests
that on the order of 1% of recorded spikes are input-driven, resulting in
reverberating dynamics. Importantly, this predicts that one can abolish the
ubiquitous bursts of in vitro preparations, and instead impose dynamics
comparable to in vivo activity by exposing the system to weak long-term
stimulation, thereby opening new paths to establish an in vivo-like assay in
vitro for basic as well as neurological studies.Comment: 14 pages, 8 figures, accepted at Phys. Rev.
How single neuron properties shape chaotic dynamics and signal transmission in random neural networks
While most models of randomly connected networks assume nodes with simple
dynamics, nodes in realistic highly connected networks, such as neurons in the
brain, exhibit intrinsic dynamics over multiple timescales. We analyze how the
dynamical properties of nodes (such as single neurons) and recurrent
connections interact to shape the effective dynamics in large randomly
connected networks. A novel dynamical mean-field theory for strongly connected
networks of multi-dimensional rate units shows that the power spectrum of the
network activity in the chaotic phase emerges from a nonlinear sharpening of
the frequency response function of single units. For the case of
two-dimensional rate units with strong adaptation, we find that the network
exhibits a state of "resonant chaos", characterized by robust, narrow-band
stochastic oscillations. The coherence of stochastic oscillations is maximal at
the onset of chaos and their correlation time scales with the adaptation
timescale of single units. Surprisingly, the resonance frequency can be
predicted from the properties of isolated units, even in the presence of
heterogeneity in the adaptation parameters. In the presence of these
internally-generated chaotic fluctuations, the transmission of weak,
low-frequency signals is strongly enhanced by adaptation, whereas signal
transmission is not influenced by adaptation in the non-chaotic regime. Our
theoretical framework can be applied to other mechanisms at the level of single
nodes, such as synaptic filtering, refractoriness or spike synchronization.
These results advance our understanding of the interaction between the dynamics
of single units and recurrent connectivity, which is a fundamental step toward
the description of biologically realistic network models in the brain, or, more
generally, networks of other physical or man-made complex dynamical units.
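The following sketch illustrates the setting with two-dimensional rate units: N units with a slow adaptation variable, coupled through a Gaussian random matrix with gain g, analyzed via the power spectrum of a single unit. Parameters are illustrative assumptions (not the paper's), chosen so that an isolated unit is a damped oscillator; a spectral peak at nonzero frequency is the signature of the narrow-band "resonant chaos" described above.

```python
import numpy as np

rng = np.random.default_rng(3)

N, g = 500, 3.0
J = g / np.sqrt(N) * rng.standard_normal((N, N))   # random recurrent coupling
np.fill_diagonal(J, 0.0)

tau_x, tau_a, beta = 1.0, 10.0, 4.0   # unit timescale; adaptation timescale, strength
dt, steps, discard = 0.05, 20_000, 4_000
x = 0.5 * rng.standard_normal(N)      # activation variables
a = np.zeros(N)                       # slow adaptation variables
trace = np.empty(steps)
for t in range(steps):
    r = np.tanh(x)                    # firing-rate nonlinearity
    x += dt / tau_x * (-x - beta * a + J @ r)
    a += dt / tau_a * (-a + x)        # adaptation tracks activity, feeds back
    trace[t] = x[0]

# Power spectrum of one unit (transient discarded): a peak at nonzero
# frequency indicates narrow-band "resonant" rather than broadband chaos.
sig = trace[discard:] - trace[discard:].mean()
spec = np.abs(np.fft.rfft(sig)) ** 2
freqs = np.fft.rfftfreq(sig.size, d=dt)
print(f"spectral peak at f ~ {freqs[spec.argmax()]:.3f}")
# For comparison, an isolated unit is a damped oscillator with frequency
# f0 = sqrt((1 + beta)/(tau_x*tau_a) - ((1/tau_x + 1/tau_a)/2)**2) / (2*pi)
```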
Low-dimensional spike rate models derived from networks of adaptive integrate-and-fire neurons: comparison and implementation
The spiking activity of single neurons can be well described by a nonlinear integrate-and-fire model that includes somatic adaptation. When exposed to fluctuating inputs, sparsely coupled populations of these model neurons exhibit stochastic collective dynamics that can be effectively characterized using the Fokker-Planck equation. This approach, however, leads to a model with an infinite-dimensional state space and non-standard boundary conditions. Here we derive from that description four simple models for the spike rate dynamics in terms of low-dimensional ordinary differential equations, using two different reduction techniques: one uses the spectral decomposition of the Fokker-Planck operator, the other is based on a cascade of two linear filters and a nonlinearity, which are determined from the Fokker-Planck equation and semi-analytically approximated. We evaluate the reduced models for a wide range of biologically plausible input statistics and find that both approximation approaches lead to spike rate models that accurately reproduce the spiking behavior of the underlying adaptive integrate-and-fire population. The cascade-based models are overall the most accurate and robust, especially in the sensitive region of rapidly changing input. For the mean-driven regime, however, when input fluctuations are not too strong and fast, the best-performing model is based on the spectral decomposition. The low-dimensional models also reproduce well the stable oscillatory spike rate dynamics that are generated either by recurrent synaptic excitation and neuronal adaptation or through delayed inhibitory synaptic feedback. The computational demands of the reduced models are very low, but the implementation complexity differs between the model variants. We have therefore made available, as open source software, implementations that allow efficient numerical integration of the low-dimensional spike rate models as well as the Fokker-Planck partial differential equation for arbitrary model parametrizations. The derived spike rate descriptions retain a direct link to the properties of single neurons, allow for convenient mathematical analyses of network states, and are well suited for application in neural mass/mean-field based brain network models.

Characterizing the dynamics of biophysically modeled, large neuronal networks usually involves extensive numerical simulations. As an alternative to this expensive procedure we propose efficient models that describe the network activity in terms of a few ordinary differential equations. These systems are simple to solve and allow for convenient investigations of asynchronous, oscillatory, or chaotic network states, because linear stability analyses and powerful related methods are readily applicable. We build upon two research lines on which substantial efforts have been exerted in the last two decades: (i) the development of single neuron models of reduced complexity that can accurately reproduce a large repertoire of observed neuronal behavior, and (ii) different approaches to approximate the Fokker-Planck equation that represents the collective dynamics of large neuronal networks. We combine these advances and extend recent approximation methods of the latter kind to obtain spike rate models that reproduce the macroscopic dynamics of the underlying neuronal network surprisingly well. At the same time, the microscopic properties are retained through the single neuron model parameters.
To enable fast adoption, we have released an efficient Python implementation as open source software under a free license.
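For orientation, here is a generic two-variable spike rate model in the spirit of the cascade-based reductions described above; the transfer function F and all parameters are hypothetical stand-ins, not the Fokker-Planck-derived quantities from the paper. The population rate relaxes toward F of the effective mean input through a fast linear filter, while a slow adaptation variable subtracts from that input.

```python
import numpy as np

def F(mu):
    """Hypothetical steady-state f-I curve (Hz); a stand-in for the
    Fokker-Planck-derived transfer function of the adaptive IF population."""
    return 40.0 / (1.0 + np.exp(-2.0 * (mu - 1.0)))

def simulate(mu_input, dt=0.5, tau_r=5.0, tau_w=200.0, b=0.002):
    """Integrate the two-variable rate model for a given input time course.
    r follows F(mu - w) through a filter with timescale tau_r (ms); the
    adaptation variable w is driven by the rate and decays with tau_w (ms)."""
    r, w = 0.0, 0.0
    rates = np.empty(len(mu_input))
    for k, mu in enumerate(mu_input):
        r += dt / tau_r * (-r + F(mu - w))        # filtered rate response
        w += dt / tau_w * (-w + b * tau_w * r)    # rate-driven adaptation
        rates[k] = r
    return rates

# Step input: the rate responds quickly, then decays toward an adapted level
# on the slow adaptation timescale.
t = np.arange(0.0, 1500.0, 0.5)                   # ms
rates = simulate(np.where(t > 100.0, 2.0, 0.0))
print(f"peak rate {rates.max():.1f} Hz, adapted rate {rates[-1]:.1f} Hz")
```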