Scalar Reduction of a Neural Field Model with Spike Frequency Adaptation
We study a deterministic version of a one- and two-dimensional attractor
neural network model of hippocampal activity first studied by Itskov et al.
(2011). We analyze the dynamics of the system on the ring and torus domains with
an even periodized weight matrix, assuming weak and slow spike frequency
adaptation and a weak stationary input current. On these domains, we find
transitions from spatially localized stationary solutions ("bumps") to
(periodically modulated) solutions ("sloshers"), as well as constant and
non-constant velocity traveling bumps depending on the relative strength of
external input current and adaptation. The weak and slow adaptation allows for
a reduction of the system from a distributed partial integro-differential
equation to a system of scalar Volterra integro-differential equations
describing the movement of the centroid of the bump solution. Using this
reduction, we show that on both domains, sloshing solutions arise through an
Andronov-Hopf bifurcation and derive a normal form for the Hopf bifurcation on
the ring. We also show existence and stability of constant velocity solutions
on both domains using Evans functions. In contrast to existing studies, we
assume a general weight matrix of Mexican-hat type in addition to a smooth
firing rate function.
Comment: 60 pages, 22 figures
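The full field dynamics described above can also be probed by direct simulation. Below is a minimal numerical sketch, not the authors' scalar reduction: a 1D ring neural field with slow adaptation, where the cosine-plus-offset kernel (a stand-in for a Mexican-hat weight), the sigmoidal firing rate, and all parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_ring(n=128, T=200.0, dt=0.1, beta=0.2, tau_a=20.0, I0=0.0, seed=0):
    """Euler integration of a 1D ring neural field with slow adaptation:
        du/dt = -u + w * f(u) - beta*a + I0,   tau_a da/dt = -a + f(u).
    A cosine kernel stands in for the even Mexican-hat weight matrix."""
    x = np.linspace(-np.pi, np.pi, n, endpoint=False)
    w = (np.cos(x[:, None] - x[None, :]) + 0.1) * (2 * np.pi / n)  # even kernel
    f = lambda u: 1.0 / (1.0 + np.exp(-10 * (u - 0.3)))            # smooth rate
    rng = np.random.default_rng(seed)
    u = np.exp(-x**2) + 0.01 * rng.standard_normal(n)              # seed a bump
    a = np.zeros(n)
    for _ in range(int(T / dt)):
        u += dt * (-u + w @ f(u) + I0 - beta * a)
        a += dt / tau_a * (f(u) - a)
    centroid = np.angle(np.sum(f(u) * np.exp(1j * x)))  # bump position on ring
    return u, centroid

u, c = simulate_ring(beta=0.0)  # without adaptation: a stationary bump regime
```

Tracking the centroid over time, rather than only at the end, is what the scalar reduction in the paper describes analytically.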
Coarse-grained dynamics of an activity bump in a neural field model
We study a stochastic nonlocal PDE, arising in the context of modelling
spatially distributed neural activity, which is capable of sustaining
stationary and moving spatially localized "activity bumps". This system is
known to undergo a pitchfork bifurcation in bump speed as a parameter (the
strength of adaptation) is changed; increasing the noise intensity, however,
effectively slows the motion of the bump. Here we revisit the system from the
point of view of describing the high-dimensional stochastic dynamics in terms
of the effective dynamics of a single scalar "coarse" variable. We show that
such a reduced description in the form of an effective Langevin equation
characterized by a double-well potential is quantitatively successful. The
effective potential can be extracted using short, appropriately-initialized
bursts of direct simulation. We demonstrate this approach in terms of (a) an
experience-based "intelligent" choice of the coarse observable and (b) an
observable obtained through data-mining direct simulation results, using a
diffusion map approach.
Comment: Corrected acknowledgement
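The burst-based extraction of an effective potential can be illustrated on a toy problem. The sketch below estimates the coarse drift from many short, identically initialized simulation bursts and integrates its negative to recover a double-well potential; the stand-in fine-scale model, noise level, and grid are assumptions for illustration, not the paper's neural field setup.

```python
import numpy as np

def drift_from_bursts(x0, step, n_bursts=2000, dt=0.01, sigma=0.3, seed=0):
    """Estimate the coarse drift b(x0) ~ E[x(dt) - x0]/dt from many short,
    identically initialized bursts of the (stochastic) fine-scale model."""
    rng = np.random.default_rng(seed)
    x = np.full(n_bursts, x0, dtype=float)
    x += step(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_bursts)
    return (x.mean() - x0) / dt

# Stand-in "fine" model: overdamped particle in V(x) = x^4/4 - x^2/2, playing
# the role of the full stochastic simulation restricted to the coarse variable.
true_drift = lambda x: x - x**3

grid = np.linspace(-1.5, 1.5, 31)
b = np.array([drift_from_bursts(x0, true_drift) for x0 in grid])

# Effective potential V(x) = -integral of the drift (trapezoid rule), shifted
# so that its minimum is zero; a double well should emerge.
dx = grid[1] - grid[0]
V = -np.concatenate(([0.0], np.cumsum((b[1:] + b[:-1]) / 2 * dx)))
V -= V.min()
```

In the paper's setting the same drift estimator is applied to short bursts of the full neural field simulation, initialized at prescribed values of the coarse observable.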
The iso-response method
Throughout the nervous system, neurons integrate high-dimensional input streams and transform them into an output of their own. This integration of incoming signals involves filtering processes and complex non-linear operations. The shapes of these filters and non-linearities determine the computational features of single neurons and their functional roles within larger networks. A detailed characterization of signal integration is thus a central ingredient to understanding information processing in neural circuits. Conventional methods for measuring single-neuron response properties, such as reverse correlation, however, are often limited by the implicit assumption that stimulus integration occurs in a linear fashion. Here, we review a conceptual and experimental alternative that is based on exploring the space of those sensory stimuli that result in the same neural output. As demonstrated by recent results in the auditory and visual systems, such iso-response stimuli can be used to identify the non-linearities relevant for stimulus integration, disentangle consecutive neural processing steps, and determine their characteristics with unprecedented precision. Automated closed-loop experiments are crucial for this advance, allowing rapid search strategies for identifying iso-response stimuli during experiments. Prime targets for the method are feed-forward neural signaling chains in sensory systems, but the method has also been successfully applied to feedback systems. Depending on the specific question, "iso-response" may refer to a predefined firing rate, single-spike probability, first-spike latency, or other output measures. Examples from different studies show that substantial progress in understanding neural dynamics and coding can be achieved once rapid online data analysis and stimulus generation, adaptive sampling, and computational modeling are tightly integrated into experiments.
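For a fixed stimulus direction, a closed-loop iso-response search reduces to a one-dimensional root find for the amplitude that yields the target output. The toy sketch below uses bisection on a hypothetical two-component cell with a squaring nonlinearity in one pathway; the model, target response, and search bounds are all illustrative assumptions, not taken from any of the reviewed studies.

```python
import numpy as np

def response(s1, s2):
    """Toy two-component cell: squares the first input before summation,
    then applies a threshold-linear output nonlinearity (all illustrative)."""
    return max(0.0, s1**2 + s2 - 0.2)

def find_iso_amplitude(direction, target, lo=0.0, hi=5.0, tol=1e-6):
    """Closed-loop bisection along a fixed stimulus direction for the
    amplitude whose response equals the target (an "iso-response" point).
    Assumes the response grows monotonically with amplitude along the ray."""
    d1, d2 = direction
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if response(mid * d1, mid * d2) < target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Trace an iso-response curve: for several directions in stimulus space,
# find the amplitude that evokes the same response r = 1.
directions = [(np.cos(t), np.sin(t)) for t in np.linspace(0.1, 1.4, 8)]
curve = [(a * d1, a * d2) for (d1, d2), a in
         ((d, find_iso_amplitude(d, 1.0)) for d in directions)]
```

In an experiment, `response` would be replaced by an online measurement (firing rate, spike probability, or latency), which is what makes rapid closed-loop data analysis essential.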
Synchronization of electrically coupled resonate-and-fire neurons
Electrical coupling between neurons is broadly present across brain areas and
is typically assumed to synchronize network activity. However, intrinsic
properties of the coupled cells can complicate this simple picture. Many cell
types with strong electrical coupling have been shown to exhibit resonant
properties, and the subthreshold fluctuations arising from resonance are
transmitted through electrical synapses in addition to action potentials. Using
the theory of weakly coupled oscillators, we explore the effect of both
subthreshold and spike-mediated coupling on synchrony in small networks of
electrically coupled resonate-and-fire neurons, a hybrid neuron model with
linear subthreshold dynamics and discrete post-spike reset. We calculate the
phase response curve using an extension of the adjoint method that accounts for
the discontinuity in the dynamics. We find that both spikes and resonant
subthreshold fluctuations can jointly promote synchronization. The subthreshold
contribution is strongest when the voltage exhibits a significant post-spike
elevation, or plateau. Additionally, we show that the geometry of
trajectories approaching the spiking threshold causes a "reset-induced shear"
effect that can oppose synchrony in the presence of network asymmetry, despite
having no effect on the phase-locking of symmetrically coupled pairs.
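A resonate-and-fire neuron of the kind used above can be simulated in a few lines: linear damped-oscillator subthreshold dynamics in the complex plane plus a discrete post-spike reset. The sketch below follows the standard formulation (Izhikevich 2001); the input, parameter values, and reset point are illustrative, not those of the study.

```python
import numpy as np

def resonate_and_fire(I, T=100.0, dt=0.01, b=-0.1, w=1.0, thresh=1.0,
                      z_reset=-0.5j):
    """Resonate-and-fire neuron: damped-oscillator subthreshold dynamics
        dz/dt = (b + i*w) * z + I,
    a spike when Im(z) reaches the threshold, and a discrete reset.
    The discontinuity at reset is what the adjoint-method extension in the
    paper must account for when computing the phase response curve."""
    z = 0.0 + 0.0j
    spikes = []
    for k in range(int(T / dt)):
        z += dt * ((b + 1j * w) * z + I)   # Euler step of the linear dynamics
        if z.imag >= thresh:
            spikes.append(k * dt)
            z = z_reset                    # post-spike reset below threshold
    return np.array(spikes)

spikes = resonate_and_fire(I=0.5 + 0.5j)   # tonic spiking for this drive
```

Coupling two such units through the voltage variable (including the subthreshold oscillation, not only spikes) is the setting in which the subthreshold and spike-mediated contributions to synchrony can be compared.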
How adaptation currents change threshold, gain and variability of neuronal spiking
Many types of neurons exhibit spike rate adaptation, mediated by intrinsic
slow K+-currents, which effectively inhibit neuronal responses. How
these adaptation currents change the relationship between in-vivo-like
fluctuating synaptic input, spike rate output and the spike train statistics,
however, is not well understood. In this computational study we show that an
adaptation current which primarily depends on the subthreshold membrane voltage
changes the neuronal input-output relationship (I-O curve) subtractively,
thereby increasing the response threshold. A spike-dependent adaptation current
alters the I-O curve divisively, thus reducing the response gain. Both types of
adaptation currents naturally increase the mean inter-spike interval (ISI), but
they can affect ISI variability in opposite ways. A subthreshold current always
causes an increase of variability while a spike-triggered current decreases
high variability caused by fluctuation-dominated inputs and increases low
variability when the average input is large. The effects on I-O curves match
those caused by synaptic inhibition in networks with asynchronous irregular
activity, for which we find subtractive and divisive changes caused by external
and recurrent inhibition, respectively. Synaptic inhibition, however, always
increases the ISI variability. We analytically derive expressions for the I-O
curve and ISI variability, which demonstrate the robustness of our results.
Furthermore, we show how the biophysical parameters of slow
K+-conductances contribute to the two different types of adaptation
currents and find that Ca2+-activated K+-currents are
effectively captured by a simple spike-dependent description, while
muscarine-sensitive or Na+-activated K+-currents show a
dominant subthreshold component.
Comment: 20 pages, 8 figures; Journal of Neurophysiology (in press)
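The two kinds of adaptation current can be illustrated with a leaky integrate-and-fire neuron carrying either a voltage-driven (subthreshold) or a spike-triggered adaptation variable. This is a toy sketch with deterministic input and invented parameters, not the authors' model or derivation; it reproduces only the qualitative suppression of the rate by both current types.

```python
import numpy as np

def lif_rate(I, T=2000.0, dt=0.05, tau_m=10.0, v_th=1.0, v_reset=0.0,
             a_sub=0.0, a_spike=0.0, tau_a=100.0):
    """Firing rate (spikes per 1000 time units) of a leaky integrate-and-fire
    neuron with a slow adaptation current a:
        tau_m dv/dt = -v + I - a
        tau_a da/dt = a_sub * v - a,   and a += a_spike at each spike.
    a_sub > 0 gives a subthreshold (voltage-driven) adaptation current,
    a_spike > 0 a spike-triggered one. All parameter values are illustrative."""
    v, a, n_spikes = 0.0, 0.0, 0
    for _ in range(int(T / dt)):
        v += dt / tau_m * (-v + I - a)
        a += dt / tau_a * (a_sub * v - a)
        if v >= v_th:
            v = v_reset
            a += a_spike
            n_spikes += 1
    return 1000.0 * n_spikes / T

I_grid = np.array([1.1, 1.5, 2.0, 3.0])
r_none = np.array([lif_rate(I) for I in I_grid])              # no adaptation
r_sub = np.array([lif_rate(I, a_sub=0.5) for I in I_grid])    # voltage-driven
r_spk = np.array([lif_rate(I, a_spike=0.5) for I in I_grid])  # spike-triggered
```

Comparing `r_sub` and `r_spk` against `r_none` across the input grid shows the rate suppression; resolving the subtractive (threshold shift) versus divisive (gain reduction) signature requires the finer I-O curve analysis carried out in the paper.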
…