Phase response function for oscillators with strong forcing or coupling
The phase response curve (PRC) is an extremely useful tool for studying the
response of oscillatory systems, e.g. neurons, to sparse or weak stimulation.
Here we develop a framework for studying the response to a series of pulses
which are frequent and/or strong, so that the standard PRC fails. We show that
in this case, the phase shift caused by each pulse depends on the history of
several previous pulses. We call the corresponding function which measures this
shift the phase response function (PRF). As a result of the introduction of the
PRF, a variety of oscillatory systems with pulse interaction, such as neural
systems, can be reduced to phase systems. The main assumption of the classical
PRC model, i.e. that the effect of the stimulus vanishes before the next one
arrives, is no longer a restriction in our approach. However, as a result of
the phase reduction, the system acquires memory, which is not just a technical
nuisance but an intrinsic property relevant to strong stimulation. We
illustrate the PRF approach by its application to various systems, such as
Morris-Lecar, Hodgkin-Huxley neuron models, and others. We show that the PRF
allows one to predict the dynamics of forced and coupled oscillators even when
the PRC fails.
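As a rough illustration of the classical PRC reduction that the PRF generalizes, the phase of a pulsed oscillator can be iterated as a one-dimensional map. This is a minimal sketch: the sinusoidal PRC and all parameter values below are illustrative assumptions, not taken from the paper.

```python
import math

def prc(phase):
    # Hypothetical smooth PRC; in practice the PRC is measured
    # from the oscillator model (e.g. Morris-Lecar, Hodgkin-Huxley).
    return 0.05 * math.sin(2 * math.pi * phase)

def iterate_phase(phase, n_pulses, inter_pulse_interval, period=1.0):
    """Classical PRC reduction: each pulse shifts the phase by prc(phase),
    under the assumption that the perturbation decays before the next
    pulse arrives -- exactly the assumption the PRF approach relaxes."""
    phases = [phase]
    for _ in range(n_pulses):
        phase = (phase + inter_pulse_interval / period + prc(phase)) % 1.0
        phases.append(phase)
    return phases

phases = iterate_phase(0.2, 100, 0.37)
```

When pulses are strong or frequent, the shift caused by a pulse would additionally depend on the phases at several previous pulses, which is the memory the PRF introduces.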
The geometry of spontaneous spiking in neuronal networks
The mathematical theory of pattern formation in electrically coupled networks
of excitable neurons forced by small noise is presented in this work. Using the
Freidlin-Wentzell large deviation theory for randomly perturbed dynamical
systems and the elements of the algebraic graph theory, we identify and analyze
the main regimes in the network dynamics in terms of the key control
parameters: excitability, coupling strength, and network topology. The analysis
reveals the geometry of spontaneous dynamics in electrically coupled networks.
Specifically, we show that the location of the minima of a certain continuous
function on the surface of the unit n-cube encodes the most likely activity
patterns generated by the network. By studying how the minima of this function
evolve under the variation of the coupling strength, we describe the principal
transformations in the network dynamics. The minimization problem is also used
for the quantitative description of the main dynamical regimes and transitions
between them. In particular, for the weak and strong coupling regimes, we
present asymptotic formulas for the network activity rate as a function of the
coupling strength and the degree of the network. The variational analysis is
complemented by the stability analysis of the synchronous state in the strong
coupling regime. The stability estimates reveal the contribution of the network
connectivity and the properties of the cycle subspace associated with the graph
of the network to its synchronization properties. This work is motivated by the
experimental and modeling studies of the ensemble of neurons in the Locus
Coeruleus, a nucleus in the brainstem involved in the regulation of cognitive
performance and behavior.
Shared inputs, entrainment, and desynchrony in elliptic bursters: from slow passage to discontinuous circle maps
What input signals will lead to synchrony vs. desynchrony in a group of
biological oscillators? This question connects with both classical dynamical
systems analyses of entrainment and phase locking and with emerging studies of
stimulation patterns for controlling neural network activity. Here, we focus on
the response of a population of uncoupled, elliptically bursting neurons to a
common pulsatile input. We extend a phase reduction from the literature to
capture inputs of varied strength, leading to a circle map with discontinuities
of various orders. In a combined analytical and numerical approach, we apply
our results to both a normal form model for elliptic bursting and to a
biophysically-based neuron model from the basal ganglia. We find that,
depending on the period and amplitude of inputs, the response can either appear
chaotic (with provably positive Lyapunov exponent for the associated circle
maps), or periodic with a broad range of phase-locked periods. Throughout, we
discuss the critical underlying mechanisms, including slow-passage effects
through Hopf bifurcation, the role and origin of discontinuities, and the
impact of noise.
Comment: 17 figures, 40 pages
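A circle map with a discontinuity of the kind described above can be explored numerically, including an estimate of the Lyapunov exponent along an orbit. The specific map below is an illustrative stand-in (the jump term and parameter values are assumptions), not the map derived in the paper.

```python
import math

def circle_map(theta, omega=0.35, k=1.2):
    """Illustrative circle map with a discontinuity at theta = 0.5;
    the jump stands in for the discontinuities that arise from slow
    passage through a Hopf bifurcation (form and values assumed)."""
    jump = 0.25 if theta >= 0.5 else 0.0
    smooth = (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)
    return (theta + omega + smooth + jump) % 1.0

def lyapunov_estimate(theta0, n=20000, burn=1000, omega=0.35, k=1.2):
    """Average log|f'(theta)| along an orbit; the jump is piecewise
    constant, so the derivative is that of the smooth part."""
    theta, total = theta0, 0.0
    for i in range(n + burn):
        deriv = abs(1.0 + k * math.cos(2 * math.pi * theta))
        if i >= burn:
            total += math.log(max(deriv, 1e-12))
        theta = circle_map(theta, omega, k)
    return total / n

lam = lyapunov_estimate(0.123)
```

A positive estimate indicates sensitive dependence on the initial phase; phase-locked regimes instead yield a negative exponent.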
CMOS current-mode chaotic neurons
This paper presents two nonlinear CMOS current-mode circuits that implement neuron soma equations for chaotic neural networks, and another circuit that realizes a programmable current-mode synapse using CMOS-compatible BJTs. They have been fabricated in a double-metal, single-poly 1.6 μm CMOS technology, and their measured performance reached the expected function and specifications. The neuron soma circuits use a novel, highly accurate CMOS circuit strategy to realize piecewise-linear characteristics in the current-mode domain. Their prototypes achieve reduced area and a low supply voltage (down to 3 V) at a clock frequency of 500 kHz. As regards the synapse circuit, it achieves high linearity and continuous, linear weight adjustment by exploiting the exponential-law operation of CMOS-BJTs. The full accordance observed between theory and measurements supports the development of future analog VLSI chaotic neural networks to emulate biological systems and advanced computation.
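Piecewise-linear characteristics like those the soma circuits realize are a standard route to chaos in discrete-time neuron models. The tent map below is a generic piecewise-linear chaotic map used here purely as a software sketch; it is not the soma equation implemented by the chip.

```python
def tent(x, mu=1.9):
    """Piecewise-linear tent map: two linear segments joined at 0.5.
    Chaotic for mu > 1 (illustrative stand-in for a piecewise-linear
    soma nonlinearity; parameter value assumed)."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def orbit(x0, n):
    """Iterate the map n times from x0, returning the full orbit."""
    xs = [x0]
    for _ in range(n):
        xs.append(tent(xs[-1]))
    return xs

xs = orbit(0.2, 100)
```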
Dysfunction of cortical GABAergic neurons leads to sensory hyper-reactivity in a Shank3 mouse model of ASD.
Hyper-reactivity to sensory input is a common and debilitating symptom in individuals with autism spectrum disorders (ASD), but the neural basis underlying sensory abnormality is not completely understood. Here we examined the neural representations of sensory perception in the neocortex of a Shank3B-/- mouse model of ASD. Male and female Shank3B-/- mice were more sensitive to relatively weak tactile stimulation in a vibrissa motion detection task. In vivo population calcium imaging in vibrissa primary somatosensory cortex (vS1) revealed increased spontaneous and stimulus-evoked firing in pyramidal neurons but reduced activity in interneurons. Preferential deletion of Shank3 in vS1 inhibitory interneurons led to pyramidal neuron hyperactivity and increased stimulus sensitivity in the vibrissa motion detection task. These findings provide evidence that cortical GABAergic interneuron dysfunction plays a key role in sensory hyper-reactivity in a Shank3 mouse model of ASD and identify a potential cellular target for exploring therapeutic interventions.
Synchronization of electrically coupled resonate-and-fire neurons
Electrical coupling between neurons is broadly present across brain areas and
is typically assumed to synchronize network activity. However, intrinsic
properties of the coupled cells can complicate this simple picture. Many cell
types with strong electrical coupling have been shown to exhibit resonant
properties, and the subthreshold fluctuations arising from resonance are
transmitted through electrical synapses in addition to action potentials. Using
the theory of weakly coupled oscillators, we explore the effect of both
subthreshold and spike-mediated coupling on synchrony in small networks of
electrically coupled resonate-and-fire neurons, a hybrid neuron model with
linear subthreshold dynamics and discrete post-spike reset. We calculate the
phase response curve using an extension of the adjoint method that accounts for
the discontinuity in the dynamics. We find that both spikes and resonant
subthreshold fluctuations can jointly promote synchronization. The subthreshold
contribution is strongest when the voltage exhibits a significant post-spike
elevation in voltage, or plateau. Additionally, we show that the geometry of
trajectories approaching the spiking threshold causes a "reset-induced shear"
effect that can oppose synchrony in the presence of network asymmetry, despite
having no effect on the phase-locking of symmetrically coupled pairs.
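The resonate-and-fire model referred to above combines linear damped subthreshold oscillation with a discrete post-spike reset; a minimal simulation, after the form introduced by Izhikevich (2001), is sketched below. All parameter values are illustrative assumptions, not taken from the paper.

```python
def simulate_rf(z0=0.0 + 0.0j, b=-0.1, omega=2.0, current=0.2,
                thresh=0.1, z_reset=0.0 - 0.05j, dt=0.001, steps=50000):
    """Resonate-and-fire neuron: subthreshold state z obeys the linear
    equation dz/dt = (b + i*omega)*z + I, giving damped oscillations
    (a stable focus for b < 0); a spike is registered and z is reset
    when Im(z) crosses the threshold. Forward-Euler integration."""
    z, spike_times = z0, []
    for i in range(steps):
        z += dt * ((b + 1j * omega) * z + current)
        if z.imag >= thresh:
            spike_times.append(i * dt)
            z = z_reset  # discrete post-spike reset
    return spike_times

spikes = simulate_rf()
```

The discontinuity introduced by the reset is exactly what requires the extended adjoint method mentioned in the abstract when computing the phase response curve.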
The Computational Cost of Asynchronous Neural Communication
Biological neural computation is inherently asynchronous due to large variations in neuronal spike timing and transmission delays. So far, most theoretical work on neural networks assumes the synchronous setting, where neurons fire simultaneously in discrete rounds. In this work we aim at understanding the barriers of asynchronous neural computation from an algorithmic perspective. We consider an extension of the widely studied model of synchronized spiking neurons [Maass, Neural Networks 97] to the asynchronous setting by taking into account edge and node delays.
- Edge Delays: We define an asynchronous model for spiking neurons in which the latency values (i.e., transmission delays) of non-self-loop edges vary adversarially over time. This extends the recent work of [Hitron and Parter, ESA'19], in which the latency values are restricted to be fixed over time. Our first contribution is an impossibility result implying that the assumption that self-loop edges have no delays (as assumed by Hitron and Parter) is indeed necessary. Interestingly, in real biological networks self-loop edges (a.k.a. autapses) are indeed free of delays, and this property has been noted by neuroscientists to be crucial for network synchronization.
To capture the computational challenges in this setting, we first consider the implementation of a single NOT gate. This simple function already captures the fundamental difficulties of the asynchronous setting. Our key technical results are space and time upper and lower bounds for the NOT function; our time bounds are tight. In the spirit of the distributed synchronizers [Awerbuch and Peleg, FOCS'90] and following [Hitron and Parter, ESA'19], we then provide a general synchronizer machinery. Our construction is very modular and is based on efficient circuit implementations of threshold gates. The complexity of our scheme is measured by the overhead in the number of neurons and the computation time, both of which are shown to be polynomial in the largest latency value and the largest incoming degree of the original network.
- Node Delays: We introduce the study of asynchronous communication due to variations in the response rates of the neurons in the network. In real brain networks, the round duration varies between different neurons. Our key result is a simulation methodology that allows one to transform the above-mentioned synchronized solution under edge delays into a synchronized solution under node delays while incurring a small overhead with respect to space and time.
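The NOT gate mentioned above can be expressed with deterministic threshold-gate neurons: a constant bias input alone crosses the output neuron's threshold, and an inhibitory edge from the input cancels it. This sketch shows only the synchronous baseline gate, not the paper's asynchronous construction; the weights and threshold are illustrative assumptions.

```python
def threshold_neuron(weighted_sum, threshold):
    """Deterministic threshold gate: fires (1) iff the weighted input
    sum meets the threshold, else stays silent (0)."""
    return 1 if weighted_sum >= threshold else 0

def not_gate(x):
    """NOT via one excitatory bias edge (weight +1, always firing)
    and one inhibitory edge (weight -1) from the input neuron x.
    Synchronous single-round version; asynchronous edge/node delays
    are what make even this gate nontrivial in the paper's model."""
    bias = 1.0
    inhibition = -1.0
    return threshold_neuron(bias + inhibition * x, threshold=0.5)
```

With adversarially varying edge latencies, the bias spike and the inhibitory spike may arrive in different rounds, which is precisely the difficulty the paper's lower bounds formalize.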
A simple mechanism for higher-order correlations in integrate-and-fire neurons
The collective dynamics of neural populations are often characterized in
terms of correlations in the spike activity of different neurons. Open
questions surround the basic nature of these correlations. In particular, what
leads to higher-order correlations -- correlations in the population activity
that extend beyond those expected from cell pairs? Here, we examine this
question for a simple, but ubiquitous, circuit feature: common fluctuating
input arriving to spiking neurons of integrate-and-fire type. We show that this
leads to strong higher-order correlations, as found in earlier work with discrete
threshold crossing models. Moreover, we find that the same is true for another
widely used, doubly-stochastic model of neural spiking, the linear-nonlinear
cascade. We explain the surprisingly strong connection between the collective
dynamics produced by these models, and conclude that higher-order correlations
are both broadly expected and possible to capture with surprising accuracy by
simplified (and tractable) descriptions of neural spiking.
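The circuit feature described above, a shared fluctuating input driving a population of integrate-and-fire neurons, is straightforward to simulate; synchronous population events then show up as time bins in which many cells spike together. This is a minimal sketch with illustrative parameter values, not the models analyzed in the paper.

```python
import math
import random

def simulate_population(n_neurons=50, steps=5000, dt=0.001, tau=0.02,
                        v_thresh=1.0, mu=40.0, sigma_c=1.0, sigma_i=1.0,
                        seed=0):
    """Leaky integrate-and-fire population driven by a shared ('common')
    Gaussian fluctuation plus independent per-neuron noise (Euler-Maruyama).
    Returns the per-step population spike counts; the shared input is
    what generates correlations beyond the pairwise level."""
    rng = random.Random(seed)
    v = [0.0] * n_neurons
    counts = []
    sqrt_dt = math.sqrt(dt)
    for _ in range(steps):
        common = rng.gauss(0.0, 1.0)  # same fluctuation for every cell
        k = 0
        for i in range(n_neurons):
            noise = sigma_c * common + sigma_i * rng.gauss(0.0, 1.0)
            v[i] += dt * (-v[i] / tau + mu) + noise * sqrt_dt
            if v[i] >= v_thresh:
                v[i] = 0.0  # reset after spike
                k += 1
        counts.append(k)
    return counts

counts = simulate_population()
```

Comparing the tail of the population spike-count distribution against what a pairwise (e.g. maximum-entropy) model predicts is one common way to quantify the higher-order correlations in question.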