Chaotic Phase Synchronization in Bursting-neuron Models Driven by a Weak Periodic Force
We investigate the entrainment of a neuron model exhibiting a chaotic
spiking-bursting behavior in response to a weak periodic force. This model
exhibits two types of oscillations with different characteristic time scales,
namely, long and short time scales. Several types of phase synchronization are
observed, such as 1:1 phase locking between a single spike and one period of
the force, and 1:l phase locking between the period of the slow oscillation
underlying bursts and l periods of the force. Moreover, spiking-bursting
oscillations with chaotic firing patterns can be synchronized with the periodic
force. Such a type of phase synchronization is detected from the position of a
set of points on a unit circle, which is determined by the phase of the
periodic force at each spiking time. We show that this detection method is
effective for a system with multiple time scales. Owing to the existence of
both the short and the long time scales, two characteristic phenomena are found
around the transition point to chaotic phase synchronization. First, the
average time interval between successive phase slips exhibits power-law scaling
against the driving force strength, and the scaling exponent depends
non-smoothly on changes in the driving force strength. Second, Kuramoto's order
parameter before the transition exhibits stepwise behavior as a function of the
driving force strength, in contrast to the smooth transition in a model with a
single time scale.
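The detection method described above, which places the phase of the periodic force at each spike time as a point on the unit circle and measures the clustering of those points with Kuramoto's order parameter, can be sketched as follows. This is a minimal illustration on synthetic spike times; the function names and numerical values are our own assumptions, not taken from the paper:

```python
import numpy as np

def force_phase_at_spikes(spike_times, period):
    """Phase of the periodic driving force at each spike time, in [0, 2*pi)."""
    return 2.0 * np.pi * (np.asarray(spike_times) % period) / period

def kuramoto_order_parameter(phases):
    """r = |<exp(i*phi)>|: near 1 for phase-locked spikes (points clustered
    on the unit circle), near 0 for spikes scattered uniformly in phase."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

# Synthetic example: spikes locked to a force of period 1.0 (small jitter)
# versus spikes at uniformly random times.
rng = np.random.default_rng(0)
locked = np.arange(100) + 0.25 + 0.01 * rng.standard_normal(100)
unlocked = np.sort(rng.uniform(0.0, 100.0, 100))

r_locked = kuramoto_order_parameter(force_phase_at_spikes(locked, 1.0))
r_unlocked = kuramoto_order_parameter(force_phase_at_spikes(unlocked, 1.0))
```

For phase-locked spiking the points pile up at one position on the circle and r approaches 1; for unsynchronized spiking r stays near 1/sqrt(N).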
Linear stability analysis of retrieval state in associative memory neural networks of spiking neurons
We study associative memory neural networks of the Hodgkin-Huxley type of
spiking neurons in which multiple periodic spatio-temporal patterns of spike
timing are memorized as limit-cycle-type attractors. To encode the
spatio-temporal patterns, we assume spike-timing-dependent synaptic plasticity
with an asymmetric time window. Analysis of the periodic solution of the
retrieval state reveals that if the area of the negative part of the time
window equals that of the positive part, then the crosstalk among encoded
patterns vanishes. A phase transition due to the loss of stability of the
periodic solution is observed when we assume a fast alpha function for the
direct interaction among neurons. To evaluate the critical point of this phase
transition, we employ Floquet theory, in which the stability problem for an
infinite number of spiking neurons interacting via alpha functions is reduced
to an eigenvalue problem for a finite-size matrix. Numerical integration of the
single-body dynamics yields the explicit values of the matrix elements, which
enables us to determine the critical point of the phase transition with a high
degree of precision.
Comment: Accepted for publication in Phys. Rev.
The Temporal Signature of Memories: Identification of a General Mechanism for Dynamic Memory Replay in Humans
Reinstatement of dynamic memories requires the replay of neural patterns that unfold over
time in a similar manner as during perception. However, little is known about the mechanisms
that guide such a temporally structured replay in humans, because previous studies
used either unsuitable methods or paradigms to address this question. Here, we overcome
these limitations by developing a new analysis method to detect the replay of temporal patterns
in a paradigm that requires participants to mentally replay short sound or video clips.
We show that memory reinstatement is accompanied by a decrease of low-frequency (8
Hz) power, which carries a temporal phase signature of the replayed stimulus. These replay
effects were evident in the visual as well as in the auditory domain and were localized to
sensory-specific regions. These results suggest low-frequency phase to be a domain-general
mechanism that orchestrates dynamic memory replay in humans.
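The core idea, that the low-frequency phase during mental replay should trace the same temporal pattern as during perception, can be illustrated with a toy similarity measure on two phase time series. This is our own sketch under simplifying assumptions (idealized 8 Hz phase ramps, a constant replay lag), not the authors' analysis pipeline:

```python
import numpy as np

def phase_pattern_similarity(phi_percept, phi_replay):
    """Mean resultant length of the phase difference: near 1 if the replayed
    phase pattern matches perception up to a constant offset, near 0 if the
    phase relationship is random."""
    dphi = np.asarray(phi_replay) - np.asarray(phi_percept)
    return np.abs(np.mean(np.exp(1j * dphi)))

# Toy data: an 8 Hz phase pattern, replayed with a constant lag plus jitter,
# compared against a series with a random phase relationship.
rng = np.random.default_rng(1)
t = np.arange(0.0, 3.0, 0.01)             # a 3-s clip sampled at 100 Hz
phi_percept = 2.0 * np.pi * 8.0 * t       # idealized 8 Hz phase ramp
phi_replay = phi_percept + 1.3 + 0.1 * rng.standard_normal(t.size)
phi_other = phi_percept + rng.uniform(0.0, 2.0 * np.pi, t.size)

s_match = phase_pattern_similarity(phi_percept, phi_replay)
s_mismatch = phase_pattern_similarity(phi_percept, phi_other)
```

Because only the phase difference enters, the measure tolerates a fixed replay lag while still rejecting unrelated phase patterns.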
Principal component analysis of ensemble recordings reveals cell assemblies at high temporal resolution
Simultaneous recordings of many single neurons reveal unique insights into network processing, spanning timescales from single spikes to global oscillations. Neurons dynamically self-organize into subgroups of coactivated elements referred to as cell assemblies. Furthermore, these cell assemblies are reactivated, or replayed, preferentially during subsequent rest or sleep episodes, a proposed mechanism for memory-trace consolidation. Here we employ principal component analysis to isolate such patterns of neural activity. In addition, a measure is developed to quantify the similarity of instantaneous activity with a template pattern, and we derive theoretical distributions for the null hypothesis of no correlation between spike trains, allowing one to evaluate the statistical significance of instantaneous coactivations. Hence, when applied in an epoch different from the one in which the patterns were identified (e.g. subsequent sleep), this measure allows one to identify the times and intensities of reactivation. The distribution of this measure provides information on the dynamics of reactivation events: in sleep these occur as transients rather than as a continuous process.
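The pipeline named in the abstract, extracting candidate assembly patterns with PCA and then scoring instantaneous activity against a template, can be sketched on binned, z-scored spike counts. All names, the toy data, and the diagonal-removal detail are our assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def assembly_templates(z, n_components):
    """Leading principal components of z-scored binned spike counts
    (neurons x time bins): candidate cell-assembly patterns."""
    cov = np.cov(z)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]           # sort descending by variance
    return eigvec[:, order[:n_components]]     # neurons x components

def reactivation_strength(z_epoch, template):
    """Similarity of instantaneous activity with a template pattern:
    projection of each time bin onto the outer-product of the template,
    with the diagonal removed so isolated single-neuron firing does not
    contribute, only coactivation of template members does."""
    P = np.outer(template, template)
    np.fill_diagonal(P, 0.0)
    return np.einsum('it,ij,jt->t', z_epoch, P, z_epoch)

# Toy data: neurons 0-4 co-activate in ~20% of bins (the "assembly"),
# the rest is independent noise.
rng = np.random.default_rng(2)
z = rng.standard_normal((20, 500))
active = rng.random(500) < 0.2
z[:5, active] += 2.0
z = (z - z.mean(axis=1, keepdims=True)) / z.std(axis=1, keepdims=True)

template = assembly_templates(z, 1)[:, 0]
strength = reactivation_strength(z, template)
```

Applied to a different epoch (e.g. subsequent sleep), peaks of `strength` would mark candidate reactivation events; the quadratic form makes the measure insensitive to the sign ambiguity of the eigenvector.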
Hippocampal state-dependent behavioral reflex to an identical sensory input in rats.
We examined the local field potential of the hippocampus to monitor brain states during a conditional discrimination task, in order to elucidate the relationship between ongoing brain states and a conditioned motor reflex. Five 10-week-old male Wistar/ST rats underwent a serial feature-positive conditional discrimination task in eyeblink conditioning, using a preceding light stimulus as a conditional cue for reinforced trials. In this task, a 2-s light stimulus signaled that the following 350-ms tone (conditioned stimulus) was reinforced with a co-terminating 100-ms periorbital electrical shock. The interval between the end of the conditional cue and the onset of the conditioned stimulus was 4±1 s. The conditioned stimulus was not reinforced when the light was not presented. Animals successfully utilized the light stimulus as a conditional cue to drive differential responses to the identical conditioned stimulus. We found that presentation of the conditional cue elicited hippocampal theta oscillations, which persisted during the interval between the conditional cue and the conditioned stimulus. Moreover, expression of the conditioned response to the tone (conditioned stimulus) was correlated with the appearance of theta oscillations immediately before the conditioned stimulus. These data support hippocampal involvement in the network underlying a conditional discrimination task in eyeblink conditioning. They also suggest that the preceding hippocampal activity can determine information processing of the tone stimulus in the cerebellum and its associated circuits.
The spike-timing-dependent learning rule to encode spatiotemporal patterns in a network of spiking neurons
We study associative memory neural networks based on Hodgkin-Huxley-type
spiking neurons. We introduce a spike-timing-dependent learning rule in which a
time window with both a negative and a positive part is used to describe
biologically plausible synaptic plasticity. The learning rule is applied to
encode a number of periodic spatiotemporal patterns, which are successfully
reproduced in the periodic firing patterns of the spiking neurons during memory
retrieval. Global inhibition is incorporated into the model so as to induce
gamma oscillation. The occurrence of gamma oscillation turns out to provide
appropriate spike timings for memory retrieval of discrete spatiotemporal
patterns. A theoretical analysis elucidating the stationary properties of the
perfect retrieval state is conducted in the limit of an infinite number of
neurons and shows good agreement with the results of numerical simulations.
This analysis indicates that the presence of both the negative and positive
parts of the time window reduces the size of the crosstalk term, implying that
a time window with negative and positive parts is well suited to encoding a
number of spatiotemporal patterns. We draw phase diagrams in which we find
various types of phase transitions as the intensity of the global inhibition
changes.
Comment: Accepted for publication in Physical Review
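The asymmetric time window with a negative part balancing the positive part, the condition under which the two abstracts above report that the crosstalk term cancels, is commonly written as a double-exponential STDP kernel. Below is a small sketch of that balance condition; the parameter values and function names are illustrative assumptions, not taken from the papers:

```python
import numpy as np

def stdp_window(dt, A_plus=1.0, tau_plus=20.0, tau_minus=20.0):
    """Asymmetric STDP time window W(dt), with dt = t_post - t_pre in ms.
    A_minus is chosen so that the area of the negative (depression) part
    equals the area of the positive (potentiation) part: the balance
    condition under which the crosstalk among encoded patterns vanishes."""
    A_minus = A_plus * tau_plus / tau_minus
    dt = np.asarray(dt, dtype=float)
    return np.where(
        dt >= 0,
        A_plus * np.exp(-dt / tau_plus),     # pre before post: potentiation
        -A_minus * np.exp(dt / tau_minus),   # post before pre: depression
    )

# Numerical check that the positive and negative areas cancel:
dt = np.linspace(-200.0, 200.0, 400001)
area = float(np.sum(stdp_window(dt)) * (dt[1] - dt[0]))
```

With balanced areas the integral of the window is (numerically) zero, so uncorrelated pre/post spike pairs produce no net weight drift, which is one way to see why the crosstalk contribution shrinks.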
The Neural Representation of Prospective Choice during Spatial Planning and Decisions
We are remarkably adept at inferring the consequences of our actions, yet the neuronal mechanisms that allow us to plan a sequence of novel choices remain unclear. We used functional magnetic resonance imaging (fMRI) to investigate how the human brain plans the shortest path to a goal in novel mazes with one (shallow maze) or two (deep maze) choice points. We observed two distinct anterior prefrontal responses to demanding choices at the second choice point: one in rostrodorsal medial prefrontal cortex (rd-mPFC)/superior frontal gyrus (SFG) that was also sensitive to (deactivated by) demanding initial choices, and another in lateral frontopolar cortex (lFPC), which was only engaged by demanding choices at the second choice point. Furthermore, we identified hippocampal responses during planning that correlated with subsequent choice accuracy and response time, particularly in mazes affording sequential choices. Psychophysiological interaction (PPI) analyses showed that coupling between the hippocampus and rd-mPFC increases during sequential (deep versus shallow) planning and is higher before correct versus incorrect choices. In short, using a naturalistic spatial planning paradigm, we reveal how the human brain represents sequential choices during planning without extensive training. Our data highlight a network centred on the cortical midline and hippocampus that allows us to make prospective choices while maintaining initial choices during planning in novel environments.
Heterosynaptic plasticity in the neocortex
Ongoing learning continuously shapes the distribution of neurons' synaptic weights in a system with plastic synapses. Plasticity may change the weights of synapses that were active during the induction (homosynaptic changes), but may also change synapses that were not active during the induction (heterosynaptic changes). Here we argue that heterosynaptic and homosynaptic plasticity are complementary processes, and that heterosynaptic plasticity might accompany homosynaptic plasticity induced by typical pairing protocols. Synapses are not uniform in their susceptibility to plastic changes, but have predispositions to undergo potentiation or depression, or not to change. Predisposition is one of the factors determining the direction and magnitude of homo- and heterosynaptic changes. Heterosynaptic changes that take place according to predispositions for plasticity may provide a useful mechanism for homeostasis of neurons' synaptic weights and for extending the lifetime of memory traces during ongoing learning in neuronal networks.