The Emergence of Up and Down States in Cortical Networks
The cerebral cortex is continuously active in the absence of external stimuli. An example of this spontaneous activity is the voltage transition between an Up and a Down state, observed simultaneously in individual neurons. Since this phenomenon could be of critical importance for working memory and attention, its explanation could reveal some fundamental properties of cortical organization. To identify a possible scenario for the dynamics of Up–Down states, we analyze a reduced stochastic dynamical system that models an interconnected network of excitatory neurons with activity-dependent synaptic depression. The model reveals that when the total synaptic connection strength exceeds a certain threshold, the phase space of the dynamical system contains two attractors, interpreted as Up and Down states. In that case, synaptic noise causes transitions between the states. Moreover, an external stimulation producing a depolarization increases the time spent in the Up state, as observed experimentally. We therefore propose that the existence of Up–Down states is a fundamental and inherent property of a noisy neural ensemble with sufficiently strong synaptic connections.
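A reduced model of this kind can be sketched as a single noisy population-rate equation coupled to a synaptic depression variable. The following is a minimal illustration only: the saturating threshold-linear gain, the noise term, and all parameter values are assumptions chosen for demonstration, not the paper's equations.

```python
import numpy as np

rng = np.random.default_rng(0)

tau_e = 0.01     # firing-rate time constant (s)
tau_r = 0.8      # synaptic resource recovery time constant (s)
U = 0.5          # fraction of resources used per unit of activity
J = 6.0          # total synaptic connection strength
theta = 1.0      # activation threshold
sigma = 0.2      # synaptic noise amplitude
E_max = 50.0     # firing-rate saturation (Hz)
dt, T = 1e-3, 20.0

def gain(h):
    """Saturating threshold-linear gain (an assumption of this sketch)."""
    return np.clip(h, 0.0, E_max)

E, x = 0.0, 1.0          # population rate and available synaptic resources
rates = []
for _ in range(int(T / dt)):
    h = J * U * x * E - theta            # recurrent input through depressed synapses
    noise = sigma * np.sqrt(dt) * rng.standard_normal()
    E += dt * (gain(h) - E) / tau_e + noise / tau_e
    x += dt * ((1.0 - x) / tau_r - U * x * E)
    E = max(E, 0.0)
    x = min(max(x, 0.0), 1.0)
    rates.append(float(E))

# For J above a critical value the deterministic system has two attractors
# (E near 0 and E > 0); the noise term then drives Up <-> Down transitions.
```

With these illustrative parameters the trajectory hops between a quiescent Down state and an elevated Up state, mirroring the bistable-plus-noise scenario the abstract describes.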
Correction: Persistent Activity in Neural Networks with Dynamic Synapses
Persistent activity states (attractors), observed in several neocortical areas after the removal of a sensory stimulus, are believed to be the neuronal basis of working memory. One of the possible mechanisms that can underlie persistent activity is recurrent excitation mediated by intracortical synaptic connections. A recent experimental study revealed that connections between pyramidal cells in prefrontal cortex exhibit various degrees of synaptic depression and facilitation. Here we analyze the effect of synaptic dynamics on the emergence and persistence of attractor states in interconnected neural networks. We show that different combinations of synaptic depression and facilitation result in qualitatively different network dynamics with respect to the emergence of the attractor states. This analysis raises the possibility that the framework of attractor neural networks can be extended to represent time-dependent stimuli.
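Combinations of depression and facilitation like those discussed above are commonly captured by phenomenological short-term plasticity models in the Tsodyks-Markram style. The event-based sketch below illustrates the idea for a single synapse driven by a regular spike train; the parameter values and the exact update conventions are assumptions of this illustration, not taken from the paper.

```python
import numpy as np

def synaptic_responses(spike_times, U, tau_rec, tau_facil):
    """Return the synaptic efficacy u*x at each presynaptic spike.

    x: fraction of available resources (depression variable),
    u: utilization of resources (facilitation variable).
    Between spikes both relax exponentially; at each spike u jumps up
    and the response consumes a fraction u of the resources x.
    """
    x, u, t_prev = 1.0, U, None
    out = []
    for t in spike_times:
        if t_prev is not None:
            dt = t - t_prev
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)   # resources recover toward 1
            u = U + (u - U) * np.exp(-dt / tau_facil)     # facilitation decays toward U
        u = u + U * (1.0 - u)     # facilitation jump at the spike
        out.append(u * x)         # response amplitude
        x = x * (1.0 - u)         # resources consumed by the spike
        t_prev = t
    return out

spikes = np.arange(0.0, 200.0, 25.0)   # regular 40 Hz train, times in ms
depressing   = synaptic_responses(spikes, U=0.5, tau_rec=800.0, tau_facil=10.0)
facilitating = synaptic_responses(spikes, U=0.1, tau_rec=100.0, tau_facil=1000.0)
```

With the first (depression-dominated) parameter set the responses shrink over the train, while the second (facilitation-dominated) set produces initially growing responses: the two qualitative regimes the abstract contrasts.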
Continuous Attractor Network Model for Conjunctive Position-by-Velocity Tuning of Grid Cells
The spatial responses of many of the cells recorded in layer II of rodent medial entorhinal cortex (MEC) show a triangular grid pattern, which appears to provide an accurate population code for animal spatial position. In layers III, V, and VI of the rat MEC, grid cells are also selective to head direction and are modulated by the speed of the animal. Several putative mechanisms of grid-like maps have been proposed, including attractor network dynamics, interactions with theta oscillations, or single-unit mechanisms such as firing rate adaptation. In this paper, we present a new attractor network model that accounts for the conjunctive position-by-velocity selectivity of grid cells. Our network model is able to perform robust path integration even when the recurrent connections are subject to random perturbations.
Processing of Sounds by Population Spikes in a Model of Primary Auditory Cortex
We propose a model of the primary auditory cortex (A1), in which each iso-frequency column is represented by a recurrent neural network with short-term synaptic depression. Such networks can emit population spikes, in which most of the neurons fire synchronously for a short time period. Different columns are interconnected in a way that reflects the tonotopic map in A1, and population spikes can propagate along the map from one column to the next, in a temporally precise manner that depends on the specific input presented to the network. The network, therefore, processes incoming sounds by precise sequences of population spikes that are embedded in a continuous asynchronous activity, with both of these response components carrying information about the inputs and interacting with each other. With these basic characteristics, the model can account for a wide range of experimental findings. We reproduce neuronal frequency tuning curves, whose width depends on the strength of the intracortical inhibitory and excitatory connections. Non-simultaneous two-tone stimuli show forward masking depending on their temporal separation, as well as on the duration of the first stimulus. The model also exhibits non-linear suppressive interactions between sub-threshold tones and broad-band noise inputs, similar to the hypersensitive locking suppression recently demonstrated in auditory cortex. We derive several predictions from the model. In particular, we predict that spontaneous activity in primary auditory cortex gates the temporally locked responses of A1 neurons to auditory stimuli. Spontaneous activity could, therefore, be a mechanism for rapid and reversible modulation of cortical processing.
An organic nanoparticle transistor behaving as a biological synapse
Molecule-based devices are envisioned to complement silicon devices by providing new functions, or already existing functions at a simpler process level and at a lower cost, by virtue of their self-organization capabilities. Moreover, they are not bound to the von Neumann architecture, and this feature may open the way to other architectural paradigms. Neuromorphic electronics is one of them. Here we demonstrate a device made of molecules and nanoparticles, a nanoparticle organic memory field-effect transistor (NOMFET), which exhibits the main behavior of a biological spiking synapse. Facilitating and depressing synaptic behaviors can be reproduced by the NOMFET and can be programmed. The synaptic plasticity for real-time computing is evidenced and described by a simple model. These results open the way to rate-coding utilization of the NOMFET in dynamical neuromorphic computing circuits.
Comment: To be published in Adv. Funct. Mater. Revised version; one PDF file including main paper and supplementary information.
Event-driven simulations of a plastic, spiking neural network
We consider a fully connected network of leaky integrate-and-fire neurons with spike-timing-dependent plasticity. The plasticity is controlled by a parameter representing the expected weight of a synapse between neurons that are firing randomly with the same mean frequency. For low values of the plasticity parameter, the activities of the system are dominated by noise, while large values of the plasticity parameter lead to self-sustaining activity in the network. We perform event-driven simulations on finite-size networks with up to 128 neurons to find the stationary synaptic weight conformations for different values of the plasticity parameter. In both the low- and high-activity regimes, the synaptic weights are narrowly distributed around the plasticity parameter value, consistent with the predictions of mean-field theory. However, the distribution broadens in the transition region between the two regimes, representing emergent network structures. Using a pseudophysical approach for visualization, we show that the emergent structures are of "path" or "hub" type, observed at different values of the plasticity parameter in the transition region.
Comment: 9 pages, 6 figures
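The event-driven approach exploits the fact that, between spikes, each leaky integrate-and-fire neuron's membrane potential has an exact analytic solution, so the simulation jumps directly from one spike event to the next instead of stepping time on a fixed grid. The sketch below illustrates this for a small fully connected network with a pair-based STDP rule; the constant suprathreshold drive, the plasticity kernel, and all parameter values are assumptions of this illustration, not the paper's setup.

```python
import heapq
import math
import numpy as np

N = 8
tau_m, v_th, v_reset = 20.0, 1.0, 0.0   # membrane time constant (ms), threshold, reset
I_ext = 1.2                              # constant drive above threshold, so neurons fire
A_plus, A_minus, tau_stdp = 0.01, 0.012, 20.0
w_max = 0.1

rng = np.random.default_rng(1)
W = rng.uniform(0.0, 0.05, size=(N, N))  # synaptic weights W[pre, post]
np.fill_diagonal(W, 0.0)

v = rng.uniform(0.0, 0.5, size=N)        # membrane potentials
last_update = np.zeros(N)                # time each v[i] was last made current
last_spike = np.full(N, -1e12)           # last spike time per neuron

def advance(i, t):
    """Exact exponential relaxation of v[i] toward I_ext up to time t."""
    v[i] = I_ext + (v[i] - I_ext) * math.exp(-(t - last_update[i]) / tau_m)
    last_update[i] = t

def next_crossing(i, t):
    """Analytic next threshold-crossing time, assuming no further input."""
    if v[i] >= v_th:
        return t
    return t + tau_m * math.log((I_ext - v[i]) / (I_ext - v_th))

events = [(next_crossing(i, 0.0), i) for i in range(N)]  # (time, neuron) heap
heapq.heapify(events)

T_end, n_spikes = 500.0, 0
while events:
    t, i = heapq.heappop(events)
    if t > T_end:
        break
    advance(i, t)
    if v[i] < v_th - 1e-9:
        # stale event (a synaptic kick changed the trajectory): reschedule
        heapq.heappush(events, (next_crossing(i, t), i))
        continue
    # neuron i fires: reset, record, apply STDP, deliver kicks to all others
    v[i] = v_reset
    last_spike[i] = t
    n_spikes += 1
    heapq.heappush(events, (next_crossing(i, t), i))
    for j in range(N):
        if j == i:
            continue
        decay = math.exp(-(t - last_spike[j]) / tau_stdp)
        W[j, i] = min(w_max, W[j, i] + A_plus * decay)   # pre-before-post: LTP
        W[i, j] = max(0.0, W[i, j] - A_minus * decay)    # post-before-pre: LTD
        advance(j, t)
        v[j] += W[i, j]                                  # instantaneous excitatory kick
        heapq.heappush(events, (next_crossing(j, t), j))
```

Because every state update happens exactly at an event time, no accuracy is lost to time discretization; the cost of each spike is the O(N) delivery of kicks plus the heap operations.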
A learning rule for place fields in a cortical model: theta phase precession as a network effect
We show that a model of the hippocampus introduced recently by Scarpetta, Zhaoping & Hertz ([2002] Neural Computation 14(10):2371-96) explains the theta phase precession phenomenon. In our model, theta phase precession comes out as a consequence of the associative-memory-like network dynamics, i.e. the network's ability to imprint and recall oscillatory patterns, coded both by phases and amplitudes of oscillation. The learning rule used to imprint the oscillatory states is a natural generalization of the one used for static patterns in the Hopfield model, and is based on experimentally observed spike-timing-dependent plasticity (STDP). In agreement with experimental findings, the place cell's activity appears at consistently earlier phases of subsequent cycles of the ongoing theta rhythm during a pass through the place field, while the oscillation amplitude of the place cell's firing rate increases as the animal approaches the center of the place field and decreases as the animal leaves it. The total phase precession of the place cell is less than 360 degrees, in agreement with experiments. As the animal enters a receptive field, the place cell's activity comes slightly less than 180 degrees after the phase of maximal pyramidal cell population activity, in agreement with the findings of Skaggs et al. (1996). Our model predicts that the theta phase is much better correlated with location than with time spent in the receptive field. Finally, in agreement with the recent experimental findings of Zugaro et al. (2005), our model predicts that theta phase precession persists after transient intra-hippocampal perturbation.
Comment: 10 pages, 7 figures; to be published in Hippocampus