How single neuron properties shape chaotic dynamics and signal transmission in random neural networks
While most models of randomly connected networks assume nodes with simple
dynamics, nodes in realistic highly connected networks, such as neurons in the
brain, exhibit intrinsic dynamics over multiple timescales. We analyze how the
dynamical properties of nodes (such as single neurons) and recurrent
connections interact to shape the effective dynamics in large randomly
connected networks. A novel dynamical mean-field theory for strongly connected
networks of multi-dimensional rate units shows that the power spectrum of the
network activity in the chaotic phase emerges from a nonlinear sharpening of
the frequency response function of single units. For the case of
two-dimensional rate units with strong adaptation, we find that the network
exhibits a state of "resonant chaos", characterized by robust, narrow-band
stochastic oscillations. The coherence of stochastic oscillations is maximal at
the onset of chaos and their correlation time scales with the adaptation
timescale of single units. Surprisingly, the resonance frequency can be
predicted from the properties of isolated units, even in the presence of
heterogeneity in the adaptation parameters. In the presence of these
internally-generated chaotic fluctuations, the transmission of weak,
low-frequency signals is strongly enhanced by adaptation, whereas signal
transmission is not influenced by adaptation in the non-chaotic regime. Our
theoretical framework can be applied to other mechanisms at the level of single
nodes, such as synaptic filtering, refractoriness or spike synchronization.
These results advance our understanding of the interaction between the dynamics
of single units and recurrent connectivity, which is a fundamental step toward
the description of biologically realistic network models in the brain, or, more
generally, networks of other physical or man-made complex dynamical units.
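The mechanism described above can be illustrated with a minimal simulation sketch (not the paper's code; the specific two-dimensional unit with a single adaptation variable and all parameter values are illustrative assumptions): a random network of rate units with adaptation, whose population power spectrum can then be inspected for the resonant peak.

```python
import numpy as np

# Sketch of a random network of two-dimensional rate units with adaptation.
# All parameters are illustrative; this is not the paper's code.
rng = np.random.default_rng(0)

N = 200        # number of units
g = 2.0        # coupling gain (chaotic regime for sufficiently large g)
g_a = 0.5      # adaptation coupling strength
tau_a = 10.0   # adaptation timescale (sets the single-unit resonance)
dt = 0.1
steps = 2000

# Gaussian random connectivity with variance g^2 / N
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))

x = rng.normal(0.0, 0.5, N)   # unit activation
a = np.zeros(N)               # adaptation variable
xs = np.empty((steps, N))

for t in range(steps):
    r = np.tanh(x)                       # firing-rate nonlinearity
    x = x + dt * (-x - g_a * a + J @ r)  # fast rate dynamics
    a = a + dt * (x - a) / tau_a         # slow adaptation dynamics
    xs[t] = x

# population-averaged power spectrum of the activity
spec = np.abs(np.fft.rfft(xs - xs.mean(axis=0), axis=0)) ** 2
mean_spec = spec.mean(axis=1)
freqs = np.fft.rfftfreq(steps, d=dt)
peak_freq = freqs[np.argmax(mean_spec[1:]) + 1]  # skip the DC bin
```

In the resonant-chaos regime described in the abstract, `mean_spec` would develop a narrow band around a frequency tracking the single-unit resonance set by `tau_a` and `g_a`.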
Optimal stimulation protocol in a bistable synaptic consolidation model
Consolidation of synaptic changes in response to neural activity is thought
to be fundamental for memory maintenance over a timescale of hours. In
experiments, synaptic consolidation can be induced by repeatedly stimulating
presynaptic neurons. However, the effectiveness of such protocols depends
crucially on the repetition frequency of the stimulations and the mechanisms
that cause this complex dependence are unknown. Here we propose a simple
mathematical model that allows us to systematically study the interaction
between the stimulation protocol and synaptic consolidation. We show the
existence of optimal stimulation protocols for our model and, similarly to LTP
experiments, the repetition frequency of the stimulation plays a crucial role
in achieving consolidation. Our results show that the complex dependence of LTP
on the stimulation frequency emerges naturally from a model which satisfies
only minimal bistability requirements. Comment: 23 pages, 6 figures
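The interplay between repetition frequency and bistability can be sketched with a toy model (an assumption-laden illustration, not the paper's model: a cubic bistable weight dynamics with instantaneous stimulation pulses):

```python
# Toy bistable synapse: w' = -w (w - theta) (w - 1) / tau has stable
# states at w = 0 (depressed) and w = 1 (consolidated), separated by an
# unstable threshold at w = theta. Parameters are illustrative.
def consolidates(n_pulses, interval, amp=0.25, theta=0.5, tau=1.0, dt=0.01):
    """True if repeated pulses drive the weight into the potentiated state."""
    w, t = 0.0, 0.0
    next_pulse, pulses_left = 0.0, n_pulses
    t_end = n_pulses * interval + 20.0 * tau   # let dynamics relax afterwards
    while t < t_end:
        if pulses_left and t >= next_pulse:    # brief stimulation pulse
            w += amp
            pulses_left -= 1
            next_pulse += interval
        w += dt * (-w * (w - theta) * (w - 1.0)) / tau  # bistable drift
        t += dt
    return w > theta

# same number of pulses, different repetition frequency, different outcome
fast = consolidates(n_pulses=4, interval=0.2)   # short intervals consolidate
slow = consolidates(n_pulses=4, interval=5.0)   # long intervals do not
```

Because the unstable fixed point at `theta` separates the two stable states, pulses must arrive faster than the weight relaxes back toward the depressed state; this is the minimal sense in which an optimal repetition frequency emerges from bistability alone.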
Analysis of data systems requirements for global crop production forecasting in the 1985 time frame
Data systems concepts needed to implement the objective of global crop production forecasting, in an orderly transition from experimental to operational status in the 1985 time frame, were examined. Information needs of users were converted into data system requirements, and the influence of these requirements on the formulation of a conceptual data system was analyzed. Potential problem areas in meeting these data system requirements were identified in an iterative process.
Nonnormal amplification in random balanced neuronal networks
In dynamical models of cortical networks, the recurrent connectivity can
amplify the input given to the network in two distinct ways. One is induced by
the presence of near-critical eigenvalues in the connectivity matrix W,
producing large but slow activity fluctuations along the corresponding
eigenvectors (dynamical slowing). The other relies on W being nonnormal, which
allows the network activity to make large but fast excursions along specific
directions. Here we investigate the tradeoff between nonnormal amplification
and dynamical slowing in the spontaneous activity of large random neuronal
networks composed of excitatory and inhibitory neurons. We use a Schur
decomposition of W to separate the two amplification mechanisms. Assuming
linear stochastic dynamics, we derive an exact expression for the expected
amount of purely nonnormal amplification. We find that amplification is very
limited if dynamical slowing must be kept weak. We conclude that, to achieve
strong transient amplification with little slowing, the connectivity must be
structured. We show that unidirectional connections between neurons of the same
type together with reciprocal connections between neurons of different types,
allow for amplification already in the fast dynamical regime. Finally, our
results also shed light on the differences between balanced networks in which
inhibition exactly cancels excitation, and those where inhibition dominates. Comment: 13 pages, 7 figures
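The separation of the two amplification mechanisms via a Schur decomposition can be sketched as follows (an illustrative sketch; the E-I matrix construction and the inhibition-dominance factor are assumptions, not the paper's exact ensemble):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)

N = 100                  # neurons: first half excitatory, second half inhibitory
scale = 1.0 / np.sqrt(N)
W = np.empty((N, N))
W[:, :N // 2] = rng.random((N, N // 2)) * scale         # excitatory columns
W[:, N // 2:] = -rng.random((N, N // 2)) * scale * 1.2  # inhibition dominates

# Schur decomposition W = Q T Q^H with T upper triangular: the diagonal
# of T carries the eigenvalues (dynamical slowing), while the strictly
# upper-triangular part is an effective feedforward structure responsible
# for nonnormal transient amplification.
T, Q = schur(W, output="complex")
ff_part = np.triu(T, k=1)

# Henrici's departure from normality: size of the feedforward part
nonnormality = np.linalg.norm(ff_part)
```

A normal matrix would give `nonnormality == 0`; for random E-I matrices it is strictly positive, and the balance between `np.diag(T)` and `ff_part` is one way to quantify the slowing-versus-amplification tradeoff the abstract describes.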
Event-driven simulations of a plastic, spiking neural network
We consider a fully-connected network of leaky integrate-and-fire neurons
with spike-timing-dependent plasticity. The plasticity is controlled by a
parameter representing the expected weight of a synapse between neurons that
are firing randomly with the same mean frequency. For low values of the
plasticity parameter, the activities of the system are dominated by noise,
while large values of the plasticity parameter lead to self-sustaining activity
in the network. We perform event-driven simulations on finite-size networks
with up to 128 neurons to find the stationary synaptic weight conformations for
different values of the plasticity parameter. In both the low and high activity
regimes, the synaptic weights are narrowly distributed around the plasticity
parameter value consistent with the predictions of mean-field theory. However,
the distribution broadens in the transition region between the two regimes,
representing emergent network structures. Using a pseudophysical approach for
visualization, we show that the emergent structures are of "path" or "hub"
type, observed at different values of the plasticity parameter in the
transition region. Comment: 9 pages, 6 figures
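The core idea of event-driven simulation, advancing the state only at spike events by using the closed-form solution of the leak between them, can be sketched for a single plastic synapse (illustrative only; the pair-based plasticity rule and all parameter values are assumptions, not the paper's model):

```python
import heapq
import math

# Event-driven sketch: one LIF neuron driven by a queue of presynaptic
# spike events. Between events the membrane decays analytically, so no
# time-stepped integration is needed. Parameters are illustrative.
tau_m = 20.0       # membrane time constant (ms)
v_th = 1.0         # firing threshold
tau_stdp = 20.0    # STDP time constant (ms)
a_plus, a_minus = 0.05, 0.06   # potentiation / depression amplitudes

w = 0.4            # plastic synaptic weight
v = 0.0            # membrane potential
t_now = 0.0
t_last_pre = -1e9  # time of the most recent presynaptic spike
post_spikes = []   # recorded postsynaptic spike times

events = [5.0, 8.0, 10.0, 12.0, 60.0]   # presynaptic spike times (ms)
heapq.heapify(events)

while events:
    t = heapq.heappop(events)
    v *= math.exp(-(t - t_now) / tau_m)  # exact leak between events
    t_now = t
    v += w                               # presynaptic spike arrives
    if post_spikes:                      # pre-after-post: depression
        w -= a_minus * math.exp(-(t - post_spikes[-1]) / tau_stdp)
    t_last_pre = t
    if v >= v_th:                        # postsynaptic spike
        post_spikes.append(t)
        v = 0.0                          # reset
        # post-after-pre: potentiation toward the triggering pre spike
        w += a_plus * math.exp(-(t - t_last_pre) / tau_stdp)
```

The point of the sketch is the event-driven kernel itself: because the leak has an exact exponential solution, both the membrane and the STDP traces need updating only at spike times, which is what makes simulations of networks like the 128-neuron one above efficient.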
Triplets of Spikes in a Model of Spike Timing-Dependent Plasticity
Classical experiments on spike timing-dependent plasticity (STDP) use a protocol based on pairs of presynaptic and postsynaptic spikes repeated at a given frequency to induce synaptic potentiation or depression. Standard STDP models have therefore expressed the weight change as a function of pairs of presynaptic and postsynaptic spikes. Unfortunately, such pair-based STDP models cannot account for the dependence on the repetition frequency of the spike pairs. Moreover, those models cannot reproduce recent triplet and quadruplet experiments. Here, we examine a triplet rule (i.e., a rule which considers sets of three spikes: two pre and one post, or one pre and two post) and compare it to classical pair-based STDP learning rules. With such a triplet rule, it is possible to fit experimental data from visual cortical slices as well as from hippocampal cultures. Moreover, when assuming stochastic spike trains, the triplet learning rule can be mapped to a Bienenstock–Cooper–Munro learning rule
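The triplet rule can be sketched in the standard trace formulation of Pfister and Gerstner's model (a sketch with illustrative amplitudes and time constants, not the fitted values from the paper):

```python
import math

# Trace-based triplet STDP sketch. Each synapse carries fast and slow
# presynaptic traces (r1, r2) and postsynaptic traces (o1, o2); the slow
# traces supply the triplet terms. Parameter values are illustrative.
tau_plus, tau_minus = 16.8, 33.7   # fast pre / post trace time constants (ms)
tau_x, tau_y = 101.0, 125.0        # slow triplet trace time constants (ms)
A2p, A2m = 5e-3, 7e-3              # pair amplitudes
A3p, A3m = 6e-3, 2.3e-4            # triplet amplitudes

def triplet_dw(pre_times, post_times):
    """Total weight change for given pre/post spike trains (times in ms)."""
    events = sorted([(t, "pre") for t in pre_times] +
                    [(t, "post") for t in post_times])
    r1 = r2 = o1 = o2 = 0.0
    t_last, dw = 0.0, 0.0
    for t, kind in events:
        dt = t - t_last
        r1 *= math.exp(-dt / tau_plus)    # fast presynaptic trace
        r2 *= math.exp(-dt / tau_x)       # slow presynaptic trace
        o1 *= math.exp(-dt / tau_minus)   # fast postsynaptic trace
        o2 *= math.exp(-dt / tau_y)       # slow postsynaptic trace
        t_last = t
        if kind == "pre":
            # depression; the r2 term adds the pre-post-pre triplet effect
            dw -= o1 * (A2m + A3m * r2)
            r1 += 1.0
            r2 += 1.0
        else:
            # potentiation; the o2 term adds the post-pre-post triplet effect
            dw += r1 * (A2p + A3p * o2)
            o1 += 1.0
            o2 += 1.0
    return dw

# the same two pre-post pairings (dt = +10 ms) potentiate more when
# repeated at a short inter-pair interval: the frequency dependence
# that pair-based rules miss
dw_fast = triplet_dw([0.0, 50.0], [10.0, 60.0])
dw_slow = triplet_dw([0.0, 1000.0], [10.0, 1010.0])
```

Note that the slow traces `r2` and `o2` enter the update with their values just before the spike and are incremented afterwards, which is what makes the triplet terms sensitive to the previous spike of the same neuron.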
Competing synapses with two timescales: a basis for learning and forgetting
Competitive dynamics are thought to occur in many processes of learning
involving synaptic plasticity. Here we show, in a game theory-inspired model of
synaptic interactions, that the competition between synapses in their weak and
strong states gives rise to a natural framework of learning, with the
prediction of memory inherent in a timescale for 'forgetting' a learned signal.
Among our main results is the prediction that memory is optimized if the weak
synapses are really weak, and the strong synapses are really strong. Our work
admits of many extensions and possible experiments to test its validity, and in
particular might complement an existing model of reaching, which has strong
experimental support. Comment: 7 pages, 3 figures, to appear in Europhysics Letters
Adherent carbon film deposition by cathodic arc with implantation
A method of improving the adhesion of carbon thin films deposited using a cathodic vacuum arc by means of implantation at energies up to 20 keV is described. A detailed analysis of carbon films deposited onto silicon in this way, using the complementary techniques of transmission electron microscopy and x-ray photoelectron spectroscopy (XPS), is presented. This analysis shows that an amorphous mixing layer consisting of carbon and silicon forms between the grown pure carbon film and the crystalline silicon substrate. Within the mixing layer, some chemical bonding occurs between carbon and silicon. Damage to the underlying crystalline silicon substrate is observed and is believed to be caused by interstitially implanted carbon atoms, which XPS shows are not bonded to the silicon. The effectiveness of this technique is confirmed by scratch testing and by scanning electron microscopy, which shows that failure of the silicon substrate occurs before delamination of the carbon film.