Intrinsic adaptation in autonomous recurrent neural networks
A massively recurrent neural network responds, on one side, to input stimuli
and is, on the other side, autonomously active in the absence of sensory
inputs. Stimulus and information processing depend crucially on the qualia of
the autonomous-state dynamics of the ongoing neural activity. This default
neural activity may be dynamically structured in time and space, showing
regular, synchronized, bursting or chaotic activity patterns.
We study the influence of non-synaptic plasticity on the default dynamical
state of recurrent neural networks. The non-synaptic adaptation considered acts
on intrinsic neural parameters, such as the threshold and the gain, and is
driven by the optimization of the information entropy. We observe, in the
presence of the intrinsic adaptation processes, three distinct and globally
attracting dynamical regimes, a regular synchronized, an overall chaotic and an
intermittent bursting regime. The intermittent bursting regime is characterized
by intervals of regular flows, which are quite insensitive to external stimuli,
interspersed with chaotic bursts, which respond sensitively to input signals. We
discuss these findings in the context of self-organized information processing
and critical brain dynamics.
Comment: 24 pages, 8 figures
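
The adaptation described above can be made concrete with a standard
intrinsic-plasticity rule in the style of Triesch (2005), which adjusts each
neuron's gain and threshold so that its output distribution approaches a
maximum-entropy (fixed-mean exponential) distribution. A minimal sketch,
assuming sigmoidal rate neurons and a random recurrent weight matrix; the
specific rule and all parameter values are illustrative, not necessarily
those used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100          # number of neurons (illustrative size)
eta = 0.01       # intrinsic-plasticity learning rate
mu = 0.1         # target mean of the exponential output distribution

# Random recurrent weights; the paper's network topology may differ.
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))

a = np.ones(N)   # per-neuron gain (adaptable intrinsic parameter)
b = np.zeros(N)  # per-neuron threshold/bias (adaptable intrinsic parameter)
y = rng.random(N)

for t in range(10_000):
    x = W @ y                                 # recurrent input, no external stimulus
    y = 1.0 / (1.0 + np.exp(-(a * x + b)))    # sigmoid with adaptable gain and threshold

    # Stochastic-gradient rule pushing each neuron's output distribution
    # toward a fixed-mean exponential, i.e. toward maximal entropy at mean mu.
    db = eta * (1.0 - (2.0 + 1.0 / mu) * y + (y * y) / mu)
    da = eta / a + db * x
    a += da
    b += db

print("mean activity:", y.mean())  # should settle near mu
```

Running the loop while recording y(t) is then enough to classify the
autonomous state as regular, chaotic, or intermittently bursting.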
Transient dynamics for sequence processing neural networks: effect of degree distributions
We derive an analytic evolution equation for the overlap parameters, including the
effect of degree distribution on the transient dynamics of sequence processing
neural networks. In the special case of globally coupled networks, the
critical loading ratio $\alpha_c$ for precise retrieval is obtained, where
$N$ is the network size. For random networks, our
theoretical predictions agree quantitatively with the numerical experiments for
delta, binomial, and power-law degree distributions.
Comment: 11 pages, 6 figures
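
To make the overlap parameters concrete: for a sequence-processing network
with asymmetric couplings $J_{ij} = N^{-1}\sum_\mu \xi_i^{\mu+1}\xi_j^\mu$,
the overlap $m^\mu(t) = N^{-1}\sum_i \xi_i^\mu s_i(t)$ tracks how close the
state is to pattern $\mu$. A minimal simulation sketch for the globally
coupled case (the degree-distribution variants would additionally mask $J$
with a random adjacency matrix); sizes and update scheme are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

N, P = 2000, 5                          # network size and sequence length (illustrative)
xi = rng.choice([-1, 1], size=(P, N))   # random binary patterns forming the sequence

# Asymmetric sequence couplings: pattern mu drives pattern mu+1 (cyclically).
J = np.zeros((N, N))
for mu in range(P):
    J += np.outer(xi[(mu + 1) % P], xi[mu])
J /= N

s = xi[0].copy()                        # initialize on the first pattern
for t in range(10):
    m = xi @ s / N                      # overlap of the state with every stored pattern
    print(f"t={t}  overlaps:", np.round(m, 2))
    s = np.sign(J @ s)                  # parallel zero-temperature update
```

The printed overlaps show the retrieval peak hopping from pattern to pattern,
which is exactly the transient the analytic evolution equation describes.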
Dopaminergic and Non-Dopaminergic Value Systems in Conditioning and Outcome-Specific Revaluation
Animals are motivated to choose environmental options that can best satisfy current needs. To explain such choices, this paper introduces the MOTIVATOR (Matching Objects To Internal Values Triggers Option Revaluations) neural model. MOTIVATOR describes cognitive-emotional interactions between higher-order sensory cortices and an evaluative neuraxis composed of the hypothalamus, amygdala, and orbitofrontal cortex. Given a conditioned stimulus (CS), the model amygdala and lateral hypothalamus interact to calculate the expected current value of the subjective outcome that the CS predicts, constrained by the current state of deprivation or satiation. The amygdala relays the expected value information to orbitofrontal cells that receive inputs from anterior inferotemporal cells, and to medial orbitofrontal cells that receive inputs from rhinal cortex. The activations of these orbitofrontal cells code the subjective values of objects. These values guide behavioral choices.

The model basal ganglia detect errors in CS-specific predictions of the value and timing of rewards. Excitatory inputs from the pedunculopontine nucleus interact with timed inhibitory inputs from model striosomes in the ventral striatum to regulate dopamine burst and dip responses from cells in the substantia nigra pars compacta and ventral tegmental area. Learning in cortical and striatal regions is strongly modulated by dopamine. The model is used to address tasks that examine food-specific satiety, Pavlovian conditioning, reinforcer devaluation, and simultaneous visual discrimination. Model simulations successfully reproduce discharge dynamics of known cell types, including signals that predict saccadic reaction times and CS-dependent changes in systolic blood pressure.

Funding: Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Institutes of Health (R29-DC02952, R01-DC007683); National Science Foundation (IIS-97-20333, SBE-0354378); Office of Naval Research (N00014-01-1-0624)
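
The burst-and-dip behavior of the model's dopamine cells can be illustrated,
at a purely functional level, with a temporal-difference prediction error.
MOTIVATOR's actual circuit (striosomal timing, pedunculopontine inputs) is
far richer; the sketch below only shows why an unexpected reward yields a
burst and an omitted-but-expected reward yields a dip. All values are made up:

```python
import numpy as np

T = 10                  # time steps per trial; CS at t=0, reward at t=T-1
alpha, gamma = 0.1, 0.98
V = np.zeros(T)         # learned value estimate for each step within the trial

def run_trial(reward_delivered=True):
    deltas = np.zeros(T)
    for t in range(T):
        r = 1.0 if (t == T - 1 and reward_delivered) else 0.0
        v_next = V[t + 1] if t + 1 < T else 0.0
        deltas[t] = r + gamma * v_next - V[t]  # TD error: burst if > 0, dip if < 0
        V[t] += alpha * deltas[t]
    return deltas

for _ in range(200):    # with learning, the burst migrates from reward to CS onset
    run_trial(True)
print("omitted reward:", np.round(run_trial(False), 2))  # dip at the expected reward time
```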
PkANN - I. Non-linear matter power spectrum interpolation through artificial neural networks
We investigate the interpolation of power spectra of matter fluctuations
using artificial neural networks (PkANN). We present a new approach to confront
small-scale non-linearities in the power spectrum of matter fluctuations. This
ever-present and pernicious uncertainty is often the Achilles' heel in
cosmological studies and must be reduced if we are to see the advent of
precision cosmology in the late-time Universe. We show that an optimally
trained artificial neural network (ANN), when presented with a set of
cosmological parameters (Omega_m h^2, Omega_b h^2, n_s, w_0, sigma_8, m_nu and
redshift z), can provide a worst-case error <=1 per cent (for z<=2) fit to the
non-linear matter power spectrum deduced through N-body simulations, for modes
up to k<=0.7 h/Mpc. Our power spectrum interpolator is accurate over the entire
parameter space. This is a significant improvement over some of the current
matter power spectrum calculators. In this paper, we detail how an accurate
interpolation of the matter power spectrum is achievable with only a sparsely
sampled grid of cosmological parameters. Unlike large-scale N-body simulations
which are computationally expensive and/or infeasible, a well-trained ANN can
be an extremely quick and reliable tool in interpreting cosmological
observations and parameter estimation. This paper is the first in a series. In
this method paper, we generate the non-linear matter power spectra using
HaloFit and use them as mock observations to train the ANN. This work sets the
foundation for Paper II, where a suite of N-body simulations will be used to
compute the non-linear matter power spectra at sub-per cent accuracy, in the
quasi-non-linear regime 0.1 h/Mpc <= k <= 0.9 h/Mpc. A trained ANN based on
this N-body suite will be released for the scientific community.
Comment: 12 pages, 9 figures, 2 tables, updated to match version accepted by
MNRAS
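
As a rough illustration of the emulation strategy, the sketch below trains a
small feed-forward network to map the seven input parameters to a tabulated
log-power spectrum. The architecture, sample counts, and the random
placeholder "spectra" are assumptions standing in for the HaloFit training
set used in this paper:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Hypothetical stand-in for the training set: each row is one cosmology
# (Omega_m h^2, Omega_b h^2, n_s, w_0, sigma_8, m_nu, z), normalized to [0, 1];
# the target is log P(k) on a fixed grid of k modes. Here the "spectra" are
# random placeholders; in the paper they come from HaloFit (or, in Paper II,
# from an N-body suite).
n_train, n_k = 500, 50
params = rng.uniform(0.0, 1.0, size=(n_train, 7))
log_pk = rng.normal(size=(n_train, n_k))

emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
emulator.fit(params, log_pk)

# Interpolation at a new cosmology is a single forward pass: far cheaper than
# running an N-body simulation for that parameter point.
new_cosmology = rng.uniform(0.0, 1.0, size=(1, 7))
predicted_log_pk = emulator.predict(new_cosmology)
print(predicted_log_pk.shape)   # (1, n_k)
```

With real training spectra, the quoted <=1 per cent worst-case accuracy would
be assessed by comparing such predictions against held-out simulations.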