Diluted neural networks with adapting and correlated synapses
We consider the dynamics of diluted neural networks with clipped and adapting
synapses. Unlike previous studies, the learning rate is kept constant as the
connectivity tends to infinity: the synapses evolve on a time scale
intermediate between the quenched and annealed limits, and all orders of
synaptic correlations must be taken into account. The dynamics is solved by
mean-field theory, the order parameter for synapses being a function. We
describe the effects, in the double dynamics, due to synaptic correlations.
Comment: 6 pages, 3 figures. Accepted for publication in PR
Dynamical and Stationary Properties of On-line Learning from Finite Training Sets
The dynamical and stationary properties of on-line learning from finite
training sets are analysed using the cavity method. For large input dimensions,
we derive equations for the macroscopic parameters, namely, the student-teacher
correlation, the student-student autocorrelation and the learning force
fluctuation. This enables us to provide analytical solutions to Adaline learning
as a benchmark. Theoretical predictions of training errors in transient and
stationary states are obtained by a Monte Carlo sampling procedure.
Generalization and training errors are found to agree with simulations. The
physical origin of the critical learning rate is presented. Comparison with
batch learning is discussed throughout the paper.
Comment: 30 pages, 4 figures
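The Adaline benchmark mentioned above is easy to reproduce numerically. Below is a minimal sketch of on-line learning from a finite training set sampled with repetition, assuming a linear teacher and quadratic error; the dimension, training-set ratio and learning rate are illustrative choices, not the paper's, and this simulation stands in for (rather than implements) the cavity analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200                 # input dimension
alpha = 2.0             # training examples per weight
P = int(alpha * N)
eta = 0.1               # learning rate, well below the critical value
steps = 50000

teacher = rng.standard_normal(N) / np.sqrt(N)
X = rng.standard_normal((P, N))      # finite training set, reused with repetition
y = X @ teacher                      # linear (Adaline-realizable) targets

w = np.zeros(N)
for _ in range(steps):
    mu = rng.integers(P)             # draw one example with repetition
    w += eta / N * (y[mu] - X[mu] @ w) * X[mu]   # on-line Adaline (LMS) update

E_train = 0.5 * np.mean((y - X @ w) ** 2)
E_gen = 0.5 * np.sum((w - teacher) ** 2)  # generalization error for a linear rule
print(E_train, E_gen)
```

Pushing `eta` past a critical value makes the update diverge, which is the instability whose physical origin the paper analyses.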
Noise, regularizers, and unrealizable scenarios in online learning from restricted training sets
We study the dynamics of on-line learning in multilayer neural networks where training examples are sampled with repetition and where the number of examples scales with the number of network weights. The analysis is carried out using the dynamical replica method, aimed at obtaining a closed set of coupled equations for a set of macroscopic variables from which both training and generalization errors can be calculated. We focus on scenarios in which training examples are corrupted by additive Gaussian output noise and regularizers are introduced to improve network performance. The dependence of the dynamics on the noise level, with and without regularizers, is examined, as well as that of the asymptotic values obtained for both training and generalization errors. We also demonstrate the ability of the method to approximate the learning dynamics in structurally unrealizable scenarios. The theoretical results show good agreement with those obtained by computer simulations.
Inference of kinetic Ising model on sparse graphs
Based on the dynamical cavity method, we propose an approach to the inference
of the kinetic Ising model, which asks to reconstruct couplings and external
fields from the given time-dependent output of the original system. Our
approach gives an exact result on tree graphs and a good approximation on
sparse graphs; it can be seen as an extension of Belief Propagation inference
of the static Ising model to the kinetic Ising model. While existing mean-field
methods for kinetic Ising inference (e.g., naive mean-field, the TAP equation
and simple mean-field) use approximations which calculate magnetizations and
correlations at time t from statistics of data at time t-1, the dynamical
cavity method can use statistics of data at times earlier than t-1 to capture
more correlations at different time steps. Extensive numerical experiments show
that our inference method is superior to existing mean-field approaches on
diluted networks.
Comment: 9 pages, 3 figures, comments are welcome
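The inference setting can be made concrete with a toy reconstruction experiment. The sketch below generates data from parallel Glauber dynamics on a diluted graph and recovers the couplings by plain maximum-likelihood gradient ascent, which is tractable here because the kinetic-model likelihood factorizes over spins; this is a simple baseline, not the paper's dynamical cavity method, and all sizes and coupling scales are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

N, T = 10, 10000                     # spins, recorded time steps

# sparse couplings on a diluted graph (illustrative scales)
mask = rng.random((N, N)) < 0.3
J = 0.3 * rng.standard_normal((N, N)) * mask
np.fill_diagonal(J, 0.0)
h = 0.1 * rng.standard_normal(N)

# parallel Glauber dynamics: P(s_i(t+1)=+1) = (1 + tanh(theta_i)) / 2,
# with local field theta = J s(t) + h
S = np.empty((T, N))
s = np.where(rng.random(N) < 0.5, 1.0, -1.0)
for t in range(T):
    S[t] = s
    theta = J @ s + h
    s = np.where(rng.random(N) < 0.5 * (1.0 + np.tanh(theta)), 1.0, -1.0)

# maximum-likelihood inference: the log-likelihood factorizes over spins,
# so gradient ascent on each row of J is an independent logistic-type fit
X, Y = S[:-1], S[1:]
J_hat = np.zeros((N, N))
h_hat = np.zeros(N)
for _ in range(500):
    m = np.tanh(X @ J_hat.T + h_hat)   # model prediction for E[s_i(t+1)|s(t)]
    g = Y - m
    J_hat += 0.1 / len(X) * g.T @ X
    h_hat += 0.1 / len(X) * g.sum(axis=0)

err = float(np.sqrt(np.mean((J_hat - J) ** 2)))
print("coupling reconstruction RMSE:", err)
```

The mean-field and cavity methods the abstract compares are approximations to exactly this likelihood, trading accuracy for closed-form update equations.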
Structure-preserving desynchronization of minority games
Perfect synchronicity in N-player games is a useful theoretical dream, but
communication delays are inevitable and may result in asynchronous
interactions. Some systems such as financial markets are asynchronous by
design, and yet most theoretical models assume perfectly synchronized actions.
We propose a general method to transform standard models of adaptive agents
into asynchronous systems while preserving their global structure under some
conditions. Using the Minority Game as an example, we find that the phase and
fluctuation structure of the standard game subsists even in the maximally
asynchronous deterministic case, but that it disappears if too much
stochasticity is added to the temporal structure of the interaction. Allowing
for heterogeneous communication speeds and activity patterns gives rise to a
new information ecology that we study in detail.
Comment: 6 pages, 7 figures. New version removed a section and found a new
phase transition
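For reference, the standard synchronized model being desynchronized above can be simulated in a few lines. This is a minimal Minority Game with inductive agents holding fixed strategy tables and virtual scores; the agent count, memory length and strategy number are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

N, m, S, T = 301, 3, 2, 5000      # odd N so a strict minority always exists
P = 2 ** m                        # number of possible history strings

strategies = rng.choice([-1, 1], size=(N, S, P))   # fixed lookup tables
scores = np.zeros((N, S))                          # virtual points
history = rng.integers(P)

attendance = []
for _ in range(T):
    best = scores.argmax(axis=1)                    # each agent plays its best strategy
    a = strategies[np.arange(N), best, history]
    A = a.sum()                                     # attendance (excess on side +1)
    attendance.append(int(A))
    winning = -np.sign(A)                           # the minority side wins
    scores += strategies[:, :, history] * winning   # reward strategies that chose it
    history = (2 * history + (1 if winning > 0 else 0)) % P   # append winning bit

sigma2 = np.var(attendance[1000:]) / N              # volatility per agent
print("volatility sigma^2/N:", sigma2)
```

The phase structure referred to in the abstract is usually traced by sweeping the control parameter alpha = P/N and watching how sigma^2/N behaves.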
The replica symmetric behavior of the analogical neural network
In this paper we continue our investigation of the analogical neural network,
focusing on its replica symmetric behavior in the absence of external
fields of any type. Bridging the neural network to a bipartite spin-glass, we
introduce and apply a new interpolation scheme to its free energy that
naturally extends the interpolation via cavity fields or stochastic
perturbations to these models. As a result we obtain the free energy of the
system as a sum rule, which, at least at the replica symmetric level, can be
solved exactly. As a next step we study its related self-consistent equations
for the order parameters and their rescaled fluctuations, found to diverge on
the same critical line of the standard Amit-Gutfreund-Sompolinsky theory.
Comment: 17 pages
Linear stability analysis of retrieval state in associative memory neural networks of spiking neurons
We study associative memory neural networks of the Hodgkin-Huxley type of
spiking neurons in which multiple periodic spatio-temporal patterns of spike
timing are memorized as limit-cycle-type attractors. In encoding the
spatio-temporal patterns, we assume the spike-timing-dependent synaptic
plasticity with an asymmetric time window. Analysis of the periodic solution
of the retrieval state reveals that if the area of the negative part of the
time window equals that of the positive part, then crosstalk among the encoded
patterns vanishes. A phase transition due to the loss of stability of the
periodic solution is observed when we assume a fast alpha-function for the
direct interaction among neurons. In order to evaluate the critical point of
this phase transition, we employ Floquet theory, in which the stability problem
of the infinite number of spiking neurons interacting via alpha-functions is
reduced to an eigenvalue problem for a finite-size matrix. Numerical
integration of the single-body dynamics yields the explicit value of the
matrix, which enables us to determine the critical point of the phase
transition with a high degree of precision.
Comment: Accepted for publication in Phys. Rev.
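The balanced-area condition on the STDP time window is easy to check numerically. The sketch below uses a standard asymmetric exponential window with hypothetical amplitudes and time constants (not the paper's values) chosen so that the depression area matches the potentiation area, and verifies that the window then integrates to zero:

```python
import numpy as np

# Asymmetric exponential STDP window: potentiation for pre-before-post
# (s > 0), depression for post-before-pre (s < 0). Parameter values are
# illustrative and chosen to satisfy A_minus * tau_minus == A_plus * tau_plus.
A_plus, tau_plus = 1.0, 20.0       # ms
A_minus, tau_minus = 0.5, 40.0     # ms

def W(s):
    s = np.asarray(s, dtype=float)
    return np.where(s >= 0,
                    A_plus * np.exp(-s / tau_plus),
                    -A_minus * np.exp(s / tau_minus))

ds = 0.001
s = np.arange(-400.0, 400.0, ds)
area = float(np.sum(W(s)) * ds)    # ~ A_plus*tau_plus - A_minus*tau_minus = 0
print("net window area:", area)
```

When this net area is nonzero, the encoded patterns acquire a uniform drift in the synaptic matrix, which is the crosstalk that the balanced window removes.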
Partially Annealed Disorder and Collapse of Like-Charged Macroions
Charged systems with partially annealed charge disorder are investigated
using field-theoretic and replica methods. Charge disorder is assumed to be
confined to macroion surfaces surrounded by a cloud of mobile neutralizing
counterions in an aqueous solvent. A general formalism is developed by assuming
that the disorder is partially annealed (with purely annealed and purely
quenched disorder included as special cases), i.e., we assume in general that
the disorder undergoes slow dynamics relative to the fast-relaxing counterions,
thus making it possible to study the stationary-state properties of the system
using methods similar to those available in equilibrium statistical mechanics.
By focusing on the specific case of two planar surfaces of equal mean surface
charge and disorder variance, it is shown that partial annealing of the
quenched disorder leads to renormalization of the mean surface charge density
and thus a reduction of the inter-plate repulsion on the mean-field or
weak-coupling level. In the strong-coupling limit, charge disorder induces a
long-range attraction resulting in a continuous disorder-driven collapse
transition for the two surfaces as the disorder variance exceeds a threshold
value. Disorder annealing further enhances the attraction and, in the limit of
low screening, leads to a global attractive instability in the system.
Comment: 21 pages, 2 figures
Dynamics of Recall and Association
Introduction: The concept of associative memory in neural networks is discussed elsewhere (see STATISTICAL MECHANICS OF NEURAL NETWORKS). Associative memory networks are usually recurrent, which implies that one cannot simply write down the values of successive neuron states (as with layered networks); these must instead be solved from coupled dynamic equations. Dynamical studies shed light on the pattern recall process and its relation to the choice of the initial state, the properties of the stored patterns, the noise level and the network architecture. In addition, for non-symmetric networks (where the equilibrium statistics are not known), dynamical techniques are in fact the only tools available. Since our interest is usually in large networks and in global recall processes, the common strategy of the theorist is to move away from the microscopic neuronal equations and derive dynamical laws at a macroscopic level of q
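The recall process described above can be illustrated with a minimal standard Hopfield network: Hebbian couplings, sequential threshold updates, and a corrupted pattern as the initial state. Sizes and noise level are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

N, p = 500, 10                         # neurons, stored patterns (p/N well below 0.138)
xi = rng.choice([-1, 1], size=(p, N))
J = (xi.T @ xi) / N                    # Hebb rule
np.fill_diagonal(J, 0.0)

# initial state: pattern 0 with 15% of its bits flipped
s = xi[0].copy()
flip = rng.random(N) < 0.15
s[flip] *= -1

for _ in range(10):                    # asynchronous (sequential) dynamics
    for i in rng.permutation(N):
        s[i] = 1 if J[i] @ s >= 0 else -1

overlap = (s @ xi[0]) / N              # macroscopic order parameter m
print("overlap with stored pattern:", overlap)
```

The overlap m is exactly the kind of macroscopic variable the dynamical laws mentioned in the text are written for: the microscopic trajectory of 500 neurons is summarized by one number per stored pattern.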
Adaptive Fields: Distributed Representations of Classically Conditioned Associations
Present neural models of classical conditioning all suffer from the same shortcoming: local representation of information (therefore, very precise neural prewiring is necessary). As an alternative, we develop two neural models of classical conditioning which rely on distributed representations of information. Both models are of the Hopfield type. In the first model, the existence of transmission delays is used to store temporal relations. The second model is based on interactions between spatially separated neural fields. Using tools from statistical mechanics, we show that behavioural constraints can be met only if the Hebb rule is extended with inter- or intrasynaptic competition. 1. Introduction: Connectionism has redirected the attention of cognitive scientists to learning and to the neural substrate in which cognitive processes are implemented. Conditioning has become an important field in which ideas from neural networks, behavioural science and neurophysiology are combined. ..