
    Diluted neural networks with adapting and correlated synapses

    We consider the dynamics of diluted neural networks with clipped and adapting synapses. Unlike previous studies, the learning rate is kept constant as the connectivity tends to infinity: the synapses evolve on a time scale intermediate between the quenched and annealed limits, and all orders of synaptic correlations must be taken into account. The dynamics is solved by mean-field theory, the order parameter for the synapses being a function. We describe the effects of synaptic correlations on the coupled (double) dynamics. Comment: 6 pages, 3 figures. Accepted for publication in PR
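
    As a rough illustration of the kind of system studied here, the toy simulation below evolves a diluted network of binary neurons whose clipped synapses adapt online at a constant learning rate. This is a sketch for orientation only, not the paper's mean-field solution; all parameter values (N, C, eta) are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, C, T = 500, 20, 200        # neurons, in-degree (dilution), time steps
    eta = 0.1                     # learning rate, kept constant as C grows

    # random diluted wiring: each neuron receives input from C others
    inputs = np.array([rng.choice(N, size=C, replace=False) for _ in range(N)])
    J = rng.choice([-1.0, 1.0], size=(N, C))    # clipped synapses
    xi = rng.choice([-1, 1], size=N)            # pattern being (re)learned
    s = np.where(rng.random(N) < 0.8, xi, -xi)  # noisy initial state

    for t in range(T):
        h = np.einsum('ij,ij->i', J, s[inputs])  # local fields
        s = np.sign(h + 1e-12).astype(int)       # parallel neuron update
        # Hebb-like adaptation of the synapses, followed by clipping to +/-1
        J = np.sign(J + eta * s[:, None] * s[inputs])

    print(f"final overlap m = {np.mean(s * xi):.3f}")
    ```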

    Dynamical and Stationary Properties of On-line Learning from Finite Training Sets

    The dynamical and stationary properties of on-line learning from finite training sets are analysed using the cavity method. For large input dimensions, we derive equations for the macroscopic parameters, namely the student-teacher correlation, the student-student autocorrelation and the learning-force fluctuation. This enables us to provide analytical solutions for Adaline learning as a benchmark. Theoretical predictions for the training error in the transient and stationary states are obtained via a Monte Carlo sampling procedure. Generalization and training errors are found to agree with simulations. The physical origin of the critical learning rate is presented. Comparison with batch learning is discussed throughout the paper. Comment: 30 pages, 4 figures
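
    Adaline is ordinary least-mean-squares learning on linear outputs, so the scenario is easy to reproduce. Below is a minimal sketch of the setting (teacher-student, a fixed training set of p = alpha*N examples sampled with repetition); all constants are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 200                # input dimension (large)
    alpha = 2.0            # training-set size p = alpha * N
    p = int(alpha * N)
    eta = 0.1              # learning rate; must stay below a critical value

    B = rng.standard_normal(N) / np.sqrt(N)   # teacher weights
    X = rng.standard_normal((p, N))           # fixed, finite training set
    y = X @ B                                 # teacher outputs

    w = np.zeros(N)                           # student weights
    for t in range(20 * p):
        mu = rng.integers(p)                  # example sampled with repetition
        err = y[mu] - X[mu] @ w
        w += (eta / N) * err * X[mu]          # Adaline / LMS update

    E_train = 0.5 * np.mean((y - X @ w) ** 2)
    E_gen = 0.5 * float((w - B) @ (w - B))    # error on fresh Gaussian inputs
    print(f"E_train = {E_train:.4f}, E_gen = {E_gen:.4f}")
    ```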

    Noise, regularizers, and unrealizable scenarios in online learning from restricted training sets

    We study the dynamics of on-line learning in multilayer neural networks where training examples are sampled with repetition and where the number of examples scales with the number of network weights. The analysis is carried out using the dynamical replica method, aimed at obtaining a closed set of coupled equations for a set of macroscopic variables from which both training and generalization errors can be calculated. We focus on scenarios whereby training examples are corrupted by additive Gaussian output noise and regularizers are introduced to improve the network performance. The dependence of the dynamics on the noise level, with and without regularizers, is examined, as is that of the asymptotic values obtained for both training and generalization errors. We also demonstrate the ability of the method to approximate the learning dynamics in structurally unrealizable scenarios. The theoretical results show good agreement with those obtained by computer simulations.
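
    The paper treats multilayer networks with the dynamical replica method; the sketch below only reproduces the training scenario itself (restricted training set, additive Gaussian output noise, weight-decay regularizer) for a linear student, a simplification assumed here for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N, alpha = 200, 1.0
    p = int(alpha * N)                   # examples scale with the weights
    eta, gamma, sigma = 0.05, 0.1, 0.5   # rate, weight decay, noise std

    B = rng.standard_normal(N) / np.sqrt(N)
    X = rng.standard_normal((p, N))
    y = X @ B + sigma * rng.standard_normal(p)  # noise-corrupted outputs

    w = np.zeros(N)
    for t in range(50 * p):
        mu = rng.integers(p)                    # sampling with repetition
        err = y[mu] - X[mu] @ w
        w += (eta / N) * err * X[mu] - eta * gamma * w / N  # SGD + decay

    E_train = 0.5 * np.mean((y - X @ w) ** 2)
    E_gen = 0.5 * float((w - B) @ (w - B))      # clean-teacher test error
    print(f"E_train = {E_train:.4f}, E_gen = {E_gen:.4f}")
    ```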

    Structure-preserving desynchronization of minority games

    Perfect synchronicity in N-player games is a useful theoretical dream, but communication delays are inevitable and may result in asynchronous interactions. Some systems such as financial markets are asynchronous by design, and yet most theoretical models assume perfectly synchronized actions. We propose a general method to transform standard models of adaptive agents into asynchronous systems while preserving their global structure under some conditions. Using the Minority Game as an example, we find that the phase and fluctuation structure of the standard game subsists even in the maximally asynchronous, deterministic case, but that it disappears if too much stochasticity is added to the temporal structure of the interaction. Allowing for heterogeneous communication speeds and activity patterns gives rise to a new information ecology that we study in detail. Comment: 6 pages, 7 figures. New version removed a section and found a new phase transition
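
    The paper's contribution is the asynchronous transformation; for reference, here is a sketch of the standard, fully synchronous Minority Game it starts from. The volatility sigma^2/N as a function of alpha = P/N is the "fluctuation structure" the abstract refers to; all parameter values are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    N, m, S, T = 301, 5, 2, 5000    # agents (odd), memory, strategies, rounds
    P = 2 ** m                      # number of possible histories

    strategies = rng.choice([-1, 1], size=(N, S, P))  # fixed lookup tables
    scores = np.zeros((N, S))                         # virtual points
    history = rng.integers(P)                         # current m-bit history
    attendance = []

    for t in range(T):
        best = scores.argmax(axis=1)                  # each agent's best strategy
        a = strategies[np.arange(N), best, history]   # actions in {-1, +1}
        A = a.sum()
        attendance.append(A)
        # minority side wins: reward strategies that chose -sign(A)
        scores += -np.sign(A) * strategies[:, :, history]
        history = ((history << 1) | int(A < 0)) % P   # append the winning bit

    print(f"sigma^2/N = {np.var(attendance) / N:.3f} at alpha = {P / N:.3f}")
    ```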

    The replica symmetric behavior of the analogical neural network

    In this paper we continue our investigation of the analogical neural network, focusing on its replica symmetric behavior in the absence of external fields of any type. By mapping the neural network onto a bipartite spin glass, we introduce and apply a new interpolation scheme for its free energy that naturally extends the interpolation via cavity fields or stochastic perturbations to these models. As a result we obtain the free energy of the system as a sum rule which, at least at the replica symmetric level, can be solved exactly. We then study the related self-consistency equations for the order parameters and their rescaled fluctuations, which are found to diverge on the same critical line as in the standard Amit-Gutfreund-Sompolinsky theory. Comment: 17 pages
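
    For orientation on that critical line, the sketch below numerically iterates the usual replica-symmetric Amit-Gutfreund-Sompolinsky saddle-point equations for the standard Hopfield model, which the abstract uses as its reference point; this is not the paper's interpolation scheme, and parameter values are assumptions. At alpha = 0.05 the retrieval solution (m close to 1) survives, while at alpha = 0.25, above the AGS capacity alpha_c ≈ 0.138, the overlap collapses.

    ```python
    import numpy as np

    # Gauss-Hermite nodes/weights for Gaussian averages  int Dz f(z)
    z, wts = np.polynomial.hermite_e.hermegauss(80)
    wts = wts / wts.sum()

    def ags_rs(alpha, beta, iters=3000, damp=0.5):
        """Damped fixed-point iteration of the RS AGS equations:
        m = <tanh(beta(m + sqrt(alpha r) z))>, q = <tanh^2(...)>,
        r = q / (1 - beta(1 - q))^2."""
        m, q = 1.0, 1.0                             # retrieval initial condition
        for _ in range(iters):
            r = q / (1.0 - beta * (1.0 - q)) ** 2   # renormalized noise
            t = np.tanh(beta * (m + np.sqrt(alpha * r) * z))
            m = damp * m + (1 - damp) * np.sum(wts * t)
            q = damp * q + (1 - damp) * np.sum(wts * t * t)
        return m, q

    for alpha in (0.05, 0.25):
        m, q = ags_rs(alpha, beta=10.0)             # low temperature, T = 0.1
        print(f"alpha={alpha:.2f}: m={m:.3f}, q={q:.3f}")
    ```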

    Partially Annealed Disorder and Collapse of Like-Charged Macroions

    Charged systems with partially annealed charge disorder are investigated using field-theoretic and replica methods. Charge disorder is assumed to be confined to macroion surfaces surrounded by a cloud of mobile neutralizing counterions in an aqueous solvent. A general formalism is developed in which the disorder is partially annealed (with purely annealed and purely quenched disorder included as special cases); that is, we assume in general that the disorder evolves slowly relative to the fast-relaxing counterions, thus making it possible to study the stationary-state properties of the system using methods similar to those available in equilibrium statistical mechanics. Focusing on the specific case of two planar surfaces of equal mean surface charge and disorder variance, we show that partial annealing of the quenched disorder leads to a renormalization of the mean surface charge density and thus a reduction of the inter-plate repulsion at the mean-field or weak-coupling level. In the strong-coupling limit, charge disorder induces a long-range attraction, resulting in a continuous disorder-driven collapse transition for the two surfaces as the disorder variance exceeds a threshold value. Disorder annealing further enhances the attraction and, in the limit of low screening, leads to a global attractive instability in the system. Comment: 21 pages, 2 figures

    Dynamics of Recall and Association

    Introduction: The concept of associative memory in neural networks is already discussed elsewhere (see STATISTICAL MECHANICS OF NEURAL NETWORKS). Associative memory networks are usually recurrent, which implies that one cannot simply write down the values of successive neuron states (as with layered networks); these must instead be solved from coupled dynamic equations. Dynamical studies shed light on the pattern recall process and its relation to the choice of the initial state, the properties of the stored patterns, the noise level and the network architecture. In addition, for non-symmetric networks (where the equilibrium statistics are not known) dynamical techniques are in fact the only tools available. Since our interest is usually in large networks and in global recall processes, the common strategy of the theorist is to move away from the microscopic neuronal equations and derive dynamical laws at a macroscopic level of q…
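
    The standard macroscopic quantity in such descriptions is the overlap m(t) between the network state and a stored pattern. The sketch below tracks it during recall in a vanilla Hopfield network with Hebbian couplings; the model choice and parameters are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    N, P = 1000, 50                        # neurons, patterns (alpha = 0.05)
    xi = rng.choice([-1, 1], size=(P, N))  # random stored patterns
    J = (xi.T @ xi) / N                    # Hebb rule
    np.fill_diagonal(J, 0.0)

    s = xi[0].copy()                       # start near pattern 0 ...
    s[rng.random(N) < 0.25] *= -1          # ... with 25% of the spins flipped

    for t in range(10):                    # parallel (synchronous) dynamics
        print(f"t={t}: m = {(s @ xi[0]) / N:.3f}")
        s = np.sign(J @ s + 1e-12).astype(int)
    ```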

    Adaptive Fields: Distributed Representations of Classically Conditioned Associations

    Present neural models of classical conditioning all suffer from the same shortcoming: local representation of information (therefore, very precise neural prewiring is necessary). As an alternative we develop two neural models of classical conditioning which rely on distributed representations of information. Both models are of the Hopfield type. In the first model the existence of transmission delays is used to store temporal relations. The second model is based on interactions between spatially separated neural fields. Using tools from statistical mechanics we show that behavioural constraints can be met only if the Hebb rule is extended with inter- or intrasynaptic competition. 1. Introduction: Connectionism has redirected the attention of cognitive scientists to learning and to the neural substrate in which cognitive processes are implemented. Conditioning has become an important field in which ideas from neural networks, behavioural science and neurophysiology are combined. …
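
    The delay mechanism of the first model is in the spirit of standard delayed-Hebb sequence models: a symmetric Hebbian term stabilizes each pattern while a delayed, asymmetric term pushes the network from one pattern to its successor. The sketch below implements that generic mechanism under assumed parameters, not the authors' exact equations.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    N, P = 1000, 5                          # neurons, patterns stored in order
    xi = rng.choice([-1, 1], size=(P, N))

    J_sym = (xi.T @ xi) / N                 # symmetric Hebb term
    J_seq = (xi[1:].T @ xi[:-1]) / N        # delayed term: pattern mu -> mu + 1
    lam, tau = 1.5, 3                       # association strength, delay (steps)

    s_hist = [xi[0].copy()]                 # state history for the delayed signal
    for t in range(25):
        s = s_hist[-1]
        s_del = s_hist[max(0, len(s_hist) - 1 - tau)]
        h = J_sym @ s + lam * (J_seq @ s_del)
        s_hist.append(np.sign(h + 1e-12).astype(int))
        print(f"t={t}: overlaps = {np.round(xi @ s_hist[-1] / N, 2)}")
    ```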