
    Generating functional analysis of Minority Games with real market histories

    It is shown how the generating functional method of De Dominicis can be used to solve the dynamics of the original version of the minority game (MG), in which agents observe real as opposed to fake market histories. Here one again finds exact closed equations for correlation and response functions, but these are now defined in terms of two coupled effective non-Markovian stochastic processes: a single effective agent equation similar to that of the `fake' history models, and a second effective equation for the overall market bid itself (the latter is absent in `fake' history models). The result is an exact theory, from which one can calculate from first principles both the persistent observables in the MG and the distribution of history frequencies. Comment: 39 pages, 5 postscript figures, IOP style.
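    The "real history" mechanism can be sketched in a minimal simulation. All parameters below (number of agents, memory length, scoring rule) are illustrative choices, not those of the paper; the key point is that each agent's strategy table is indexed by the actual sequence of past market outcomes rather than a randomly generated signal:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    N = 101          # odd number of agents, so the minority side is well defined
    M = 4            # history length (memory) in bits
    S = 2            # strategies per agent
    T = 2000         # simulation steps

    # Each strategy maps one of 2^M possible histories to a bid in {-1, +1}.
    strategies = rng.choice([-1, 1], size=(N, S, 2 ** M))
    scores = np.zeros((N, S))
    history = 0      # integer encoding of the last M real market outcomes

    for t in range(T):
        # Every agent plays its currently highest-scoring strategy.
        best = scores.argmax(axis=1)
        bids = strategies[np.arange(N), best, history]
        A = bids.sum()               # aggregate market bid
        outcome = int(A < 0)         # 1 if the minority bid +1, else 0
        # Virtual scoring: a strategy gains when it would have joined the minority.
        scores -= strategies[:, :, history] * np.sign(A)
        # Real history: append the actual outcome, keep only the last M bits.
        history = ((history << 1) | outcome) & (2 ** M - 1)

    print(abs(A))    # final-step bid imbalance
    ```

    Replacing the `history` update with a uniformly random integer in `[0, 2**M)` recovers the `fake' history variant, which is why the second effective process for the market bid disappears in that case.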

    Adaptive Trade-offs in the use of Social and Personal Information

    In this chapter we review the predictions arising from theoretical models and outline the current empirical support for several social learning strategies, focusing largely on our own experimental studies and other recent work (Laland 2004; Kendal et al. 2005; Galef 2006). We draw attention to adaptive trade-offs in the use of social and personal information. Laland (2004) distinguished between two classes of social learning strategy: “when” strategies, which dictate the circumstances under which individuals copy others, and “who” strategies, which specify from whom individuals learn. We address each in turn.

    The Cavity Approach to Parallel Dynamics of Ising Spins on a Graph

    We use the cavity method to study the parallel dynamics of disordered Ising models on a graph. In particular, we derive a set of recursive equations for the single-site probabilities of paths propagating along the edges of the graph. These equations are analogous to the cavity equations for equilibrium models and are exact on a tree. On graphs with exclusively directed edges we find an exact expression for the stationary distribution of the spins. We present the phase diagrams for an Ising model on an asymmetric Bethe lattice and for a neural network with Hebbian interactions on an asymmetric scale-free graph. For graphs with a nonzero fraction of symmetric edges the equations can be solved for a finite number of time steps. Theoretical predictions are confirmed by simulation results. Using a heuristic method, the cavity equations are extended to a set of equations that determine the marginals of the stationary distribution of Ising models on graphs with a nonzero fraction of symmetric edges. The results of this method are discussed and compared with simulations.
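    The parallel (synchronous) dynamics analyzed by the cavity equations can be checked against direct simulation. The sketch below, with illustrative parameters, runs synchronous Glauber updates on a sparse random directed graph with ferromagnetic couplings, i.e. all spins are updated simultaneously from the previous configuration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative parameters: N spins, mean in-degree c, inverse temperature
    # beta, T parallel sweeps. The graph here is random and fully asymmetric.
    N, c, beta, T = 500, 3, 1.0, 50

    # Sparse directed couplings: J[i, j] acts from spin j onto spin i.
    J = np.where(rng.random((N, N)) < c / N, 1.0, 0.0)
    np.fill_diagonal(J, 0.0)

    s = rng.choice([-1.0, 1.0], size=N)
    for t in range(T):
        h = J @ s                                        # fields from previous state
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))     # Glauber probability of +1
        s = np.where(rng.random(N) < p_up, 1.0, -1.0)    # synchronous update

    print(s.mean())   # magnetization after T parallel sweeps
    ```

    Because all couplings are directed (fully asymmetric), this is the regime where the paper reports an exact closed-form stationary distribution; adding a fraction of symmetric edges is what forces the finite-time or heuristic treatment.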

    On-Line Learning with Restricted Training Sets: An Exactly Solvable Case

    We solve the dynamics of on-line Hebbian learning in large perceptrons exactly, for the regime where the size of the training set scales linearly with the number of inputs. We consider both noiseless and noisy teachers. Our calculation cannot be extended to non-Hebbian rules, but the solution provides a convenient and welcome benchmark with which to test more general and advanced theories for solving the dynamics of learning with restricted training sets. Comment: 19 pages, eps figures included, uses epsfig macros.
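    The setting can be sketched as follows (hypothetical parameters throughout): a restricted training set of size p = αN is drawn once, a noiseless teacher labels it, and the on-line Hebbian rule repeatedly recycles examples from that fixed set:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Illustrative sizes: N inputs, p = alpha * N fixed training examples.
    N, alpha, steps, eta = 200, 2.0, 20000, 1.0
    p = int(alpha * N)

    B = rng.standard_normal(N)
    B /= np.linalg.norm(B)                 # teacher vector
    xi = rng.standard_normal((p, N))       # restricted training set, drawn once
    labels = np.sign(xi @ B)               # noiseless teacher labels

    J = np.zeros(N)
    for t in range(steps):
        mu = rng.integers(p)                  # recycle examples from the fixed set
        J += (eta / N) * labels[mu] * xi[mu]  # Hebbian update, no error gating

    # Learning is monitored through the student-teacher overlap R = J.B / |J|.
    R = (J @ B) / np.linalg.norm(J)
    print(R)
    ```

    The Hebbian rule adds every example unconditionally, which is exactly what makes the restricted-training-set dynamics exactly solvable here; error-gated (non-Hebbian) rules couple the update to the current student state and break that solvability.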

    The Relativistic Hopfield network: rigorous results

    The relativistic Hopfield model constitutes a generalization of the standard Hopfield model, derived through a formal analogy between the statistical-mechanical framework embedding neural networks and the Lagrangian mechanics describing a fictitious single-particle motion in the space of the tuneable parameters of the network itself. In this analogy the cost function of the Hopfield model plays the role of the standard kinetic-energy term, and its related Mattis overlap (naturally bounded by one) plays that of the velocity. The Hamiltonian of the relativistic model, once Taylor-expanded, results in a P-spin series with alternating signs: the attractive contributions enhance the information-storage capabilities of the network, while the repulsive contributions allow for an easier unlearning of spurious states, conferring overall more robustness on the system as a whole. Here we do not explore the information-processing capabilities of this generalized Hopfield network; rather, we focus on its statistical-mechanical foundation. In particular, relying on Guerra's interpolation techniques, we prove the existence of the infinite-volume limit for the model free energy and give its explicit expression in terms of the Mattis overlaps. By extremizing the free energy over the latter we obtain the generalized self-consistency equations for these overlaps, as well as a picture of criticality that is further corroborated by a fluctuation analysis. These findings are in full agreement with the available previous results. Comment: 11 pages, 1 figure.
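    The alternating-sign P-spin series mentioned above can be made explicit. Schematically, for a single Mattis overlap $m$ and $N$ spins (the notation is illustrative, condensed to one pattern):

    ```latex
    % Classical Hopfield kinetic term versus its relativistic generalization:
    %   H_{\mathrm{cl}}(m)  = -\frac{N}{2}\, m^2,
    %   H_{\mathrm{rel}}(m) = -N\,\sqrt{1 + m^2}.
    %
    % Taylor expansion of the relativistic Hamiltonian:
    %   -N\sqrt{1+m^2}
    %     = -N\Big( 1 + \frac{m^2}{2} - \frac{m^4}{8} + \frac{m^6}{16} - \dots \Big).
    %
    % Each even power m^{2k} is a (2k)-spin interaction. The terms entering H
    % with a negative sign (m^2, m^6, ...) are attractive and enhance storage;
    % those entering with a positive sign (m^4, ...) are repulsive and help
    % destabilize (unlearn) spurious mixture states.
    ```

    The leading term reproduces the classical Hopfield cost function, so the relativistic model reduces to the standard one for small overlaps, while the bounded "velocity" $|m|\le 1$ keeps the series convergent.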

    Slowly evolving geometry in recurrent neural networks I: extreme dilution regime

    We study extremely diluted spin models of neural networks in which the connectivity evolves in time, although adiabatically slowly compared to the neurons, according to stochastic equations which on average aim to reduce frustration. The (fast) neurons and (slow) connectivity variables equilibrate separately, but at different temperatures. Our model is exactly solvable in equilibrium. We obtain phase diagrams upon making the condensed ansatz (i.e. recall of one pattern). These show that, as the connectivity temperature is lowered, the volume of the retrieval phase diverges and the fraction of mis-aligned spins is reduced. Still, one always retains a region in the retrieval phase where recall states other than the one corresponding to the `condensed' pattern are locally stable, so the associative memory character of our model is preserved. Comment: 18 pages, 6 figures.
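    The two-timescale structure can be illustrated with a simple Monte Carlo sketch, not the paper's diluted model: spins undergo many Glauber updates per coupling move, while the couplings are updated rarely at their own (lower) temperature via a Metropolis rule that on average reduces frustration. All parameters and the coupling proposal are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Illustrative parameters: fast spins at inverse temperature beta_s,
    # slow symmetric couplings at the (lower) temperature 1 / beta_J.
    N, beta_s, beta_J = 60, 2.0, 4.0
    J = 0.1 * rng.standard_normal((N, N))
    J = (J + J.T) / 2.0
    np.fill_diagonal(J, 0.0)
    s = rng.choice([-1.0, 1.0], size=N)

    def energy(J, s):
        """Ising energy -1/2 s.J.s; lower means less frustration."""
        return -s @ J @ s / 2.0

    for slow_step in range(50):
        # Fast timescale: many sequential Glauber spin updates at fixed J.
        for _ in range(10 * N):
            i = rng.integers(N)
            dE = 2.0 * s[i] * (J[i] @ s)
            if rng.random() < 1.0 / (1.0 + np.exp(beta_s * dE)):
                s[i] = -s[i]
        # Slow timescale: propose a small symmetric coupling change, accept
        # via Metropolis at beta_J, biasing the geometry toward lower energy.
        i, j = rng.integers(N, size=2)
        if i != j:
            old, new = J[i, j], J[i, j] + 0.05 * rng.standard_normal()
            dE = -(new - old) * s[i] * s[j]
            if rng.random() < np.exp(-beta_J * dE):
                J[i, j] = J[j, i] = new

    print(energy(J, s))
    ```

    Separating the two update rates is what lets the neurons and the connectivity equilibrate independently, each at its own temperature, which is the regime in which the model above is solved exactly.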