
    Theory of Interacting Neural Networks

    In this contribution we give an overview of recent work on the theory of interacting neural networks. The model is defined in Section 2. The typical teacher/student scenario is considered in Section 3: a static teacher network presents training examples to an adaptive student network. In the case of multilayer networks, the student shows a transition from a symmetric state to specialisation. Neural networks can also generate a time series; training on a time series and predicting it are studied in Section 4. When a network is trained on its own output, it interacts with itself. Such a scenario has implications for the theory of prediction algorithms, as discussed in Section 5. When a system of networks is trained on its minority decisions, it may be considered a model for competition in closed markets, see Section 6. In Section 7 we consider two mutually interacting networks, where a novel phenomenon is observed: synchronisation by mutual learning. In Section 8 it is shown how this phenomenon can be applied to cryptography: generation of a secret key over a public channel.
    Comment: Contribution to Networks, ed. by H.G. Schuster and S. Bornholdt, to be published by Wiley-VCH
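    The synchronisation-by-mutual-learning scenario of Sections 7 and 8 can be made concrete with a small simulation. The sketch below is illustrative only, not taken from the paper: it uses tree parity machines with assumed parameters K, N, L and the standard Hebbian update applied when the two public outputs agree; after synchronisation the identical weight vectors can serve as a shared secret key.

    ```python
    import numpy as np

    # Illustrative sketch: two tree parity machines synchronise by mutual
    # learning over a public channel. K, N, L are assumed parameter choices.
    K, N, L = 3, 100, 3                      # hidden units, inputs each, weight bound
    rng = np.random.default_rng(0)

    def tpm_output(w, x):
        """Hidden-unit signs and total parity output of one machine."""
        sigma = np.sign(np.sum(w * x, axis=1))
        sigma[sigma == 0] = -1               # break ties deterministically
        return sigma, int(np.prod(sigma))

    def hebbian_update(w, x, sigma, tau):
        """Move only hidden units that agree with the exchanged output."""
        for k in range(K):
            if sigma[k] == tau:
                w[k] = np.clip(w[k] + tau * x[k], -L, L)

    wA = rng.integers(-L, L + 1, size=(K, N))
    wB = rng.integers(-L, L + 1, size=(K, N))

    steps = 0
    while not np.array_equal(wA, wB) and steps < 50000:
        x = rng.choice([-1, 1], size=(K, N))  # public random input
        sA, tauA = tpm_output(wA, x)
        sB, tauB = tpm_output(wB, x)
        if tauA == tauB:                      # update only when outputs agree
            hebbian_update(wA, x, sA, tauA)
            hebbian_update(wB, x, sB, tauB)
        steps += 1

    print("synchronised after", steps, "exchanged examples")
    ```

    Only the random inputs and the single parity bits are exchanged; the weights themselves never cross the channel, which is what makes the synchronised state usable as a key.
    
    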

    Non-linear adaptive equalization based on a multi-layer perceptron architecture.


    Cryptography based on neural networks - analytical results

    The mutual learning process between two feed-forward parity networks with discrete and continuous weights is studied analytically, and we find that in the case of discrete weights the number of steps required to achieve full synchronization between the two networks is finite. The synchronization process is shown to be non-self-averaging, and the analytical solution is based on random auxiliary variables. The learning time of an attacker trying to imitate one of the networks is examined analytically and is found to be much longer than the synchronization time. The analytical results are in agreement with simulations.
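    The two claims about discrete weights, finite synchronization time and non-self-averaging (run-to-run fluctuations that stay large), can be checked numerically. The sketch below is a minimal stand-in, not the paper's model: single-layer perceptrons with discrete weights in [-L, L], both applying the same Hebbian move whenever their outputs agree, so that weight differences can only shrink, at the boundary.

    ```python
    import numpy as np

    # Distribution of synchronization times for two mutually learning
    # perceptrons with discrete weights. N, L, RUNS are assumed choices.
    N, L, RUNS = 50, 3, 10
    rng = np.random.default_rng(1)

    def sync_time():
        wA = rng.integers(-L, L + 1, size=N)
        wB = rng.integers(-L, L + 1, size=N)
        t = 0
        while not np.array_equal(wA, wB):
            x = rng.choice([-1, 1], size=N)
            tauA = 1 if wA @ x >= 0 else -1
            tauB = 1 if wB @ x >= 0 else -1
            if tauA == tauB:                          # identical Hebbian move;
                wA = np.clip(wA + tauA * x, -L, L)    # clipping at |w| = L
                wB = np.clip(wB + tauB * x, -L, L)    # contracts differences
            t += 1
        return t

    times = [sync_time() for _ in range(RUNS)]
    print("sync times:", times)
    print("mean %.0f, std %.0f" % (np.mean(times), np.std(times)))
    ```

    A broad spread of times relative to the mean is the fingerprint of a non-self-averaging process: the fluctuations do not vanish as the system grows.
    
    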

    Phase Transitions of Neural Networks

    The cooperative behaviour of interacting neurons and synapses is studied using models and methods from statistical physics. The competition between training error and entropy may lead to discontinuous properties of the neural network. This is demonstrated for several examples: the perceptron, associative memory, learning from examples, generalization, multilayer networks, structure recognition, Bayesian estimation, on-line training, noise estimation and time series generation.
    Comment: Plenary talk for the MINERVA workshop on mesoscopics, fractals and neural networks, Eilat, March 1997. Postscript file
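    The "learning from examples" and "generalization" items can be illustrated with the standard teacher/student perceptron setup. The sketch below is an illustration with assumed sizes, not a result from the talk: a student trained on-line on examples labelled by a fixed teacher, with the generalization error measured as the angle between the two weight vectors divided by pi.

    ```python
    import numpy as np

    # Teacher/student perceptron: generalization error vs. training examples.
    # N and the number of examples are assumed illustrative choices.
    rng = np.random.default_rng(1)
    N = 200
    teacher = rng.standard_normal(N)          # fixed teacher weights
    student = np.zeros(N)

    def gen_error(w_s, w_t):
        """Probability of disagreement = angle between weights / pi."""
        c = w_s @ w_t / (np.linalg.norm(w_s) * np.linalg.norm(w_t) + 1e-12)
        return np.arccos(np.clip(c, -1.0, 1.0)) / np.pi

    errors = []
    for p in range(2000):
        x = rng.standard_normal(N)
        label = np.sign(teacher @ x)
        if np.sign(student @ x) != label:     # perceptron rule: learn on mistakes
            student += label * x
        if (p + 1) % 500 == 0:
            errors.append(gen_error(student, teacher))

    print("generalization error after 500..2000 examples:", errors)
    ```

    The error decays as more examples are presented, the smooth side of the picture; the discontinuous transitions mentioned in the abstract appear in richer architectures such as multilayer networks.
    
    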

    Replica Symmetry Breaking and the Kuhn-Tucker Cavity Method in simple and multilayer Perceptrons

    Using the Kuhn-Tucker cavity method introduced in an earlier paper, we study optimal stability learning in situations where, in the replica formalism, replica symmetry may be broken, namely (i) a simple perceptron above the critical loading, and (ii) two-layer AND-perceptrons trained with maximal stability. We find that the deviation of our cavity solution from the replica-symmetric one in these cases is a clear indication of the necessity of replica symmetry breaking. In any case, the cavity solution tends to underestimate the storage capabilities of the networks.
    Comment: 32 pages, LaTeX source with 9 .eps files enclosed, accepted by J. Phys. I (France)
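    "Optimal stability learning" for a simple perceptron can be sketched with the classic MinOver iteration, shown below as an illustration with assumed sizes (it is not the paper's cavity method): at each step the pattern with the smallest aligned field is reinforced, driving the weights toward the maximal-stability solution; below the critical loading the final minimal stability is positive.

    ```python
    import numpy as np

    # MinOver sketch for a simple perceptron. N, P (alpha = P/N = 0.5,
    # below the critical loading) and the iteration count are assumed.
    rng = np.random.default_rng(2)
    N, P = 100, 50
    xi = rng.choice([-1.0, 1.0], size=(P, N))   # random patterns
    y = rng.choice([-1.0, 1.0], size=P)         # random target outputs

    w = np.zeros(N)
    for _ in range(3000):
        h = y * (xi @ w)                        # aligned fields of all patterns
        mu = int(np.argmin(h))                  # worst-stored pattern
        w += y[mu] * xi[mu] / np.sqrt(N)        # reinforce it (Hebbian step)

    kappa = np.min(y * (xi @ w)) / np.linalg.norm(w)
    print("minimal stability kappa =", kappa)
    ```

    Above the critical loading no such positive-stability solution exists, which is the regime where the replica-symmetric description breaks down and the cavity analysis of the abstract becomes relevant.
    
    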