
    Theory of Interacting Neural Networks

    In this contribution we give an overview of recent work on the theory of interacting neural networks. The model is defined in Section 2. The typical teacher/student scenario is considered in Section 3: a static teacher network presents training examples to an adaptive student network. In the case of multilayer networks, the student shows a transition from a symmetric state to specialisation. Neural networks can also generate a time series; training on a time series and predicting it are studied in Section 4. When a network is trained on its own output, it interacts with itself. Such a scenario has implications for the theory of prediction algorithms, as discussed in Section 5. When a system of networks is trained on its minority decisions, it may be considered a model for competition in closed markets, see Section 6. In Section 7 we consider two mutually interacting networks, where a novel phenomenon is observed: synchronisation by mutual learning. In Section 8 it is shown how this phenomenon can be applied to cryptography: the generation of a secret key over a public channel.
    Comment: Contribution to Networks, ed. by H.G. Schuster and S. Bornholdt, to be published by Wiley VC
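    The synchronisation-by-mutual-learning mechanism behind the key-exchange idea in Section 8 can be sketched in a few lines. The following is a minimal illustration, assuming the commonly used tree-parity-machine setup with K hidden units, N inputs per unit, integer weights bounded by L and a Hebbian update on agreement; the parameter values and the use of the final weights as a key are illustrative assumptions, not details taken from the abstract.

        # Two tree parity machines, A and B, receive the same public random
        # inputs and exchange only their output bits; weights are updated only
        # when the outputs agree.  Parameters are illustrative (assumption).
        import numpy as np

        K, N, L = 3, 100, 3                          # hidden units, inputs per unit, weight bound
        rng = np.random.default_rng(0)

        def output(w, x):
            sigma = np.sign(np.sum(w * x, axis=1))   # hidden-unit signs
            sigma[sigma == 0] = -1
            return sigma, int(np.prod(sigma))        # parity of the hidden units

        def hebbian(w, x, sigma, tau):
            # move only the hidden units that agree with the common output, clip to [-L, L]
            for k in range(K):
                if sigma[k] == tau:
                    w[k] = np.clip(w[k] + tau * x[k], -L, L)

        wA = rng.integers(-L, L + 1, size=(K, N))
        wB = rng.integers(-L, L + 1, size=(K, N))

        steps = 0
        while not np.array_equal(wA, wB):
            x = rng.choice([-1, 1], size=(K, N))     # public random input
            sA, tauA = output(wA, x)
            sB, tauB = output(wB, x)
            if tauA == tauB:                         # learn only when the output bits agree
                hebbian(wA, x, sA, tauA)
                hebbian(wB, x, sB, tauB)
            steps += 1

        print(f"synchronised after {steps} exchanged inputs; the common weights serve as the key")

    Once the two weight matrices coincide they stay identical, since both machines see the same inputs and apply the same update, so the synchronised weights can act as a shared secret negotiated over a public channel.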

    Playing Billiard in Version Space

    A ray-tracing method inspired by ergodic billiards is used to estimate the theoretically best decision rule for a set of linearly separable examples. While the Bayes optimum requires a majority decision over all Perceptrons separating the example set, the problem considered here corresponds to finding the single Perceptron with the best average generalization probability. For randomly distributed examples the billiard estimate agrees with known analytic results. In real-life classification problems the generalization error is consistently reduced compared to the maximal-stability Perceptron.
    Comment: uuencoded, gzipped PostScript file, 127576 bytes. To recover: 1) save the file as bayes.uue, then 2) uudecode bayes.uue and 3) gunzip bayes.ps.g
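    The billiard estimate can be illustrated with a short simulation: bounce a ray inside the version space, i.e. the cone of weight vectors that classify every example correctly, and time-average the trajectory to approximate its centre. The sketch below is a simplified flat-space version with synthetic data (the paper works on the unit hypersphere); the perceptron initialisation, the bounce count and the renormalisation step are assumptions made for brevity.

        import numpy as np

        rng = np.random.default_rng(1)
        N, P = 10, 40                              # dimensions, number of examples (illustrative)
        teacher = rng.normal(size=N)
        X = rng.normal(size=(P, N))
        y = np.sign(X @ teacher)                   # linearly separable labels by construction
        A = y[:, None] * X                         # version space:  A @ w > 0 componentwise

        # find a starting point inside the version space with plain perceptron learning
        w = np.zeros(N)
        while np.any(A @ w <= 0):
            i = int(np.argmin(A @ w))              # most violated example
            w += A[i] / np.linalg.norm(A[i])

        v = rng.normal(size=N)                     # initial ray direction
        centre = np.zeros(N)
        for _ in range(2000):                      # number of bounces (illustrative)
            numer, denom = A @ w, A @ v
            t = np.full(P, np.inf)
            hit = (denom < 0) & (numer > 0)        # walls the ray is flying towards
            t[hit] = -numer[hit] / denom[hit]      # flight time to each such wall
            i = int(np.argmin(t))
            if not np.isfinite(t[i]):              # ray escapes the open cone: new direction
                v = rng.normal(size=N)
                continue
            w_new = w + t[i] * v                   # fly to the first wall that is hit
            centre += 0.5 * (w + w_new) * t[i]     # time-weighted midpoint of the segment
            v -= 2 * (A[i] @ v) / (A[i] @ A[i]) * A[i]   # specular reflection at that wall
            w = w_new / np.linalg.norm(w_new)      # keep the walk bounded (crude sphere substitute)

        w_billiard = centre / np.linalg.norm(centre)
        print("overlap with the teacher:", w_billiard @ teacher / np.linalg.norm(teacher))

    The comparison with the maximal-stability Perceptron reported in the abstract is not reproduced here; the printed teacher overlap only checks that the time-averaged ray lands well inside the version space.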

    Generation of unpredictable time series by a Neural Network

    A perceptron that learns the opposite of its own output is used to generate a time series. We analyse properties of the weight vector and the generated sequence, such as the cycle length and the probability distribution of the generated sequences. A remarkable suppression of the autocorrelation function is explained, and connections to the Bernasconi model are discussed. If a continuous transfer function is used, the system displays chaotic and intermittent behaviour, with the product of the learning rate and the amplification as a control parameter.
    Comment: 11 pages, 14 figures; slightly expanded and clarified, mistakes corrected; accepted for publication in PR
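    The generating rule admits a compact illustration: at every step the perceptron reads the last N bits of the sequence, emits its output bit, and is then trained toward the opposite of that bit (anti-Hebbian learning). The sketch below assumes a sliding bit window and a fixed learning rate; window size, learning rate and sequence length are illustrative choices, not values from the paper.

        import numpy as np

        N, eta, T = 20, 1.0, 5000                  # window size, learning rate, length (illustrative)
        rng = np.random.default_rng(2)
        w = rng.normal(size=N)
        window = rng.choice([-1.0, 1.0], size=N)   # initial bits of the sequence
        bits = []

        for _ in range(T):
            s = 1.0 if w @ window >= 0 else -1.0   # perceptron output becomes the next bit
            bits.append(s)
            w -= (eta / N) * s * window            # learn the opposite of the own output
            window = np.roll(window, -1)
            window[-1] = s                         # slide the new bit into the window

        bits = np.array(bits)
        for lag in (1, 2, 3):                      # inspect a few autocorrelations of the sequence
            c = np.mean(bits[:-lag] * bits[lag:])
            print(f"autocorrelation at lag {lag}: {c:+.3f}")

    For the continuous case discussed in the abstract, one would replace the sign decision by tanh(beta * w @ window) and append the real-valued output, so that the product eta*beta plays the role of the control parameter.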

    Phase Transitions of Neural Networks

    The cooperative behaviour of interacting neurons and synapses is studied using models and methods from statistical physics. The competition between training error and entropy may lead to discontinuous properties of the neural network. This is demonstrated for a few examples: the Perceptron, associative memory, learning from examples, generalization, multilayer networks, structure recognition, Bayesian estimates, on-line training, noise estimation and time-series generation.
    Comment: Plenary talk for MINERVA workshop on mesoscopics, fractals and neural networks, Eilat, March 1997. PostScript File
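    The energy-entropy competition behind such discontinuous behaviour can be made concrete in a small numerical sketch. Below, the textbook annealed approximation for a binary-weight perceptron learning from a teacher is scanned over the teacher-student overlap R: the entropy of weight vectors at overlap R competes with the log-probability of zero training error on alpha*N random examples. The formulas and the sampled values of alpha are standard textbook material used here as an assumption, not results quoted from this abstract.

        import numpy as np

        def annealed_entropy(R, alpha):
            eps = np.arccos(R) / np.pi                       # generalisation error at overlap R
            p = (1 + R) / 2
            s0 = -(p * np.log(p) + (1 - p) * np.log(1 - p))  # entropy of binary weights at overlap R
            return s0 + alpha * np.log(1 - eps)              # entropy versus training-error term

        R = np.linspace(0.0, 0.999, 2000)
        for alpha in (0.5, 1.0, 1.3, 1.5, 1.7):
            s = annealed_entropy(R, alpha)
            # the perfectly specialised state R = 1 has annealed entropy exactly 0; once every
            # interior overlap has negative entropy, the typical state jumps to R = 1
            R_star = 1.0 if s.max() < 0.0 else float(R[np.argmax(s)])
            print(f"alpha = {alpha:4.2f}  ->  typical overlap R* = {R_star:.3f}")

    The jump of the maximiser from an interior overlap to R = 1 as alpha grows is the kind of discontinuous, first-order behaviour the abstract refers to; continuous-weight models typically show a smooth increase of the overlap instead.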