Theory of Interacting Neural Networks
In this contribution we give an overview of recent work on the theory of interacting neural networks. The model is defined in Section 2. The typical teacher/student scenario is considered in Section 3: a static teacher network presents training examples to an adaptive student network. In the case of multilayer networks, the student shows a transition from a symmetric state to specialisation. Neural networks can also generate a time series; training on time series and predicting them are studied in Section 4. When a network is trained on its own output, it is interacting with itself. Such a scenario has implications for the theory of prediction algorithms, as discussed in Section 5. When a system of networks is trained on its minority decisions, it may be considered a model for competition in closed markets, see Section 6. In Section 7 we consider two mutually interacting networks. A novel phenomenon is observed: synchronisation by mutual learning. In Section 8 it is shown how this phenomenon can be applied to cryptography: the generation of a secret key over a public channel.
Comment: Contribution to Networks, ed. by H. G. Schuster and S. Bornholdt, to be published by Wiley-VCH
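As a minimal illustration of the teacher/student scenario of Section 3, the following Python sketch trains a simple perceptron student on-line on examples labelled by a static teacher and tracks the teacher/student overlap R; the Hebbian update and all parameter values are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

# Teacher/student sketch: a static teacher perceptron labels random examples,
# an adaptive student learns them on-line with a Hebbian update, and the
# overlap R = w_S.w_T / (|w_S||w_T|) measures the student's progress.
rng = np.random.default_rng(0)
N = 1000                                  # input dimension (illustrative)
w_teacher = rng.standard_normal(N)
w_student = np.zeros(N)

for step in range(1, 20 * N + 1):         # on-line: one fresh example per step
    x = rng.standard_normal(N)
    label = np.sign(w_teacher @ x)
    w_student += label * x / np.sqrt(N)   # Hebbian update along the example
    if step % (5 * N) == 0:
        R = (w_student @ w_teacher
             / (np.linalg.norm(w_student) * np.linalg.norm(w_teacher)))
        print(f"alpha = {step / N:4.1f}  overlap R = {R:.3f}")
```

For large N the learning curve depends only on the number of examples per weight, alpha = p/N.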
Interacting neural networks and cryptography
Two neural networks which are trained on their mutual output bits are analysed using methods of statistical physics. The exact solution of the dynamics of the two weight vectors shows a novel phenomenon: the networks synchronize to a state with identical time-dependent weights. When the models are extended to multilayer networks with discrete weights, it is shown how synchronization by mutual learning can be applied to secret key exchange over a public channel.
Comment: Invited talk for the meeting of the German Physical Society
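As a sketch of how such synchronization yields a key exchange, the following Python snippet implements mutual learning of two tree parity machines, the standard multilayer architecture with discrete weights used in neural cryptography; the parameter choices (K, N, L) and the particular Hebbian update variant are illustrative assumptions.

```python
import numpy as np

# Two tree parity machines (K hidden units, N inputs each, weights in
# {-L,...,L}) are trained on their mutual output bits: they update only when
# their outputs agree, and only the hidden units that agree with the output.
K, N, L = 3, 100, 3                       # illustrative sizes
rng = np.random.default_rng(1)

def output(w, x):
    sigma = np.sign((w * x).sum(axis=1))  # hidden-unit outputs
    sigma[sigma == 0] = -1                # break ties deterministically
    return sigma, sigma.prod()            # overall output bit tau

wA = rng.integers(-L, L + 1, size=(K, N))
wB = rng.integers(-L, L + 1, size=(K, N))

steps = 0
while not np.array_equal(wA, wB) and steps < 100_000:
    steps += 1
    x = rng.choice([-1, 1], size=(K, N))  # public random input
    sA, tauA = output(wA, x)
    sB, tauB = output(wB, x)
    if tauA == tauB:                      # learn only on agreement
        for w, s, tau in ((wA, sA, tauA), (wB, sB, tauB)):
            for k in range(K):
                if s[k] == tau:           # Hebbian step on agreeing units
                    w[k] = np.clip(w[k] + tau * x[k], -L, L)

print(f"synchronized after {steps} exchanged output bits")
```

Only the inputs and the output bits cross the channel; the synchronized weights themselves serve as the shared secret key.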
Pulses of chaos synchronization in coupled map chains with delayed transmission
Pulses of synchronization in chaotic coupled map lattices are discussed in the context of the transmission of information. Synchronization and desynchronization propagate along the chain with different velocities, which are calculated analytically from the spectrum of convective Lyapunov exponents. Since the front of synchronization travels more slowly than the front of desynchronization, the maximum chain length over which information can be transmitted by modulating the first unit of the chain is bounded.
Comment: 4 pages, 6 figures, updated version as published in PR
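A qualitative Python sketch of the transmission scheme follows, using a unidirectionally coupled logistic-map chain; the transmission delay of the paper is omitted and all parameter values are illustrative. The sender modulates the first unit, and the receiver's replica chain synchronizes or desynchronizes site by site as the fronts propagate.

```python
import numpy as np

# Sender chain x and receiver replica y of unidirectionally coupled logistic
# maps. The first unit's signal is transmitted either exactly (bit 0) or with
# a small modulation (bit 1); fronts of synchronization/desynchronization then
# travel down the chain, visible in the length of the synchronized head.
def f(x):
    return 4.0 * x * (1.0 - x)            # fully chaotic logistic map

L, eps, T = 50, 0.8, 400                  # chain length, coupling, steps
rng = np.random.default_rng(2)
x, y = rng.random(L), rng.random(L)

for t in range(T):
    bit = (t // 100) % 2                  # message: 0, 1, 0, 1 in blocks
    xn, yn = f(x), f(y)
    x[1:] = (1 - eps) * xn[1:] + eps * xn[:-1]
    y[1:] = (1 - eps) * yn[1:] + eps * yn[:-1]
    x[0] = xn[0]
    y[0] = x[0] * (1.0 - 1e-3 * bit)      # modulated first-unit signal
    if t % 25 == 0:
        d = np.abs(x - y)
        head = int(np.argmax(d > 1e-6)) if (d > 1e-6).any() else L
        print(f"t={t:4d}  bit={bit}  synchronized head = {head:2d} sites")
```

Because the recovery front is slower than the desynchronization front, each transmitted bit erodes the usable synchronized length, which is why the chain length is bounded.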
Phase Transitions of Neural Networks
The cooperative behaviour of interacting neurons and synapses is studied using models and methods from statistical physics. The competition between training error and entropy may lead to discontinuous properties of the neural network. This is demonstrated for a few examples: the perceptron, associative memory, learning from examples, generalization, multilayer networks, structure recognition, Bayesian estimation, on-line training, noise estimation and time series generation.
Comment: Plenary talk for MINERVA workshop on mesoscopics, fractals and neural networks, Eilat, March 1997, Postscript file
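The competition between training error and entropy can be made concrete with a small exhaustive example: an Ising (binary-weight) perceptron learning from examples, where in the thermodynamic limit the overlap with the teacher jumps discontinuously to perfect generalization. The brute-force Python sketch below (sizes are illustrative; at small N one only sees a finite-size shadow of the transition) counts the zero-training-error students and their teacher overlap.

```python
import numpy as np
from itertools import product

# Exhaustively enumerate all binary-weight students, keep those with zero
# training error on p teacher-labelled examples, and report how the entropy
# (log of their number) shrinks while the teacher overlap R grows.
N = 15                                     # small enough for 2^N enumeration
rng = np.random.default_rng(3)
teacher = rng.choice([-1, 1], size=N)
students = np.array(list(product([-1, 1], repeat=N)))

P_max = 25
X = rng.choice([-1, 1], size=(P_max, N))   # random binary examples
labels = np.sign(X @ teacher)              # N odd, so no ties occur

for p in (5, 10, 15, 20, 25):
    ok = np.all(np.sign(students @ X[:p].T) == labels[:p], axis=1)
    R = students[ok] @ teacher / N         # overlaps of consistent students
    print(f"alpha = {p / N:.2f}  consistent = {ok.sum():6d}  "
          f"mean overlap R = {R.mean():.3f}")
```

The teacher itself is always consistent, so the version space never empties; what vanishes with growing alpha is the entropy of poorly generalizing students.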
Chaos Synchronization with Dynamic Filters: Two Way is Better Than One Way
Two chaotic systems that interact by mutually exchanging a signal built from their delayed internal variables can synchronize. A third unit may be able to record and manipulate the exchanged signal. Can the third unit synchronize to the common chaotic trajectory as well? If all parameters of the system are public, a proof is given that the recording system can indeed synchronize. However, if the two interacting systems use private commutative filters to generate the exchanged signal, a driven system cannot synchronize. It is shown that with dynamic private filters the chaotic trajectory cannot even be calculated. Hence two-way (interaction) is more than one-way (drive). The implications of this general result for secret communication with chaos synchronization are discussed.
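The contrast between two-way and one-way coupling can be illustrated numerically. The Python sketch below is a simplified stand-in, not the paper's construction: two tent-map units exchange a signal passed through identical linear filters acting on delayed variables, while a third unit is naively driven by the recorded signal. The mutual pair contracts onto a common trajectory, whereas the driven unit's error grows; this shows only the interaction/drive asymmetry, not the paper's sharper statements about public parameters and private filters.

```python
import numpy as np

# Two mutually coupled tent-map units A, B exchange filtered signals
# s_i = alpha*f(u_i(t)) + (1-alpha)*f(u_i(t-1)); a third unit E records s_A
# and drives itself one-way. With these couplings the A-B difference contracts
# while the naively driven E has conditional stretching factor (1-eps)*2 > 1.
def f(u):
    return 1.0 - 2.0 * abs(u)             # tent map on [-1, 1], |f'| = 2

eps, alpha, T = 0.45, 0.8, 2000           # illustrative parameters
rng = np.random.default_rng(4)
uA, uB, uE = rng.uniform(-1, 1, 3)
fA_old, fB_old = f(uA), f(uB)             # delayed values for the filters

for t in range(T):
    fA, fB = f(uA), f(uB)
    sA = alpha * fA + (1 - alpha) * fA_old    # exchanged (public) signals
    sB = alpha * fB + (1 - alpha) * fB_old
    uA = (1 - eps) * fA + eps * sB            # two-way: mutual interaction
    uB = (1 - eps) * fB + eps * sA
    uE = (1 - eps) * f(uE) + eps * sA         # one-way: drive only
    fA_old, fB_old = fA, fB
    if t % 400 == 0:
        print(f"t={t:4d}  |uA-uB| = {abs(uA - uB):.1e}  "
              f"|uE-uA| = {abs(uE - uA):.1e}")
```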