Theory of Interacting Neural Networks
In this contribution we give an overview of recent work on the theory of interacting neural networks. The model is defined in Section 2. The typical teacher/student scenario is considered in Section 3: a static teacher network presents training examples to an adaptive student network. In the case of multilayer networks, the student shows a transition from a symmetric state to specialisation. Neural networks can also generate a time series; training on time series and predicting it are studied in Section 4. When a network is trained on its own output, it is interacting with itself. Such a scenario has implications for the theory of prediction algorithms, as discussed in Section 5. When a system of networks is trained on its minority decisions, it may be considered as a model for competition in closed markets, see Section 6. In Section 7 we consider two mutually interacting networks. A novel phenomenon is observed: synchronisation by mutual learning. In Section 8 it is shown how this phenomenon can be applied to cryptography: generation of a secret key over a public channel.
Comment: Contribution to Networks, ed. by H.G. Schuster and S. Bornholdt, to be published by Wiley-VCH
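The teacher/student scenario of Section 3 can be illustrated with a minimal on-line perceptron simulation. All parameters below (dimension, learning rate, number of examples, random seed) are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, eta, steps = 100, 0.1, 2000        # illustrative parameters, not from the paper

teacher = rng.standard_normal(N)      # static teacher network
student = rng.standard_normal(N)      # adaptive student network

for _ in range(steps):
    x = rng.standard_normal(N)        # teacher presents a random example
    label = np.sign(teacher @ x)
    if np.sign(student @ x) != label: # perceptron rule: learn from mistakes only
        student += eta * label * x

# the normalized overlap between teacher and student measures how well
# the student has learned the teacher's rule
overlap = (teacher @ student) / (np.linalg.norm(teacher) * np.linalg.norm(student))
```

As the number of presented examples grows relative to N, the overlap approaches 1 and the student generalizes the teacher's rule.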
Modeling Financial Time Series with Artificial Neural Networks
Financial time series convey the decisions and actions of a population of human actors over time. Econometric and regressive models have been developed in the past decades for analyzing these time series. More recently, biologically inspired artificial neural network models have been shown to overcome some of the main challenges of traditional techniques by better exploiting the non-linear, non-stationary, and oscillatory nature of noisy, chaotic human interactions. This review paper explores the options, benefits, and weaknesses of the various forms of artificial neural networks as compared with regression techniques in the field of financial time series analysis.
Supported by CELEST, a National Science Foundation Science of Learning Center (SBE-0354378), and the SyNAPSE program of the Defense Advanced Research Projects Agency (HR001109-03-0001).
A neural network approach to audio-assisted movie dialogue detection
A novel framework for audio-assisted dialogue detection based on indicator functions and neural networks is investigated. An indicator function specifies whether an actor is present at a particular time instant. The cross-correlation function of a pair of indicator functions and the magnitude of the corresponding cross-power spectral density are fed as input to neural networks for dialogue detection. Several types of artificial neural networks, including multilayer perceptrons, voted perceptrons, radial basis function networks, support vector machines, and particle swarm optimization-based multilayer perceptrons, are tested. Experiments are carried out to validate the feasibility of the aforementioned approach by using ground-truth indicator functions determined by human observers on 6 different movies. A total of 41 dialogue instances and another 20 non-dialogue instances are employed. The average detection accuracy achieved is high, ranging between 84.78%±5.499% and 91.43%±4.239%.
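The feature extraction described above (cross-correlation of a pair of indicator functions, plus the magnitude of their cross-power spectral density) can be sketched as follows; the indicator signals, their length, and the toy alternation pattern are invented for illustration and are not the paper's data:

```python
import numpy as np

def dialogue_features(ind_a, ind_b):
    """Cross-correlation of two actor-presence indicator functions and
    the magnitude of the corresponding cross-power spectral density."""
    a = ind_a - ind_a.mean()                 # remove the mean before correlating
    b = ind_b - ind_b.mean()
    xcorr = np.correlate(a, b, mode="full")  # all lags, length 2*len(a)-1
    # cross-power spectral density via the (real) Fourier transforms
    cpsd = np.fft.rfft(a) * np.conj(np.fft.rfft(b))
    return xcorr, np.abs(cpsd)

# toy example: strictly alternating presence, as in a two-actor dialogue
t = np.arange(200)
actor_a = (np.sin(2 * np.pi * t / 40) > 0).astype(float)
actor_b = 1.0 - actor_a                      # B speaks when A is silent
xcorr, cpsd_mag = dialogue_features(actor_a, actor_b)
```

For alternating speakers the zero-lag cross-correlation is negative (the indicators are anti-correlated), which is exactly the kind of pattern the downstream classifiers can pick up.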
Interacting neural networks and cryptography
Two neural networks which are trained on their mutual output bits are analysed using methods of statistical physics. The exact solution of the dynamics of the two weight vectors shows a novel phenomenon: the networks synchronize to a state with identical time-dependent weights. Extending the models to multilayer networks with discrete weights, it is shown how synchronization by mutual learning can be applied to secret key exchange over a public channel.
Comment: Invited talk for the meeting of the German Physical Society
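The key-exchange idea can be sketched with a toy version of the multilayer model with discrete weights (a tree parity machine, the standard model in this line of work). The sizes K, N, the weight bound L, and the seed below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)
K, N, L = 3, 10, 3            # hidden units, inputs per unit, weight bound

def tpm_output(w, x):
    """Hidden-unit signs and the parity output bit of a tree parity machine."""
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau, tau_other):
    """Move weights only when both machines agree on the public output bit."""
    if tau == tau_other:
        for k in range(K):
            if sigma[k] == tau:                  # only agreeing hidden units learn
                w[k] = np.clip(w[k] + tau * x[k], -L, L)

wA = rng.integers(-L, L + 1, size=(K, N))        # secret initial weights of A
wB = rng.integers(-L, L + 1, size=(K, N))        # secret initial weights of B

steps = 0
while not np.array_equal(wA, wB) and steps < 50000:
    x = rng.choice([-1, 1], size=(K, N))         # common public input
    sigmaA, tauA = tpm_output(wA, x)
    sigmaB, tauB = tpm_output(wB, x)
    hebbian_update(wA, x, sigmaA, tauA, tauB)    # only output bits are exchanged
    hebbian_update(wB, x, sigmaB, tauB, tauA)
    steps += 1

synchronized = np.array_equal(wA, wB)            # identical weights = shared key
```

Only the public inputs and the two output bits are exchanged; after synchronization the identical weight vectors can serve as a secret key.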
Generalizing with perceptrons in case of structured phase- and pattern-spaces
We investigate the influence of different kinds of structure on the learning behaviour of a perceptron performing a classification task defined by a teacher rule. The underlying pattern distribution is permitted to have spatial correlations. The prior distribution for the teacher coupling vectors itself is assumed to be nonuniform. Thus classification tasks of quite different difficulty are included. As learning algorithms we discuss Hebbian learning, Gibbs learning, and Bayesian learning with different priors, using methods from statistics and the replica formalism. We find that the Hebb rule is quite sensitive to the structure of the actual learning problem, failing asymptotically in most cases. In contrast, the behaviour of the more sophisticated methods of Gibbs and Bayes learning is influenced by the spatial correlations only in an intermediate regime of α, where α specifies the size of the training set. Concerning the Bayesian case we show how enhanced prior knowledge improves the performance.
Comment: LaTeX, 32 pages with eps-figs, accepted by J Phys
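As a point of reference for the unstructured case, the Hebb rule mentioned above can be sketched in a few lines: the student vector is simply the label-weighted sum of the training patterns. The dimension, the value of α, and the seed are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200                          # input dimension (illustrative)
alpha = 5.0                      # training-set size in units of N
p = int(alpha * N)

teacher = rng.standard_normal(N)
X = rng.standard_normal((p, N))  # i.i.d. Gaussian patterns: no spatial correlations
y = np.sign(X @ teacher)         # labels given by the teacher rule

# Hebb rule: the student coupling vector is the label-weighted sum of patterns
student = (y[:, None] * X).sum(axis=0)

R = (student @ teacher) / (np.linalg.norm(student) * np.linalg.norm(teacher))
eps_g = np.arccos(R) / np.pi     # perceptron generalization error at overlap R
```

For uncorrelated patterns the overlap R grows with α and the generalization error decays; the abstract's point is that spatial correlations and nonuniform teacher priors can break this benign behaviour for the Hebb rule.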
Neural Networks for Complex Data
Artificial neural networks are simple and efficient machine learning tools. Defined originally in the traditional setting of simple vector data, neural network models have evolved to address more and more of the difficulties of complex real-world problems, ranging from time-evolving data to sophisticated data structures such as graphs and functions. This paper summarizes advances on those themes from the last decade, with a focus on results obtained by members of the SAMM team of Université Paris