
Dynamics of interacting neural networks

By W. Kinzel, R. Metzler and I. Kanter

Abstract

Complex bit sequences generated by a perceptron that learns the opposite of its own prediction are studied, and the success of a student perceptron trained on this sequence is calculated. A system of interacting perceptrons with a directed flow of information is solved analytically. A symmetry-breaking phase transition is found with increasing learning rate. A system of interacting perceptrons can develop a good strategy for the problem of adaptive competition known as the minority game.

Simple models of neural networks describe a wide variety of phenomena in neurobiology and information theory. Neural networks are systems of elements interacting through adaptive couplings that are trained on a set of examples. After training they function as content-addressable associative memories, as classifiers, or as prediction algorithms. Using methods of statistical physics, many of these phenomena have been elucidated analytically for infinitely large single neural networks [1]. Many phenomena in biology, social science and computer science may be modelled by
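
As a rough illustration of the first result in the abstract, the sketch below shows a perceptron that learns the opposite of its own prediction: its output bit on a sliding window of previous bits becomes the next bit of the sequence, and the weights are then moved against that output (an anti-Hebbian step). The function name, parameter names and defaults are illustrative assumptions, not taken from the paper.

    import numpy as np

    def antipredictable_bits(n_weights=20, n_steps=1000, eta=1.0, seed=0):
        # Sketch (assumed setup, not the paper's exact model): a perceptron
        # generates a bit sequence by predicting the next bit from a sliding
        # +/-1 window and then learning the opposite of its own prediction.
        rng = np.random.default_rng(seed)
        w = rng.normal(size=n_weights)               # generator weights
        x = rng.choice([-1.0, 1.0], size=n_weights)  # initial bit window
        bits = np.empty(n_steps)
        for t in range(n_steps):
            s = 1.0 if w @ x >= 0.0 else -1.0        # perceptron's prediction
            w -= (eta / n_weights) * s * x           # anti-Hebbian: learn the opposite of s
            bits[t] = s                              # the prediction becomes the next sequence bit
            x = np.roll(x, 1)                        # shift the window and
            x[0] = s                                 # feed the new bit back in
        return bits

    bits = antipredictable_bits()
    print(bits[:20])

A student perceptron trained on such a sequence with the ordinary (Hebbian) sign of the update would play the role of the predictor whose success the paper calculates.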

Year: 2013
OAI identifier: oai:CiteSeerX.psu:10.1.1.305.6997
Provided by: CiteSeerX
Download PDF:
Sorry, we are unable to provide the full text but you may find it at the following location(s):
  • http://citeseerx.ist.psu.edu/v...
  • http://arxiv.org/pdf/cond-mat/...

