
Correlation of internal representations in feed-forward neural networks

By A. Engel


Feed-forward multilayer neural networks implementing random input-output mappings develop characteristic correlations between the activities of their hidden nodes, which are important for understanding the storage and generalization performance of the network. It is shown how these correlations can be calculated from the joint probability distribution of the aligning fields at the hidden units for an arbitrary decoder function between hidden layer and output. Explicit results are given for the parity, AND, and committee machines with an arbitrary number of hidden nodes near saturation.

Multilayer neural networks (MLN) are powerful information-processing devices. Because of their computational abilities they are the workhorses in practical applications of neural networks, and much effort is devoted to a thorough understanding of their functional principles. At the same time, their theoretical analysis within the framework of statistical mechanics is much harder than that for the single-layer perceptron. It was realized from the beginning that the properties of the internal representations defined a
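The setup described in the abstract can be sketched numerically: hidden-unit activities are the signs of the aligning fields, and the decoder function (parity, AND, or committee) maps the internal representation to the output. The sketch below uses random, untrained weights purely to illustrate the quantities involved (the paper's replica calculation concerns networks trained to saturation, where nontrivial hidden-node correlations emerge); all variable names and sizes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K, P = 200, 3, 5000                    # input size, hidden units, random patterns
W = rng.standard_normal((K, N))           # hidden-layer weight vectors (untrained here)
X = rng.choice([-1.0, 1.0], size=(P, N))  # random binary input patterns

# Aligning fields at the hidden units (normalized local fields)
fields = X @ W.T / np.sqrt(N)             # shape (P, K)
tau = np.sign(fields)                     # hidden-node activities: the internal representation

# Decoder functions from hidden layer to output
parity    = np.prod(tau, axis=1)          # parity machine: product of hidden activities
committee = np.sign(tau.sum(axis=1))      # committee machine: majority vote (K odd, no ties)
and_out   = np.where(np.all(tau > 0, axis=1), 1.0, -1.0)  # AND machine

# Empirical pairwise correlations of hidden-node activities over the patterns
C = (tau.T @ tau) / P
print(np.round(C, 2))
```

With independent random weights the off-diagonal entries of C are close to zero; the point of the paper is that constraining the network to implement a stored random mapping induces characteristic nonzero correlations, computable from the joint distribution of the aligning fields.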

Year: 1996
OAI identifier: oai:CiteSeerX.psu:
Provided by: CiteSeerX