Feed-forward multilayer neural networks implementing random input-output mappings develop characteristic correlations between the activities of their hidden nodes, which are important for understanding the storage and generalization performance of the network. It is shown how these correlations can be calculated from the joint probability distribution of the aligning fields at the hidden units, for an arbitrary decoder function between the hidden layer and the output. Explicit results are given for the parity, AND, and committee machines with an arbitrary number of hidden nodes near saturation.

Multilayer neural networks (MLN) are powerful information-processing devices. Because of their computational abilities they are the workhorses in practical applications of neural networks, and much effort is devoted to a thorough understanding of their functional principles. At the same time, their theoretical analysis within the framework of statistical mechanics is much harder than that for the single-layer perceptron. It was realized from the beginning that the properties of the internal representations defined a
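The correlations in question can be illustrated numerically; the following is a minimal sketch (an assumed toy setup, not the paper's analytic calculation). For a parity machine the output is the product of the hidden activations, σ = ∏ₖ sign(hₖ), so conditioning on the implemented mapping induces correlations between hidden units even when the couplings themselves are random and independent.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 4000                        # input dimension, number of patterns

# K = 2 hidden units with random, untrained couplings (illustrative only)
W = rng.standard_normal((2, N))
X = rng.choice([-1.0, 1.0], size=(P, N))

H = X @ W.T / np.sqrt(N)                # fields at the hidden units
tau = np.sign(H)                        # internal representations
sigma = tau[:, 0] * tau[:, 1]           # parity (K = 2) decoder output

# Unconditionally, the two hidden activities are essentially uncorrelated;
# conditioned on the output sigma = +1, they are perfectly correlated,
# since the parity decoder fixes their product by construction.
c_uncond = np.mean(tau[:, 0] * tau[:, 1])
c_cond = np.mean((tau[:, 0] * tau[:, 1])[sigma > 0])
print(round(c_uncond, 2), c_cond)
```

For other decoders (e.g. the committee machine's majority rule) the induced correlations are weaker and must be obtained from the joint distribution of the aligning fields, which is the subject of the paper.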