Abstract

A new formalism is described for modelling neural networks by means of which a clear physical understanding of the network behaviour can be gained. In essence, the neural net is represented by an equivalent network of matched filters which is then analysed by standard correlation techniques. The procedure is demonstrated on the synchronous Little-Hopfield network. It is shown how the ability of this network to discriminate between stored binary, bipolar codes is optimised if the stored codes are chosen to be orthogonal. However, such a choice will not often be possible, and so a new neural network architecture is proposed which enables the same discrimination to be obtained for arbitrary stored codes. The most efficient convergence of the synchronous Little-Hopfield net is obtained when the neurons are connected to themselves with a weight equal to the number of stored codes. The processing gain is presented for this case. The paper goes on to show how this modelling technique can be extended to analyse the behaviour of both hard and soft neural threshold responses, and a novel time-dependent threshold response is described.
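The matched-filter interpretation summarised above can be illustrated with a minimal sketch (not taken from the paper; the function names store and recall, the example codes, and the noisy input are illustrative assumptions). A synchronous Little-Hopfield update computes W x = sum over m of (c_m . x) c_m, i.e. it correlates the current state with each stored bipolar code and re-sums the codes weighted by those correlations, and the outer-product weight matrix automatically carries a self-connection weight equal to the number of stored codes, as the abstract recommends.

```python
import numpy as np

def store(codes):
    """Build the weight matrix from bipolar (+1/-1) codes by the outer-product
    (Hebbian) rule. For bipolar codes the diagonal of W equals M, the number of
    stored codes, i.e. each neuron is connected to itself with weight M."""
    codes = np.asarray(codes, dtype=float)
    return codes.T @ codes          # sum of outer products c_m c_m^T

def recall(W, x, steps=10):
    """Synchronous (Little-model) recall: all neurons update at once through a
    hard sign threshold. W @ x is the correlation of x with each stored code,
    re-expanded onto the codes (the matched-filter view)."""
    x = np.asarray(x, dtype=float)
    for _ in range(steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1       # break ties consistently
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

# Example: two orthogonal bipolar codes of length 8 (an assumed test case)
codes = [[+1, +1, +1, +1, -1, -1, -1, -1],
         [+1, +1, -1, -1, +1, +1, -1, -1]]
W = store(codes)
noisy = np.array([-1, +1, +1, +1, -1, -1, -1, -1])   # first code with one bit flipped
print(recall(W, noisy))                               # recovers the first stored code
```

In this sketch the diagonal of W is deliberately left at M rather than zeroed, which is a common Hopfield convention; the abstract's claim is that keeping this self-connection weight gives the most efficient convergence of the synchronous net.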
