    Correlating matched-filter model for analysis and optimisation of neural networks

    A new formalism is described for modelling neural networks, by means of which a clear physical understanding of the network behaviour can be gained. In essence, the neural net is represented by an equivalent network of matched filters, which is then analysed by standard correlation techniques. The procedure is demonstrated on the synchronous Little-Hopfield network. It is shown that the ability of this network to discriminate between stored binary, bipolar codes is optimised if the stored codes are chosen to be orthogonal. However, such a choice will not often be possible, and so a new neural network architecture is proposed which achieves the same discrimination for arbitrary stored codes. The most efficient convergence of the synchronous Little-Hopfield net is obtained when the neurons are connected to themselves with a weight equal to the number of stored codes; the processing gain is presented for this case. The paper goes on to show how this modelling technique can be extended to analyse both hard and soft neural threshold responses, and a novel time-dependent threshold response is described.
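
    As a rough illustration of the matched-filter view, here is a minimal NumPy sketch (all names and sizes are illustrative, not taken from the paper): the Hebbian weight matrix keeps its diagonal, so each neuron gets the self-connection weight M that the abstract identifies as optimal, and recall is performed by correlating the state against the stored codes and superposing the results.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 64, 4                          # code length, number of stored codes
X = rng.choice([-1, 1], size=(M, N))  # stored bipolar codes, one per row

# Hebbian weight matrix with the diagonal kept: for bipolar codes every
# neuron then has a self-connection of weight M, the setting the paper
# reports as giving the most efficient synchronous convergence.
W = X.T @ X

def recall(s, steps=20):
    # W @ s factors as X.T @ (X @ s): correlate the state against every
    # stored code (a bank of matched filters), then superpose the codes
    # weighted by those correlations, and apply a hard threshold.
    for _ in range(steps):
        s_next = np.where(X.T @ (X @ s) >= 0, 1, -1)
        if np.array_equal(s_next, s):
            break
        s = s_next
    return s

# Probe with a noisy copy of the first stored code.
probe = X[0].copy()
probe[rng.choice(N, size=8, replace=False)] *= -1
print(np.array_equal(recall(probe), X[0]))  # True for light noise
```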

    Pattern Recognition Using Associative Memories

    The human brain is extremely effective at performing pattern recognition, even in the presence of noisy or distorted inputs. Artificial neural networks attempt to imitate the structure of the brain, often with a view to mimicking its success. The binary correlation matrix memory (CMM) is a type of neural network that can learn and recall associations extremely quickly, offers a high storage capacity, and can generalise from patterns already learned. CMMs have been used as a major component of larger architectures designed to solve a wide range of problems, such as rule chaining, character recognition, and more general pattern recognition; the memory requirement of the CMMs therefore has a significant impact on the scalability of such architectures. A domain-specific language for binary CMMs is developed, alongside an implementation whose efficient storage mechanism allows memory usage to scale linearly with the number of associations stored. An architecture for rule chaining is then examined in detail, showing that the problem of scalability is indeed resolved, before a number of important limitations to its capabilities are identified and addressed. Finally, an architecture for pattern recognition is investigated, and a memory-efficient method for incorporating general invariance into this architecture is presented. This is tested specifically with scale invariance, although the mechanism can also be used with other types of invariance, such as skew or rotation.
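
    The storage mechanism lends itself to a simple illustration. Below is a minimal sketch of a binary CMM with a sparse store (class and method names are hypothetical, not the paper's DSL): for each active input line we keep only the set of output lines it has been associated with, so memory grows with the number of set bits rather than with a full dense bit matrix, and Willshaw-style recall sums the matching rows and thresholds at the number of set input bits.

```python
from collections import defaultdict

class SparseCMM:
    """Binary correlation matrix memory with a sparse row store.

    Hypothetical illustration: instead of a dense N_in x N_out bit
    matrix, each active input line maps to the set of output lines it
    has been associated with, so memory scales with the stored
    associations (for sparse codes).
    """

    def __init__(self):
        self.rows = defaultdict(set)   # input line -> associated output lines

    def train(self, in_bits, out_bits):
        # OR the outer product of the two sparse binary vectors into memory.
        for i in in_bits:
            self.rows[i] |= set(out_bits)

    def recall(self, in_bits, threshold):
        # Sum the matching rows, then keep outputs reaching the threshold;
        # Willshaw-style recall thresholds at the number of set input bits.
        counts = defaultdict(int)
        for i in in_bits:
            for j in self.rows.get(i, ()):
                counts[j] += 1
        return {j for j, c in counts.items() if c >= threshold}

cmm = SparseCMM()
cmm.train({1, 4, 7}, {2, 5})               # associate two sparse codes
print(cmm.recall({1, 4, 7}, threshold=3))  # {2, 5}
```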

    From Bidirectional Associative Memory to a noise-tolerant, robust Protein Processor Associative Memory

    The Protein Processor Associative Memory (PPAM) is a novel architecture for learning associations incrementally and online, and for performing fast, reliable, scalable hetero-associative recall. This paper presents a comparison of the PPAM with the Bidirectional Associative Memory (BAM), both with Kosko's original training algorithm and with the more popular Pseudo-Relaxation Learning Algorithm for BAM (PRLAB). It also compares the PPAM with a more recent associative memory architecture called SOIAM. Results of training for object avoidance are presented from simulations using Player/Stage and are verified by actual implementations on the E-Puck mobile robot. Finally, we show how the PPAM achieves an increase in performance without using the typical weighted-sum arithmetic operations, or indeed any arithmetic operations at all.
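
    For context, here is a minimal sketch of the textbook baseline the paper compares against: Kosko's original outer-product training for the BAM, with recall bouncing activity between the two layers until a stable pair is reached (a generic illustration under standard assumptions; PRLAB and the PPAM itself are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(1)
Nx, Ny, M = 32, 16, 3
X = rng.choice([-1, 1], size=(M, Nx))  # input patterns, one per row
Y = rng.choice([-1, 1], size=(M, Ny))  # associated output patterns

# Kosko's original training rule: sum of outer products of the pairs.
W = Y.T @ X

def bam_recall(x, steps=10):
    # Alternate between the layers until the output layer stabilises.
    y = np.where(W @ x >= 0, 1, -1)
    for _ in range(steps):
        x_new = np.where(W.T @ y >= 0, 1, -1)
        y_new = np.where(W @ x_new >= 0, 1, -1)
        if np.array_equal(y_new, y):
            break
        x, y = x_new, y_new
    return y

print(np.array_equal(bam_recall(X[0]), Y[0]))  # usually True for few pairs
```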