
    Neural Distributed Autoassociative Memories: A Survey

    Introduction. Neural network models of autoassociative, distributed memory allow storage and retrieval of many items (vectors), where the number of stored items can exceed the vector dimension (the number of neurons in the network). This opens the possibility of sublinear-time search (in the number of stored items) for approximate nearest neighbors among high-dimensional vectors. The purpose of this paper is to review models of autoassociative, distributed memory that can be naturally implemented by neural networks (mainly with local learning rules and iterative dynamics based on information locally available to neurons). Scope. The survey focuses mainly on the networks of Hopfield, Willshaw and Potts, which have connections between pairs of neurons and operate on sparse binary vectors. We discuss not only autoassociative memory but also the generalization properties of these networks. We also consider neural networks with higher-order connections and networks with a bipartite graph structure for non-binary data with linear constraints. Conclusions. In conclusion we discuss the relations to similarity search, the advantages and drawbacks of these techniques, and topics for further research. An interesting and still not completely resolved question is whether neural autoassociative memories can search for approximate nearest neighbors faster than other index structures for similarity search, in particular for very high dimensional vectors. Comment: 31 pages
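
    As a rough illustration only (not code from the survey), the sketch below builds a classical Hopfield-style autoassociative memory with a local outer-product (Hebbian) learning rule and iterative sign dynamics, the common ingredients named in the abstract; the network size, pattern count and noise level are arbitrary choices.

        # Minimal sketch (illustrative, not from the survey): a Hopfield-style
        # autoassociative memory with an outer-product learning rule and
        # iterative retrieval by sign dynamics.
        import numpy as np

        rng = np.random.default_rng(0)
        n, m = 200, 20                      # n neurons, m stored bipolar patterns
        patterns = rng.choice([-1, 1], size=(m, n))

        # Hebbian storage: each weight depends only on the two neurons it connects
        W = (patterns.T @ patterns) / n
        np.fill_diagonal(W, 0.0)

        def retrieve(x, steps=20):
            """Iterate the sign dynamics until a fixed point (or step limit) is reached."""
            for _ in range(steps):
                x_new = np.sign(W @ x)
                x_new[x_new == 0] = 1
                if np.array_equal(x_new, x):
                    break
                x = x_new
            return x

        # Probe with a noisy version of a stored pattern
        probe = patterns[0].copy()
        flip = rng.choice(n, size=20, replace=False)
        probe[flip] *= -1
        recalled = retrieve(probe)
        print("overlap with stored pattern:", (recalled @ patterns[0]) / n)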

    High Performance Associative Memories and Structured Weight Dilution

    Copyright Springer. The consequences of two techniques for symmetrically diluting the weights of the standard Hopfield architecture associative memory model, trained using a non-Hebbian learning rule, are examined. This paper reports experimental investigations into the effect of dilution on factors such as pattern stability and attractor performance. It is concluded that these networks maintain a reasonable level of performance at fairly high dilution rates.
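
    As a hedged illustration of the kind of symmetric dilution the abstract refers to, the sketch below zeroes a chosen fraction of the off-diagonal weights of a trained matrix while keeping w_ij = w_ji. Only a random dilution variant with an arbitrary dilution rate is shown; the paper's structured dilution scheme and non-Hebbian training rule are not reproduced here.

        # Hedged sketch: symmetric random dilution of a trained weight matrix.
        import numpy as np

        def dilute_symmetric(W, dilution=0.5, rng=None):
            """Zero a fraction `dilution` of weights, removing w_ij and w_ji together
            so the matrix stays symmetric; the diagonal is zeroed, as is usual for
            Hopfield-type networks."""
            rng = rng or np.random.default_rng()
            n = W.shape[0]
            mask = np.triu(rng.random((n, n)) >= dilution, k=1)  # keep these upper-triangular entries
            mask = mask | mask.T                                 # mirror to preserve symmetry
            return W * mask

        # Example: dilute 70% of the connections of a toy symmetric weight matrix
        W = np.random.default_rng(1).normal(size=(100, 100))
        W = (W + W.T) / 2
        W_diluted = dilute_symmetric(W, dilution=0.7)
        print("fraction of surviving off-diagonal weights:",
              np.count_nonzero(np.triu(W_diluted, 1)) / (100 * 99 / 2))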

    High capacity associative memory with bipolar and binary, biased patterns

    The high-capacity associative memory model is of interest because of its significantly higher capacity compared with the standard Hopfield model. These networks can use either bipolar or binary patterns, which may also be biased. This paper investigates the performance of a high-capacity associative memory model trained with biased patterns, using either bipolar or binary representations. Our results indicate that the binary network performs less well under low bias, but better in other situations, compared with the bipolar network. Peer reviewed
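
    The following minimal sketch only illustrates the two pattern codings compared in the abstract: biased patterns in binary {0, 1} form and the equivalent bipolar {-1, +1} form, related by x_bipolar = 2*x_binary - 1. The bias level and sizes are arbitrary, and the high-capacity training rule itself is not shown.

        # Hedged sketch: generating biased patterns in the two codings compared
        # in the paper. `bias` is the probability that a unit is "on".
        import numpy as np

        def biased_patterns(m, n, bias=0.3, coding="bipolar", rng=None):
            rng = rng or np.random.default_rng()
            binary = (rng.random((m, n)) < bias).astype(int)   # biased {0, 1} patterns
            return 2 * binary - 1 if coding == "bipolar" else binary

        bipolar = biased_patterns(50, 100, bias=0.1, coding="bipolar")
        binary = (bipolar + 1) // 2                            # recover the binary coding
        print("mean activity:", binary.mean())                 # close to the bias level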

    Dense Associative Memory for Pattern Recognition

    A model of associative memory is studied, which stores and reliably retrieves many more patterns than the number of neurons in the network. We propose a simple duality between this dense associative memory and neural networks commonly used in deep learning. On the associative memory side of this duality, a family of models can be constructed that smoothly interpolates between two limiting cases: one limit is referred to as the feature-matching mode of pattern recognition, and the other as the prototype regime. On the deep learning side of the duality, this family corresponds to feedforward neural networks with one hidden layer and various activation functions, which transmit the activities of the visible neurons to the hidden layer. This family of activation functions includes logistics, rectified linear units, and rectified polynomials of higher degrees. The proposed duality makes it possible to apply energy-based intuition from associative memory to analyze the computational properties of neural networks with unusual activation functions - the higher rectified polynomials, which until now have not been used in deep learning. The utility of the dense memories is illustrated for two test cases: the logical gate XOR and the recognition of handwritten digits from the MNIST data set. Comment: Accepted for publication at NIPS 2016
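
    A minimal sketch of the energy-based picture described above, assuming a rectified-polynomial interaction F(x) = max(x, 0)**p of the kind mentioned in the abstract and a simple greedy asynchronous update; the degree p, the sizes and the update schedule are illustrative choices, not the paper's experimental settings.

        # Hedged sketch of a dense associative memory with a rectified-polynomial
        # interaction; the energy is E = -sum_mu F(pattern_mu . sigma).
        import numpy as np

        rng = np.random.default_rng(0)
        n, m, p = 100, 300, 3                    # more stored patterns than neurons
        patterns = rng.choice([-1, 1], size=(m, n))

        def F(x):
            return np.maximum(x, 0.0) ** p       # rectified polynomial of degree p

        def energy(sigma):
            return -F(patterns @ sigma).sum()

        def retrieve(sigma, sweeps=5):
            """Asynchronous updates: flip a spin whenever the flip lowers the energy."""
            sigma = sigma.copy()
            for _ in range(sweeps):
                for i in rng.permutation(n):
                    flipped = sigma.copy()
                    flipped[i] = -flipped[i]
                    if energy(flipped) < energy(sigma):
                        sigma = flipped
            return sigma

        # Probe with a corrupted version of a stored pattern
        probe = patterns[0].copy()
        probe[rng.choice(n, 15, replace=False)] *= -1
        print("overlap after retrieval:", retrieve(probe) @ patterns[0] / n)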

    Linear and logarithmic capacities in associative neural networks

    A model of associative memory incorporating global linearity and pointwise nonlinearities in a state space of n-dimensional binary vectors is considered. Attention is focused on the ability to store a prescribed set of state vectors as attractors within the model. Within the framework of such associative nets, a specific strategy for information storage that utilizes the spectrum of a linear operator is considered in some detail. Comparisons are made between this spectral strategy and a prior scheme that utilizes the sum of Kronecker outer products of the prescribed set of state vectors, which are to function nominally as memories. The storage capacity of the spectral strategy is linear in n (the dimension of the state space under consideration), whereas an asymptotic result of n/(4 log n) holds for the storage capacity of the outer-product scheme. Computer-simulated results show that the spectral strategy stores information more efficiently. The preprocessing costs incurred in the two algorithms are estimated, and recursive strategies are developed for their computation.
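
    A hedged sketch contrasting the two storage rules discussed in the abstract: the Kronecker outer-product (Hebbian) construction and a spectral construction that makes the prescribed patterns eigenvectors of the weight matrix with positive eigenvalues, here realized with a pseudo-inverse. Sizes and eigenvalue choices are arbitrary, and the preprocessing-cost and recursive-computation aspects are not illustrated.

        # Hedged sketch: outer-product storage versus a spectral storage rule.
        import numpy as np

        rng = np.random.default_rng(0)
        n, m = 64, 20
        X = rng.choice([-1, 1], size=(n, m)).astype(float)   # patterns as columns

        # Outer-product (Hebbian) construction
        W_outer = (X @ X.T) / n

        # Spectral construction: choose W with W X = X D for a positive diagonal D,
        # so each stored pattern is an eigenvector with positive eigenvalue (and thus
        # a fixed point of the sign dynamics when the patterns are linearly independent)
        D = np.diag(rng.uniform(1.0, 2.0, size=m))
        W_spec = X @ D @ np.linalg.pinv(X)

        def stable(W):
            """Count how many stored patterns are fixed points of sign(W x)."""
            return int(sum(np.array_equal(np.sign(W @ X[:, k]), X[:, k]) for k in range(m)))

        print("fixed points, outer product:", stable(W_outer), "of", m)
        print("fixed points, spectral:     ", stable(W_spec), "of", m)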