
    An associative memory for the on-line recognition and prediction of temporal sequences

    This paper presents the design of an associative memory with feedback that is capable of on-line temporal sequence learning. A framework for on-line sequence learning is proposed, and different sequence learning models are analysed according to this framework. The network model is an associative memory with a separate store for the sequence context of a symbol. A sparse distributed memory is used to gain scalability. The context store combines the functionality of a neural layer with a shift register. The sensitivity of the machine to the sequence context is controllable, resulting in different characteristic behaviours. The model can store and predict on-line sequences of various types and lengths. Numerical simulations of the model have been carried out to determine its properties. Comment: Published in IJCNN 2005, Montreal, Canada.
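
    To make the context-store idea concrete, here is a minimal sketch: a plain Python dictionary stands in for the paper's sparse distributed memory, and a fixed-length deque plays the role of the shift register holding the sequence context. The class name, parameters, and symbol alphabet are illustrative, not taken from the paper.

```python
# Minimal sketch of on-line sequence prediction with a shift-register
# context store. A plain dictionary stands in for the paper's sparse
# distributed memory; only the context mechanism is illustrated.
from collections import deque

class SequenceMemory:
    def __init__(self, context_length=3):
        self.context = deque(maxlen=context_length)  # shift register
        self.store = {}                              # context -> next symbol

    def observe(self, symbol):
        """Learn on-line: associate the current context with the new
        symbol, then shift the symbol into the context register."""
        key = tuple(self.context)
        if key:
            self.store[key] = symbol
        self.context.append(symbol)

    def predict(self):
        """Predict the next symbol from the current context, if known."""
        return self.store.get(tuple(self.context))

mem = SequenceMemory(context_length=2)
for s in "abcabcab":
    mem.observe(s)
print(mem.predict())  # -> 'c', since the context ('a', 'b') was seen before
```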

    Neural Distributed Autoassociative Memories: A Survey

    Introduction. Neural network models of autoassociative, distributed memory allow storage and retrieval of many items (vectors), where the number of stored items can exceed the vector dimension (the number of neurons in the network). This opens the possibility of a sublinear-time search (in the number of stored items) for approximate nearest neighbors among high-dimensional vectors. The purpose of this paper is to review models of autoassociative, distributed memory that can be naturally implemented by neural networks (mainly with local learning rules and iterative dynamics based on information locally available to neurons). Scope. The survey focuses mainly on the networks of Hopfield, Willshaw and Potts, which have connections between pairs of neurons and operate on sparse binary vectors. We discuss not only autoassociative memory but also the generalization properties of these networks. We also consider neural networks with higher-order connections and networks with a bipartite graph structure for non-binary data with linear constraints. Conclusions. In conclusion, we discuss the relations to similarity search, the advantages and drawbacks of these techniques, and topics for further research. An interesting and still not completely resolved question is whether neural autoassociative memories can search for approximate nearest neighbors faster than other index structures for similarity search, in particular for very high-dimensional vectors. Comment: 31 pages.
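
    For readers unfamiliar with the networks the survey covers, a minimal Hopfield memory (Hebbian outer-product storage, iterative sign-threshold retrieval on bipolar vectors) can be sketched as follows; the Willshaw, Potts, and higher-order variants discussed in the survey follow the same store-then-iterate pattern but are not shown.

```python
# Minimal Hopfield autoassociative memory: Hebbian (outer-product)
# storage and iterative sign-function retrieval on bipolar vectors.
import numpy as np

def train(patterns):
    """Hebbian rule: W = sum of outer products, zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, steps=20):
    """Synchronous iterative update until a fixed point (or step limit)."""
    s = probe.copy()
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1          # break ties deterministically
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(5, 100))   # 5 memories, 100 neurons
W = train(patterns)
noisy = patterns[0].copy()
noisy[:10] *= -1                                # flip 10 of 100 bits
print(np.array_equal(recall(W, noisy), patterns[0]))  # usually True
```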

    Enhancing associative memory recall and storage capacity using confocal cavity QED

    Funding: Y.G. and B.M. acknowledge funding from the Stanford Q-FARM Graduate Student Fellowship and the NSF Graduate Research Fellowship, respectively. J.K. acknowledges support from the Leverhulme Trust (IAF-2014-025), and S.G. acknowledges funding from the James S. McDonnell and Simons Foundations and an NSF Career Award. We introduce a near-term experimental platform for realizing an associative memory. It can simultaneously store many memories by using spinful bosons coupled to a degenerate multimode optical cavity. The associative memory is realized by a confocal cavity QED neural network, with the modes serving as the synapses, connecting a network of superradiant atomic spin ensembles, which serve as the neurons. Memories are encoded in the connectivity matrix between the spins and can be accessed through the input and output of patterns of light. Each aspect of the scheme is based on recently demonstrated technology using a confocal cavity and Bose-condensed atoms. Our scheme has two conceptually novel elements. First, it introduces a new form of random spin system that interpolates between a ferromagnetic and a spin glass regime as a physical parameter is tuned: the positions of the ensembles within the cavity. Second, and more importantly, the spins relax via deterministic steepest-descent dynamics rather than Glauber dynamics. We show that this nonequilibrium quantum-optical scheme has significant advantages for associative memory over Glauber dynamics: these dynamics can enhance the network's ability to store and recall memories beyond that of the standard Hopfield model. Surprisingly, the cavity QED dynamics can retrieve memories even when the system is in the spin glass phase. The experimental platform thus provides a novel physical instantiation of associative memories and spin glasses, as well as an unusual form of relaxational dynamics that is conducive to memory recall even in regimes where it was thought to be impossible.
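
    The contrast between the two relaxation rules can be illustrated classically. The sketch below implements zero-temperature Glauber dynamics (flip a randomly chosen spin if that lowers the energy) alongside deterministic steepest descent (always flip the spin giving the largest energy drop) for a generic symmetric coupling matrix J; it is a classical caricature only and does not model the cavity-QED system.

```python
# Classical caricature of the two relaxation rules contrasted above.
# J is a generic symmetric coupling matrix with zero diagonal; none of
# the cavity-QED physics is modeled here.
import numpy as np

def delta_E(J, s):
    """Energy change from flipping each spin of s, for E = -s.J.s / 2."""
    return 2.0 * s * (J @ s)

def glauber_step(J, s, rng):
    i = rng.integers(len(s))          # pick one spin at random
    if delta_E(J, s)[i] < 0:          # zero-temperature accept rule
        s[i] = -s[i]
    return s

def steepest_descent_step(J, s):
    dE = delta_E(J, s)
    i = np.argmin(dE)                 # most energy-lowering flip
    if dE[i] < 0:
        s[i] = -s[i]
    return s

rng = np.random.default_rng(1)
J = rng.standard_normal((50, 50)); J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
s = rng.choice([-1, 1], size=50)
for _ in range(500):
    s = steepest_descent_step(J, s)   # settles into a local energy minimum
```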

    High Performance Associative Memories and Structured Weight Dilution

    The consequences of two techniques for symmetrically diluting the weights of the standard Hopfield associative memory architecture, trained using a non-Hebbian learning rule, are examined. This paper reports experimental investigations into the effect of dilution on factors such as pattern stability and attractor performance. It is concluded that these networks maintain a reasonable level of performance at fairly high dilution rates.
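
    As an illustration of the masking step involved, the sketch below applies symmetric random dilution to a Hopfield-style weight matrix, zeroing the same fraction of entries in W[i, j] and W[j, i]; the structured dilution schemes and the non-Hebbian training rule studied in the paper are not reproduced.

```python
# Sketch of symmetric random weight dilution: the same off-diagonal
# entries are zeroed in W[i, j] and W[j, i], so the diluted matrix
# stays symmetric.
import numpy as np

def dilute_symmetric(W, rate, rng):
    """Zero a fraction `rate` of the weight pairs, keeping W symmetric."""
    n = W.shape[0]
    iu, ju = np.triu_indices(n, k=1)           # upper-triangle index pairs
    keep = rng.random(len(iu)) >= rate         # which pairs survive dilution
    mask = np.zeros((n, n), dtype=bool)
    mask[iu, ju] = keep
    mask = mask | mask.T                       # mirror to the lower triangle
    return W * mask

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)); W = (W + W.T) / 2
Wd = dilute_symmetric(W, rate=0.5, rng=rng)
assert np.allclose(Wd, Wd.T)                   # symmetry preserved
```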

    An analog feedback associative memory

    A method for the storage of analog vectors, i.e., vectors whose components are real-valued, is developed for the Hopfield continuous-time network. An important requirement is that each memory vector be an asymptotically stable (i.e., attractive) equilibrium of the network. Some of the limitations imposed by the continuous Hopfield model on the set of vectors that can be stored are pointed out. These limitations can be relieved by choosing a network containing hidden units as well as visible ones. An architecture consisting of several hidden layers and a visible layer, connected in a circular fashion, is considered. It is proved that the two-layer case is guaranteed to store any number of given analog vectors, provided their number does not exceed one plus the number of neurons in the hidden layer. A learning algorithm that correctly adjusts the locations of the equilibria and guarantees their asymptotic stability is developed. Simulation results confirm the effectiveness of the approach.
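
    The continuous-time dynamics underlying this construction can be sketched with a simple Euler integration of the standard continuous Hopfield equation du/dt = -u + W g(u) + b with g = tanh; checking that a stored analog vector is an attractive equilibrium then amounts to perturbing it and verifying that the state flows back. The paper's visible/hidden circular architecture and its learning algorithm are not reproduced here.

```python
# Euler integration of the continuous Hopfield dynamics the paper
# builds on: du/dt = -u + W.g(u) + b with g = tanh. Whether an analog
# memory is stored reduces to whether it is an attractive equilibrium
# of this flow: perturb it and check that the output returns.
import numpy as np

def simulate(W, b, u0, dt=0.01, steps=20000):
    """Integrate the network state u forward and return the output g(u)."""
    u = u0.copy()
    for _ in range(steps):
        u = u + dt * (-u + W @ np.tanh(u) + b)
    return np.tanh(u)
```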

    Adiabatic Quantum Optimization for Associative Memory Recall

    Hopfield networks are a variant of associative memory that recall information stored in the couplings of an Ising model. Stored memories are fixed points of the network dynamics that correspond to energetic minima of the spin state. We formulate the recall of memories stored in a Hopfield network as energy minimization by adiabatic quantum optimization (AQO). Numerical simulations of the quantum dynamics allow us to quantify the AQO recall accuracy with respect to the number of stored memories and the noise in the input key. We also investigate AQO performance with respect to how memories are stored in the Ising model under different learning rules. Our results indicate that AQO performance varies strongly with the learning rule, due to the resulting changes in the energy landscape. Consequently, learning rules offer indirect methods for investigating changes to the computational complexity of the recall task and the computational efficiency of AQO. Comment: 22 pages, 11 figures. Updated for clarity and figures; to appear in Frontiers of Physics.
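
    The recall formulation is easy to state classically: Hebbian couplings encode the memories, and a local field proportional to the noisy input key biases the Ising ground state toward the nearest memory. In the toy sketch below, a brute-force ground-state search stands in for the adiabatic quantum optimizer, and the bias strength gamma is an illustrative parameter, not a value from the paper.

```python
# Hopfield recall cast as Ising energy minimization: Hebbian couplings
# store the memories, and a field gamma * key biases the ground state
# toward the memory nearest the noisy input key. Exhaustive search
# stands in for the quantum optimizer (feasible only for tiny n).
import itertools
import numpy as np

def ising_recall(patterns, key, gamma=0.5):
    n = patterns.shape[1]
    J = patterns.T @ patterns / n
    np.fill_diagonal(J, 0.0)
    best, best_E = None, np.inf
    for bits in itertools.product([-1, 1], repeat=n):
        s = np.array(bits)
        E = -0.5 * s @ J @ s - gamma * (key @ s)
        if E < best_E:
            best, best_E = s, E
    return best

patterns = np.array([[1, -1, 1, -1, 1, 1],
                     [-1, -1, 1, 1, -1, 1]])
key = patterns[0].copy()
key[0] = -key[0]                         # corrupt one bit of the key
print(ising_recall(patterns, key))       # ground state recovers patterns[0]
```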