
    Maximum Likelihood Associative Memories

    Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content (a sort of error/erasure-resilience property). They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. First, we derive minimum residual error rates when the stored data comes from a uniform binary source. Second, we determine the minimum amount of memory required to store the same data. Finally, we bound the computational complexity of message retrieval. We then compare these bounds with two existing associative memory architectures: the celebrated Hopfield neural networks and a neural network architecture introduced more recently by Gripon and Berrou.
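
    To make the comparison baseline concrete, here is a minimal sketch of a Hopfield-style associative memory that stores messages from a uniform binary source and retrieves one from a partially erased cue. All sizes and names are illustrative choices for this sketch, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 5                                   # neurons, stored messages
patterns = rng.choice([-1, 1], size=(m, n))    # messages from a uniform binary source

# Hebbian storage: sum of outer products, diagonal zeroed.
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)

def recall(probe, steps=20):
    """Iterate the sign-threshold dynamics until the state settles on an attractor."""
    state = probe.astype(float)
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1.0
    return state

# Retrieve a message from a partially erased cue: half the entries zeroed.
probe = patterns[0].copy()
probe[: n // 2] = 0
print(np.array_equal(recall(probe), patterns[0]))  # True at light loading (m << n)
```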

    An associative memory for the on-line recognition and prediction of temporal sequences

    This paper presents the design of an associative memory with feedback that is capable of on-line temporal sequence learning. A framework for on-line sequence learning is proposed, and different sequence learning models are analysed within this framework. The network model is an associative memory with a separate store for the sequence context of a symbol. A sparse distributed memory is used to gain scalability. The context store combines the functionality of a neural layer with a shift register. The sensitivity of the machine to the sequence context is controllable, resulting in different characteristic behaviours. The model can store and predict on-line sequences of various types and lengths. Numerical simulations on the model have been carried out to determine its properties.
    Comment: Published in IJCNN 2005, Montreal, Canada.
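
    As a rough illustration of the architecture described above, the toy sketch below keys an associative store by a shift-register context of recent symbols and uses it for on-line next-symbol prediction. A plain dictionary stands in for the paper's sparse distributed memory, and the class and parameter names are invented for this example.

```python
from collections import Counter, defaultdict, deque

class SequencePredictor:
    def __init__(self, context_len=3):
        self.context = deque(maxlen=context_len)   # the shift register
        self.store = defaultdict(Counter)          # context -> next-symbol counts

    def observe(self, symbol):
        """On-line learning: associate the current context with the new symbol."""
        if len(self.context) == self.context.maxlen:
            self.store[tuple(self.context)][symbol] += 1
        self.context.append(symbol)                # shift the register

    def predict(self):
        """Predict the most frequent continuation of the current context."""
        counts = self.store.get(tuple(self.context))
        return counts.most_common(1)[0][0] if counts else None

p = SequencePredictor()
for s in "abcabcabc":
    p.observe(s)
print(p.predict())   # 'a' -- the learned sequence continues periodically
```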

    Sparse neural networks with large learning diversity

    Coded recurrent neural networks with three levels of sparsity are introduced. The first level is related to the size of the messages, which is much smaller than the number of available neurons. The second is provided by a particular coding rule that acts as a local constraint on the neural activity. The third is a characteristic of the low final connection density of the network after the learning phase. Though the proposed network is very simple, being based on binary neurons and binary connections, it is able to learn a large number of messages and recall them even in the presence of strong erasures. The performance of the network is assessed both as a classifier and as an associative memory.
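
    The coding rule described above can be illustrated with a small sketch in which a message activates one neuron per cluster, learning adds the binary connections of that clique, and retrieval runs a winner-take-all in each erased cluster. The cluster count, cluster size, and function names are arbitrary choices for illustration, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
c, l = 4, 16                                  # clusters, binary neurons per cluster
W = np.zeros((c * l, c * l), dtype=bool)      # binary connections

def store(message):
    """message[i] is the active neuron in cluster i; learning connects the clique."""
    units = [i * l + s for i, s in enumerate(message)]
    for u in units:
        for v in units:
            if u != v:
                W[u, v] = True

def retrieve(partial):
    """partial[i] is the known neuron in cluster i, or None if erased."""
    known = [i * l + s for i, s in enumerate(partial) if s is not None]
    scores = W[known].sum(axis=0)             # votes received from the known units
    return [partial[i] if partial[i] is not None
            else int(np.argmax(scores[i * l:(i + 1) * l]))
            for i in range(c)]

messages = [list(rng.integers(0, l, size=c)) for _ in range(5)]
for msg in messages:
    store(msg)
cue = messages[0][:2] + [None, None]          # erase half of the clusters
print(retrieve(cue) == messages[0])           # True whenever stored cliques don't collide
```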

    You can go your own way: effectiveness of participant-driven versus experimenter-driven processing strategies in memory training and transfer

    Cognitive training programs that instruct specific strategies frequently show limited transfer. Open-ended approaches can achieve greater transfer, but may fail to benefit many older adults due to age-related deficits in self-initiated processing. We examined whether a compromise that encourages effort at encoding, without an experimenter-prescribed strategy, might yield better results. Older adults completed memory training under conditions that either (1) mandated a specific strategy to increase deep, associative encoding, (2) attempted to suppress such encoding by mandating rote rehearsal, or (3) encouraged time and effort toward encoding but allowed for strategy choice. The experimenter-enforced associative encoding strategy succeeded in creating integrated representations of the studied items, but training-task progress was related to pre-existing ability. Independent of condition assignment, self-reported deep encoding was associated with positive training and transfer effects, suggesting that the most beneficial outcomes occur when environmental support guiding effort is provided but participants generate their own strategies.

    Associative memory on a small-world neural network

    We study a model of associative memory based on a neural network with small-world structure. The efficacy with which the network retrieves one of the stored patterns exhibits a phase transition at a finite value of the disorder. The more ordered networks are unable to recover the patterns and are always attracted to mixture states. Moreover, for a range of the number of stored patterns, the efficacy has a maximum at an intermediate value of the disorder. We also give a statistical characterization of the attractors for all values of the disorder of the network.
    Comment: 5 pages, 4 figures (eps).
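
    A minimal numerical sketch of this setup: Hebbian weights restricted to a Watts-Strogatz-style small-world graph, with the rewiring probability playing the role of the disorder parameter. The graph construction, sizes, and noise level are simplified assumptions for illustration, not the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(2)

def small_world_adjacency(n, k, p):
    """Ring of n nodes, each tied to its k nearest neighbours on one side,
    with every edge rewired to a random target with probability p."""
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(1, k + 1):
            t = int(rng.integers(0, n)) if rng.random() < p else (i + j) % n
            if t != i:
                A[i, t] = A[t, i] = True
    return A

n, k, p = 200, 4, 0.3                          # p is the disorder parameter
A = small_world_adjacency(n, k, p)
patterns = rng.choice([-1, 1], size=(3, n))
W = (patterns.T @ patterns) * A / (2 * k)      # Hebb rule restricted to the graph

state = patterns[0] * rng.choice([1, -1], size=n, p=[0.9, 0.1])  # noisy cue
for _ in range(30):
    state = np.sign(W @ state + 1e-9)          # tiny offset avoids sign(0)
print((state == patterns[0]).mean())           # overlap with the cued pattern
```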

    Analysis of Bidirectional Associative Memory using SCSNA and Statistical Neurodynamics

    Bidirectional associative memory (BAM) is a kind of artificial neural network used to memorize and retrieve heterogeneous pattern pairs. Many efforts have been made to improve BAM from the viewpoint of computer applications, but few theoretical studies have been done. We investigated the theoretical characteristics of BAM within a framework of statistical-mechanical analysis. To investigate the equilibrium state of BAM, we applied self-consistent signal-to-noise analysis (SCSNA) and obtained macroscopic parameter equations and the relative capacity. Moreover, to investigate not only the equilibrium state but also the retrieval process leading to it, we applied statistical neurodynamics to the update rule of BAM and obtained evolution equations for the macroscopic parameters. These evolution equations are consistent with the results of SCSNA in the equilibrium state.
    Comment: 13 pages, 4 figures.
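
    For readers unfamiliar with the model, the sketch below implements the standard BAM update rule that the analysis above studies: a single Hebbian correlation matrix drives alternating forward and backward layer updates until a stored pattern pair is recovered. Layer sizes, load, and names are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n_x, n_y, m = 32, 24, 3                        # layer sizes, stored pattern pairs
X = rng.choice([-1, 1], size=(m, n_x))
Y = rng.choice([-1, 1], size=(m, n_y))

W = X.T @ Y                                    # Hebbian correlation matrix

def sgn(v):
    s = np.sign(v)
    s[s == 0] = 1
    return s

def recall(x, steps=10):
    """Alternate x -> y -> x threshold updates until the pair stabilises."""
    for _ in range(steps):
        y = sgn(x @ W)                         # forward pass to the y layer
        x = sgn(y @ W.T)                       # backward pass to the x layer
    return x, y

noisy = X[0] * rng.choice([1, -1], size=n_x, p=[0.9, 0.1])  # corrupted cue
x, y = recall(noisy)
print(np.array_equal(x, X[0]), np.array_equal(y, Y[0]))     # both True at low load
```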

    Quantum Pattern Retrieval by Qubit Networks with Hebb Interactions

    Qubit networks with long-range interactions inspired by the Hebb rule can be used as quantum associative memories. Starting from a uniform superposition, the unitary evolution generated by these interactions drives the network through a quantum phase transition at a critical computation time, after which ferromagnetic order guarantees that a measurement retrieves the stored memory. The maximum memory capacity p of these qubit networks is reached at a memory density p/n = 1.
    Comment: To appear in Physical Review Letters.
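
    A small numerical toy can illustrate this kind of retrieval dynamics for a handful of qubits: Hebb-rule couplings define a spin Hamiltonian, the network starts in the uniform superposition, and we track the probability of measuring the stored pattern (or its global flip, which is degenerate) as the state evolves. The specific Hamiltonian used here, Hebb ZZ couplings plus a transverse field, is a common illustrative choice and not necessarily the paper's exact model; the qubit count and pattern are likewise assumptions.

```python
import numpy as np
from scipy.linalg import expm

n = 6
pattern = np.array([1, -1, 1, 1, -1, 1])          # one stored memory (spins +/-1)
J = np.outer(pattern, pattern) / n                # Hebb-rule couplings
np.fill_diagonal(J, 0)

# Pauli matrices and a helper to place one of them on a given qubit.
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def op(single, site):
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, single if k == site else I2)
    return out

# Hamiltonian: Hebb ZZ couplings plus a transverse field (illustrative choice).
H = -sum(J[i, j] * op(Z, i) @ op(Z, j) for i in range(n) for j in range(i + 1, n))
H -= 0.5 * sum(op(X, i) for i in range(n))

psi0 = np.ones(2 ** n) / np.sqrt(2 ** n)          # uniform superposition
target = int("".join("0" if s == 1 else "1" for s in pattern), 2)
flipped = 2 ** n - 1 - target                     # global spin flip is degenerate

for t in (0.0, 1.0, 2.0, 4.0):
    amp = expm(-1j * H * t) @ psi0
    p = abs(amp[target]) ** 2 + abs(amp[flipped]) ** 2
    print(f"t = {t:.1f}   P(stored pattern or its flip) = {p:.3f}")
```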