
    A Morphological Associative Memory Employing A Stored Pattern Independent Kernel Image and Its Hardware Model

    An associative memory provides a convenient way to retrieve and restore patterns, which plays an important role in handling data corrupted by noise. As an effective associative memory, we focus on the morphological associative memory (MAM) proposed by Ritter. The model is superior to ordinary associative memory models in terms of computational cost, memory capacity, and perfect-recall rate. In general, however, kernel design becomes difficult as the number of stored patterns increases, because the kernel uses a part of each stored pattern. In this paper, we propose a kernel design method for the MAM that is independent of the stored patterns, and we implement the MAM with the proposed kernel as standard digital hardware with a parallel architecture for acceleration. We confirm the validity of the proposed kernel design method through auto- and hetero-association experiments and investigate the efficiency of the hardware acceleration. High-speed operation (more than 150 times faster than software execution) is achieved in the custom hardware. The proposed model works as an intelligent pre-processor for Brain-Inspired Systems (Brain-IS) operating in the real world.
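    For readers unfamiliar with the model, a minimal sketch of the standard Ritter-Sussner morphological associative memory follows (in Python). It shows only the baseline min/max memories and the (max,+)/(min,+) recall products, not the stored-pattern-independent kernel or the hardware architecture proposed in the paper; all names and the toy data are illustrative.

        import numpy as np

        def build_memories(X, Y):
            """Build the two morphological memories of Ritter and Sussner.
            X: (n, p) array whose columns x^k are input patterns.
            Y: (m, p) array whose columns y^k are the associated outputs."""
            D = Y[:, None, :] - X[None, :, :]   # outer differences y^k_i - x^k_j, shape (m, n, p)
            W = D.min(axis=2)                   # min over patterns: tolerant to erosive noise
            M = D.max(axis=2)                   # max over patterns: tolerant to dilative noise
            return W, M

        def recall_max(W, x):
            """(max,+) product: y_i = max_j (W_ij + x_j)."""
            return (W + x[None, :]).max(axis=1)

        def recall_min(M, x):
            """(min,+) product: y_i = min_j (M_ij + x_j)."""
            return (M + x[None, :]).min(axis=1)

        # Auto-association demo on random binary patterns (toy data)
        rng = np.random.default_rng(0)
        X = rng.integers(0, 2, size=(16, 3)).astype(float)   # 3 patterns of 16 bits
        W, M = build_memories(X, X)
        noisy = X[:, 0].copy()
        noisy[noisy.argmax()] = 0.0                           # a single erosive (1 -> 0) flip
        print(recall_max(W, noisy), X[:, 0])                  # W typically restores the eroded bit

    Because both memories involve only elementwise additions and min/max comparisons, the recall step maps naturally onto the kind of parallel digital hardware the paper targets.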

    Neural Networks retrieving Boolean patterns in a sea of Gaussian ones

    Restricted Boltzmann Machines are key tools in Machine Learning and are described by the energy function of bipartite spin-glasses. From a statistical-mechanics perspective, they share the same Gibbs measure as Hopfield networks for associative memory; in this equivalence, the weights of the former play the role of the patterns of the latter. Since Boltzmann machines usually require real-valued weights in order to be trained with gradient-descent-like methods, while Hopfield networks typically store binary patterns in order to retrieve them, the investigation of a mixed Hebbian network, equipped with both real (e.g., Gaussian) and discrete (e.g., Boolean) patterns, arises naturally. We prove that, in the challenging regime of a high storage of real patterns, where their retrieval is forbidden, an extra load of Boolean patterns can still be retrieved, as long as the ratio between the overall load and the network size does not exceed a critical threshold, which turns out to be the same as in the standard Amit-Gutfreund-Sompolinsky (AGS) theory. Assuming replica symmetry, we study the case of a low load of Boolean patterns by combining stochastic stability and Hamilton-Jacobi interpolation techniques. The result can be extended to the high-load regime by a non-rigorous but standard replica computation argument.
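    A minimal numerical sketch of the mixed network discussed above (in Python): a Hopfield network whose Hebbian couplings sum a few Boolean patterns and a larger load of Gaussian ones, with zero-temperature dynamics used to check that a Boolean pattern is still retrieved. Sizes, loads, and the coupling normalization are illustrative choices, not the paper's.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 400            # neurons
        P_bool = 5         # Boolean patterns (low load)
        P_gauss = 40       # Gaussian patterns acting as quenched slow noise

        xi = rng.choice([-1.0, 1.0], size=(P_bool, N))   # Boolean patterns
        phi = rng.standard_normal((P_gauss, N))          # Gaussian patterns

        # Mixed Hebbian couplings, self-interactions removed
        J = (xi.T @ xi + phi.T @ phi) / N
        np.fill_diagonal(J, 0.0)

        def retrieve(sigma, sweeps=20):
            """Zero-temperature asynchronous dynamics: sigma_i <- sign(sum_j J_ij sigma_j)."""
            sigma = sigma.copy()
            for _ in range(sweeps):
                for i in rng.permutation(N):
                    sigma[i] = 1.0 if J[i] @ sigma >= 0 else -1.0
            return sigma

        # Start from a corrupted copy of the first Boolean pattern (~15% of spins flipped)
        start = xi[0] * np.where(rng.random(N) < 0.15, -1.0, 1.0)
        final = retrieve(start)
        print("overlap with xi^1:", final @ xi[0] / N)    # close to 1 indicates retrieval

    In this toy setting the total load (P_bool + P_gauss) / N is kept below the AGS-like threshold quoted in the abstract, so the Boolean pattern remains a stable attractor despite the extra Gaussian load.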

    Free energies of Boltzmann Machines: self-averaging, annealed and replica symmetric approximations in the thermodynamic limit

    Restricted Boltzmann machines (RBMs) constitute one of the main models for machine statistical inference and are widely employed in Artificial Intelligence as powerful tools for (deep) learning. However, in contrast with countless remarkable practical successes, their mathematical formalization has been largely elusive: from a statistical-mechanics perspective these systems display the same (random) Gibbs measure as bipartite spin-glasses, whose rigorous treatment is notoriously difficult. In this work, beyond providing a brief review of RBMs from both the learning and the retrieval perspectives, we aim to contribute to their analytical investigation by considering two distinct realizations of their weights (i.e., Boolean and Gaussian) and studying the properties of the related free energies. More precisely, focusing on an RBM characterized by digital couplings, we first extend the Pastur-Shcherbina-Tirozzi method (originally developed for the Hopfield model) to prove the self-averaging property of the free energy around its quenched expectation in the infinite-volume limit, and then explicitly calculate its simplest approximation, namely its annealed bound. Next, focusing on an RBM characterized by analog weights, we extend Guerra's interpolating scheme to obtain control of the quenched free energy under the assumption of replica symmetry: we obtain self-consistency equations for the order parameters (in full agreement with the existing literature) as well as the critical line for ergodicity breaking, which turns out to be the same as that obtained in AGS theory. As we discuss, this analogy stems from slow-noise universality. Finally, glancing beyond replica symmetry, we analyze the fluctuations of the overlaps to estimate the (slow) noise affecting the retrieval of the signal, and by a stability analysis we recover the Aizenman-Contucci identities typical of glassy systems.
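    The central objects can be written schematically as follows (the notation, the 1/sqrt(N) scaling, and the measure over the hidden units are conventions assumed here, not necessarily the paper's): the bipartite Hamiltonian, the quenched free energy, and the annealed bound obtained from Jensen's inequality.

        % Bipartite (RBM) Hamiltonian: N visible Ising spins \sigma_i, K hidden units z_\mu,
        % coupled by weights \xi_i^\mu that are either Boolean or Gaussian:
        H_{N,K}(\sigma, z \mid \xi) = -\frac{1}{\sqrt{N}} \sum_{i=1}^{N} \sum_{\mu=1}^{K} \xi_i^{\mu} \sigma_i z_\mu

        % Quenched free-energy density (\mathbb{E} averages over the weight realization):
        f_{N}(\beta) = -\frac{1}{\beta N}\, \mathbb{E} \ln Z_{N,K}(\beta \mid \xi),
        \qquad Z_{N,K}(\beta \mid \xi) = \sum_{\sigma} \int d\mu(z)\, e^{-\beta H_{N,K}(\sigma, z \mid \xi)}

        % Annealed approximation: by Jensen's inequality \mathbb{E}\ln Z \le \ln \mathbb{E} Z,
        % so the annealed expression bounds the quenched free energy from below:
        f_{N}^{\mathrm{ann}}(\beta) = -\frac{1}{\beta N} \ln \mathbb{E}\, Z_{N,K}(\beta \mid \xi) \;\le\; f_{N}(\beta)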

    Vocabulary Acquisition to Long-Term Memory through Word Association Strategy

    The word association test technique and the question of memory storage have had an intrinsic and central relationship over the last decade. This study examines the relationship between the two and their effect on vocabulary learning. The reviewed data consist of word association methodology in relation to the mental lexicon and its applications. All of these support the use of word association tests (WATs) with language-skills subjects, drawing on theoretical material from chain theory, network theory, concept maps, etc. The changing pattern of associations at different times, and the evaluation with WATs of how many lexical items are stored in long-term memory, are shown. Keywords: word association theory, lexical network theory, chain theory, word association in the mental lexicon

    Spoonerisms: An Analysis of Language Processing in Light of Neurobiology

    Spoonerisms are described as the category of speech errors involving jumbled-up words. The author examines language, the brain, and the correlation between spoonerisms and the neural structures involved in language processing.

    The mechanisms for pattern completion and pattern separation in the hippocampus

    The mechanisms for pattern completion and pattern separation are described in the context of a theory of hippocampal function in which the hippocampal CA3 system operates as a single attractor or autoassociation network to enable rapid, one-trial associations between any spatial location (a place in rodents, or a spatial view in primates) and an object or reward, and to provide for completion of the whole memory during recall from any part. The factors important for pattern completion in CA3, together with the storage of a large number of independent memories in CA3, include a sparse distributed representation, which is enhanced by the graded firing rates of CA3 neurons; representations that are independent due to the randomizing effect of the mossy fibers; heterosynaptic long-term depression as well as long-term potentiation in the recurrent collateral synapses; and diluted connectivity to minimize the number of multiple synapses between any pair of CA3 neurons, which would otherwise distort the basins of attraction. Recall of information from CA3 is implemented by the entorhinal cortex perforant path synapses to CA3 cells, which, acting as a pattern associator, allow some pattern generalization. Pattern separation is performed in the dentate granule cells, using competitive learning to convert grid-like entorhinal cortex firing into place-like fields. Pattern separation in CA3, which is important for completion of any one of the stored patterns from a fragment, is provided for by the randomizing effect of the mossy fiber synapses, to which neurogenesis may contribute; by the large number of dentate granule cells, each with a sparse representation; and by the sparse independent representations in CA3. Recall to the neocortex is achieved by a reverse hierarchical series of pattern association networks implemented by the hippocampo-cortical backprojections, each of which performs some pattern generalization, to retrieve a complete pattern of cortical firing in higher-order cortical areas.
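    To make the pattern-completion idea concrete, here is a minimal autoassociative sketch (in Python): sparse binary patterns stored on a recurrent weight matrix with a covariance Hebbian rule, and a k-winners-take-all update that completes a memory from a partial cue. It deliberately omits the graded firing rates, mossy-fibre randomization, diluted connectivity, and hippocampo-cortical backprojections of the full theory; the sizes and sparseness value are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)
        N = 500          # recurrently connected units (a stand-in for CA3)
        P = 10           # stored memories
        a = 0.1          # sparseness: fraction of units active in each pattern

        # Sparse binary patterns (1 = active, 0 = silent)
        patterns = (rng.random((P, N)) < a).astype(float)

        # Covariance Hebbian rule on the recurrent collaterals, no self-connections
        J = sum(np.outer(p - a, p - a) for p in patterns) / N
        np.fill_diagonal(J, 0.0)

        def complete(cue, steps=15, k=int(a * N)):
            """Iterate the attractor network, keeping the k most excited units active
            (a crude stand-in for inhibitory control of the sparseness)."""
            s = cue.copy()
            for _ in range(steps):
                h = J @ s
                s = np.zeros(N)
                s[np.argsort(h)[-k:]] = 1.0   # k-winners-take-all thresholding
            return s

        # Pattern completion: cue with only half of the active units of memory 0
        target = patterns[0]
        cue = target.copy()
        active = np.flatnonzero(target)
        cue[active[len(active) // 2:]] = 0.0
        recalled = complete(cue)
        print("fraction of memory recovered:", (recalled @ target) / target.sum())

    Running the same sketch with strongly overlapping patterns illustrates why a separate pattern-separation stage, attributed above to the dentate gyrus, is needed before storage.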