A Morphological Associative Memory Employing A Stored Pattern Independent Kernel Image and Its Hardware Model
An associative memory provides a convenient means of pattern retrieval and restoration, which plays an important role in handling data distorted by noise. As an effective associative memory, we focus on the morphological associative memory (MAM) proposed by Ritter. The model is superior to ordinary associative memory models in terms of computational cost, memory capacity, and perfect recall rate. However, kernel design generally becomes difficult as the number of stored patterns increases, because the kernel uses a part of each stored pattern. In this paper, we propose a stored-pattern-independent kernel design method for the MAM and implement the MAM employing the proposed kernel design in standard digital logic with a parallel architecture for acceleration. We confirm the validity of the proposed kernel design method through auto- and hetero-association experiments and investigate the efficiency of the hardware acceleration. A high-speed operation (more than 150 times faster than software execution) is achieved in the custom hardware. The proposed model works as an intelligent pre-processor for Brain-Inspired Systems (Brain-IS) operating in the real world.
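The recall operation underlying a MAM can be illustrated with a minimal sketch of the classical morphological autoassociative memory of Ritter et al.: storage uses a min of pairwise differences and recall uses a max-plus matrix product. This is a generic illustration of the model family, not the paper's kernel design; all names and the tiny 4-bit patterns are illustrative.

```python
import numpy as np

def store(X):
    """Morphological autoassociative storage: W_ij = min over stored
    patterns of (x_i - x_j). X has one pattern per column, shape (n, p)."""
    n, p = X.shape
    W = np.full((n, n), np.inf)
    for k in range(p):
        col = X[:, k][:, None]
        W = np.minimum(W, col - col.T)
    return W

def recall(W, x):
    """Max-plus product: y_i = max_j (W_ij + x_j)."""
    return np.max(W + x[None, :], axis=1)

# Two toy 4-bit patterns, one per column.
X = np.array([[1, 0],
              [0, 1],
              [1, 1],
              [0, 0]], dtype=float)
W = store(X)

y0 = recall(W, X[:, 0])      # perfect recall of any stored pattern is guaranteed

x_eroded = X[:, 0].copy()
x_eroded[2] = 0.0            # erosive noise: a 1 pulled down to 0
y_noisy = recall(W, x_eroded)  # this W_XX memory is robust to erosive noise
```

The max-plus recall explains the low computational cost noted in the abstract: retrieval needs only additions and comparisons, no multiplications, which also makes the model attractive for digital hardware.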
Neural Networks retrieving Boolean patterns in a sea of Gaussian ones
Restricted Boltzmann Machines are key tools in Machine Learning and are described by the energy function of bipartite spin glasses. From a statistical-mechanics perspective, they share the same Gibbs measure as Hopfield networks for associative memory. In this equivalence, weights in the former play the role of patterns in the latter. As Boltzmann machines usually require real-valued weights to be trained with gradient-descent-like methods, while Hopfield networks typically store binary patterns to enable retrieval, the investigation of a mixed Hebbian network, equipped with both real (e.g., Gaussian) and discrete (e.g., Boolean) patterns, naturally arises. We prove that, in the challenging regime of a high load of real patterns, where retrieval is forbidden, an extra load of Boolean patterns can still be retrieved, as long as the ratio between the overall load and the network size does not exceed a critical threshold, which turns out to be the same as in the standard Amit-Gutfreund-Sompolinsky (AGS) theory. Assuming replica symmetry, we study the case of a low load of Boolean patterns by combining the stochastic-stability and Hamilton-Jacobi interpolation techniques. The result can be extended to the high-load case by a non-rigorous but standard replica computation.
Comment: 16 pages, 1 figure
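The Hopfield side of the equivalence above can be sketched in a few lines: Hebbian storage of Boolean (±1) patterns and zero-temperature retrieval from a noisy cue, at a load well below the AGS threshold (alpha_c ≈ 0.138). The network size, load, and noise level here are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 400, 5                        # network size, number of stored patterns (low load)
xi = rng.choice([-1.0, 1.0], size=(P, N))

J = (xi.T @ xi) / N                  # Hebbian couplings J_ij = (1/N) sum_mu xi_i xi_j
np.fill_diagonal(J, 0.0)             # no self-couplings

s = xi[0].copy()
flip = rng.choice(N, size=40, replace=False)
s[flip] *= -1                        # corrupt 10% of the cue

for _ in range(20):                  # zero-temperature synchronous dynamics
    s = np.where(J @ s >= 0, 1.0, -1.0)

overlap = float(s @ xi[0]) / N       # close to 1.0 when retrieval succeeds
```

Below the critical load the corrupted cue falls into the basin of attraction of the stored pattern, so the overlap returns to (near) unity; above it, retrieval breaks down, which is the regime the paper's mixed Gaussian/Boolean analysis addresses.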
Free energies of Boltzmann Machines: self-averaging, annealed and replica symmetric approximations in the thermodynamic limit
Restricted Boltzmann machines (RBMs) constitute one of the main models for statistical inference in machine learning, and they are widely employed in Artificial Intelligence as powerful tools for (deep) learning. However, in contrast with countless remarkable practical successes, their mathematical formalization has been largely elusive: from a statistical-mechanics perspective these systems display the same (random) Gibbs measure as bipartite spin glasses, whose rigorous treatment is notoriously difficult. In this work, beyond providing a brief review of RBMs from both the learning and the retrieval perspectives, we aim to contribute to their analytical investigation by considering two distinct realizations of their weights (i.e., Boolean and Gaussian) and studying the properties of the related free energies. More precisely, focusing on an RBM characterized by digital (Boolean) couplings, we first extend the Pastur-Shcherbina-Tirozzi method (originally developed for the Hopfield model) to prove the self-averaging property of the free energy, around its quenched expectation, in the infinite-volume limit; then we explicitly calculate its simplest approximation, namely its annealed bound. Next, focusing on an RBM characterized by analog (real-valued) weights, we extend Guerra's interpolation scheme to obtain control of the quenched free energy under the assumption of replica symmetry: we obtain self-consistency equations for the order parameters (in full agreement with the existing literature) as well as the critical line for ergodicity breaking, which turns out to be the same as that obtained in AGS theory. As we discuss, this analogy stems from slow-noise universality. Finally, glancing beyond replica symmetry, we analyze the fluctuations of the overlaps to estimate the (slow) noise affecting retrieval of the signal, and via a stability analysis we recover the Aizenman-Contucci identities typical of glassy systems.
Comment: 21 pages, 1 figure
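The bipartite structure that makes the RBM Gibbs measure tractable also makes it easy to sample: conditioned on the visible layer, all hidden spins are independent, and vice versa. A minimal block-Gibbs sampler for a ±1-spin RBM with Hamiltonian H(v, h) = -Σ_ij v_i W_ij h_j is sketched below; sizes, temperature, and the Gaussian weight scale are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
Nv, Nh, beta = 60, 20, 0.5                     # layer sizes and inverse temperature
W = rng.normal(scale=1.0 / np.sqrt(Nv), size=(Nv, Nh))  # Gaussian couplings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

v = rng.choice([-1.0, 1.0], size=Nv)
for _ in range(200):
    # Bipartite graph: hidden spins are conditionally independent given v.
    # For a +/-1 spin, P(h_j = +1 | v) = sigmoid(2 * beta * local field).
    p_h = sigmoid(2.0 * beta * (v @ W))
    h = np.where(rng.random(Nh) < p_h, 1.0, -1.0)
    p_v = sigmoid(2.0 * beta * (W @ h))
    v = np.where(rng.random(Nv) < p_v, 1.0, -1.0)
```

Alternating these two exact conditional updates leaves the Gibbs measure invariant; the quenched free energy studied in the paper is the log of the corresponding partition function, averaged over the random couplings W.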
Vocabulary Acquisition to Long-Term Memory through Word Association Strategy
Word-association test techniques and memory storage have had an intrinsic and central relationship over the last decade. What follows is a study of the relationship between the two and of their effect upon vocabulary learning. The reviewed data consist of word-association methodology in relation to the mental lexicon and its applications. These findings support the use of word association tests with language-skills subjects, drawing on theoretical material from chain theory, network theory, concept maps, etc. We show how the pattern of associations changes over time and evaluate, using word association tests (WATs), the degree to which the maximum number of lexical items is stored in long-term memory. Keywords: word association theory, lexical networks theory, chain theory, word association in the mental lexicon
Spoonerisms: An Analysis of Language Processing in Light of Neurobiology
Spoonerisms are described as the category of speech errors involving jumbled-up words. The author examines language, the brain, and the correlation between spoonerisms and the neural structures involved in language processing.
Dynamic context discrimination: psychological evidence for the Sandia Cognitive Framework.
Human behavior is a function of an iterative interaction between the stimulus environment and past experience. It is not simply a matter of the current stimulus environment activating the appropriate experience or rule from memory (e.g., if it is dark and I hear a strange noise outside, then I turn on the outside lights and investigate). Rather, it is a dynamic process that takes into account not only things one would generally do in a given situation, but things that have recently become known (e.g., there have recently been coyotes seen in the area and one is known to be rabid), as well as other immediate environmental characteristics (e.g., it is snowing outside, I know my dog is outside, I know the police are already outside, etc.). All of these factors combine to inform me of the most appropriate behavior for the situation. If it were the case that humans had a rule for every possible contingency, the amount of storage that would be required to enable us to fluidly deal with most situations we encounter would rapidly become biologically untenable. We can all deal with contingencies like the one above with fairly little effort, but if it isn't based on rules, what is it based on? The assertion of the Cognitive Systems program at Sandia for the past 5 years is that at the heart of this ability to effectively navigate the world is an ability to discriminate between different contexts (i.e., Dynamic Context Discrimination, or DCD). While this assertion in and of itself might not seem earthshaking, it is compelling that this ability and its components show up in a wide variety of paradigms across different subdisciplines in psychology. We begin by outlining, at a high functional level, the basic ideas of DCD. We then provide evidence from several different literatures and paradigms that support our assertion that DCD is a core aspect of cognitive functioning. 
Finally, we discuss DCD and the computational model that we have developed as an instantiation of DCD in more detail. Before commencing with our overview of DCD, we should note that DCD is not necessarily a theory in the classic sense. Rather, it is a description of cognitive functioning that seeks to unify highly similar findings across a wide variety of literatures. Further, we believe that such convergence warrants a central place in efforts to computationally emulate human cognition. That is, DCD is a general principle of cognition. It is also important to note that while we are drawing parallels across many literatures, these are functional parallels and are not necessarily structural ones. That is, we are not saying that the same neural pathways are involved in these phenomena. We are only saying that the different neural pathways that are responsible for the appearance of these various phenomena follow the same functional rules - the mechanisms are the same even if the physical parts are distinct. Furthermore, DCD is not a causal mechanism - it is an emergent property of the way the brain is constructed. DCD is the result of neurophysiology (cf. John, 2002, 2003). Finally, it is important to note that we are not proposing a generic learning mechanism such that one biological algorithm can account for all situation interpretation. Rather, we are pointing out that there are strikingly similar empirical results across a wide variety of disciplines that can be understood, in part, by similar cognitive processes. It is entirely possible, even assumed in some cases (e.g., primary language acquisition), that these more generic cognitive processes are complemented and constrained by various limits which may or may not be biological in nature (cf. Bates & Elman, 1996; Elman, in press).
The mechanisms for pattern completion and pattern separation in the hippocampus
The mechanisms for pattern completion and pattern separation are described in the context of a theory of hippocampal function in which the hippocampal CA3 system operates as a single attractor or autoassociation network to enable rapid, one-trial, associations between any spatial location (place in rodents, or spatial view in primates) and an object or reward, and to provide for completion of the whole memory during recall from any part. The factors important in the pattern completion in CA3 together with a large number of independent memories stored in CA3 include a sparse distributed representation which is enhanced by the graded firing rates of CA3 neurons, representations that are independent due to the randomizing effect of the mossy fibers, heterosynaptic long-term depression as well as long-term potentiation in the recurrent collateral synapses, and diluted connectivity to minimize the number of multiple synapses between any pair of CA3 neurons which otherwise distort the basins of attraction. Recall of information from CA3 is implemented by the entorhinal cortex perforant path synapses to CA3 cells, which in acting as a pattern associator allow some pattern generalization. Pattern separation is performed in the dentate granule cells using competitive learning to convert grid-like entorhinal cortex firing to place-like fields. Pattern separation in CA3, which is important for completion of any one of the stored patterns from a fragment, is provided for by the randomizing effect of the mossy fiber synapses to which neurogenesis may contribute, by the large number of dentate granule cells each with a sparse representation, and by the sparse independent representations in CA3. 
Recall to the neocortex is achieved by a reverse hierarchical series of pattern association networks implemented by the hippocampo-cortical backprojections, each of which performs some pattern generalization, to retrieve a complete pattern of cortical firing in higher-order cortical areas.
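The pattern-separation step described above (competitive coding in a large population of dentate granule cells, each with a sparse representation) can be illustrated with a toy sketch: a fixed random expansion followed by k-winners-take-all decorrelates similar inputs. The dimensions and the random projection are illustrative assumptions, not parameters of the theory reviewed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out, k = 100, 1000, 50               # expansion to many units, 5% active

W = rng.normal(size=(n_out, n_in))           # fixed random EC->DG-like projection

def sparse_code(x):
    """Competitive (k-winners-take-all) coding: only the k most
    strongly driven output units fire."""
    h = W @ x
    out = np.zeros(n_out)
    out[np.argsort(h)[-k:]] = 1.0
    return out

a = rng.normal(size=n_in)
b = a.copy()
b[:10] = rng.normal(size=10)                 # a similar input, ~90% shared

def cosine(u, v):
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

in_sim = cosine(a, b)                        # high similarity at the input
out_sim = cosine(sparse_code(a), sparse_code(b))  # lower similarity: separated
```

Because the sparse codes of two similar inputs overlap less than the inputs themselves, the downstream CA3 autoassociator can store them as distinct attractors, which is exactly the division of labor between pattern separation and pattern completion described above.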