    Capacity for patterns and sequences in Kanerva's SDM as compared to other associative memory models

    The information capacity of Kanerva's Sparse Distributed Memory (SDM) and of Hopfield-type neural networks is investigated. Under the approximations used, it is shown that the total information stored in these systems is proportional to the number of connections in the network. The proportionality constant is the same for the SDM and Hopfield-type models, independent of the particular model or its order. The approximations are checked numerically. The same analysis shows that the SDM can store sequences of spatiotemporal patterns, and that adding time-delayed connections allows the retrieval of context-dependent temporal patterns. A minor modification of the SDM can be used to store correlated patterns.
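
    For context, the sketch below shows a minimal binary SDM in Python. It is not taken from the paper, and every parameter value is illustrative; the per-bit counters of the hard locations play the role of the connections whose count the capacity result refers to.

        import numpy as np

        rng = np.random.default_rng(0)

        N = 256   # address/word dimension (illustrative)
        M = 2000  # number of hard locations (illustrative)
        R = 111   # Hamming activation radius (illustrative)

        hard_addresses = rng.integers(0, 2, size=(M, N))  # fixed random locations
        counters = np.zeros((M, N), dtype=int)            # one counter per location per bit

        def activated(address):
            # Locations within Hamming distance R of the address.
            return np.count_nonzero(hard_addresses != address, axis=1) <= R

        def write(address, word):
            # Increment counters where the word bit is 1, decrement where it is 0.
            counters[activated(address)] += np.where(word == 1, 1, -1)

        def read(address):
            # Sum the counters of the activated locations and threshold at zero.
            sums = counters[activated(address)].sum(axis=0)
            return (sums > 0).astype(int)

        pattern = rng.integers(0, 2, size=N)
        write(pattern, pattern)    # auto-associative store
        recovered = read(pattern)  # exact recovery while the memory is lightly loaded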

    An empirical investigation of sparse distributed memory using discrete speech recognition

    Presented here is a step-by-step analysis of how the basic Sparse Distributed Memory (SDM) model can be modified to enhance its generalization capabilities for classification tasks. The data are taken from speech generated by a single talker. Experiments are used to investigate the theory of associative memories and the question of generalization from specific instances.

    Cerebellar models of associative memory: Three papers from IEEE COMPCON spring 1989

    Three papers are presented on the following topics: (1) a cerebellar-model associative memory as a generalized random-access memory; (2) theories of the cerebellum, two early models of associative memory; and (3) intelligent network management and functional cerebellum synthesis.

    Investigations into the capabilities of the SDM and combining CMAC with PURR-PUSS.

    This thesis consists of two sections analysing aspects of associative memories. The first section compares the usefulness, limitations, and similarities of the Sparse Distributed Memory (SDM), the Cerebellar Model Articulation Controller (CMAC), and the Hopfield network. This analysis leads, in the second section, to a proposal for combining CMAC with a form of robot learning through exploration, the PURR-PUSS system. It is then demonstrated that the combination of the PURR-PUSS and CMAC systems produces a system capable of robot control.

    There are a number of critical factors in the performance of a neural network as a memory, including its capacity and the efficiency of its training. Of the three networks considered, the Hopfield network is by far the most common in the literature. In spite of this, this thesis shows that the SDM and CMAC are almost identical and, in fact, have significant advantages over the Hopfield network in terms of capacity. This is particularly evident in the storage of sequences, where the SDM shows a significant improvement over the Hopfield network.

    The major contribution of this thesis is the analysis and development of the full potential of the SDM for data storage. The first contribution is a correction of an error in the existing analysis of the capacity of the SDM; the corrected figure is verified both theoretically and experimentally. The second contribution is an improvement in capacity resulting from an alternative method of generating the outputs. Finally, the capacity is further improved by using an iterative approach to information storage previously employed on the Hopfield network; this approach gives the SDM a significant advantage in capacity.

    Another contribution of this thesis is the combination of associative memory with a means of learning through experimentation. The PURR-PUSS system was originally developed to enable a robot to learn by interacting with its environment; its strengths and weaknesses are shown to complement those of the CMAC and SDM systems. PURR-PUSS and CMAC are combined, and the result is a system capable of better control than either system by itself. This is demonstrated through an example in which the combined system learns to control a ball rolling in a tilting maze of unknown dynamics. The system begins by learning through random exploration controlled by the PURR-PUSS system. As knowledge of the environment increases, PURR-PUSS is able to achieve its goals, although the quality of the control is poor. The addition of CMAC, which in turn learns from PURR-PUSS's movements, improves the quality of the control.

    Research summary, January 1989 - June 1990

    The Research Institute for Advanced Computer Science (RIACS) was established at NASA ARC in June 1983. RIACS is privately operated by the Universities Space Research Association (USRA), a consortium of 62 universities with graduate programs in the aerospace sciences, under a Cooperative Agreement with NASA. RIACS serves as the representative of the USRA universities at ARC. This document reports our activities and accomplishments for the period 1 Jan. 1989 - 30 Jun. 1990. The following topics are covered: learning systems, networked systems, and parallel systems.

    An alternative design for a sparse distributed memory

    A new design for a Sparse Distributed Memory, called the selected-coordinate design, is described. As in the original design, there are a large number of memory locations, each of which may be activated by many different addresses (binary vectors) in a very large address space. Each memory location is defined by specifying ten selected coordinates (bit positions in the address vectors) and a set of corresponding assigned values, consisting of one bit for each selected coordinate. A memory location is activated by an address if, for all ten of the location's selected coordinates, the corresponding bits in the address vector match the respective assigned value bits, regardless of the other bits in the address vector. Some comparative memory capacity and signal-to-noise ratio estimates for both the new and original designs are given. A few possible hardware embodiments of the new design are described.
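
    The activation rule is concrete enough to sketch directly. In the Python sketch below, only the choice of ten selected coordinates per location comes from the design; the address dimension and location count are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)

        N = 1000  # address dimension (illustrative)
        M = 5000  # number of memory locations (illustrative)
        K = 10    # selected coordinates per location, as in the design

        # Each location: K distinct coordinate indices and K assigned bit values.
        sel_coords = np.array([rng.choice(N, size=K, replace=False) for _ in range(M)])
        sel_values = rng.integers(0, 2, size=(M, K))

        def activated(address):
            # A location fires iff the address matches its assigned values on all
            # K selected coordinates; the other N - K bits are ignored.
            return (address[sel_coords] == sel_values).all(axis=1)

    Since each location checks ten independent bits, a random address activates it with probability 2^-10, about one in a thousand, which keeps the activated set sparse without any Hamming-distance computation.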

    Immunology as a metaphor for computational information processing : fact or fiction?

    The biological immune system exhibits powerful information processing capabilities, and is therefore of great interest to the computer scientist. A rapidly expanding research area has attempted to model many of the features inherent in the natural immune system in order to solve complex computational problems. This thesis examines the metaphor in detail, in an effort to understand and capitalise on those features of the metaphor which distinguish it from other existing methodologies. Two problem domains are considered: scheduling and data-clustering. It is argued that these domains exhibit similar characteristics to the environment in which the biological immune system operates, and therefore that they are suitable candidates for application of the metaphor. For each problem domain, two distinct models are developed, incorporating a variety of immunological principles. The models are tested on a number of artificial benchmark datasets. The success of the models on the problems considered confirms the utility of the metaphor.

    Integer Sparse Distributed Memory and Modular Composite Representation

    Challenging AI applications, such as cognitive architectures, natural language understanding, and visual object recognition, share some basic operations, including pattern recognition, sequence learning, clustering, and association of related data. Both the representations used and the structure of a system significantly influence which tasks and problems are most readily supported. A memory model and a representation that facilitate these basic tasks would greatly improve the performance of these challenging AI applications.

    Sparse Distributed Memory (SDM), based on large binary vectors, has several desirable properties: auto-associativity, content addressability, distributed storage, and robustness over noisy inputs, which would facilitate the implementation of challenging AI applications. Here I introduce two variations on the original SDM, the Extended SDM and the Integer SDM, that significantly improve these desirable properties, as well as a new form of reduced description representation named Modular Composite Representation (MCR).

    Extended SDM, which uses word vectors larger than address vectors, enhances hetero-associativity, improving the storage of sequences of vectors as well as of other data structures. A novel sequence learning mechanism is introduced, and several experiments demonstrate the capacity and sequence learning capability of this memory.

    Integer SDM uses modular integer vectors rather than binary vectors, improving the representation capabilities of the memory and its noise robustness. Several experiments show its capacity and noise robustness, and theoretical analyses of its capacity and fidelity are also presented.

    A reduced description represents a whole hierarchy using a single high-dimensional vector, which can recover individual items and be used directly in complex calculations and procedures, such as making analogies; furthermore, the hierarchy can be reconstructed from the single vector. Modular Composite Representation (MCR), a new reduced description model for the representations used in challenging AI applications, provides an attractive trade-off between expressiveness and simplicity of operations. A theoretical analysis of its noise robustness, several experiments, and comparisons with similar models are presented.

    My implementations of these memories include an object-oriented version using a RAM cache, a version for distributed and multi-threaded execution, and a GPU version for fast vector processing.
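
    As a rough illustration of the modular integer vectors described above, the sketch below is mine, not from the dissertation: the dimension and modulus are illustrative, and binding by component-wise modular addition is an assumption modeled on how similar modular schemes operate.

        import numpy as np

        rng = np.random.default_rng(2)

        N = 512  # vector dimension (illustrative)
        r = 16   # modulus: components live in Z_r = {0, ..., r - 1} (illustrative)

        def random_vector():
            return rng.integers(0, r, size=N)

        def modular_distance(a, b):
            # Per-component circular distance in Z_r, summed over dimensions.
            d = np.abs(a - b)
            return np.minimum(d, r - d).sum()

        def bind(a, b):
            # Assumed binding: component-wise modular sum.
            return (a + b) % r

        def unbind(c, a):
            # Inverse of the assumed binding: recover b from c = bind(a, b).
            return (c - a) % r

        a, b = random_vector(), random_vector()
        assert modular_distance(unbind(bind(a, b), a), b) == 0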

    Learning to read aloud: A neural network approach using sparse distributed memory

    An attempt to solve a problem of text-to-phoneme mapping is described which does not appear amenable to solution by standard algorithmic procedures. Experiments based on a model of distributed processing, the sparse distributed memory (SDM), are also described; the model can be used in an iterative supervised learning mode to solve the problem. Additional improvements aimed at obtaining better performance are suggested.

    Binding and Normalization of Binary Sparse Distributed Representations by Context-Dependent Thinning

    Distributed representations have often been criticized as inappropriate for encoding data with a complex structure. However, Plate's Holographic Reduced Representations and Kanerva's Binary Spatter Codes are recent schemes that allow on-the-fly encoding of nested compositional structures by real-valued or dense binary vectors of fixed dimensionality. In this paper we consider the procedures of Context-Dependent Thinning, which were developed for the representation of complex hierarchical items in the architecture of Associative-Projective Neural Networks. These procedures provide binding of items represented by sparse binary codevectors (with a low probability of 1s). Such an encoding is biologically plausible and allows a high storage capacity in the distributed associative memory where the codevectors may be stored. In contrast to known binding procedures, Context-Dependent Thinning preserves the same low density (or sparseness) of the bound codevector for a varying number of component codevectors. Moreover, a bound codevector is not only similar to other codevectors with similar components (as in other schemes), but is also similar to the component codevectors themselves. This allows the similarity of structures to be estimated simply by the overlap of their codevectors, without retrieval of the component codevectors; it also allows easy retrieval of the component codevectors. Examples of algorithmic and neural-network implementations of the thinning procedures are considered. We also present representation examples for various types of nested structured data (propositions using role-filler and predicate-argument representation schemes, trees, directed acyclic graphs) using sparse codevectors of fixed dimension. Such representations may provide a fruitful alternative to the symbolic representations of traditional AI, as well as to the localist and microfeature-based connectionist representations.
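
    The abstract does not spell out the thinning procedures, so the Python sketch below is one simple variant, with illustrative dimension, density, and permutation count: superimpose the component codevectors by OR, then keep only the bits of the superposition that recur in fixed random permutations of itself, adding permutations until the result is roughly as sparse as a single component.

        import numpy as np

        rng = np.random.default_rng(3)

        N = 10000  # dimension (illustrative)
        p = 0.01   # probability of 1s in each sparse codevector (illustrative)

        # Fixed random permutations shared by all bindings.
        perms = [rng.permutation(N) for _ in range(8)]

        def random_codevector():
            return (rng.random(N) < p).astype(np.uint8)

        def cdt_bind(components, target_ones):
            # Superimpose the components (OR), then thin: a bit survives only if
            # it is also set in a permuted copy of the superposition, so the
            # survivors depend on the whole set of components (the "context").
            z = np.bitwise_or.reduce(np.array(components))
            out = np.zeros(N, dtype=np.uint8)
            for perm in perms:
                out |= z & z[perm]
                if out.sum() >= target_ones:
                    break
            return out

        a, b, c = (random_codevector() for _ in range(3))
        bound = cdt_bind([a, b, c], target_ones=int(p * N))

    Because every surviving bit belongs to the superposition, the bound codevector overlaps each component, which is the similarity-to-components property the abstract highlights.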