8 research outputs found

    Coupled-Oscillator Associative Memory Array Operation for Pattern Recognition

    Operation of the array of coupled oscillators underlying the associative memory function is demonstrated for various interconnection schemes (cross-connect, star phase keying and star frequency keying) and various physical implementations of oscillators (van der Pol, phase-locked loop, spin torque). The speed of synchronization of the oscillators and the evolution of the degree of matching are studied as functions of device parameters. The dependence of errors in association on the number of memorized patterns and on the distance between the test pattern and the memorized pattern is determined for the Palm, Furber and Hopfield association algorithms.
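
    As a rough illustration of the principle only (not a reproduction of any of the paper's oscillator models), the sketch below uses a Kuramoto-style phase model with Hebbian, Hopfield-like couplings: memorized binary patterns are written into the coupling matrix, a corrupted cue is encoded as initial phases, and the degree of matching is read out as the overlap between the synchronized phase pattern and each memory. All parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 64, 3                            # oscillators, stored patterns
    xi = rng.choice([-1, 1], size=(P, N))   # binary patterns to memorize
    J = xi.T @ xi / N                       # Hebbian (Hopfield-style) couplings

    def settle(theta, steps=2000, dt=0.05, K=1.0):
        # Kuramoto dynamics: dtheta_i/dt = K * sum_j J_ij * sin(theta_j - theta_i)
        for _ in range(steps):
            diff = theta[None, :] - theta[:, None]   # diff[i, j] = theta_j - theta_i
            theta = theta + dt * K * (J * np.sin(diff)).sum(axis=1)
        return theta

    # Cue with a corrupted version of pattern 0, encoded as phases 0 / pi
    cue = xi[0] * np.where(rng.random(N) < 0.15, -1, 1)   # flip ~15% of bits
    theta0 = np.where(cue > 0, 0.0, np.pi) + 0.1 * rng.standard_normal(N)
    recovered = np.sign(np.cos(settle(theta0)))

    # Degree of matching: normalized overlap with each memorized pattern
    print("overlaps:", np.round(xi @ recovered / N, 2))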

    Prevalence and recoverability of syntactic parameters in sparse distributed memories

    We propose a new method, based on sparse distributed memory, for studying dependence relations between syntactic parameters in the Principles and Parameters model of Syntax. By storing data on the syntactic structures of world languages in a Kanerva network and checking the recoverability of corrupted data from the network, we identify two different effects: an overall underlying relation between the prevalence of parameters across languages and their degree of recoverability, and a finer effect that makes some parameters more easily recoverable beyond what their prevalence would indicate. The latter can be seen as an indication of the existence of dependence relations, through which a given parameter can be determined using the remaining uncorrupted data.
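
    A minimal sketch of the recoverability test, assuming a standard Kanerva SDM with random binary hard locations, activation by Hamming radius, and counter-based storage; the dimensions and the random stand-in "parameter vectors" below are placeholders for the paper's syntactic data.

    import numpy as np

    rng = np.random.default_rng(1)
    DIM, N_LOC, RADIUS = 256, 2000, 112   # word width, hard locations, Hamming radius
    addresses = rng.integers(0, 2, size=(N_LOC, DIM))
    counters = np.zeros((N_LOC, DIM), dtype=int)

    def active(word):
        # Hard locations within Hamming distance RADIUS of the address
        return np.count_nonzero(addresses != word, axis=1) <= RADIUS

    def store(word):
        # Autoassociative write: +1 to counters for 1-bits, -1 for 0-bits
        counters[active(word)] += 2 * word - 1

    def recall(cue):
        # Read by majority vote over the counters of the active locations
        return (counters[active(cue)].sum(axis=0) > 0).astype(int)

    words = rng.integers(0, 2, size=(10, DIM))   # stand-ins for parameter vectors
    for w in words:
        store(w)

    cue = words[0].copy()
    cue[rng.choice(DIM, 30, replace=False)] ^= 1   # corrupt ~12% of the bits
    print("bits recovered:", int((recall(cue) == words[0]).sum()), "of", DIM)

    Raising the number of stored words or the corruption level degrades recall, which is essentially the knob the paper's recoverability experiments vary.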

    BitBrain and Sparse Binary Coincidence (SBC) memories: Fast, robust learning and inference for neuromorphic architectures

    We present an innovative working mechanism (the SBC memory) and surrounding infrastructure (BitBrain) based upon a novel synthesis of ideas from sparse coding, computational neuroscience and information theory that enables fast and adaptive learning and accurate, robust inference. The mechanism is designed to be implemented efficiently on current and future neuromorphic devices as well as on more conventional CPU and memory architectures. An example implementation on the SpiNNaker neuromorphic platform has been developed, and initial results are presented. The SBC memory stores coincidences between features detected in class examples in a training set, and infers the class of a previously unseen test example by identifying the class with which it shares the highest number of feature coincidences. A number of SBC memories may be combined in a BitBrain to increase the diversity of the contributing feature coincidences. The resulting inference mechanism is shown to have excellent classification performance on benchmarks such as MNIST and EMNIST, achieving, with single-pass learning, classification accuracy approaching that of state-of-the-art deep networks with much larger tuneable parameter spaces and much higher training costs. It can also be made very robust to noise. BitBrain is designed to be very efficient in training and inference on both conventional and neuromorphic architectures. It provides a unique combination of single-pass, single-shot and continuous supervised learning, following a very simple unsupervised phase. Accurate classification inference that is very robust against imperfect inputs has been demonstrated. These contributions make it uniquely well-suited for edge and IoT applications.
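
    The coincidence-counting mechanism can be paraphrased in a few lines of Python. This is an illustrative reconstruction under simplifying assumptions (a dense pairwise coincidence matrix per class, toy binary features), not the paper's SpiNNaker implementation: training sets a bit for every active feature pair in a class example, and inference picks the class sharing the most stored coincidences with the test example.

    import numpy as np
    from itertools import combinations

    class SBCMemory:
        """Illustrative sparse binary coincidence memory (hypothetical code)."""
        def __init__(self, n_features, n_classes):
            self.mem = np.zeros((n_classes, n_features, n_features), dtype=bool)

        def train(self, features, label):
            # Single-pass learning: set one bit per active feature pair
            idx = np.flatnonzero(features)
            for i, j in combinations(idx, 2):
                self.mem[label, i, j] = True

        def classify(self, features):
            idx = np.flatnonzero(features)
            pairs = list(combinations(idx, 2))
            rows = [i for i, _ in pairs]
            cols = [j for _, j in pairs]
            # The class sharing the most stored coincidences wins
            return int(np.argmax(self.mem[:, rows, cols].sum(axis=1)))

    rng = np.random.default_rng(2)
    mem = SBCMemory(n_features=100, n_classes=2)
    protos = rng.random((2, 100)) < 0.15            # sparse class prototypes
    for _ in range(20):
        for label in (0, 1):
            mem.train(protos[label] & (rng.random(100) < 0.9), label)

    test = protos[1] & (rng.random(100) < 0.9)      # noisy example of class 1
    print("predicted class:", mem.classify(test))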

    Sparse distributed memory using rank-order neural codes

    A variant of a sparse distributed memory (SDM) is shown to have the capability of storing and recalling patterns containing rank-order information. These are patterns where information is encoded not only in the subset of neuron outputs that fire, but also in the order in which that subset fires. This is an interesting companion to several recent works in the neuroscience literature showing that human memories may be stored in terms of neural spike timings. In our model, the ordering is stored in static synaptic weights using a Hebbian single-shot learning algorithm, and can be reliably recovered whenever the associated input is supplied. It is shown that the memory can operate using only unipolar binary connections throughout. The behavior of the memory under noisy input conditions is also investigated. It is shown that the memory is capable of improving the quality of the data that passes through it. That is, under appropriate conditions the output retrieved from the memory is less noisy than the input used to retrieve it. Thus, this memory architecture could be used as a component in a complex system with stable noise properties and, we argue, it can be implemented using spiking neurons. Index terms: associative memory, neural networks (NNs), rank-order codes, sparse distributed memory (SDM), spiking neurons.
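
    A simplified sketch of the idea, using real-valued weights rather than the unipolar binary connections and N-of-M codes the paper actually demonstrates: the neuron firing k-th in a pattern gets significance ALPHA**k, single-shot Hebbian learning writes the outer product of significance vectors into the weight matrix, and recall ranks activations to recover both the firing subset and its order. Names and parameters are illustrative.

    import numpy as np

    N, M, ALPHA = 200, 10, 0.8   # neurons, firing subset size, significance ratio

    def sig_vector(order):
        # Rank-order code: the k-th neuron to fire gets significance ALPHA**k
        s = np.zeros(N)
        s[np.asarray(order)] = ALPHA ** np.arange(len(order))
        return s

    rng = np.random.default_rng(3)
    W = np.zeros((N, N))
    orders = [rng.choice(N, M, replace=False) for _ in range(8)]

    # Single-shot Hebbian learning: clip each weight to the max of old and new
    for order in orders:
        s = sig_vector(order)
        W = np.maximum(W, np.outer(s, s))

    # Recall: cue with a stored pattern; activation ranking recovers firing order
    act = W @ sig_vector(orders[0])
    recalled = np.argsort(-act)[:M]
    print("stored order:  ", list(orders[0]))
    print("recalled order:", list(recalled))

    The paper's construction replaces these real-valued weights with clipped unipolar binary connections; this sketch only shows why an activation ranking can carry the firing order.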