8 research outputs found

    Some Comments on the Information Stored in Sparse Distributed Memory

    No full text
    An unknown number T of random data vectors have been stored in a sparse distributed memory with randomly chosen hard locations. A method is given to estimate T. The estimate is unbiased, and the coefficient of variation is roughly inversely proportional to the square root of MU, where M is the number of hard locations in the memory and U is the length of the data vectors.
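
    As a rough worked illustration of the stated scaling (the constant of proportionality and all numbers below are assumptions for illustration, not values from the paper), a coefficient of variation behaving like 1/sqrt(MU) shrinks quickly as the memory grows:

        import math

        # Illustrative only: assume CV(T_hat) ~ c / sqrt(M * U) with c of order one.
        c = 1.0
        for M, U in [(10_000, 256), (100_000, 256), (1_000_000, 1_024)]:
            print(f"M = {M:>9}, U = {U:>5}  ->  CV ~ {c / math.sqrt(M * U):.1e}")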

    Best Probability of Activation and Performance Comparisons for Several Designs of Sparse Distributed Memory

    No full text
    The optimal probability of activation and the corresponding performance are studied for three designs of Sparse Distributed Memory, namely, Kanerva's original design, Jaeckel's selected-coordinates design and Karlsson's modification of Jaeckel's design. We will assume that the hard locations (in Karlsson's case, the masks), the storage addresses and the stored data are randomly chosen, and we will consider different levels of random noise in the reading address.
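
    For concreteness, here is a minimal sketch of the activation rules of the first two designs named above (Karlsson's modification is omitted); the parameters are invented and this is a paraphrase of the standard definitions, not code from the paper. In Kanerva's original design a hard location fires when the address lies within a fixed Hamming radius of the location's address; in Jaeckel's selected-coordinates design each location is a mask of a few selected coordinates with prescribed bit values and fires when the address matches all of them.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 256    # address length (illustrative)
        M = 1000   # number of hard locations (illustrative)
        r = 111    # Hamming radius, Kanerva's design (illustrative)
        k = 8      # selected coordinates per mask, Jaeckel's design (illustrative)

        # Kanerva's original design: every hard location has a full random address.
        kanerva_addrs = rng.integers(0, 2, size=(M, N))

        def activate_kanerva(address):
            """Fire the locations within Hamming distance r of the address."""
            return (kanerva_addrs != address).sum(axis=1) <= r

        # Jaeckel's selected-coordinates design: k coordinates plus required values.
        jaeckel_coords = np.array([rng.choice(N, size=k, replace=False) for _ in range(M)])
        jaeckel_values = rng.integers(0, 2, size=(M, k))

        def activate_jaeckel(address):
            """Fire the locations whose k selected coordinates all match the address."""
            return (address[jaeckel_coords] == jaeckel_values).all(axis=1)

        address = rng.integers(0, 2, size=N)
        print("Kanerva activations:", int(activate_kanerva(address).sum()))
        print("Jaeckel activations:", int(activate_jaeckel(address).sum()))

    With parameters like these, both rules activate only a small fraction of the M locations, which is the "sparse" in Sparse Distributed Memory.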

    Some Results on Activation and Scaling of Sparse Distributed Memory

    No full text
    It has been suggested that in certain situations it would make sense to use different activation probabilities for writing and reading in SDM (Sparse Distributed Memory). However, here we model such a situation and find that, at least approximately, it is optimal to use the same probabilities for writing and reading. We also investigate the scaling up of SDM, in connection with some observations made by Sjödin (1997). It is shown that the original SDM (here in Jaeckel's version) does not scale up if the reading address is disturbed, but that this can be remedied by using a kind of sparse SDM.
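
    The question of separate write and read activation probabilities can be probed empirically with a toy Monte-Carlo experiment. The sketch below is not the paper's model: all parameters are invented, activation is Kanerva-style, and different Hamming radii stand in for different write and read probabilities. It stores random words and measures the bit error rate when reading each item back from a noisy address.

        import numpy as np

        rng = np.random.default_rng(2)
        N, M, U, T = 256, 2000, 64, 100   # address length, hard locations, word length, stored items
        noise = 20                        # bits flipped in the reading address (illustrative)

        hard_addrs = rng.integers(0, 2, size=(M, N))

        def bit_error_rate(r_write, r_read):
            """Write with radius r_write, then read each item back with radius r_read."""
            counters = np.zeros((M, U))
            addrs = rng.integers(0, 2, size=(T, N))
            data = rng.choice([-1, 1], size=(T, U))
            for a, d in zip(addrs, data):
                counters[(hard_addrs != a).sum(axis=1) <= r_write] += d
            errors = 0
            for a, d in zip(addrs, data):
                noisy = a.copy()
                noisy[rng.choice(N, size=noise, replace=False)] ^= 1
                active = (hard_addrs != noisy).sum(axis=1) <= r_read
                recalled = np.sign(counters[active].sum(axis=0))
                errors += int((recalled != d).sum())   # ties (sum == 0) count as errors
            return errors / (T * U)

        for r_w, r_r in [(111, 111), (107, 115), (115, 107)]:
            print(f"r_write={r_w}, r_read={r_r}: bit error rate = {bit_error_rate(r_w, r_r):.3f}")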

    Best Probability of Activation and Performance Comparisons for Several Designs of Sparse Distributed Memory

    No full text
    The optimal probability of activation and the corresponding performance are studied for three designs of Sparse Distributed Memory, namely, Kanerva's original design, Jaeckel's selected-coordinates design and Karlsson's modification of Jaeckel's design. We will assume that the hard locations (in Karlsson's case, the masks), the storage addresses and the stored data are randomly chosen, and we will consider different levels of random noise in the reading address. Keywords: Sparse Distributed Memory, Probability of Activation, Performance. Contents: 1. Introduction; 2. General definitions and assumptions; 3. The error probability and the signal-to-noise ratio; 4. Determination of the signal-to-noise ratio; 5. Discussion of the normal approximation of Z; 6. Discussion of the randomness assumptions for hard locations, storage addresses etc.; 7. Numerical calculations; 8. Summary and conclusions; References; Tables.
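
    The contents above mention the error probability, the signal-to-noise ratio, and a normal approximation. As a generic aside (this is the standard relationship, not the paper's specific derivation): if the summed counters read out for one bit have mean mu of the correct sign and standard deviation sigma, a normal approximation gives a per-bit error probability of roughly Phi(-rho) with rho = mu/sigma, so designs can be compared through rho alone.

        from math import erf, sqrt

        def phi(x):
            """Standard normal CDF."""
            return 0.5 * (1.0 + erf(x / sqrt(2.0)))

        # Per-bit error probability under a normal approximation,
        # as a function of the signal-to-noise ratio rho (illustrative values).
        for rho in [1.0, 2.0, 3.0, 4.0]:
            print(f"rho = {rho}: P(bit error) ~ {phi(-rho):.4f}")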

    Some Comments on the Information Stored in Sparse Distributed Memory

    No full text
    We consider a sparse distributed memory with randomly chosen hard locations, in which an unknown number T of random data vectors have been stored. A method is given to estimate T from the content of the memory with high accuracy. In fact, our estimate is unbiased, the coefficient of variation being roughly inversely proportional to the square root of MU, where M is the number of hard locations in the memory and U is the length of the data vectors, so the accuracy can be made arbitrarily high by making the memory big enough. A consequence of this is that the good reading methods in [5] and [6] can be used without any need for the special extra location introduced there. Keywords: Sparse distributed memory, SDM. Contents: 1. Introduction; 2. The stochastic variable Q_u; 3. An estimate of T; 4. Karlsson's design.
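
    The abstract does not spell out the estimator, but one natural construction with the stated properties can be sketched as follows (a simplification for illustration, not necessarily the paper's method): with random +/-1 data and activation probability p, each counter has expected square T*p after T writes, so averaging the squared counters and dividing by p gives an unbiased estimate of T. All parameters below, and the Bernoulli stand-in for address-based activation, are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        M, U, p = 2000, 64, 0.05   # hard locations, word length, activation probability (illustrative)
        T_true = 500               # number of stored vectors, to be recovered from the counters

        counters = np.zeros((M, U))
        for _ in range(T_true):
            active = rng.random(M) < p           # simplified activation: Bernoulli(p) per location
            counters[active] += rng.choice([-1, 1], size=U)

        # E[c^2] = T * p for every counter, so this is an unbiased estimate of T.
        T_hat = (counters ** 2).mean() / p
        print(f"true T = {T_true}, estimated T = {T_hat:.1f}")

    Averaging over all M*U counters is what makes the relative error of such an estimate shrink roughly like 1/sqrt(MU), in line with the abstract's claim.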

    Random indexing of text samples for latent semantic analysis

    No full text
    …SVD, the result is not nearly as good: only 36% correct. The authors conclude that the reorganization of information by SVD somehow corresponds to human psychology. We have studied high-dimensional random distributed representations, as models of brainlike representation of information (Kanerva, 1994; Kanerva & Sjödin, 1999). In this poster we report on the use of such a representation to reduce the dimensionality of the original words-by-contexts matrix. The method can be explained by looking at the 60,000 × 30,000 matrix of frequencies above. Assume that each text sample is represented by a 30,000-bit vector with a single 1 marking the place of the sample in a list of all samples, and call it the sample's index vector (i.e., the nth bit of the index vector for the nth text sample is 1; the representation is unitary or local). Then the words-by-contexts matrix of frequencies can be obtained by the following procedure: every time that the word w occurs in the nth text sample, the …
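
    A minimal sketch of the random-indexing idea described above: each text sample gets a sparse random index vector (a few +1s and -1s) instead of a unitary/local one, and every occurrence of a word adds the index vector of its sample to that word's vector. Dimensionality, sparsity and the toy corpus below are invented for illustration; they stand in for the 30,000 samples and 60,000 words of the real matrix.

        import numpy as np

        rng = np.random.default_rng(3)
        D = 1000        # reduced dimensionality (illustrative; far below 30,000)
        NONZERO = 10    # nonzero entries per index vector (illustrative)

        def random_index_vector():
            """Sparse ternary index vector: a few +1s and -1s, zeros elsewhere."""
            v = np.zeros(D, dtype=np.int32)
            pos = rng.choice(D, size=NONZERO, replace=False)
            v[pos] = rng.choice([-1, 1], size=NONZERO)
            return v

        samples = [
            "the cat sat on the mat".split(),
            "the dog sat on the log".split(),
            "stocks fell on the news".split(),
        ]

        index = [random_index_vector() for _ in samples]   # one index vector per text sample
        word_vectors = {}
        for i, words in enumerate(samples):
            for w in words:
                # Each occurrence of w in sample i adds sample i's index vector to w's vector.
                word_vectors.setdefault(w, np.zeros(D, dtype=np.int32))
                word_vectors[w] += index[i]

        def cosine(a, b):
            return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

        print("cat ~ mat :", round(cosine(word_vectors["cat"], word_vectors["mat"]), 2))
        print("cat ~ news:", round(cosine(word_vectors["cat"], word_vectors["news"]), 2))

    Words that share text samples end up with similar vectors, while the dimensionality stays fixed at D instead of growing with the number of samples.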

    Computing with large random patterns

    No full text
    We describe a style of computing that differs from traditional numeric and symbolic computing and is suited for modeling neural networks. We focus on one aspect of "neurocomputing," namely, computing with large random patterns, or high-dimensional random vectors, and ask what kind of computing they perform and whether they can help us understand how the brain processes information and how the mind works. Rapidly developing hardware technology will soon be able to produce the massive circuits that this style of computing requires. This chapter develops a theory on which the computing could be based.
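
    The abstract stops short of examples, but computing with high-dimensional random vectors is commonly illustrated with three primitives: binding (e.g. XOR of binary patterns), bundling (bitwise majority), and similarity (Hamming distance). The sketch below follows those common conventions, with a 10,000-bit dimensionality typical of this literature; it is not code from the chapter.

        import numpy as np

        rng = np.random.default_rng(4)
        D = 10_000   # dimensionality of the random patterns (typical choice in this literature)

        def random_pattern():
            return rng.integers(0, 2, size=D, dtype=np.uint8)

        def bind(a, b):
            """Bind two patterns with XOR; the result is dissimilar to both inputs."""
            return a ^ b

        def bundle(*patterns):
            """Superpose patterns with a bitwise majority vote; ties broken randomly."""
            s = np.sum(patterns, axis=0)
            out = (2 * s > len(patterns)).astype(np.uint8)
            ties = (2 * s == len(patterns))
            out[ties] = rng.integers(0, 2, size=int(ties.sum()))
            return out

        def similarity(a, b):
            """1 - normalized Hamming distance; about 0.5 for unrelated random patterns."""
            return 1.0 - float(np.mean(a != b))

        # Encode a tiny record {colour: red, shape: ball} as one pattern, then query it.
        colour, shape, red, ball = (random_pattern() for _ in range(4))
        record = bundle(bind(colour, red), bind(shape, ball))
        retrieved = bind(record, colour)   # unbinding with the 'colour' key yields a noisy 'red'
        print("retrieved ~ red :", round(similarity(retrieved, red), 2))
        print("retrieved ~ ball:", round(similarity(retrieved, ball), 2))

    Because unrelated random patterns sit near Hamming distance D/2 when D is large, the noisy retrieved pattern is still unmistakably closer to "red" than to anything else stored.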