
Binding and Normalization of Binary Sparse Distributed Representations by Context-Dependent Thinning

By Dmitri A. Rachkovskij and Ernst M. Kussul

Abstract

Distributed representations have often been criticized as inappropriate for encoding data with complex structure. However, Plate's Holographic Reduced Representations and Kanerva's Binary Spatter Codes are recent schemes that allow on-the-fly encoding of nested compositional structures by real-valued or dense binary vectors of fixed dimensionality. In this paper we consider the procedures of Context-Dependent Thinning, developed for the representation of complex hierarchical items in the architecture of Associative-Projective Neural Networks. These procedures provide binding of items represented by sparse binary codevectors (with a low probability of 1s). Such encoding is biologically plausible and allows a high storage capacity in the distributed associative memory where the codevectors may be stored. In contrast to known binding procedures, Context-Dependent Thinning preserves the same low density (or sparseness) of the bound codevector for a varied number of component codevectors. Moreover, a bound codevector is not only similar to another bound codevector with similar components (as in other schemes), but is also similar to the component codevectors themselves. This allows the similarity of structures to be estimated simply by the overlap of their codevectors, without retrieval of the components, and it also makes retrieval of the component codevectors easy. Examples of algorithmic and neural-network implementations of the thinning procedures are considered. We also present representation examples for various types of nested structured data (propositions using role-filler and predicate-arguments representation schemes, trees, directed acyclic graphs) using sparse codevectors of fixed dimension. Such representations may provide a fruitful alternative to the symbolic representations of traditional AI, as well as to the localist and microfeature-based connectionist representations.
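
To make the binding operation concrete, below is a minimal Python sketch in the spirit of the additive thinning variant: component codevectors are superimposed by elementwise OR, and the superposition is then thinned by accumulating its conjunctions with randomly permuted copies of itself until roughly the original density is restored. The dimensionality N, the number of 1s M, the permutation count, and the names random_codevector and cdt_bind are illustrative assumptions for this sketch, not parameters or APIs from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 10_000   # codevector dimensionality (illustrative choice)
    M = 100      # number of 1s per component codevector, ~1% density (illustrative)

    def random_codevector():
        """Random sparse binary codevector with M ones out of N bits."""
        v = np.zeros(N, dtype=bool)
        v[rng.choice(N, size=M, replace=False)] = True
        return v

    # Fixed random permutations, shared across calls, so that the same
    # set of components is always thinned to the same bound codevector.
    PERMUTATIONS = [rng.permutation(N) for _ in range(50)]

    def cdt_bind(components, target_ones=M):
        """Additive-style Context-Dependent Thinning (sketch).

        Superimpose the components by elementwise OR, then accumulate
        conjunctions of the superposition with permuted copies of itself
        until about target_ones bits survive. Every surviving 1 belongs
        to some component, so the result overlaps each component.
        """
        z = np.logical_or.reduce(components)      # superposition of components
        thinned = np.zeros(N, dtype=bool)
        for p in PERMUTATIONS:
            thinned |= z & z[p]                   # thin: AND with permuted self
            if thinned.sum() >= target_ones:
                break
        return thinned

    a, b, c = (random_codevector() for _ in range(3))
    bound = cdt_bind([a, b, c])
    print(bound.sum())        # density stays near M rather than growing to ~3*M
    print((bound & a).sum())  # nonzero overlap with component a

Because the thinned vector is a subset of the 1s of the superposition, it remains similar to each component codevector, which is the property the abstract highlights: structure similarity can be estimated by codevector overlap alone, and components can be retrieved from the bound vector.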

Topics: Artificial Intelligence, Neural Nets
Year: 2001
OAI identifier: oai:cogprints.org:1240
Available at: http://cogprints.org/1240/
