Neural Distributed Autoassociative Memories: A Survey
Introduction. Neural network models of autoassociative, distributed memory
allow storage and retrieval of many items (vectors) where the number of stored
items can exceed the vector dimension (the number of neurons in the network).
This opens the possibility of a sublinear time search (in the number of stored
items) for approximate nearest neighbors among vectors of high dimension. The
purpose of this paper is to review models of autoassociative, distributed
memory that can be naturally implemented by neural networks (mainly with local
learning rules and iterative dynamics based on information locally available to
neurons). Scope. The survey focuses mainly on the networks of Hopfield,
Willshaw, and Potts, which have connections between pairs of neurons and operate
on sparse binary vectors. We discuss not only autoassociative memory, but also
the generalization properties of these networks. We also consider neural
networks with higher-order connections and networks with a bipartite graph
structure for non-binary data with linear constraints. Conclusions. We discuss
the relations of these models to similarity search, the advantages and
drawbacks of these techniques, and topics for further research. An interesting
and still not completely resolved question is whether neural autoassociative
memories can search for approximate nearest neighbors faster than other index
structures for similarity search, in particular for very high-dimensional
vectors.
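To make the setting concrete, here is a minimal sketch of the kind of model the survey covers: a Hopfield-style autoassociative memory with a local Hebbian (outer-product) learning rule and iterative retrieval dynamics driven by each neuron's locally available input. The sizes and noise level are illustrative assumptions, not taken from the survey.

```python
import numpy as np

def train_hopfield(patterns):
    """Store bipolar (+1/-1) patterns with the Hebbian outer-product rule,
    a purely local learning rule."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n  # sum of outer products, scaled
    np.fill_diagonal(W, 0)         # no self-connections
    return W

def retrieve(W, probe, steps=20):
    """Iterate synchronous threshold dynamics from a noisy probe."""
    x = probe.copy()
    for _ in range(steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1      # break ties deterministically
        if np.array_equal(x_new, x):
            break                  # fixed point reached
        x = x_new
    return x

rng = np.random.default_rng(0)
P = rng.choice([-1, 1], size=(10, 256))  # 10 random patterns, 256 neurons
W = train_hopfield(P)
noisy = P[0] * rng.choice([1, -1], size=256, p=[0.9, 0.1])  # ~10% flipped bits
recovered = retrieve(W, noisy)
print("overlap with stored pattern:", (recovered == P[0]).mean())
```

At this low load (10 patterns, 256 neurons) retrieval typically recovers the stored pattern exactly; the survey's interest is in models and regimes where the number of stored items exceeds the dimension.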
Fly-The-Bee: A Game Imitating Concept Learning in Bees
This article presents a web-based game that functionally imitates part of the cognitive behavior of a living organism. The game is a prototype implementation of an artificial online cognitive architecture based on distributed data representations and Vector Symbolic Architectures. It demonstrates the feasibility of a lightweight cognitive architecture capable of performing rather complex cognitive tasks. The cognitive functionality is implemented in about 100 lines of code and requires a few tens of kilobytes of memory for its operation, which makes the approach suitable for implementation in low-end devices such as mini-robots and wireless sensors.
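As an illustration of the VSA primitives such an architecture builds on, the sketch below implements binding (elementwise multiplication) and superposition (sign of the sum) over random bipolar hypervectors; the role and filler names are hypothetical stand-ins, not the game's actual code.

```python
import numpy as np

D = 10_000                                   # assumed dimensionality
rng = np.random.default_rng(1)
def rand_hv():
    return rng.choice([-1, 1], size=D)       # random bipolar hypervector

def bind(a, b):
    return a * b                             # invertible, dissimilar to inputs

def bundle(*hvs):
    # superposition: similar to each input; small noise breaks ties
    return np.sign(np.sum(hvs, axis=0) + 0.1 * rand_hv())

color, shape = rand_hv(), rand_hv()          # role vectors
blue, circle = rand_hv(), rand_hv()          # filler vectors
scene = bundle(bind(color, blue), bind(shape, circle))  # one record

def cos(a, b):
    return a @ b / D

print(cos(bind(scene, color), blue))    # ~0.5: unbinding recovers the filler
print(cos(bind(scene, color), circle))  # ~0.0: unrelated filler
```

A handful of such operations over fixed random vectors is consistent with the tens-of-kilobytes memory footprint mentioned above.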
Hyperdimensional computing for efficient distributed classification with randomized neural networks
In the supervised learning domain, given the recent prevalence of algorithms with high computational cost, attention is shifting towards simpler, lighter, and less computationally intensive training and inference approaches. In particular, randomized algorithms are currently having a resurgence, given their elementary and broadly applicable formulation. Using randomized neural networks, we study distributed classification, which can be employed in situations where data can be neither stored at a central location nor shared. We propose a more efficient solution for distributed classification by applying a lossy compression approach when sharing the local classifiers with other agents. This approach originates from the framework of hyperdimensional computing and is adapted herein. The results of experiments on a collection of datasets demonstrate that the proposed approach usually achieves higher accuracy than the local classifiers and comes close to the benchmark set by the centralized classifier. This work can be considered a first step towards analyzing the broad landscape of distributed randomized neural networks.
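One plausible reading of the compression step, sketched below under assumed shapes: each column of a local classifier's readout matrix is bound to a shared random key, and all bindings are superposed into a single vector; the receiving agent unbinds with the same keys to obtain a noisy reconstruction. The paper's exact encoding and decoding may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(2)
H, C = 1000, 10                          # hidden size, classes (assumed)
W = rng.normal(size=(H, C))              # local readout weights to share
keys = rng.choice([-1, 1], size=(C, H))  # random bipolar keys, shared a priori

# Compress: bind column c to key c (elementwise), superpose all C bindings
compressed = np.sum(keys * W.T, axis=0)  # one H-dimensional vector

# Decompress: unbinding with key c returns column c plus crosstalk
W_hat = (keys * compressed).T            # crosstalk from the other C-1 columns

err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
print(f"relative reconstruction error: {err:.2f}")  # clearly lossy
```

The reconstruction noise grows with the number of superposed columns, which is why the scheme is lossy; the reported experiments indicate that downstream classification tolerates this noise.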
Few-shot Federated Learning in Randomized Neural Networks via Hyperdimensional Computing
The recent interest in federated learning has initiated the investigation of efficient models deployable in scenarios with strict communication and computational constraints. Furthermore, the inherent privacy concerns in decentralized and federated learning call for the efficient distribution of information in a network of interconnected agents. We therefore propose a novel distributed classification solution based on shallow randomized networks equipped with a compression mechanism for sharing the local model in the federated context. We make extensive use of hyperdimensional computing both in the local network model and in the compressed communication protocol, which is enabled by the binding and superposition operations. The accuracy, precision, and stability of our proposed approach are demonstrated on a collection of datasets with several network topologies and for different data partitioning schemes.
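For concreteness, a minimal sketch of a shallow randomized network of the kind used as the local model: a fixed random hidden expansion, reproducible across agents from a shared seed, with only the readout trained in closed form. The layer sizes, activation, and ridge penalty are illustrative assumptions.

```python
import numpy as np

class RandomizedNet:
    """Shallow randomized network: fixed random features + ridge readout."""
    def __init__(self, in_dim, hidden=500, reg=1e-3, seed=0):
        rng = np.random.default_rng(seed)        # same seed on every agent
        self.Win = rng.normal(size=(in_dim, hidden))
        self.b = rng.normal(size=hidden)
        self.reg = reg

    def _features(self, X):
        return np.tanh(X @ self.Win + self.b)    # untrained random expansion

    def fit(self, X, Y):                         # Y: one-hot labels
        Hm = self._features(X)
        A = Hm.T @ Hm + self.reg * np.eye(Hm.shape[1])
        self.Wout = np.linalg.solve(A, Hm.T @ Y) # closed-form least squares
        return self

    def predict(self, X):
        return self._features(X) @ self.Wout

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 20))
Y = np.eye(3)[rng.integers(0, 3, size=100)]      # toy one-hot labels
net = RandomizedNet(in_dim=20).fit(X, Y)
print(net.predict(X[:2]).argmax(axis=1))
```

Because the random expansion is shared, only the readout weights need to travel between agents, and they can be compressed with the binding/superposition mechanism sketched above.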
Imitation of honey bees' concept learning processes using Vector Symbolic Architectures
This article presents a proof-of-concept validation of the use of Vector Symbolic Architectures as the central component of an online learning architecture. It is demonstrated that Vector Symbolic Architectures enable the structured combination of features/relations detected by a perceptual circuitry and allow such relations to be applied to novel structures without the massive training needed for classical neural networks that depend on trainable connections. The system is showcased through the functional imitation of concept learning in honey bees. Data from real-world experiments with honey bees (Avarguès-Weber et al., 2012) are used for benchmarking. It is demonstrated that the proposed pipeline exhibits a learning curve and generalization accuracy similar to those observed in living bees. The main claim of this article is that there is a class of simple artificial systems that reproduce the learning behaviors of certain living organisms without requiring computationally intensive cognitive architectures. Consequently, it is possible in some cases to implement rather advanced cognitive behavior using simple techniques.
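The claimed mechanism can be illustrated with a hedged sketch using hypothetical stand-ins: a concept such as "a circle above some other shape" is encoded by binding role vectors to percept vectors, and bundling a few positive examples yields a prototype that transfers to scenes with a novel lower shape, with no trainable connections anywhere.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(3)
def hv():
    return rng.choice([-1, 1], size=D)

ABOVE, BELOW = hv(), hv()                # fixed role vectors
shapes = {s: hv() for s in ["circle", "square", "star", "cross"]}

def encode(top, bottom):
    return ABOVE * top + BELOW * bottom  # scene as role-filler bindings

# "Training": bundle two scenes that share the target concept (circle on top)
proto = (encode(shapes["circle"], shapes["square"])
         + encode(shapes["circle"], shapes["star"]))

# "Transfer": a never-seen lower shape; no retraining involved
novel = encode(shapes["circle"], shapes["cross"])
other = encode(shapes["square"], shapes["cross"])

def sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(sim(proto, novel))  # clearly positive: the concept generalizes
print(sim(proto, other))  # near zero: the concept is violated
```

The learned "concept" here is just a superposition of encoded examples, which is why so little code and memory suffice; the article's benchmark against the bee data additionally involves the perceptual circuitry that produces the feature vectors.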