178 research outputs found
Pattern classification using a linear associative memory
Pattern classification is an important image processing task. A typical pattern classification algorithm can be broken into two parts: first, the pattern features are extracted and, second, these features are compared with a stored set of reference features until a match is found. In the second part, usually one of several clustering algorithms or similarity measures is applied. In this paper, a new application of linear associative memory (LAM) to pattern classification problems is introduced. Here, the clustering algorithms or similarity measures are replaced by a LAM matrix multiplication. With a LAM, the reference features need not be separately stored. Since the second part of most classification algorithms is similar, a LAM standardizes the many clustering algorithms and also allows for a standard digital hardware implementation. Computer simulations on regular textures using a feature extraction algorithm achieved a high percentage of successful classification. In addition, this classification is independent of topological transformations.
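The abstract's key move, replacing a similarity search with a single matrix multiplication, can be sketched as follows. This is our illustrative construction (pseudoinverse-based LAM, made-up feature data), not the paper's exact implementation:

```python
import numpy as np

# Hypothetical sketch of a linear associative memory (LAM) classifier.
# Feature vectors and class codes below are invented for illustration.
rng = np.random.default_rng(0)

n_classes, n_features = 4, 16
F = rng.normal(size=(n_features, n_classes))  # one reference feature vector per class (columns)
C = np.eye(n_classes)                         # one-hot class codes

# Build the LAM matrix so that M @ F ~= C (least squares via pseudoinverse).
# The reference features themselves need not be stored once M is built.
M = C @ np.linalg.pinv(F)

# Classification is one matrix multiplication plus argmax -- no clustering
# algorithm or pairwise similarity search over stored references.
query = F[:, 2] + 0.05 * rng.normal(size=n_features)  # noisy class-2 pattern
predicted = int(np.argmax(M @ query))
print(predicted)  # -> 2
```

Because the same multiply-and-argmax step serves any feature extractor, this is what lets a LAM "standardize" the second stage of many classification pipelines.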
Task-Projected Hyperdimensional Computing for Multi-Task Learning
Brain-inspired Hyperdimensional (HD) computing is an emerging technique for
cognitive tasks in the field of low-power design. As a fast-learning and
energy-efficient computational paradigm, HD computing has shown great success
in many real-world applications. However, an HD model incrementally trained on
multiple tasks suffers from the negative impacts of catastrophic forgetting.
The model forgets the knowledge learned from previous tasks and only focuses on
the current one. To the best of our knowledge, no study has been conducted to
investigate the feasibility of applying multi-task learning to HD computing. In
this paper, we propose Task-Projected Hyperdimensional Computing (TP-HDC) to
make the HD model simultaneously support multiple tasks by exploiting the
redundant dimensionality in the hyperspace. To mitigate the interferences
between different tasks, we project each task into a separate subspace for
learning. Compared with the baseline method, our approach efficiently utilizes
the unused capacity in the hyperspace and shows a 12.8% improvement in averaged
accuracy with negligible memory overhead.Comment: To be published in 16th International Conference on Artificial
Intelligence Applications and Innovation
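The task-projection idea can be illustrated with a toy construction. The details below (a fixed random sign-flip mask per task) are our assumption of one plausible projection scheme, not the paper's exact method:

```python
import numpy as np

# Illustrative sketch: each task gets its own deterministic bipolar mask, so
# hypervectors learned under different tasks land in quasi-orthogonal
# subspaces of the same hyperspace, reducing cross-task interference.
D = 10_000  # hypervector dimensionality

def task_mask(task_id):
    # Fixed random sign-flip mask per task (our assumed projection).
    return np.random.default_rng(task_id).choice([-1, 1], size=D)

def encode(item_id, task_id):
    # Toy encoder: a random bipolar hypervector per item, projected per task.
    hv = np.random.default_rng(1000 + item_id).choice([-1, 1], size=D)
    return hv * task_mask(task_id)

p_task0 = encode(7, task_id=0)  # prototype learned under task 0
p_task1 = encode(7, task_id=1)  # the same item learned under task 1

# Within a task, re-encoding matches the prototype exactly; across tasks,
# the projections make the vectors nearly orthogonal (similarity ~ 0).
sim_same = np.dot(p_task0, encode(7, 0)) / D
sim_cross = np.dot(p_task0, p_task1) / D
```

The near-zero cross-task similarity is what lets one HD model hold several tasks at once: each task reads and writes its own subspace of the shared hypervector memory.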
Lifelong Learning of Spatiotemporal Representations with Dual-Memory Recurrent Self-Organization
Artificial autonomous agents and robots interacting in complex environments
are required to continually acquire and fine-tune knowledge over sustained
periods of time. The ability to learn from continuous streams of information is
referred to as lifelong learning and represents a long-standing challenge for
neural network models due to catastrophic forgetting. Computational models of
lifelong learning typically alleviate catastrophic forgetting in experimental
scenarios with given datasets of static images and limited complexity, thereby
differing significantly from the conditions artificial agents are exposed to.
In more natural settings, sequential information may become progressively
available over time and access to previous experience may be restricted. In
this paper, we propose a dual-memory self-organizing architecture for lifelong
learning scenarios. The architecture comprises two growing recurrent networks
with the complementary tasks of learning object instances (episodic memory) and
categories (semantic memory). Both growing networks can expand in response to
novel sensory experience: the episodic memory learns fine-grained
spatiotemporal representations of object instances in an unsupervised fashion
while the semantic memory uses task-relevant signals to regulate structural
plasticity levels and develop more compact representations from episodic
experience. For the consolidation of knowledge in the absence of external
sensory input, the episodic memory periodically replays trajectories of neural
reactivations. We evaluate the proposed model on the CORe50 benchmark dataset
for continuous object recognition, showing that we significantly outperform
current methods of lifelong learning in three different incremental learning
scenarios.
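The consolidation mechanism described above, episodic memory replaying stored trajectories in the absence of external input, can be sketched minimally. The class and buffer policy below are our simplification, not the paper's growing recurrent networks:

```python
import random

# Hedged sketch of replay-based consolidation (our construction): an episodic
# store keeps recent trajectories and periodically replays them, so the
# semantic learner revisits old experience while training on new input.
class EpisodicMemory:
    def __init__(self, capacity=100):
        self.trajectories = []
        self.capacity = capacity

    def store(self, trajectory):
        if len(self.trajectories) >= self.capacity:
            self.trajectories.pop(0)  # drop the oldest trajectory
        self.trajectories.append(trajectory)

    def replay(self, k=2):
        # Sample stored trajectories to interleave with new training data.
        k = min(k, len(self.trajectories))
        return random.sample(self.trajectories, k)

mem = EpisodicMemory()
mem.store([("cup", 0.1), ("cup", 0.2)])   # trajectory of (instance, timestamp)
mem.store([("ball", 0.3)])

batch = [("pen", 0.5)]            # new sensory input
for old in mem.replay(k=1):       # consolidation: mix in replayed experience
    batch.extend(old)
```

Interleaving replayed trajectories with new input is the standard way such dual-memory models counter catastrophic forgetting when access to the original data stream is restricted.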
What is the functional role of adult neurogenesis in the hippocampus?
The dentate gyrus is part of the hippocampal memory system and special in
that it generates new neurons throughout life. Here we discuss the
question of what the functional role of these new neurons might be. Our
hypothesis is that they help the dentate gyrus to avoid the problem of
catastrophic interference when adapting to new environments. We assume
that old neurons are rather stable and preserve an optimal encoding
learned for known environments while new neurons are plastic to adapt to
those features that are qualitatively new in a new environment. A simple
network simulation demonstrates that adding new plastic neurons is indeed
a successful strategy for adaptation without catastrophic interference.
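The strategy of freezing mature neurons and letting only newly added plastic ones adapt can be demonstrated with a small linear example. This is our toy reconstruction of the idea, not the paper's actual simulation:

```python
import numpy as np

# Toy version of the "add plastic neurons, freeze old ones" strategy: old
# weights encode a known environment and are never updated; new units alone
# absorb the qualitatively new structure, so old behavior is preserved.
rng = np.random.default_rng(0)

# Known environment: fit the "old" readout weights once, then freeze them.
X_old = rng.normal(size=(50, 8))
w_true_old = rng.normal(size=8)
w_old, _, _, _ = np.linalg.lstsq(X_old, X_old @ w_true_old, rcond=None)

# New environment: same old features plus three qualitatively new ones.
X_new_old_part = rng.normal(size=(50, 8))
X_new_feat = rng.normal(size=(50, 3))
y_new = X_new_old_part @ w_old + X_new_feat @ np.array([1.0, -2.0, 0.5])

# Train ONLY the new units' weights, on the residual the frozen weights leave.
residual = y_new - X_new_old_part @ w_old
w_new, _, _, _ = np.linalg.lstsq(X_new_feat, residual, rcond=None)
```

Because `w_old` is never touched, performance on the old environment cannot degrade, which is exactly the no-catastrophic-interference property the abstract argues for.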
Coding and learning of chemosensor array patterns in a neurodynamic model of the olfactory system
Arrays of broadly selective chemical sensors, also known as electronic noses, have been developed during the past two decades as a low-cost and high-throughput alternative to analytical instruments for the measurement of odorant chemicals. Signal processing in these gas-sensor arrays has been traditionally performed by means of statistical and neural pattern recognition techniques. The objective of this dissertation is to develop new computational models to process gas sensor array signals inspired by coding and learning mechanisms of the biological olfactory system. We have used a neurodynamic model of the olfactory system, the KIII, to develop and demonstrate four odor processing computational functions: robust recovery of overlapping patterns, contrast enhancement, background suppression, and novelty detection. First, a coding mechanism based on the synchrony of neural oscillations is used to extract information from the associative memory of the KIII model. This temporal code allows the KIII to recall overlapping patterns in a robust manner. Second, a new learning rule that combines Hebbian and anti-Hebbian terms is proposed. This learning rule is shown to achieve contrast enhancement on gas-sensor array patterns. Third, a new local learning mechanism based on habituation is proposed to perform odor background suppression. Combining the Hebbian/anti-Hebbian rule and the local habituation mechanism, the KIII is able to suppress the response to continuously presented odors, facilitating the detection of new ones. Finally, a new learning mechanism based on anti-Hebbian learning is proposed to perform novelty detection. This learning mechanism allows the KIII to detect the introduction of new odors even in the presence of strong backgrounds. The four computational models are characterized with synthetic data and validated on gas sensor array patterns obtained from an e-nose prototype developed for this purpose.
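A combined Hebbian/anti-Hebbian update of the general kind the abstract mentions can be sketched schematically. The specific form and learning rates below are our illustrative choice; the dissertation's rule for the KIII model will differ:

```python
import numpy as np

# Schematic combined Hebbian/anti-Hebbian weight update (our assumed form):
# the Hebbian term strengthens input-output co-activation, while the
# anti-Hebbian term decorrelates outputs, which is what yields contrast
# enhancement between similar sensor-array patterns.
def hebb_antihebb_step(W, x, eta=0.01, mu=0.01):
    y = W @ x
    dW = eta * np.outer(y, x)        # Hebbian: reinforce co-activation
    dW -= mu * np.outer(y, y) @ W    # anti-Hebbian: suppress output correlations
    return W + dW

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 6))   # 4 output units, 6 sensor channels
for _ in range(10):
    x = rng.normal(size=6)               # synthetic sensor-array pattern
    W = hebb_antihebb_step(W, x)
```

With small learning rates the anti-Hebbian term acts as a stabilizing decay, keeping the weights bounded while pushing the output units toward responding to distinct pattern components.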