Lifelong Learning of Spatiotemporal Representations with Dual-Memory Recurrent Self-Organization
Artificial autonomous agents and robots interacting in complex environments are required to continually acquire and fine-tune knowledge over sustained periods of time. The ability to learn from continuous streams of information is referred to as lifelong learning and represents a long-standing challenge for neural network models due to catastrophic forgetting. Computational models of lifelong learning typically alleviate catastrophic forgetting in experimental scenarios with given datasets of static images and limited complexity, thereby differing significantly from the conditions artificial agents are exposed to. In more natural settings, sequential information may become progressively available over time and access to previous experience may be restricted. In this paper, we propose a dual-memory self-organizing architecture for lifelong learning scenarios. The architecture comprises two growing recurrent networks with the complementary tasks of learning object instances (episodic memory) and categories (semantic memory). Both growing networks can expand in response to novel sensory experience: the episodic memory learns fine-grained spatiotemporal representations of object instances in an unsupervised fashion, while the semantic memory uses task-relevant signals to regulate structural plasticity levels and develop more compact representations from episodic experience. For the consolidation of knowledge in the absence of external sensory input, the episodic memory periodically replays trajectories of neural reactivations. We evaluate the proposed model on the CORe50 benchmark dataset for continuous object recognition, showing that we significantly outperform current methods of lifelong learning in three different incremental learning scenarios.
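The growing-memory idea described above, where a network inserts new units in response to novel sensory experience and adapts existing units for familiar inputs, can be illustrated with a simplified Grow-When-Required-style rule. This is an illustrative toy, not the paper's architecture: the activation threshold, the midpoint insertion rule, and the `grow_when_required` name are all assumptions made here for the sketch.

```python
import numpy as np

def grow_when_required(data, act_thresh=0.85, lr=0.1):
    """Grow a set of prototype units: insert a new unit whenever the
    best-matching unit's activation for the current input is too low
    (a simplified Grow-When-Required-style rule, for illustration only)."""
    units = [data[0].copy(), data[1].copy()]   # start with two units
    for x in data:
        d = np.linalg.norm(np.array(units) - x, axis=1)
        b = int(np.argmin(d))                  # best-matching unit
        activation = np.exp(-d[b])             # activation of the best unit
        if activation < act_thresh:
            # Novel input: insert a new unit between input and winner.
            units.append((x + units[b]) / 2.0)
        else:
            # Familiar input: adapt the winner toward the input.
            units[b] += lr * (x - units[b])
    return np.array(units)
```

The network's size is thus driven by the data rather than fixed in advance: unfamiliar inputs grow the memory, familiar ones refine it.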
Learning Community Group Concept Mapping: Fall 2014 Outreach and Recruitment, Spring 2015 Case Management and Service Delivery. Final Reports
Beginning in 2014, the Federal Government provided funding to New York State as part of an initiative to improve services that lead to sustainable outcomes for youth receiving Supplemental Security Income (SSI) benefits. As part of the NYS PROMISE initiative, Concept Systems, Inc. worked with the Learning Community to develop learning needs frameworks using the Group Concept Mapping (GCM) methodology. This GCM project gathers, aggregates, and integrates the specific knowledge and opinions of the Learning Community members and allows for their guidance and involvement in supporting NYS PROMISE as a viable community of practice. This work also increases the responsiveness of NYS PROMISE to the Learning Community members’ needs by inspiring discussion during the semi-annual in-person meetings. As of the end of year two, two GCM projects have been completed with the PROMISE Learning Community. These projects focused on Outreach and Recruitment and on Case Management and Service Delivery. This report discusses the data collection method and participation in both GCM projects, and provides graphics, statistical reports, and a summary of the analysis. In this report we refer to the Fall 2014 project as Project 1 and the Spring 2015 project as Project 2.
Reducing Catastrophic Forgetting in Self-Organizing Maps
An agent that is capable of continual or lifelong learning is able to continuously learn from potentially infinite streams of sensory pattern data. One major historic difficulty in building agents capable of such learning is that neural systems struggle to retain previously acquired knowledge when learning from new data samples. This problem is known as catastrophic forgetting and remains unsolved in the domain of machine learning to this day. To overcome catastrophic forgetting, different approaches have been proposed. One major line of thought advocates the use of memory buffers to store data; the stored samples are then randomly replayed to retrain the model and improve memory retention. However, storing and giving access to previous physical data points results in a variety of practical difficulties, particularly with respect to growing memory storage costs. In this work, we propose an alternative way to tackle the problem of catastrophic forgetting, inspired by and building on a classical neural model, the self-organizing map (SOM), which is a form of unsupervised clustering. Although the SOM has the potential to combat forgetting through the use of pattern-specializing units, we uncover that it too suffers from the same problem, and this forgetting becomes worse when the SOM is trained in a task-incremental fashion. To mitigate this, we propose a generalization of the SOM, the continual SOM (c-SOM), which introduces several novel mechanisms to improve its memory retention: new decay functions and generative resampling schemes that facilitate generative replay in the model. We perform extensive experiments using split-MNIST with these approaches, demonstrating that the c-SOM significantly improves over the classical SOM. Additionally, we introduce a new performance metric, alpha_mem, to measure the efficacy of SOMs trained in a task-incremental fashion, providing a benchmark for other competitive learning models.
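The classical SOM that the c-SOM generalizes combines a best-matching-unit search with a Gaussian neighborhood over the map grid and decaying learning rate and radius. Below is a minimal sketch of the standard online SOM update rule; the function name, grid size, and linear decay schedules are illustrative choices made here, not the paper's c-SOM mechanisms.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=10, lr0=0.5, sigma0=2.0, seed=0):
    """Train a 2-D self-organizing map with the classic online update rule."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h * w, data.shape[1]))
    # Grid coordinates of each unit, used by the neighborhood function.
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1.0 - t / n_steps)              # learning-rate decay
            sigma = sigma0 * (1.0 - t / n_steps) + 0.5  # neighborhood decay
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)     # squared grid distance
            nbh = np.exp(-d2 / (2 * sigma ** 2))               # Gaussian neighborhood
            weights += lr * nbh[:, None] * (x - weights)       # pull units toward x
            t += 1
    return weights
```

The decay schedules are exactly where a continual variant can intervene: because every unit near the winner is dragged toward each new input, later tasks overwrite earlier specializations unless the decay or replay mechanism protects them.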
The Collective Building of Knowledge in Collaborative Learning Environments
The intention of this chapter is to investigate how collaborative learning environments (CLEs) can be used to elicit the collective building of knowledge. This work discusses CLEs as lively cognitive systems and looks at some strategies that might contribute to the improvement of significant pedagogical practices. The study is supported by rhizome principles, whose characteristics allow us to understand the process of selecting and connecting what is relevant and meaningful for the collective building of knowledge. A brief theoretical and conceptual approach is presented, and major contributions and difficulties of collaborative learning environments are discussed. New questions and future trends about the collective building of knowledge are suggested.
START: A Bridge between Emotion Theory and Neurobiology through Dynamic System Modeling
Lewis proposes a "reconceptualization" (p. 1) of how to link the psychology and neurobiology of emotion and cognitive-emotional interactions. His main proposed themes have actually been actively and quantitatively developed in the neural modeling literature for over thirty years. This commentary summarizes some of these themes and points to areas of particularly active research in this area.
Methods of Hierarchical Clustering
We survey agglomerative hierarchical clustering algorithms and discuss efficient implementations that are available in R and other software environments. We look at hierarchical self-organizing maps and mixture models. We review grid-based clustering, focusing on hierarchical density-based approaches. Finally, we describe a recently developed, very efficient (linear time) hierarchical clustering algorithm, which can also be viewed as a hierarchical grid-based algorithm.
Comment: 21 pages, 2 figures, 1 table, 69 references