Lifelong Learning of Spatiotemporal Representations with Dual-Memory Recurrent Self-Organization
Artificial autonomous agents and robots interacting in complex environments
are required to continually acquire and fine-tune knowledge over sustained
periods of time. The ability to learn from continuous streams of information is
referred to as lifelong learning and represents a long-standing challenge for
neural network models due to catastrophic forgetting. Computational models of
lifelong learning typically alleviate catastrophic forgetting in experimental
scenarios with given datasets of static images and limited complexity, thereby
differing significantly from the conditions artificial agents are exposed to.
In more natural settings, sequential information may become progressively
available over time and access to previous experience may be restricted. In
this paper, we propose a dual-memory self-organizing architecture for lifelong
learning scenarios. The architecture comprises two growing recurrent networks
with the complementary tasks of learning object instances (episodic memory) and
categories (semantic memory). Both growing networks can expand in response to
novel sensory experience: the episodic memory learns fine-grained
spatiotemporal representations of object instances in an unsupervised fashion
while the semantic memory uses task-relevant signals to regulate structural
plasticity levels and develop more compact representations from episodic
experience. For the consolidation of knowledge in the absence of external
sensory input, the episodic memory periodically replays trajectories of neural
reactivations. We evaluate the proposed model on the CORe50 benchmark dataset
for continuous object recognition, showing that we significantly outperform
current methods of lifelong learning in three different incremental learning
scenarios.
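The growth mechanism described above can be illustrated with a minimal sketch. This is an assumed simplification, not the paper's actual dual-memory architecture: the `insert_threshold` parameter, the exponential activity measure, and the `replay` generator are all illustrative choices standing in for the growing recurrent networks and neural reactivation trajectories.

```python
# Minimal growing self-organizing memory (illustrative simplification):
# a node is inserted when no existing prototype matches the input well,
# and stored prototypes can later be replayed as pseudo-inputs in place
# of external sensory data.
import numpy as np

class GrowingMemory:
    def __init__(self, dim, insert_threshold=0.5, lr=0.1):
        self.nodes = np.empty((0, dim))       # prototype vectors
        self.insert_threshold = insert_threshold
        self.lr = lr

    def observe(self, x):
        x = np.asarray(x, dtype=float)
        if len(self.nodes) == 0:
            self.nodes = x[None, :].copy()
            return 0
        dists = np.linalg.norm(self.nodes - x, axis=1)
        bmu = int(np.argmin(dists))           # best-matching unit
        activity = np.exp(-dists[bmu])        # 1.0 for a perfect match
        if activity < self.insert_threshold:  # novel input -> grow
            self.nodes = np.vstack([self.nodes, x])
            return len(self.nodes) - 1
        # familiar input -> adapt the best-matching prototype
        self.nodes[bmu] += self.lr * (x - self.nodes[bmu])
        return bmu

    def replay(self):
        """Yield stored prototypes as pseudo-inputs (replayed reactivations)."""
        yield from self.nodes.copy()

mem = GrowingMemory(dim=2)
for x in [[0, 0], [0.05, 0], [5, 5], [5, 5.1]]:
    mem.observe(x)
print(len(mem.nodes))  # -> 2 (two input clusters, two prototypes)
```

The key property is that network capacity expands only in response to novelty, so old prototypes are never overwritten by unrelated new input.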
CORe50: a New Dataset and Benchmark for Continuous Object Recognition
Continuous/Lifelong learning of high-dimensional data streams is a
challenging research problem. In fact, fully retraining models each time new
data become available is infeasible, due to computational and storage issues,
while naïve incremental strategies have been shown to suffer from
catastrophic forgetting. In the context of real-world object recognition
applications (e.g., robotic vision), where continuous learning is crucial, very
few datasets and benchmarks are available to evaluate and compare emerging
techniques. In this work, we propose CORe50, a new dataset and benchmark
specifically designed for continuous object recognition, and introduce
baseline approaches for different continuous learning scenarios.
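The failure mode of the naïve incremental strategies mentioned above can be demonstrated in a few lines. The following is an illustrative sketch, not one of the CORe50 baselines: a logistic classifier is trained on a task A, then fine-tuned on a disjoint task B alone, after which its accuracy on task A collapses.

```python
# Catastrophic forgetting under naive sequential fine-tuning
# (toy 1-D logistic regression; illustrative only).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, b, xs, ys, lr=0.1, epochs=5000):
    for _ in range(epochs):
        p = sigmoid(w * xs + b)
        w -= lr * np.mean((p - ys) * xs)
        b -= lr * np.mean(p - ys)
    return w, b

def accuracy(w, b, xs, ys):
    return np.mean((sigmoid(w * xs + b) > 0.5) == ys)

task_a = (np.array([-1.0, 1.0]), np.array([0.0, 1.0]))  # boundary near x=0
task_b = (np.array([3.0, 5.0]), np.array([0.0, 1.0]))   # boundary near x=4

w, b = train(0.0, 0.0, *task_a)
acc_before = accuracy(w, b, *task_a)      # task A learned
w, b = train(w, b, *task_b)               # naive fine-tuning on task B only
acc_after = accuracy(w, b, *task_a)       # task A forgotten
print(acc_before, acc_after)              # -> 1.0 0.5
```

Fine-tuning on task B moves the decision boundary from x≈0 to x≈4, so task A's positive example is now misclassified: exactly the forgetting that fully retraining on all data would avoid, at a cost that makes it infeasible for high-dimensional streams.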
Short-term plasticity as cause-effect hypothesis testing in distal reward learning
Asynchrony, overlaps and delays in sensory-motor signals introduce ambiguity
as to which stimuli, actions, and rewards are causally related. Only the
repetition of reward episodes helps distinguish true cause-effect relationships
from coincidental occurrences. In the model proposed here, a novel plasticity
rule employs short and long-term changes to evaluate hypotheses on cause-effect
relationships. Transient weights represent hypotheses that are consolidated in
long-term memory only when they consistently predict or cause future rewards.
The main objective of the model is to preserve existing network topologies when
learning with ambiguous information flows. Learning is also improved by biasing
the exploration of the stimulus-response space towards actions that in the past
occurred before rewards. The model indicates under which conditions beliefs can
be consolidated in long-term memory, it suggests a solution to the
plasticity-stability dilemma, and proposes an interpretation of the role of
short-term plasticity.
Comment: Biological Cybernetics, September 201
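The transient-versus-consolidated weight idea can be sketched as a toy rule. This is a hypothetical simplification, not the paper's actual equations: the `hits` counter and `confirmations` threshold are assumptions standing in for the model's consolidation criterion.

```python
# Toy short-term plasticity as cause-effect hypothesis testing:
# each connection holds a fast "transient" weight (a hypothesis) and a
# slow "consolidated" weight (long-term memory). A hypothesis is
# consolidated only after repeatedly preceding reward; a single
# unrewarded activation falsifies it.
class Connection:
    def __init__(self):
        self.transient = 0.0     # short-term hypothesis
        self.consolidated = 0.0  # long-term memory
        self.hits = 0            # consecutive reward confirmations

    def tag(self):
        self.transient = 1.0     # activity before a possible reward

    def step(self, rewarded, confirmations=3):
        if self.transient > 0:
            if rewarded:
                self.hits += 1
                if self.hits >= confirmations:   # consistently predictive
                    self.consolidated += self.transient
            else:
                self.hits = 0                    # hypothesis falsified
            self.transient = 0.0                 # short-term trace expires

# Reward is actually caused by the "causal" action; the "coincidental"
# stimulus merely co-occurs with reward on some trials.
trials = [(True, True), (True, False), (True, True), (False, True),
          (True, False), (True, True), (False, False), (True, True)]
causal, coincidental = Connection(), Connection()
for act_causal, act_coinc in trials:
    reward = act_causal
    if act_causal:
        causal.tag()
    if act_coinc:
        coincidental.tag()
    causal.step(reward)
    coincidental.step(reward)
print(causal.consolidated, coincidental.consolidated)  # -> 4.0 0.0
```

Only the connection whose activity consistently precedes reward reaches long-term memory; the coincidental one is repeatedly falsified before the confirmation threshold, so the existing topology is preserved.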
A Fusion of Variational Distribution Priors and Saliency Map Replay for Continual 3D Reconstruction
Single-image 3D reconstruction is a research challenge focused on predicting
3D object shapes from single-view images. This task requires significant data
acquisition to predict both visible and occluded portions of the shape.
Furthermore, learning-based methods face the difficulty of creating a
comprehensive training dataset for all possible classes. To this end, we
propose a continual learning-based 3D reconstruction method where our goal is
to design a model using Variational Priors that can still reconstruct the
previously seen classes reasonably even after training on new classes.
Variational Priors represent abstract shapes and combat forgetting, whereas
saliency maps preserve object attributes with less memory usage. This is vital
due to resource constraints in storing extensive training data. Additionally,
we introduce saliency map-based experience replay to capture global and
distinct object features. Thorough experiments show competitive results
compared to established methods, both quantitatively and qualitatively.
Comment: 15 pages
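The memory argument for saliency-map-based replay can be made concrete with a small sketch. This is a hedged illustration, not the paper's method: the top-k mask over a given saliency map is an assumed compression scheme.

```python
# Saliency-masked replay storage (illustrative): only the most salient
# pixels of a past example are kept, cutting replay-buffer memory, and a
# sparse pseudo-image is rebuilt at replay time.
import numpy as np

def compress(image, saliency, keep=0.1):
    """Keep only the top `keep` fraction of pixels by saliency."""
    flat = saliency.ravel()
    k = max(1, int(keep * flat.size))
    idx = np.argpartition(flat, -k)[-k:]   # indices of most salient pixels
    return idx, image.ravel()[idx], image.shape

def decompress(idx, values, shape):
    """Rebuild a sparse pseudo-image for experience replay."""
    out = np.zeros(int(np.prod(shape)))
    out[idx] = values
    return out.reshape(shape)

rng = np.random.default_rng(0)
img = rng.random((8, 8))
sal = np.zeros((8, 8))
sal[2:4, 2:4] = 1.0                        # assumed saliency: object region
buf = compress(img, sal, keep=0.1)         # stores 6 of 64 pixel values
replayed = decompress(*buf)
print(np.count_nonzero(replayed))          # -> 6
```

Storing 6 values plus indices instead of 64 pixels is the kind of reduction that matters when the replay buffer must fit within tight resource constraints.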
Continual Learning with Deep Architectures
Humans have the extraordinary ability to learn continually from experience. Not only can we apply previously learned knowledge and skills to new situations, but we can also use them as the foundation for later learning. One of the grand goals of Artificial Intelligence (AI) is building an artificial “continual learning” agent that constructs a sophisticated understanding of the world from its own experience through the autonomous incremental development of ever more complex knowledge and skills.
However, despite early speculations and a few pioneering works, very little research effort has been devoted to addressing this vision. Current AI systems suffer greatly from exposure to new data or environments that differ even slightly from those they were trained on. Moreover, the learning process is usually constrained to fixed datasets within narrow and isolated tasks, which can hardly lead to the emergence of more complex and autonomous intelligent behaviors. In essence, continual learning and adaptation capabilities, though more often than not regarded as fundamental pillars of every intelligent agent, have been largely left out of the main AI research focus.
In this dissertation, we study the application of these ideas in light of more recent advances in machine learning research and in the context of deep architectures for AI. We propose a comprehensive and unifying framework for continual learning, along with new metrics, benchmarks, and algorithms, and provide substantial experimental evaluations across different supervised, unsupervised, and reinforcement learning tasks.