An Introduction to Lifelong Supervised Learning
This primer is an attempt to provide a detailed summary of the different
facets of lifelong learning. We start with Chapter 2, which provides a
high-level overview of lifelong learning systems. In this chapter, we discuss
prominent scenarios in lifelong learning (Section 2.4), provide
a high-level organization of different lifelong learning approaches (Section
2.5), enumerate the desiderata for an ideal lifelong learning system (Section
2.6), discuss how lifelong learning is related to other learning paradigms
(Section 2.7), and describe common metrics used to evaluate lifelong learning
systems (Section 2.8). This chapter is more useful for readers who are new to
lifelong learning and want to get introduced to the field without focusing on
specific approaches or benchmarks. The remaining chapters focus on specific
aspects (either learning algorithms or benchmarks) and are more useful for
readers who are looking for specific approaches or benchmarks. Chapter 3
focuses on regularization-based approaches that do not assume access to any
data from previous tasks. Chapter 4 discusses memory-based approaches that
typically use a replay buffer or an episodic memory to save a subset of data
across different tasks. Chapter 5 focuses on different architecture families
(and their instantiations) that have been proposed for training lifelong
learning systems. Following these different classes of learning algorithms, we
discuss the commonly used evaluation benchmarks and metrics for lifelong
learning (Chapter 6) and wrap up with a discussion of future challenges and
important research directions in Chapter 7.
Comment: Lifelong Learning Primer
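The memory-based approaches the primer covers in Chapter 4 typically maintain a small episodic memory over the task stream. A minimal sketch of one common variant, a fixed-capacity buffer filled by reservoir sampling so that every example seen so far is equally likely to be retained (the class name and capacity are illustrative, not from the primer):

```python
import random

class ReplayBuffer:
    """Fixed-size episodic memory filled by reservoir sampling, so every
    example in the stream has equal probability of being kept."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Replace a stored example with probability capacity / n_seen.
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        # Draw a rehearsal batch to mix with the current task's batch.
        return self.rng.sample(self.data, min(k, len(self.data)))
```

During training, examples from each incoming task are pushed through `add`, and each gradient step on the current task is interleaved with a rehearsal batch drawn via `sample`, which is what mitigates forgetting in these methods.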
Continual Reinforcement Learning in 3D Non-stationary Environments
High-dimensional always-changing environments constitute a hard challenge for
current reinforcement learning techniques. Artificial agents, nowadays, are
often trained off-line in static, controlled simulated conditions, such
that training observations can be thought of as sampled i.i.d. from the
entire observation space. However, in real-world settings, the environment is
often non-stationary and subject to unpredictable, frequent changes. In this
paper we propose and openly release CRLMaze, a new benchmark for learning
continually through reinforcement in a complex 3D non-stationary task based on
ViZDoom and subject to several environmental changes. Then, we introduce an
end-to-end, model-free continual reinforcement learning strategy that shows
competitive results against four different baselines without requiring
access to additional supervised signals, previously encountered
environmental conditions, or observations.
Comment: Accepted in the CLVision Workshop at CVPR2020: 13 pages, 4 figures, 5 tables
Continual Lifelong Learning with Neural Networks: A Review
Humans and animals have the ability to continually acquire, fine-tune, and
transfer knowledge and skills throughout their lifespan. This ability, referred
to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms
that together contribute to the development and specialization of our
sensorimotor skills as well as to long-term memory consolidation and retrieval.
Consequently, lifelong learning capabilities are crucial for autonomous agents
interacting in the real world and processing continuous streams of information.
However, lifelong learning remains a long-standing challenge for machine
learning and neural network models since the continual acquisition of
incrementally available information from non-stationary data distributions
generally leads to catastrophic forgetting or interference. This limitation
represents a major drawback for state-of-the-art deep neural network models
that typically learn representations from stationary batches of training data,
thus without accounting for situations in which information becomes
incrementally available over time. In this review, we critically summarize the
main challenges linked to lifelong learning for artificial learning systems and
compare existing neural network approaches that alleviate, to different
extents, catastrophic forgetting. We discuss well-established and emerging
research motivated by lifelong learning factors in biological systems such as
structural plasticity, memory replay, curriculum and transfer learning,
intrinsic motivation, and multisensory integration.
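The catastrophic forgetting described above can be illustrated with a deliberately tiny sketch (not from the review): a single scalar parameter fitted by gradient descent to one task and then to another, with no access to the first task's data, ends up with large error on the first task.

```python
def sgd(w, target, steps=100, lr=0.1):
    # Minimize the squared loss (w - target)^2 by plain gradient descent.
    for _ in range(steps):
        w -= lr * 2 * (w - target)
    return w

def loss(w, target):
    return (w - target) ** 2

w = 0.0
w = sgd(w, target=1.0)         # train on task A
loss_a_before = loss(w, 1.0)   # near zero: task A is learned
w = sgd(w, target=-1.0)        # train on task B without task A's data
loss_a_after = loss(w, 1.0)    # near 4: task A is forgotten
```

The second training phase overwrites the only parameter the two tasks share, which is the same interference mechanism, at toy scale, that the replay, regularization, and structural-plasticity methods surveyed in the review are designed to counteract.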
- …