Lifelong Generative Modeling
Lifelong learning is the problem of learning multiple consecutive tasks in a
sequential manner, where knowledge gained from previous tasks is retained and
used to aid future learning over the lifetime of the learner. It is essential
to the development of intelligent machines that can adapt to their
surroundings. In this work we focus on a lifelong learning approach to
unsupervised generative modeling, where we continuously incorporate newly
observed distributions into a learned model. We do so through a student-teacher
Variational Autoencoder architecture which allows us to learn and preserve all
the distributions seen so far, without needing to retain past data or past
models. Through the introduction of a novel cross-model regularizer,
inspired by a Bayesian update rule, the student model leverages the information
learned by the teacher, which acts as a probabilistic knowledge store. The
regularizer reduces the effect of catastrophic interference that appears when
we learn over sequences of distributions. We validate our model's performance
on sequential variants of MNIST, FashionMNIST, PermutedMNIST, SVHN and Celeb-A
and demonstrate that our model mitigates the effects of catastrophic
interference faced by neural networks in sequential learning scenarios.
Comment: 32 pages
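To make the cross-model regularizer concrete, here is a minimal sketch of posterior distillation between a frozen teacher VAE encoder and a student encoder, written in PyTorch. The encoder architecture, the dimensions, and the names gaussian_kl and distillation_loss are illustrative assumptions, not the paper's implementation.

    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        # Gaussian-posterior VAE encoder: x -> (mu, logvar).
        def __init__(self, x_dim=784, z_dim=32):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
            self.mu = nn.Linear(256, z_dim)
            self.logvar = nn.Linear(256, z_dim)

        def forward(self, x):
            h = self.net(x)
            return self.mu(h), self.logvar(h)

    def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
        # KL( N(mu_q, var_q) || N(mu_p, var_p) ), summed over latent dims.
        return 0.5 * torch.sum(
            logvar_p - logvar_q
            + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
            - 1.0,
            dim=-1,
        )

    def distillation_loss(student, teacher, x_generated):
        # Cross-model term: on samples decoded by the teacher itself, pull
        # the student posterior toward the frozen teacher posterior, so the
        # distributions seen so far are preserved without storing past data.
        with torch.no_grad():
            mu_t, logvar_t = teacher(x_generated)
        mu_s, logvar_s = student(x_generated)
        return gaussian_kl(mu_s, logvar_s, mu_t, logvar_t).mean()

In training, a term like this would be added to the student's usual ELBO on the new task's data, weighted by a hyperparameter.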
Neural Topic Modeling with Continual Lifelong Learning
Lifelong learning has recently attracted attention in building machine
learning systems that continually accumulate and transfer knowledge to help
future learning. Unsupervised topic modeling has been popularly used to
discover topics from document collections. However, applying topic modeling to
sparse data, e.g., a small collection of (short) documents, is challenging and
tends to generate incoherent topics and sub-optimal document representations.
To address this problem, we propose a lifelong learning
framework for neural topic modeling that can continuously process streams of
document collections, accumulate topics and guide future topic modeling tasks
by knowledge transfer from several sources to better deal with the sparse data.
In the lifelong process, we jointly investigate: (1) sharing generative
homologies (latent topics) over the lifetime to transfer prior knowledge, and
(2) minimizing catastrophic forgetting to retain past learning via novel
selective data augmentation, co-training and topic regularization approaches.
Given a stream of document collections, we apply the
proposed Lifelong Neural Topic Modeling (LNTM) framework in modeling three
sparse document collections as future tasks and demonstrate improved
performance quantified by perplexity, topic coherence and an information
retrieval task.
Comment: ICML 2020
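As a rough illustration of the topic-regularization idea, the sketch below penalizes drift of the current topic-word vectors away from their nearest accumulated past topic. The function name and the cosine-similarity form are assumptions for illustration; the LNTM regularizers in the paper are more involved.

    import torch
    import torch.nn.functional as F

    def topic_regularizer(topics_now, topics_past, strength=0.01):
        # topics_now:  (K_now, V) current topic-word matrix
        # topics_past: (K_past, V) topics accumulated over the lifetime
        now = F.normalize(topics_now, dim=-1)
        past = F.normalize(topics_past, dim=-1)
        sims = now @ past.t()            # (K_now, K_past) cosine similarities
        best = sims.max(dim=-1).values   # closest past topic per current topic
        # Zero when every current topic exactly matches some retained topic.
        return strength * (1.0 - best).mean()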
Scalable Recollections for Continual Lifelong Learning
Given the recent success of Deep Learning applied to a variety of single
tasks, it is natural to consider more human-realistic settings. Perhaps the
most difficult of these settings is that of continual lifelong learning, where
the model must learn online over a continuous stream of non-stationary data. A
successful continual lifelong learning system must have three key capabilities:
it must learn and adapt over time, it must not forget what it has learned, and
it must be efficient in both training time and memory. Recent techniques have
focused their efforts primarily on the first two capabilities while questions
of efficiency remain largely unexplored. In this paper, we consider the problem
of efficient and effective storage of experiences over very large time-frames.
In particular, we consider the case where typical experiences are O(n) bits and
memories are limited to O(k) bits for k << n. We present a novel scalable
architecture and training algorithm in this challenging domain and provide an
extensive evaluation of its performance. Our results show that we can achieve
considerable gains on top of state-of-the-art methods such as GEM.
Comment: AAAI 2019
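The O(k)-bit constraint can be pictured with a small autoencoder-backed buffer that stores binarized codes instead of raw experiences and decodes them back into pseudo-experiences for rehearsal. This is a hypothetical sketch of the storage idea, not the paper's architecture; CodecMemory and its sizes are invented for illustration.

    import torch
    import torch.nn as nn

    class CodecMemory:
        def __init__(self, x_dim=784, code_dim=16, capacity=10_000):
            self.enc = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim), nn.Sigmoid())
            self.dec = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, x_dim))
            self.codes = []       # each entry is code_dim bits, not n raw bits
            self.capacity = capacity

        @torch.no_grad()
        def store(self, x):
            bits = (self.enc(x) > 0.5).to(torch.uint8)  # compress to k bits
            self.codes.extend(bits)
            self.codes = self.codes[-self.capacity:]    # bounded memory

        @torch.no_grad()
        def replay(self, batch_size=32):
            idx = torch.randint(len(self.codes), (batch_size,))
            bits = torch.stack([self.codes[i] for i in idx]).float()
            return self.dec(bits)  # approximate experiences for rehearsal

The encoder and decoder would be trained to reconstruct experiences well; replayed reconstructions then stand in for the raw data that a method like GEM would otherwise need to keep.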
Continual Classification Learning Using Generative Models
Continual learning is the ability to learn sequentially over time by
accommodating new knowledge while retaining previously learned experiences. Neural
networks can learn multiple tasks when trained on them jointly, but cannot
maintain performance on previously learned tasks when tasks are presented one
at a time. This problem is called catastrophic forgetting. In this work, we
propose a classification model that learns continuously from sequentially
observed tasks, while preventing catastrophic forgetting. We build on the
lifelong generative model of [10] and extend it to the classification setting
by deriving a new variational bound on the joint log likelihood, log p(x, y).
Comment: 5 pages, 4 figures, under review at the Continual Learning Workshop, NIPS 2018
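For context, one standard way such a joint bound arises, assuming a shared latent variable z with the label y conditionally independent of the input x given z (a sketch of the derivation; the paper's exact bound may differ):

    \log p(x, y)
      = \log \mathbb{E}_{p(z)}\!\left[ p(x \mid z)\, p(y \mid z) \right]
      \geq \mathbb{E}_{q(z \mid x)}\!\left[ \log p(x \mid z) + \log p(y \mid z) \right]
        - \mathrm{KL}\!\left( q(z \mid x) \,\|\, p(z) \right)

by Jensen's inequality. Under this decomposition the classifier p(y | z) shares the generative model's latent space, so a regularizer that preserves the generative latents across tasks also protects classification on earlier tasks.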
Learning Independent Causal Mechanisms
Statistical learning relies upon data sampled from a distribution, and we
usually do not care what actually generated it in the first place. From the
point of view of causal modeling, the structure of each distribution is induced
by physical mechanisms that give rise to dependences between observables.
Mechanisms, however, can be meaningful autonomous modules of generative models
that make sense beyond a particular entailed data distribution, lending
themselves to transfer between problems. We develop an algorithm to recover a
set of independent (inverse) mechanisms from a set of transformed data points.
The approach is unsupervised and based on a set of experts that compete for
data generated by the mechanisms, driving specialization. We analyze the
proposed method in a series of experiments on image data. Each expert learns to
map a subset of the transformed data back to a reference distribution. The
learned mechanisms generalize to novel domains. We discuss implications for
transfer learning and links to recent trends in generative modeling.
Comment: ICML 2018
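A minimal sketch of one competition step, assuming a critic c(.) that scores proximity to the reference distribution (in the paper the discriminator is trained adversarially alongside the experts; competition_step and its interfaces are illustrative assumptions):

    import torch

    def competition_step(experts, optimizers, critic, x_transformed):
        # Score every expert's attempt to map the transformed batch back to
        # the reference distribution; only the winner per example trains.
        with torch.no_grad():
            scores = torch.stack([critic(E(x_transformed)).squeeze(-1)
                                  for E in experts])   # (num_experts, batch)
            winners = scores.argmax(dim=0)             # winning expert index
        for j, (E, opt) in enumerate(zip(experts, optimizers)):
            mask = winners == j
            if not mask.any():
                continue                               # no wins, no update
            # Winner-take-all gradients drive each expert to specialize in
            # inverting exactly one mechanism.
            loss = -critic(E(x_transformed[mask])).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()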