Generative Continual Concept Learning
After learning a concept, humans can continually generalize it to new domains by observing only a few labeled instances, without interfering with previously learned knowledge. In contrast, learning concepts efficiently in a continual learning setting remains an open challenge for current Artificial Intelligence algorithms, as persistent model retraining is necessary. Inspired by the Parallel Distributed Processing and Complementary Learning Systems theories of learning, we develop a computational model that can efficiently expand its previously learned concepts to new domains using only a few labeled samples. We couple the new form of a concept to its previously learned forms in an embedding space to enable effective continual learning. In doing so, we learn a generative distribution that is shared across tasks in the embedding space and models the abstract concepts. This procedure enables the model to generate pseudo-data points that replay past experience, tackling catastrophic forgetting.
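As a rough illustration of the generative-replay idea the abstract describes, the sketch below pairs a VAE-style encoder and decoder over a shared latent embedding space with a pseudo-rehearsal step that samples pseudo-data and labels it with the current model. All names (Encoder, Decoder, replay_batch), the architecture, and the dimensions are hypothetical assumptions for illustration, not the paper's exact model.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps inputs into the shared embedding space where new forms of a
    concept would be coupled to its previously learned forms (assumed arch)."""
    def __init__(self, in_dim=784, z_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)      # mean of latent Gaussian
        self.logvar = nn.Linear(256, z_dim)  # log-variance of latent Gaussian

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    """Generates pseudo-data points from the shared latent distribution."""
    def __init__(self, z_dim=32, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

def replay_batch(decoder, classifier, n=64, z_dim=32):
    """Sample latents from the shared prior, decode pseudo-inputs, and label
    them with the current model (pseudo-rehearsal against forgetting)."""
    with torch.no_grad():
        z = torch.randn(n, z_dim)                       # shared generative distribution
        x_pseudo = decoder(z)                           # generated pseudo-data points
        y_pseudo = classifier(x_pseudo).argmax(dim=1)   # labels from current model
    return x_pseudo, y_pseudo

# Example usage (hypothetical stand-in classifier head):
dec = Decoder()
clf = nn.Sequential(nn.Linear(784, 10))
x_p, y_p = replay_batch(dec, clf)
```

In training, such replayed pseudo-data would presumably be mixed with the few labeled samples from the new domain, so that losses on old concepts keep the shared embedding aligned while the model adapts to the new task.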