Omnidirectional Transfer for Quasilinear Lifelong Learning
In biological learning, data are used to improve performance not only on the
current task, but also on previously encountered and as yet unencountered
tasks. In contrast, classical machine learning starts from a blank slate, or
tabula rasa, using data only for the single task at hand. While typical
transfer learning algorithms can improve performance on future tasks, their
performance on prior tasks degrades upon learning new tasks (a phenomenon
known as catastrophic forgetting). Many recent approaches to continual or
lifelong learning attempt to maintain performance on prior tasks as new
tasks arrive. But striving
to avoid forgetting sets the goal unnecessarily low: the goal of lifelong
learning, whether biological or artificial, should be to improve performance on
all tasks (including past and future) with any new data. We propose
omnidirectional transfer learning algorithms, which include two special cases
of interest: decision forests and deep networks. Our key insight is the
development of the omni-voter layer, which ensembles representations learned
independently on all tasks to jointly decide how to act on any new data
point, thereby improving performance on both past and future tasks (sketched
below). Our
algorithms demonstrate omnidirectional transfer in a variety of simulated and
real data scenarios, including tabular data, image data, spoken data, and
adversarial tasks. Moreover, they do so with quasilinear space and time
complexity.
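
Below is a minimal, illustrative sketch of the omni-voter idea for the decision-forest special case, assuming scikit-learn primitives and a single label set shared across tasks. The names (OmniVoter, add_task) and the use of one decision tree per task are hypothetical simplifications, not the paper's API or algorithm; the point is only that each task's representation is trained independently and then frozen, while the voting layer averages posteriors across all representations, so data from any task can improve predictions on every other.

```python
# Hedged sketch of an omni-voter ensemble (decision-forest special case).
# Assumptions: all tasks share one label set; each per-task representation
# is a single scikit-learn decision tree; all names are illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class OmniVoter:
    """Ensembles representations learned independently on each task."""

    def __init__(self, classes):
        self.classes = np.unique(classes)   # global, sorted label set
        self.transformers = []              # one tree per task seen so far

    def add_task(self, X, y):
        # Learn a new representation from this task's data alone; earlier
        # representations stay untouched, so nothing is forgotten.
        self.transformers.append(
            DecisionTreeClassifier(random_state=0).fit(X, y))

    def _aligned_proba(self, tree, X):
        # Map a tree's posterior columns onto the global class ordering,
        # in case a task's training data missed some classes.
        proba = np.zeros((len(X), len(self.classes)))
        cols = np.searchsorted(self.classes, tree.classes_)
        proba[:, cols] = tree.predict_proba(X)
        return proba

    def predict(self, X):
        # Omni-voter layer: every representation, old or new, votes on the
        # query point; averaging the votes transfers both forward (old tasks
        # help new ones) and backward (new tasks help old ones).
        posteriors = np.mean(
            [self._aligned_proba(t, X) for t in self.transformers], axis=0)
        return self.classes[np.argmax(posteriors, axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two toy tasks: the same two Gaussian classes, shifted between tasks.
    X1 = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
                    rng.normal(3.0, 1.0, (50, 2))])
    X2 = np.vstack([rng.normal(0.5, 1.0, (50, 2)),
                    rng.normal(3.5, 1.0, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)

    voter = OmniVoter(classes=[0, 1])
    voter.add_task(X1, y)
    voter.add_task(X2, y)          # task-2 data also refines task-1 votes
    print(voter.predict(X1[:4]))   # expected: mostly class 0
```

The paper's actual algorithms are richer (e.g., forests rather than single trees, and voters trained per task over every representation), but the averaging step above is the mechanism that lets transfer run in both directions without revisiting earlier models.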