Evaluating Online Continual Learning with CALM
Online Continual Learning (OCL) studies learning over a continuous data
stream without observing any single example more than once, a setting that is
closer to the experience of humans and of systems that must learn "in the wild".
Yet, commonly available benchmarks are far from these real-world conditions,
because they explicitly signal task boundaries, lack a latent similarity
structure, or assume temporal independence between examples. Here, we
propose a new benchmark for OCL based on language modelling in which the input
alternates between different languages and domains without any explicit
delimitation. Additionally, we propose new metrics to study catastrophic
forgetting in this setting and evaluate multiple baseline models based on
compositions of experts. Finally, we introduce a simple gating technique that
learns the latent similarities between different inputs, improving the
performance of a Products of Experts model.
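To make the benchmark's setting concrete, below is a minimal sketch of an unsegmented stream in the spirit described above: examples from several latent languages and domains are interleaved, and switches are never signalled to the learner. The corpus names, switching probability, and generator interface are hypothetical illustrations, not the benchmark's actual construction.

```python
import random

def unsegmented_stream(corpora, switch_prob=0.05, seed=0):
    """Yield examples with latent domain switches and no boundary signal.

    `corpora` maps a hidden domain name to a list of examples; the names,
    probability, and interface here are illustrative placeholders.
    """
    rng = random.Random(seed)
    domains = list(corpora)
    current = rng.choice(domains)
    while True:
        if rng.random() < switch_prob:
            current = rng.choice(domains)   # the switch is never announced
        yield rng.choice(corpora[current])  # only the raw example is observed

# Hypothetical usage: two latent domains, no delimitation in the output.
stream = unsegmented_stream({
    "english_news": ["markets rallied today", "rain expected tomorrow"],
    "german_wiki": ["die Donau ist ein Fluss", "Berlin ist die Hauptstadt"],
})
samples = [next(stream) for _ in range(5)]
```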
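Similarly, the sketch below shows one plausible form of a gated Products of Experts language model: each expert emits per-token log-probabilities, and a gate learned from the input combines them in log space before renormalising. The module, tensor shapes, and gating parameterisation are assumptions made for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedProductOfExperts(nn.Module):
    """Weight expert log-probabilities with a learned gate (hypothetical sketch)."""

    def __init__(self, experts, context_dim):
        super().__init__()
        self.experts = nn.ModuleList(experts)             # per-domain language models
        self.gate = nn.Linear(context_dim, len(experts))  # context -> expert weights

    def forward(self, context_repr, token_ids):
        # Each expert returns log-probabilities of shape (batch, seq, vocab).
        log_probs = torch.stack([e(token_ids) for e in self.experts], dim=0)
        # Softmax gate inferred from the input context, never from task labels.
        g = F.softmax(self.gate(context_repr), dim=-1)  # (batch, n_experts)
        g = g.permute(1, 0)[:, :, None, None]           # (n_experts, batch, 1, 1)
        # Product of Experts in log space: weighted sum of expert log-probs,
        # renormalised over the vocabulary.
        combined = (g * log_probs).sum(dim=0)
        return combined - torch.logsumexp(combined, dim=-1, keepdim=True)
```

In a sketch like this, the gate would be trained with the same language-modelling objective as the experts, so any similarity structure between inputs is learned from the stream rather than supplied as task labels.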