Continual learning aims to train a model on a continuous stream of data, but existing formulations mostly assume a fixed amount of data and a fixed set of tasks with clear task boundaries. In real-world scenarios, however, the number of inputs and tasks changes stochastically rather than remaining static. Although recently introduced incremental learning scenarios with blurry task boundaries partially address these issues, they still fail to capture the stochastic properties of real-world situations because of their fixed ratio of disjoint to blurry samples. In this paper, we propose a new Stochastic incremental Blurry task boundary scenario, called Si-Blurry, which reflects the stochastic properties of the real world. We find two major challenges in the Si-Blurry scenario: (1) inter- and intra-task forgetting and (2) class imbalance. To alleviate them, we introduce Mask and Visual
Prompt tuning (MVP). To address the inter- and intra-task forgetting, MVP employs a novel instance-wise logit masking and a contrastive visual prompt tuning loss. Both help the model discern the classes to be learned in the current batch, thereby consolidating previously acquired knowledge.
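To make the masking idea concrete, the following is a minimal PyTorch sketch of logit masking under a simplifying assumption: the mask is built from the class labels present in the current batch, whereas MVP's actual mask is instance-wise and produced by the model. The function name, tensor shapes, and class counts are illustrative.

```python
import torch
import torch.nn.functional as F

def batch_masked_cross_entropy(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Cross-entropy restricted to classes present in the current batch.

    Hypothetical stand-in: MVP's actual mask is instance-wise and produced
    by the model; here a single batch-level class-presence mask is used.
    """
    present = torch.zeros(logits.size(1), dtype=torch.bool, device=logits.device)
    present[labels.unique()] = True
    # Absent classes get -inf logits: zero softmax probability, zero gradient.
    masked = logits.masked_fill(~present, float("-inf"))
    return F.cross_entropy(masked, labels)

# Example: 8 stream samples, 100 classes overall, only a few classes per batch.
logits = torch.randn(8, 100)
labels = torch.randint(0, 10, (8,))
loss = batch_masked_cross_entropy(logits, labels)
```

Restricting the loss to batch-present classes keeps absent (often previously learned) classes from receiving spurious negative gradients, which is the intuition behind reduced forgetting.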
In addition, to mitigate the class imbalance, we introduce a new gradient similarity-based focal loss and adaptive feature scaling, which ease overfitting to the majority classes and underfitting to the minority classes.
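As a rough illustration of the imbalance side, the sketch below implements the classic focal loss as a stand-in; MVP replaces the fixed (1 - p_t)^gamma focusing term with a weight derived from gradient similarity, which is not reproduced here.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, labels: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """Classic focal loss, shown as a simplified proxy for MVP's variant.

    Assumption: MVP derives the per-sample weight from gradient similarity
    rather than the fixed (1 - p_t)^gamma focusing term used here.
    """
    log_probs = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(log_probs, labels, reduction="none")        # per-sample CE
    p_t = log_probs.gather(1, labels.unsqueeze(1)).squeeze(1).exp()
    return ((1.0 - p_t) ** gamma * ce).mean()                   # up-weight hard samples
```

Hard, typically minority-class samples with low p_t are up-weighted, which counteracts the dominance of majority classes in the stream.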
Extensive experiments show that our proposed MVP significantly outperforms existing state-of-the-art methods in our challenging Si-Blurry scenario.