Online continual learning is a challenging problem where models must learn
from a non-stationary data stream while avoiding catastrophic forgetting.
Inter-class imbalance during training has been identified as a major cause of
forgetting, leading to model prediction bias towards recently learned classes.
In this paper, we theoretically show that inter-class imbalance is entirely
attributable to imbalanced class priors, and that the function learned from the
intra-class intrinsic distributions is the Bayes-optimal classifier. Based on
this analysis, we demonstrate that a simple adjustment of the model logits during
training can effectively counteract the prior class bias and pursue the corresponding Bayes optimum.
Our proposed method, Logit Adjusted Softmax, mitigates the impact of
inter-class imbalance not only in class-incremental setups but also in
realistic general setups, with negligible additional computational cost. We
evaluate our approach on various benchmarks and demonstrate significant
performance improvements over prior art. For example, our approach improves the
best baseline by 4.6% on CIFAR10.
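
To make the core idea concrete, below is a minimal sketch (not the paper's reference implementation) of a logit-adjusted softmax cross-entropy loss in PyTorch: the logits are shifted by the log of the empirical class priors before the standard cross-entropy is applied. The function name, the class_counts argument, and the temperature tau are illustrative assumptions rather than the paper's notation.

```python
import torch
import torch.nn.functional as F

def logit_adjusted_cross_entropy(logits, targets, class_counts, tau=1.0):
    """Cross-entropy with logits shifted by log class-priors (illustrative sketch).

    Adding the log-priors to the logits during training counteracts the
    prediction bias toward frequent (recently seen) classes.
    """
    priors = class_counts.float() / class_counts.sum()       # empirical class priors
    adjusted = logits + tau * torch.log(priors + 1e-12)      # shift logits by log-priors
    return F.cross_entropy(adjusted, targets)

# Hypothetical usage with class counts accumulated from the data stream
logits = torch.randn(8, 10)                  # batch of 8 samples, 10 classes
targets = torch.randint(0, 10, (8,))
class_counts = torch.randint(1, 100, (10,))  # e.g., per-class counts observed so far
loss = logit_adjusted_cross_entropy(logits, targets, class_counts)
```

At test time the unadjusted logits would typically be used, so under this sketch the adjustment affects only the training objective.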