Accurate classification of white blood cells in peripheral blood is essential
for diagnosing hematological diseases. Due to constantly evolving clinical
settings, data sources, and disease classifications, it is necessary to update
machine learning classification models regularly for practical real-world use.
Such models significantly benefit from sequentially learning from incoming data
streams without forgetting previously acquired knowledge. However, models can
suffer from catastrophic forgetting, causing a drop in performance on previous
tasks when fine-tuned on new data. Here, we propose a rehearsal-based continual
learning approach for class incremental and domain incremental scenarios in
white blood cell classification. To choose representative samples from previous
tasks, we employ exemplar set selection based on the model's predictions. This
involves selecting the most confident samples and the most challenging samples
identified through uncertainty estimation of the model. We thoroughly evaluated
our proposed approach on three white blood cell classification datasets that
differ in color, resolution, and class composition, including scenarios where
new domains or new classes are introduced to the model with every task. We also
tested a long class incremental experiment with both new domains and new classes.
Our results demonstrate that our approach outperforms established continual
learning baselines, including iCaRL and EWC, for white blood cell classification
in cross-domain settings.

Comment: Accepted for publication at the workshop on Domain Adaptation and
Representation Transfer (DART) at the International Conference on Medical Image
Computing and Computer Assisted Intervention (MICCAI 2023).
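The exemplar selection strategy described in the abstract can be illustrated with a short sketch. This is a hypothetical implementation, not the authors' code: it assumes softmax probabilities are available for each rehearsal candidate, uses predictive entropy as the uncertainty estimate, and per class retains both the most confident and the most challenging samples.

```python
# Hypothetical sketch of prediction-based exemplar selection: per class,
# keep the most confident samples (lowest predictive entropy) and the most
# challenging ones (highest predictive entropy). Names and parameters here
# are illustrative assumptions, not the paper's actual API.
import numpy as np


def select_exemplars(probs, labels, n_confident=2, n_uncertain=2):
    """probs: (N, C) softmax outputs; labels: (N,) integer class ids.
    Returns a dict mapping class id -> selected sample indices."""
    # Predictive entropy as an uncertainty score (higher = harder sample).
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    selected = {}
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        order = idx[np.argsort(entropy[idx])]  # low entropy first
        confident = order[:n_confident]        # most confident samples
        uncertain = order[-n_uncertain:]       # most challenging samples
        selected[int(c)] = np.concatenate([confident, uncertain])
    return selected


# Toy usage: random predictions for 30 samples over 3 balanced classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(30, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = np.arange(30) % 3
exemplars = select_exemplars(probs, labels)
```

In a rehearsal-based setup, the indices returned per class would populate the exemplar buffer replayed alongside new-task data during fine-tuning.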