Image compression emerges as a pivotal tool in the efficient handling and
transmission of digital images. Its ability to substantially reduce file size
not only increases effective storage capacity but can also benefit
continual machine learning (ML) systems, which
learn new knowledge incrementally from sequential data. Continual ML systems
often rely on storing representative samples, known as exemplars, in a memory
buffer of limited size to maintain performance on previously learned
data. These methods are known as memory replay-based algorithms and have proven
effective at mitigating the detrimental effects of catastrophic forgetting.
Nonetheless, the limited memory buffer size often falls short of adequately
representing the entire data distribution. In this paper, we explore the use of
image compression as a strategy to enhance the buffer's capacity, thereby
increasing exemplar diversity. However, directly using compressed exemplars
introduces domain shift during continual ML, marked by a discrepancy between
compressed training data and uncompressed testing data. Additionally, it is
essential to determine the appropriate compression algorithm and select the
most effective rate for continual ML systems to balance the trade-off between
exemplar quality and quantity. To this end, we introduce a new framework to
incorporate image compression for continual ML including a pre-processing data
compression step and an efficient compression rate/algorithm selection method.
We conduct extensive experiments on CIFAR-100 and ImageNet datasets and show
that our method significantly improves image classification accuracy in
continual ML settings.
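
To illustrate the core idea of trading exemplar quality for quantity, the following minimal Python sketch JPEG-compresses exemplars at a fixed quality and fills a byte-budgeted replay buffer. This is an illustrative sketch only, not the paper's actual pipeline: the use of Pillow, the function names, and the fixed quality setting are assumptions, and the exemplar selection, rate/algorithm search, and continual-learning training loop are not reproduced here.

    # Minimal sketch: byte-budgeted replay buffer with JPEG-compressed exemplars.
    # Assumes exemplars are H x W x 3 uint8 arrays; Pillow provides the codec.
    import io
    import numpy as np
    from PIL import Image

    def compress_exemplar(img_array: np.ndarray, quality: int = 50) -> bytes:
        # Encode one exemplar as JPEG and return the compressed byte string.
        buf = io.BytesIO()
        Image.fromarray(img_array).save(buf, format="JPEG", quality=quality)
        return buf.getvalue()

    def fill_buffer(images, budget_bytes: int, quality: int = 50):
        # Store as many compressed exemplars as fit within a fixed byte budget.
        # Lower quality -> smaller files -> more (but noisier) exemplars,
        # which is the quality/quantity trade-off discussed above.
        buffer, used = [], 0
        for img in images:
            blob = compress_exemplar(img, quality)
            if used + len(blob) > budget_bytes:
                break
            buffer.append(blob)
            used += len(blob)
        return buffer

    def decode_exemplar(blob: bytes) -> np.ndarray:
        # Decode a stored exemplar back to an array for replay during training.
        return np.array(Image.open(io.BytesIO(blob)))

Sweeping the quality parameter in such a sketch is one simple way to probe the exemplar quality/quantity trade-off that the proposed rate/algorithm selection method addresses.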