Continual learning methods are designed to learn new tasks without erasing
previously acquired knowledge. However, continual learning often demands
substantial computational power and storage capacity to achieve satisfactory
performance. In this
paper, we propose a resource-efficient continual learning method called the
Elastic Expansion Network (E2Net). Leveraging core subnet distillation and
precise replay sample selection, E2Net achieves superior average accuracy and
diminished forgetting within the same computational and storage constraints,
all while minimizing processing time. In E2Net, we propose Representative
Network Distillation, which identifies a representative core subnet by
assessing its parameter count and its output similarity to the working network;
distilling this analogous subnet within the working network mitigates reliance
on rehearsal buffers and facilitates knowledge transfer across previous tasks.
To enhance
storage resource utilization, we then propose Subnet Constraint Experience
Replay to optimize rehearsal efficiency through a sample storage strategy based
on the structures of representative networks. Extensive experiments on diverse
datasets, conducted primarily in cloud environments and also spanning edge
environments, demonstrate that E2Net consistently outperforms state-of-the-art
methods. In addition, our method surpasses its competitors in terms of both
storage and computational requirements.