Continually learning new classes from a few training examples without
forgetting previously learned classes demands a flexible architecture with an
inevitably growing amount of storage, in which new examples and classes can be
incrementally stored and efficiently retrieved. One viable architectural
solution is to tightly couple a stationary deep neural network to a dynamically
evolving explicit memory (EM). As the centerpiece of this architecture, we
propose an EM unit that leverages energy-efficient in-memory compute (IMC)
cores during the course of continual learning operations. We demonstrate for
the first time how the EM unit can physically superpose multiple training
examples, expand to accommodate unseen classes, and perform similarity search
during inference, using operations on an IMC core based on phase-change memory
(PCM). Specifically, the physical superposition of a few encoded training
examples is realized via in-situ progressive crystallization of PCM devices.
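As a rough illustration only, the sketch below emulates in software the three EM operations referred to above: superposing encoded support examples into class-wise memory rows, expanding the memory for unseen classes, and classifying a query by similarity search. The class name, method names, and the 512-dimensional encoding are illustrative assumptions rather than the paper's implementation; on the hardware, superposition maps to in-situ conductance accumulation in PCM devices and similarity search to an analog matrix-vector multiplication on the IMC core.

```python
import numpy as np

# Minimal software analogue of the explicit memory (EM) unit (assumed
# structure, not the paper's implementation). On the PCM-based IMC core,
# superposition corresponds to progressive crystallization (in-place
# conductance accumulation) and similarity search to an analog
# matrix-vector multiply; here both are emulated with NumPy arrays.

class ExplicitMemory:
    def __init__(self, dim):
        self.dim = dim                         # dimensionality of encoded examples
        self.prototypes = np.empty((0, dim))   # one row per class seen so far

    def expand(self, n_new_classes):
        """Grow the memory to accommodate unseen classes (append empty rows)."""
        self.prototypes = np.vstack(
            [self.prototypes, np.zeros((n_new_classes, self.dim))]
        )

    def superpose(self, class_idx, encoded_example):
        """Accumulate an encoded support example into its class row."""
        self.prototypes[class_idx] += encoded_example

    def similarity_search(self, encoded_query):
        """Return the class whose superposed prototype is most similar
        to the query (cosine similarity via a matrix-vector product)."""
        norms = np.linalg.norm(self.prototypes, axis=1) + 1e-12
        scores = (self.prototypes @ encoded_query) / (norms * np.linalg.norm(encoded_query))
        return int(np.argmax(scores))


# Example session mirroring the evaluation setup: 60 base classes,
# then 40 novel classes learned from 5 encoded examples each.
em = ExplicitMemory(dim=512)
em.expand(60)
em.expand(40)
rng = np.random.default_rng(0)
for c in range(60, 100):
    for _ in range(5):
        em.superpose(c, rng.standard_normal(512))   # stand-in for encoded support examples
predicted_class = em.similarity_search(rng.standard_normal(512))
```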
The classification accuracy achieved on the IMC core remains within
1.28%--2.5% of that of the state-of-the-art full-precision baseline
software model on both the CIFAR-100 and miniImageNet datasets when continually
learning 40 novel classes (from only five examples per class) on top of 60 old
classes.

Comment: Accepted at the European Solid-state Devices and Circuits Conference
(ESSDERC), September 202