In this paper, we introduce variational semantic memory into meta-learning to
acquire long-term knowledge for few-shot learning. The variational semantic
memory accrues and stores semantic information for the probabilistic inference
of class prototypes in a hierarchical Bayesian framework. The semantic memory
is grown from scratch and gradually consolidated by absorbing information from
the tasks it experiences. In doing so, it accumulates long-term, general
knowledge that enables the model to learn new object concepts. We formulate memory
recall as the variational inference of a latent memory variable from addressed
contents, which offers a principled way to adapt the knowledge to individual
tasks. Our variational semantic memory, as a new long-term memory module,
confers principled recall and update mechanisms that enable semantic
information to be efficiently accrued and adapted for few-shot learning.
Experiments demonstrate that probabilistic modelling of prototypes yields
more informative representations of object classes than deterministic vectors.
Consistent new state-of-the-art performance on four benchmarks
shows the benefit of variational semantic memory in boosting few-shot
recognition.

Comment: accepted to NeurIPS 2020; code is available at
https://github.com/YDU-uva/VS
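
The abstract describes memory recall as variational inference of a latent memory variable from addressed memory contents. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation (see the linked repository for the actual code): it assumes content-based soft attention for addressing, an amortized Gaussian posterior over the latent variable, and a simple additive conditioning of class prototypes; all names, dimensions, and the conditioning choice are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalMemoryRecall(nn.Module):
    """Hypothetical sketch: address a semantic memory with a query, then
    infer a latent memory variable z from the addressed content."""

    def __init__(self, feat_dim: int, mem_slots: int):
        super().__init__()
        # Long-term semantic memory: one slot per stored concept (assumed layout).
        self.memory = nn.Parameter(torch.randn(mem_slots, feat_dim))
        # Amortized inference network for q(z | addressed content).
        self.to_mu = nn.Linear(feat_dim, feat_dim)
        self.to_logvar = nn.Linear(feat_dim, feat_dim)

    def forward(self, query: torch.Tensor):
        # Content-based addressing: soft attention of the query over memory slots.
        attn = F.softmax(query @ self.memory.t() / self.memory.size(1) ** 0.5, dim=-1)
        addressed = attn @ self.memory                      # (batch, feat_dim)

        # Variational recall: sample z ~ q(z | addressed content)
        # via the reparameterization trick.
        mu, logvar = self.to_mu(addressed), self.to_logvar(addressed)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return z, mu, logvar

# Usage: condition class prototypes on the recalled latent variable.
recall = VariationalMemoryRecall(feat_dim=64, mem_slots=32)
support_mean = torch.randn(5, 64)     # mean support feature per class (5-way task)
z, mu, logvar = recall(support_mean)
prototypes = support_mean + z         # one simple conditioning choice
```

In training, a KL divergence between q(z | addressed content) and a prior would complete the variational objective, with the prototype distribution sitting on top of z in the hierarchical Bayesian framework the abstract refers to.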