Optimization-based meta-learning aims to learn an initialization such that a
new, unseen task can be learned within a few gradient updates. Model-Agnostic
Meta-Learning (MAML) is a benchmark algorithm comprising two optimization
loops: the inner loop is dedicated to learning a new task, and the outer loop
produces the meta-initialization. However, the ANIL (Almost No Inner Loop)
algorithm shows that feature reuse, rather than rapid learning, is the dominant
mechanism in MAML. Thus, the meta-initialization phase primes MAML for feature
reuse and obviates the need for rapid learning. Contrary to ANIL, we hypothesize
that there may be a need to learn new features during meta-testing. A new,
unseen task from a non-similar distribution would necessitate rapid learning in
addition to the reuse and
recombination of existing features. In this paper, we invoke the width-depth
duality of neural networks, wherein we increase the width of the network by
adding extra computational units (ACUs). The ACUs enable the learning of new
atomic features on the meta-testing task, and the increased width facilitates
information propagation in the forward pass. The newly learned features are
combined with the existing features in the last layer for meta-learning.
Experimental results show that our proposed MAC method outperforms the existing
ANIL algorithm on non-similar task distributions by approximately 13% in the
5-shot task setting.
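
To make the two optimization loops and the widening step concrete, the sketch
below is an illustrative toy example rather than the authors' MAC
implementation: it trains a tiny regression network with a first-order
MAML-style outer loop, then widens the hidden layer with a hypothetical
`add_acu` helper before adapting to a non-similar task. The network size,
learning rates, task distribution, and `add_acu` are all assumptions made for
illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(in_dim=1, hidden=8, out_dim=1):
    # Meta-initialization: one hidden layer, no biases, for brevity.
    return {"w1": rng.normal(0.0, 0.1, (in_dim, hidden)),
            "w2": rng.normal(0.0, 0.1, (hidden, out_dim))}

def forward(params, x):
    h = np.maximum(0.0, x @ params["w1"])      # ReLU hidden features
    return h @ params["w2"], h

def loss_and_grads(params, x, y):
    pred, h = forward(params, x)
    err = pred - y
    loss = float(np.mean(err ** 2))
    g2 = h.T @ err * (2.0 / len(x))
    dh = (err @ params["w2"].T) * (h > 0)
    g1 = x.T @ dh * (2.0 / len(x))
    return loss, {"w1": g1, "w2": g2}

def inner_adapt(params, x, y, lr=0.01, steps=5):
    # Inner loop: a few gradient updates adapt the initialization to one task.
    p = {k: v.copy() for k, v in params.items()}
    for _ in range(steps):
        _, g = loss_and_grads(p, x, y)
        p = {k: p[k] - lr * g[k] for k in p}
    return p

def sample_task(amp):
    x = rng.uniform(-1.0, 1.0, (20, 1))
    return x, amp * x + 0.1 * rng.normal(size=(20, 1))

# Outer loop (first-order approximation): nudge the initialization toward
# parameters that adapt well across tasks from the training distribution.
meta = init_params()
for _ in range(200):
    x, y = sample_task(amp=rng.uniform(0.5, 1.5))
    adapted = inner_adapt(meta, x, y)
    _, g = loss_and_grads(adapted, x, y)
    meta = {k: meta[k] - 0.001 * g[k] for k in meta}

def add_acu(params, n_new=4, scale=0.01):
    # Hypothetical widening step: append freshly initialized units so new
    # atomic features can be learned, while pre-trained units remain for reuse.
    p = {k: v.copy() for k, v in params.items()}
    p["w1"] = np.hstack([p["w1"], scale * rng.normal(size=(p["w1"].shape[0], n_new))])
    p["w2"] = np.vstack([p["w2"], scale * rng.normal(size=(n_new, p["w2"].shape[1]))])
    return p

# Meta-testing on a non-similar task: widen, then adapt with a few gradient steps.
x_new, y_new = sample_task(amp=5.0)
adapted = inner_adapt(add_acu(meta), x_new, y_new, steps=10)
print("meta-test loss:", loss_and_grads(adapted, x_new, y_new)[0])
```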