Meta learning has achieved promising performance in low-resource text
classification, which aims to identify target classes with knowledge transferred
from source classes through sets of small tasks named episodes. However, due to
the limited training data in the meta-learning scenario and the inherent
properties of parameterized neural networks, poor generalization performance
has become a pressing problem that needs to be addressed. To deal with this
issue, we propose a meta-learning based method called Retrieval-Augmented Meta
Learning (RAML). It not only uses parameterization for inference but also
retrieves non-parametric knowledge from an external corpus to make inferences,
which greatly alleviates the problem of poor generalization performance caused
by the lack of diverse training data in meta-learning. Unlike previous models
that rely solely on parameters, our method explicitly emphasizes the importance
of non-parametric knowledge, aiming to strike a balance between parameterized
neural networks and non-parametric knowledge: the model must determine which
knowledge to access and utilize during inference.
Additionally, our multi-view passage fusion network module effectively and
efficiently integrates the retrieved information into low-resource
classification tasks. Extensive experiments demonstrate that RAML
significantly outperforms current SOTA low-resource text classification models.