Meta-Learning with Context-Agnostic Initialisations
Meta-learning approaches have addressed few-shot problems by finding
initialisations suited for fine-tuning to target tasks. Training data often
contains additional properties (which we refer to as context) that are not
relevant to the target task and act as distractors to meta-learning,
particularly when the target task contains examples from a novel context not
seen during training. We address this issue by incorporating a
context-adversarial component into the meta-learning process. This produces an
initialisation for fine-tuning to target which is both context-agnostic and
task-generalised. We evaluate our approach on three commonly used meta-learning
algorithms and two problems, and demonstrate that context-agnostic
meta-learning improves results in each case. First, we report on Omniglot
few-shot character
classification, using alphabets as context. An average improvement of 4.3% is
observed across methods and tasks when classifying characters from an unseen
alphabet. Second, we evaluate on a dataset for personalised energy expenditure
predictions from video, using participant knowledge as context. We demonstrate
that context-agnostic meta-learning decreases the average mean square error by
30%.
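The abstract leaves the adversarial component unspecified. A common way to
make a representation context-agnostic is a gradient-reversal layer feeding a
context-classification head, so the sketch below illustrates that reading; the
encoder, both heads, and all sizes are hypothetical, and a full meta-learning
loop is collapsed into a single joint training step for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU())
task_head = nn.Linear(64, 5)       # e.g. 5-way character classification
context_head = nn.Linear(64, 10)   # e.g. 10 training alphabets as context

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(task_head.parameters())
    + list(context_head.parameters()), lr=1e-3)

# One joint step on random stand-in data.
x = torch.randn(32, 1, 28, 28)
y_task = torch.randint(0, 5, (32,))
y_context = torch.randint(0, 10, (32,))

feats = encoder(x)
task_loss = F.cross_entropy(task_head(feats), y_task)
# Reversed gradients push the encoder to *remove* context information
# while the context head still tries to predict it.
adv_loss = F.cross_entropy(context_head(GradReverse.apply(feats, 1.0)), y_context)

opt.zero_grad()
(task_loss + adv_loss).backward()
opt.step()
```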
EMO: Episodic Memory Optimization for Few-Shot Meta-Learning
Few-shot meta-learning presents a challenge for gradient descent optimization
due to the limited number of training samples per task. To address this issue,
we propose an episodic memory optimization for meta-learning, which we call
EMO, inspired by the human ability to recall past learning
experiences from the brain's memory. EMO retains the gradient history of past
experienced tasks in external memory, enabling few-shot learning in a
memory-augmented way. By learning to retain and recall the learning process of
past training tasks, EMO nudges parameter updates in the right direction, even
when the gradients provided by a limited number of examples are uninformative.
We prove theoretically that our algorithm converges for smooth, strongly convex
objectives. EMO is generic, flexible, and model-agnostic, making it a simple
plug-and-play optimizer that can be seamlessly embedded into existing
optimization-based few-shot meta-learning approaches. Empirical results show
that EMO scales well with most few-shot classification benchmarks and improves
the performance of optimization-based meta-learning methods, resulting in
accelerated convergence. Comment: Accepted by CoLLAs 202
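The abstract does not spell out how gradients are retained and recalled; the
sketch below is one plausible reading, in which flattened per-task gradients
are stored in an external buffer and the current gradient is blended with the
mean of its top-k most similar stored neighbours. The class name and every
hyperparameter are illustrative, not EMO's actual design.

```python
import torch
import torch.nn.functional as F

class EpisodicMemoryOptimizer:
    """Toy SGD variant that nudges updates using recalled past gradients."""
    def __init__(self, params, lr=0.01, memory_size=100, top_k=5, mix=0.5):
        self.params = list(params)
        self.lr, self.top_k, self.mix = lr, top_k, mix
        self.memory, self.memory_size = [], memory_size

    def _flat_grad(self):
        # Assumes .backward() has populated every parameter's .grad.
        return torch.cat([p.grad.reshape(-1) for p in self.params])

    def step(self):
        g = self._flat_grad()
        if self.memory:
            mem = torch.stack(self.memory)                   # (M, D)
            sims = F.cosine_similarity(mem, g.unsqueeze(0))  # (M,)
            k = min(self.top_k, len(self.memory))
            recalled = mem[sims.topk(k).indices].mean(0)
            g = (1 - self.mix) * g + self.mix * recalled     # recall nudge
        offset = 0
        with torch.no_grad():
            for p in self.params:
                n = p.numel()
                p -= self.lr * g[offset:offset + n].view_as(p)
                offset += n

    def remember(self):
        # Call once per task to retain its gradient history.
        self.memory.append(self._flat_grad().detach().clone())
        self.memory = self.memory[-self.memory_size:]
```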
Multi-Modal Fusion by Meta-Initialization
When experience is scarce, models may have insufficient information to adapt
to a new task. In this case, auxiliary information - such as a textual
description of the task - can enable improved task inference and adaptation. In
this work, we propose an extension to the Model-Agnostic Meta-Learning
algorithm (MAML), which allows the model to adapt using auxiliary information
as well as task experience. Our method, Fusion by Meta-Initialization (FuMI),
conditions the model initialization on auxiliary information using a
hypernetwork, rather than learning a single, task-agnostic initialization.
Furthermore, motivated by the shortcomings of existing multi-modal few-shot
learning benchmarks, we constructed iNat-Anim - a large-scale image
classification dataset with succinct and visually pertinent textual class
descriptions. On iNat-Anim, FuMI significantly outperforms uni-modal baselines
such as MAML in the few-shot regime. The code for this project and a dataset
exploration tool for iNat-Anim are publicly available at
https://github.com/s-a-malik/multi-few. Comment: The first two authors contributed equally.
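A hedged sketch of the hypernetwork idea: an auxiliary text embedding is
mapped to the initial weights of a classifier head, which is then fine-tuned
on the support set with a MAML-style inner loop. The single-linear
hypernetwork, the pre-extracted image features, and all dimensions are
assumptions for illustration, not FuMI's actual architecture.

```python
import torch
import torch.nn.functional as F

feat_dim, text_dim, n_way = 64, 128, 5

# Hypernetwork: text embedding -> (weights, bias) of the classifier head.
hyper = torch.nn.Linear(text_dim, n_way * feat_dim + n_way)

def init_from_text(text_emb):
    out = hyper(text_emb)
    w = out[: n_way * feat_dim].view(n_way, feat_dim)
    b = out[n_way * feat_dim:]
    return w, b

# Dummy task: a text-description embedding plus a few support examples.
text_emb = torch.randn(text_dim)
support_x = torch.randn(10, feat_dim)        # pre-extracted image features
support_y = torch.randint(0, n_way, (10,))

w, b = init_from_text(text_emb)
for _ in range(5):  # inner-loop adaptation from the text-conditioned init
    loss = F.cross_entropy(F.linear(support_x, w, b), support_y)
    # create_graph=True keeps the path back to the hypernetwork so an
    # outer meta-loss could update it through the adaptation steps.
    gw, gb = torch.autograd.grad(loss, (w, b), create_graph=True)
    w, b = w - 0.1 * gw, b - 0.1 * gb
```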
Graph Neural Network Expressivity and Meta-Learning for Molecular Property Regression
We demonstrate the applicability of model-agnostic algorithms for
meta-learning, specifically Reptile, to GNN models in molecular regression
tasks. Using meta-learning we are able to learn new chemical prediction tasks
with only a few model updates, as compared to using randomly initialized GNNs
which require learning each regression task from scratch. We experimentally
show that GNN layer expressivity is correlated with improved meta-learning.
Additionally, we experiment with GNN ensembles, which yield the best
performance and rapid convergence for k-shot learning.
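Reptile itself is compact enough to state directly. The sketch below shows
the generic meta-update on a toy regression problem; the MLP and random data
stand in for the paper's GNNs and molecular-property tasks.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
meta_lr, inner_lr, inner_steps = 0.1, 0.01, 5

for task in range(100):
    # Sample a toy regression task (placeholder for a molecular property).
    x, y = torch.randn(20, 16), torch.randn(20, 1)

    # Inner loop: adapt a copy of the current initialisation to the task.
    fast = copy.deepcopy(model)
    opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        opt.zero_grad()
        F.mse_loss(fast(x), y).backward()
        opt.step()

    # Reptile meta-update: move the initialisation toward adapted weights.
    with torch.no_grad():
        for p, q in zip(model.parameters(), fast.parameters()):
            p += meta_lr * (q - p)
```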