AutoML has demonstrated remarkable success in finding an effective neural
architecture for a given machine learning task defined by a specific dataset
and an evaluation metric. However, most existing AutoML techniques treat each
task independently and search from scratch, which requires exploring a large
number of architectures and leads to high computational cost. Here we propose AutoTransfer, an AutoML
solution that improves search efficiency by transferring the prior
architectural design knowledge to the novel task of interest. Our key
innovation includes a task-model bank that captures the model performance over
a diverse set of GNN architectures and tasks, and a computationally efficient
task embedding that can accurately measure the similarity among different
tasks. Based on the task-model bank and the task embeddings, we estimate the
design priors of desirable models for the novel task by computing a
similarity-weighted sum of the design distributions of the top-K tasks most
similar to the task of interest. The computed design priors can be used with
any AutoML search algorithm. We evaluate AutoTransfer on six datasets in the
graph machine learning domain. Experiments demonstrate that (i) our proposed
task embedding can be computed efficiently, and that tasks with similar
embeddings have similar best-performing architectures; (ii) AutoTransfer
significantly improves search efficiency with the transferred design priors,
reducing the number of explored architectures by an order of magnitude.
Finally, we release GNN-Bank-101, a large-scale dataset of detailed GNN
training information of 120,000 task-model combinations to facilitate and
inspire future research.
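
To make the prior-estimation step described above concrete, the following is a
minimal sketch of similarity-weighted aggregation over a task-model bank. It
assumes cosine similarity between task embeddings and per-task categorical
distributions over candidate designs; all names (estimate_design_prior,
bank_embs, bank_design_dists) are illustrative and not taken from the paper's
released code.

```python
import numpy as np

def estimate_design_prior(task_emb, bank_embs, bank_design_dists, k=3):
    """Aggregate a design prior for a novel task from a task-model bank.

    task_emb:          embedding of the novel task, shape (d,)
    bank_embs:         embeddings of bank tasks, shape (n_tasks, d)
    bank_design_dists: per-task distributions over design choices,
                       shape (n_tasks, n_designs), each row sums to 1
    """
    # Cosine similarity between the novel task and every task in the bank.
    sims = bank_embs @ task_emb / (
        np.linalg.norm(bank_embs, axis=1) * np.linalg.norm(task_emb) + 1e-12
    )

    # Keep the top-K most similar tasks, weighted by (non-negative) similarity.
    top_k = np.argsort(sims)[-k:]
    weights = np.clip(sims[top_k], 0.0, None)
    weights = weights / (weights.sum() + 1e-12)

    # Similarity-weighted sum of the selected design distributions.
    prior = (weights[:, None] * bank_design_dists[top_k]).sum(axis=0)
    return prior / (prior.sum() + 1e-12)  # renormalize into a valid distribution


# Toy usage: 4 bank tasks, 8-dim embeddings, 5 candidate designs.
rng = np.random.default_rng(0)
bank_embs = rng.normal(size=(4, 8))
bank_design_dists = rng.dirichlet(np.ones(5), size=4)
task_emb = rng.normal(size=8)
print(estimate_design_prior(task_emb, bank_embs, bank_design_dists, k=2))
```

The resulting prior can then be handed to any AutoML search algorithm, e.g. to
bias the sampling of candidate architectures toward designs that performed well
on similar tasks.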