Quick-Tune: Quickly Learning Which Pretrained Model to Finetune and How
With the ever-increasing number of pretrained models, machine learning
practitioners continually face the questions of which pretrained model to use
and how to finetune it for a new dataset. In this paper, we propose a methodology
that jointly searches for the optimal pretrained model and the hyperparameters
for finetuning it. Our method transfers knowledge about the performance of many
pretrained models with multiple hyperparameter configurations on a series of
datasets. To this end, we evaluated over 20k hyperparameter configurations for
finetuning 24 pretrained image classification models on 87 datasets to generate
a large-scale meta-dataset. We meta-learn a multi-fidelity performance
predictor on the learning curves of this meta-dataset and use it for fast
hyperparameter optimization on new datasets. We empirically demonstrate that
our resulting approach can quickly select an accurate pretrained model for a
new dataset together with its optimal hyperparameters.
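The abstract's core loop can be pictured as a joint search over (pretrained model, hyperparameter configuration) pairs, where a learning-curve predictor decides which candidate deserves the next unit of finetuning budget. The sketch below is a minimal illustration, not the paper's method: the models, configurations, the `finetune_step` simulator, and the crude linear-extrapolation predictor are all hypothetical stand-ins for the meta-learned multi-fidelity predictor described above.

```python
import random

random.seed(0)

# Hypothetical candidate space: pretrained models x hyperparameter configs.
MODELS = ["resnet50", "vit_small", "efficientnet_b0"]
CONFIGS = [{"lr": lr} for lr in (1e-4, 1e-3, 1e-2)]

def finetune_step(model, config, epoch):
    """Toy stand-in for one epoch of finetuning; returns a validation accuracy."""
    base = {"resnet50": 0.70, "vit_small": 0.75, "efficientnet_b0": 0.72}[model]
    penalty = abs(config["lr"] - 1e-3) * 10  # penalize bad learning rates
    return min(0.99, base - penalty + 0.02 * epoch + random.gauss(0, 0.005))

def predict_final_acc(curve):
    """Toy multi-fidelity predictor: linearly extrapolate a partial learning curve.

    The paper meta-learns this predictor from 20k+ real learning curves; here
    we substitute a crude extrapolation purely to show where it plugs in.
    """
    if len(curve) < 2:
        return curve[-1] if curve else 0.0
    slope = curve[-1] - curve[-2]
    return curve[-1] + slope * 3

def quick_tune(budget=12):
    """Greedy joint search: spend each epoch of budget on the most promising pair."""
    curves = {(m, i): [] for m in MODELS for i in range(len(CONFIGS))}
    for _ in range(budget):
        # Untried candidates get an optimistic score so everything is probed once.
        key = max(
            curves,
            key=lambda k: predict_final_acc(curves[k]) if curves[k] else 1.0,
        )
        m, i = key
        curves[key].append(finetune_step(m, CONFIGS[i], epoch=len(curves[key])))
    best = max(curves, key=lambda k: max(curves[k]) if curves[k] else 0.0)
    return best, max(curves[best])
```

In this toy setting, `quick_tune()` returns the best (model, config index) pair found within the budget along with its observed accuracy; the real system replaces both the simulator and the extrapolator with components learned from the meta-dataset.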