Pre-training and fine-tuning is a prevalent paradigm in computer vision (CV).
Recently, parameter-efficient transfer learning (PETL) methods have shown
promising performance in adapting to downstream tasks with only a few trainable
parameters. Despite their success, existing PETL methods in CV can be
computationally expensive and consume substantial memory and time during
training, which limits low-resource users from conducting research and
applications on large models. In this work, we propose Parameter, Memory, and
Time Efficient Visual Adapter (E3VA) tuning to address this issue.
We provide a gradient backpropagation highway for low-rank adapters which
eliminates the need for expensive backpropagation through the frozen
pre-trained model, resulting in substantial savings of training memory and
training time. Furthermore, we optimise the E3VA structure for CV
tasks to promote model performance. Extensive experiments on COCO, ADE20K, and
Pascal VOC benchmarks show that E3VA can save up to 62.2% training
memory and 26.2% training time on average, while achieving comparable
performance to full fine-tuning and better performance than most PETL methods.
Note that we can even train the Swin-Large-based Cascade Mask RCNN on GTX
1080Ti GPUs with less than 1.5% trainable parameters.

Comment: 14 pages, 4 figures, 5 tables, Submitted to NeurIPS202
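The "gradient backpropagation highway" idea can be illustrated with a minimal sketch: when a low-rank adapter sits on a parallel path whose output is added to the frozen layer's output, the adapter's gradients depend only on the input activation and the output error, so no gradient ever needs to flow through the frozen pre-trained weights. The sketch below is an assumption-laden toy (hypothetical dimensions, a single linear layer, a squared-error loss), not the E3VA implementation itself.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # feature dim and adapter rank (hypothetical sizes)

# Frozen pre-trained layer: its weights never receive gradients.
W_frozen = rng.standard_normal((d, d))

# Low-rank adapter on a parallel "highway": y = W x + B (A x)
A = rng.standard_normal((r, d)) * 0.1
B = np.zeros((d, r))

x = rng.standard_normal(d)
y = W_frozen @ x + B @ (A @ x)

# Dummy loss: 0.5 * ||y - t||^2 for a random target t.
t = rng.standard_normal(d)
g_y = y - t  # dL/dy

# Adapter gradients use only x and g_y -- computing them requires
# no backpropagation through W_frozen at all.
g_B = np.outer(g_y, A @ x)    # dL/dB
g_A = np.outer(B.T @ g_y, x)  # dL/dA

# One SGD step on the adapter alone; the backbone stays frozen.
lr = 0.1
A -= lr * g_A
B -= lr * g_B
```

Because the frozen path contributes only its forward activations, intermediate backbone gradients need not be stored, which is the source of the memory and time savings the abstract reports.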