SEPT: Towards Scalable and Efficient Visual Pre-Training
Recently, the self-supervised pre-training paradigm has shown great potential
in leveraging large-scale unlabeled data to improve downstream task
performance. However, increasing the scale of unlabeled pre-training data in
real-world scenarios requires prohibitive computational costs and faces the
challenge of uncurated samples. To address these issues, we build a
task-specific self-supervised pre-training framework from a data selection
perspective, based on a simple hypothesis: pre-training on unlabeled samples
whose distribution is similar to the target task's can bring substantial
performance gains. Motivated by this hypothesis, we propose the first such
framework, Scalable and Efficient visual Pre-Training (SEPT), by introducing
a retrieval pipeline for data selection. SEPT first leverages a self-supervised
pre-trained model to extract the features of the entire unlabeled dataset for
retrieval pipeline initialization. Then, for a specific target task, SEPT
retrieves, for each target instance, the most similar samples from the unlabeled
dataset based on feature similarity for pre-training. Finally, SEPT pre-trains
the target model with the selected unlabeled samples in a self-supervised
manner before fine-tuning on the target data. By decoupling the scale of
pre-training from the amount of upstream data available for a target task, SEPT
achieves high scalability with respect to the upstream dataset and high
pre-training efficiency, together with high flexibility in model architecture.
Results on various downstream tasks demonstrate
that SEPT can achieve competitive or even better performance compared with
ImageNet pre-training while reducing the number of training samples by an order
of magnitude, without resorting to any extra annotations.
Comment: Accepted by AAAI 202
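
The retrieval step described above amounts to a per-instance nearest-neighbour search in feature space. The following Python sketch illustrates that idea under stated assumptions: the features are taken as already extracted by the self-supervised encoder, cosine similarity stands in for the paper's unspecified feature-similarity measure, and the function name retrieve_pretraining_set and the parameter k are illustrative, not from the paper.

    import numpy as np

    def retrieve_pretraining_set(target_feats, unlabeled_feats, k=100):
        """For each target instance, pick the k most similar unlabeled samples
        by cosine similarity; return the deduplicated union of their indices."""
        # L2-normalise rows so that the dot product equals cosine similarity.
        t = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
        u = unlabeled_feats / np.linalg.norm(unlabeled_feats, axis=1, keepdims=True)
        sims = t @ u.T                            # shape: (n_target, n_unlabeled)
        topk = np.argsort(-sims, axis=1)[:, :k]   # k nearest neighbours per target instance
        return np.unique(topk)                    # indices of the selected pre-training subset

    # Toy usage: 200 target features against a pool of 10,000 unlabeled features.
    rng = np.random.default_rng(0)
    selected = retrieve_pretraining_set(rng.standard_normal((200, 512)),
                                        rng.standard_normal((10000, 512)), k=50)
    print(selected.shape)

At real scale, the brute-force similarity matrix above would be replaced by an approximate nearest-neighbour index built once over the pre-extracted features; the abstract only specifies retrieval by feature similarity, so the exact metric and index structure are left open here.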