The paradigm of large-scale pre-training followed by downstream fine-tuning
has been widely employed in various object detection algorithms. In this paper,
we reveal discrepancies in data, model, and task between the pre-training and
fine-tuning procedures in existing practices, which implicitly limit the
detector's performance, generalization ability, and convergence speed. To this
end, we propose AlignDet, a unified pre-training framework that can be adapted
to various existing detectors to alleviate the discrepancies. AlignDet
decouples the pre-training process into two stages, i.e., image-domain and
box-domain pre-training. The image-domain pre-training optimizes the detection
backbone to capture holistic visual abstraction, and box-domain pre-training
learns instance-level semantics and task-aware concepts to initialize the
modules outside the backbone. By incorporating self-supervised pre-trained
backbones, we can pre-train all modules for various detectors in an
unsupervised paradigm. As depicted in Figure 1, extensive experiments
demonstrate that AlignDet can achieve significant improvements across diverse
protocols, such as detection algorithm, model backbone, data setting, and
training schedule. For example, AlignDet improves FCOS by 5.3 mAP, RetinaNet by
2.1 mAP, Faster R-CNN by 3.3 mAP, and DETR by 2.3 mAP with fewer training
epochs.

Comment: Accepted by ICCV 2023. Code and models are publicly available.
Project Page: https://liming-ai.github.io/AlignDe
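The decoupled schedule described in the abstract can be summarized as a minimal sketch. All class, method, and attribute names below are illustrative placeholders (not the AlignDet codebase or API); the sketch only shows the ordering constraint between image-domain pre-training of the backbone, box-domain pre-training of the remaining modules, and downstream fine-tuning.

```python
# Minimal sketch of the two-stage decoupled pre-training schedule.
# Names are illustrative placeholders, not the paper's actual API.

class Detector:
    def __init__(self):
        # Mirrors the paper's split between the backbone (image-domain)
        # and the modules outside the backbone (box-domain).
        self.trained = {"backbone": False, "heads": False, "finetuned": False}

    def image_domain_pretrain(self):
        # Stage 1: self-supervised pre-training of the backbone only,
        # capturing holistic image-level representations.
        self.trained["backbone"] = True

    def box_domain_pretrain(self):
        # Stage 2: with a pre-trained backbone in place, pre-train the
        # modules outside the backbone on instance-level, task-aware
        # objectives, still without human labels.
        assert self.trained["backbone"], "backbone must be pre-trained first"
        self.trained["heads"] = True

    def finetune(self):
        # Downstream fine-tuning then starts from fully pre-trained
        # modules, which is what narrows the pre-training/fine-tuning gap.
        assert self.trained["heads"], "heads must be pre-trained first"
        self.trained["finetuned"] = True


detector = Detector()
detector.image_domain_pretrain()
detector.box_domain_pretrain()
detector.finetune()
print(detector.trained)  # → {'backbone': True, 'heads': True, 'finetuned': True}
```

The assertions encode the key design choice: box-domain pre-training presupposes an image-domain pre-trained backbone, so every module is initialized before supervised fine-tuning begins.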