Detection Transformer (DETR) directly maps queries to unique objects via
one-to-one bipartite matching during training, enabling end-to-end
object detection. Recently, these models have surpassed traditional detectors
on COCO with undeniable elegance. However, they differ from traditional
detectors in multiple designs, including model architecture and training
schedules, and thus the effectiveness of one-to-one matching is not fully
understood. In this work, we conduct a strict comparison between the one-to-one
Hungarian matching in DETRs and the one-to-many label assignments in
traditional detectors with non-maximum suppression (NMS). Surprisingly, we
observe one-to-many assignments with NMS consistently outperform standard
one-to-one matching under the same setting, with a significant gain of up to
2.5 mAP. Our detector, which trains Deformable-DETR with traditional IoU-based
label assignment, achieves 50.2 COCO mAP within 12 epochs (1x schedule) with a
ResNet50 backbone, outperforming all existing traditional or transformer-based
detectors in this setting. On multiple datasets, schedules, and architectures,
we consistently show bipartite matching is unnecessary for performant detection
transformers. Furthermore, we attribute the success of detection transformers
to their expressive transformer architecture. Code is available at
https://github.com/jozhang97/DETA.
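The two components contrasted above can be sketched in a few lines: a minimal, illustrative NumPy version of traditional IoU-based one-to-many label assignment (each proposal takes the label of its best-overlapping ground truth) and greedy NMS. This is not the paper's implementation; function names, the 0.5 thresholds, and the [x1, y1, x2, y2] box convention are assumptions.

```python
import numpy as np

def iou_matrix(boxes_a, boxes_b):
    # Pairwise IoU between two sets of boxes in [x1, y1, x2, y2] format.
    x1 = np.maximum(boxes_a[:, None, 0], boxes_b[None, :, 0])
    y1 = np.maximum(boxes_a[:, None, 1], boxes_b[None, :, 1])
    x2 = np.minimum(boxes_a[:, None, 2], boxes_b[None, :, 2])
    y2 = np.minimum(boxes_a[:, None, 3], boxes_b[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (boxes_a[:, 2] - boxes_a[:, 0]) * (boxes_a[:, 3] - boxes_a[:, 1])
    area_b = (boxes_b[:, 2] - boxes_b[:, 0]) * (boxes_b[:, 3] - boxes_b[:, 1])
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def one_to_many_assign(proposals, gt_boxes, pos_thresh=0.5):
    # One-to-many assignment: every proposal whose best IoU clears the
    # threshold becomes a positive; many proposals may share one ground truth.
    ious = iou_matrix(proposals, gt_boxes)
    best_gt = ious.argmax(axis=1)
    best_iou = ious.max(axis=1)
    return np.where(best_iou >= pos_thresh, best_gt, -1)  # -1 = background

def nms(boxes, scores, iou_thresh=0.5):
    # Greedy NMS: keep the highest-scoring box, drop boxes that overlap it
    # above the threshold, repeat on the remainder.
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        ious = iou_matrix(boxes[i:i + 1], boxes[order[1:]])[0]
        order = order[1:][ious <= iou_thresh]
    return keep
```

Under one-to-one Hungarian matching, by contrast, each ground truth receives exactly one positive query, so no NMS is needed at inference; the one-to-many scheme above produces duplicate positives by design and relies on NMS to remove them.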