In this paper, we present DAT, a Depth-Aware Transformer framework designed
for camera-based 3D detection. Our design is motivated by two major issues we
observe in existing methods: large depth translation errors and duplicate
predictions along the depth axis. To mitigate these issues, we propose two key
solutions within DAT. To address the first issue, we introduce a Depth-Aware
Spatial Cross-Attention (DA-SCA) module that incorporates depth information
into spatial cross-attention when lifting image features to 3D space.
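A minimal sketch of this idea, assuming a standard multi-head attention layer, is shown below; the names DepthAwareSCA, depth_embed, and ref_depths are hypothetical, and the actual DA-SCA module may be structured differently.

    import torch.nn as nn

    class DepthAwareSCA(nn.Module):
        # Sketch: encode the depth of each query's 3D reference point and
        # add it to the query before cross-attending to image features,
        # so the lifted features carry explicit depth cues.
        def __init__(self, dim=256, num_heads=8):
            super().__init__()
            self.depth_embed = nn.Sequential(  # hypothetical depth encoder
                nn.Linear(1, dim), nn.ReLU(), nn.Linear(dim, dim))
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

        def forward(self, queries, img_feats, ref_depths):
            # queries:    (B, Nq, C) object/BEV queries
            # img_feats:  (B, Nk, C) flattened image features (keys/values)
            # ref_depths: (B, Nq, 1) depth of each query's 3D reference
            #             point in the camera frame
            q = queries + self.depth_embed(ref_depths)
            out, _ = self.attn(q, img_feats, img_feats)
            return out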
To address the second issue, we introduce an auxiliary learning task with a
Depth-aware Negative Suppression (DNS) loss. First, we organize the lifted
features into a Bird's-Eye-View (BEV) feature map according to their reference
points. Then, we sample positive and negative features along each object ray
that connects an object and a camera, and train the model to distinguish
between them.
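A minimal sketch of this sampling and loss, assuming bilinear sampling on the BEV map and a small scoring head, follows; the function dns_loss and the helper score_head are hypothetical names, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def dns_loss(bev_feat, score_head, obj_xy, cam_xy, num_neg=8):
        # bev_feat:   (C, H, W) BEV feature map
        # score_head: module mapping a C-dim feature to a logit (hypothetical)
        # obj_xy, cam_xy: (2,) BEV coordinates in [-1, 1] grid space
        # Sample BEV features along the camera-to-object ray: the feature at
        # the object's location is the positive; features elsewhere on the
        # ray are negatives. Training a score head to tell them apart
        # discourages duplicate predictions along the depth axis.
        ray = cam_xy + torch.linspace(0.1, 2.0, num_neg)[:, None] * (obj_xy - cam_xy)
        pts = torch.cat([obj_xy[None], ray], dim=0).clamp(-1, 1)  # (1+num_neg, 2)
        grid = pts.view(1, 1, -1, 2)
        feats = F.grid_sample(bev_feat[None], grid, align_corners=True)
        feats = feats.view(bev_feat.shape[0], -1).t()             # (1+num_neg, C)
        logits = score_head(feats).squeeze(-1)
        labels = torch.zeros_like(logits)
        labels[0] = 1.0  # only the true object location is positive
        return F.binary_cross_entropy_with_logits(logits, labels)

In practice, ray samples falling near the true depth would likely be excluded from the negatives; this sketch keeps all samples for brevity.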
The proposed DA-SCA and DNS methods effectively alleviate these two problems.
We show that DAT is a
versatile method that enhances the performance of all three popular models,
BEVFormer, DETR3D, and PETR. Our evaluation on BEVFormer demonstrates that DAT
achieves a significant improvement of +2.8 NDS on nuScenes val under the same
settings. Moreover, when using pre-trained VoVNet-99 as the backbone, DAT
achieves strong results of 60.0 NDS and 51.5 mAP on nuScenes test. Our code
will be released soon.