Perception is crucial in the realm of autonomous driving systems, where
bird's eye view (BEV)-based architectures have recently reached
state-of-the-art performance. Self-supervised representation learning is
desirable because annotating 2D and 3D data is expensive and laborious.
Although previous research has investigated
pretraining methods for both LiDAR and camera-based 3D object detection, a
unified pretraining framework for multimodal BEV perception is missing. In this
study, we introduce CALICO, a novel framework that applies contrastive
objectives to both LiDAR and camera backbones. Specifically, CALICO
incorporates two stages: point-region contrast (PRC) and region-aware
distillation (RAD). PRC better balances the region- and scene-level
representation learning on the LiDAR modality and offers significant
performance improvement compared to existing methods. RAD effectively achieves
contrastive distillation on our self-trained teacher model. CALICO's efficacy
is substantiated by extensive evaluations on 3D object detection and BEV map
segmentation tasks, where it delivers significant performance improvements.
Notably, CALICO outperforms the baseline method by 10.5% and 8.6% on NDS and
mAP, respectively. Moreover, CALICO boosts the robustness of multimodal 3D object detection
against adversarial attacks and corruption. Additionally, our framework can be
tailored to different backbones and heads, positioning it as a promising
approach for multimodal BEV perception.
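Both PRC and RAD are built on contrastive objectives. As a rough illustration of the kind of loss such objectives use, the sketch below implements a generic InfoNCE-style contrastive loss between two sets of region embeddings (e.g., student and teacher features in a distillation setting). This is a minimal, assumed formulation for exposition only, not CALICO's exact objective; the function name and temperature value are illustrative.

```python
import numpy as np

def info_nce_loss(student, teacher, temperature=0.07):
    """Generic InfoNCE-style contrastive loss (illustrative sketch).

    Rows of `student` and `teacher` are region embeddings; row i of each
    matrix is treated as a positive pair, and all other rows serve as
    negatives. Not CALICO's exact objective.
    """
    # L2-normalize so dot products become cosine similarities.
    s = student / np.linalg.norm(student, axis=1, keepdims=True)
    t = teacher / np.linalg.norm(teacher, axis=1, keepdims=True)
    logits = s @ t.T / temperature                 # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal; minimize their negative log-probability.
    return -np.mean(np.diag(log_probs))
```

As expected of a contrastive loss, correctly aligned embedding pairs yield a lower loss than mismatched ones, which is what drives the representations of matching regions together during pretraining.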