The transformation of features from 2D perspective space to 3D space is
essential to multi-view 3D object detection. Recent approaches mainly focus on
the design of the view transformation: either lifting perspective-view
features into 3D space pixel-wise with estimated depth, or constructing BEV
features grid-wise via 3D projection, treating all pixels or grids equally. However,
choosing what to transform is equally important yet has rarely been
discussed: the pixels of a moving car, for example, are far more informative
than those of the sky. To fully utilize the information contained in images, the view
transformation should be able to adapt to different image regions according to
their contents. In this paper, we propose a novel framework named
FrustumFormer, which pays more attention to the features in instance regions
via adaptive instance-aware resampling. Specifically, the model obtains
instance frustums on the bird's eye view by leveraging image-view object
proposals. An adaptive occupancy mask within the instance frustum is learned to
refine the instance location. Moreover, intersecting instance frustums across
frames can further reduce the localization uncertainty of objects. Comprehensive
experiments on the nuScenes dataset demonstrate the effectiveness of
FrustumFormer, and we achieve a new state-of-the-art performance on the
benchmark. Code and models will be made available at
https://github.com/Robertwyq/Frustum.

Accepted to CVPR 202
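To make the core idea concrete, here is a minimal sketch of how a 2D box proposal can induce an instance frustum footprint on a BEV grid under a pinhole camera model. The function name, parameter choices, and grid settings below are illustrative assumptions for exposition, not the paper's implementation:

```python
def instance_frustum_bev(box2d, fx, cx, depth_range=(1.0, 60.0),
                         n_u=16, n_depth=64, bev_extent=61.2, bev_res=0.6):
    """Hypothetical helper: rasterize the BEV footprint of the frustum
    obtained by lifting a 2D image-view proposal along sampled depths.

    box2d: (u0, v0, u1, v1) pixel box; fx, cx: camera focal length and
    principal point (x); bev_extent/bev_res: grid half-size and cell size
    in meters (illustrative values, not from the paper).
    """
    u0, _, u1, _ = box2d
    grid = int(round(2 * bev_extent / bev_res))
    mask = [[False] * grid for _ in range(grid)]
    for i in range(n_u):
        # sample horizontal pixel positions across the box
        u = u0 + (u1 - u0) * i / max(n_u - 1, 1)
        for j in range(n_depth):
            # sample candidate depths along the ray through this pixel
            d = depth_range[0] + (depth_range[1] - depth_range[0]) * j / (n_depth - 1)
            # pinhole back-projection: lateral offset x at depth d (z forward)
            x = (u - cx) / fx * d
            ix = int((x + bev_extent) / bev_res)
            iz = int((d + bev_extent) / bev_res)
            if 0 <= ix < grid and 0 <= iz < grid:
                mask[iz][ix] = True
    return mask
```

Without a learned occupancy mask, every cell in this wedge is marked; the adaptive occupancy mask described in the abstract would instead down-weight cells unlikely to contain the instance, and intersecting such wedges across frames shrinks the plausible region further.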