In the field of monocular 3D detection, it is common practice to utilize
geometric scene cues to enhance the detector's performance. However, many
existing works exploit these cues explicitly, e.g., by estimating a depth map and
back-projecting it into 3D space. Such explicit methods yield sparse
3D representations because of the dimensionality increase from 2D to 3D, leading
to substantial information loss, especially for distant and occluded objects.
To alleviate this issue, we propose MonoNeRD, a novel detection framework that
can infer dense 3D geometry and occupancy. Specifically, we model scenes with
Signed Distance Functions (SDF), facilitating the production of dense 3D
representations. We treat these representations as Neural Radiance Fields
(NeRF) and then employ volume rendering to recover RGB images and depth maps.
To the best of our knowledge, this work is the first to introduce volume
rendering into monocular 3D detection, and it demonstrates the potential of implicit reconstruction
for image-based 3D perception. Extensive experiments conducted on the KITTI-3D
benchmark and Waymo Open Dataset demonstrate the effectiveness of MonoNeRD.
Code is available at https://github.com/cskkxjk/MonoNeRD.
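
As a rough illustration of the rendering step described above, the sketch below converts SDF values sampled along a camera ray into volume densities and composites them into an RGB color and a depth value via NeRF-style volume rendering. The Laplace-CDF mapping in sdf_to_density, the beta parameter, and all function names are illustrative assumptions for this sketch, not the exact formulation used by MonoNeRD.

```python
import torch

def sdf_to_density(sdf, beta=0.1):
    # Illustrative Laplace-CDF mapping (VolSDF-style, assumed here): density is
    # high inside the surface (sdf < 0) and decays smoothly outside (sdf > 0).
    return torch.where(
        sdf <= 0,
        1.0 - 0.5 * torch.exp(sdf / beta),
        0.5 * torch.exp(-sdf / beta),
    ) / beta

def render_ray(sdf, rgb, t_vals):
    # sdf:    (N,)   signed distances at N samples along one camera ray
    # rgb:    (N, 3) predicted color at each sample
    # t_vals: (N,)   depths of the samples along the ray
    sigma = sdf_to_density(sdf)
    deltas = torch.diff(t_vals, append=t_vals[-1:] + 1e10)   # spacing between samples
    alpha = 1.0 - torch.exp(-sigma * deltas)                 # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0
    )                                                        # accumulated transmittance
    weights = alpha * trans
    rendered_rgb = (weights[:, None] * rgb).sum(dim=0)       # composited color
    rendered_depth = (weights * t_vals).sum(dim=0)           # expected depth
    return rendered_rgb, rendered_depth

# Example: 64 samples along one ray with random network predictions.
t = torch.linspace(2.0, 40.0, 64)
color, depth = render_ray(torch.randn(64), torch.rand(64, 3), t)
```

In practice the rendered image and depth map would be compared against the input image and a depth supervision signal, so that the dense SDF-based representation is learned without explicit 3D labels for geometry.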