Monocular 3D object detection (Mono3D) in mobile settings (e.g., on a
vehicle, a drone, or a robot) is an important yet challenging task. Due to the
near-far disparity phenomenon of monocular vision and the ever-changing camera
pose, it is difficult to achieve high detection accuracy, especially for distant
objects. Inspired by the insight that the depth of an object can be reliably
inferred from the depth of the ground on which it stands, in this paper we
propose a novel Mono3D framework, called MoGDE, which continuously estimates
the ground depth of the image and then utilizes this estimated ground depth to
guide Mono3D. To this end, we utilize a pose detection network to estimate the
pose of the camera and then construct a feature map portraying pixel-level
ground depth according to the 3D-to-2D perspective geometry.
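To make this construction concrete, the sketch below derives a per-pixel ground
depth map by back-projecting each pixel to a viewing ray and intersecting it
with the ground plane. This is a minimal illustration under assumed conventions
(pinhole camera, y-down camera axes, known camera height), not the paper's
implementation; the function name and arguments are hypothetical.

```python
import numpy as np

def ground_depth_map(K, R_wc, cam_height, H, W, eps=1e-6):
    """Hypothetical sketch: per-pixel depth of the ground plane for a
    pinhole camera, given an estimated camera pose.

    K          : (3, 3) intrinsic matrix.
    R_wc       : (3, 3) rotation taking camera-frame rays to the world frame
                 (encodes the estimated pose, e.g. pitch and roll).
    cam_height : camera height above the ground plane, in meters.
    Returns an (H, W) depth map along the camera z-axis; pixels whose rays
    never reach the ground (at or above the horizon) are set to np.inf.
    """
    # Back-project every pixel (u, v, 1) to a ray in the camera frame.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3).T  # (3, H*W)
    rays_cam = np.linalg.inv(K) @ pix

    # Rotate rays into the world frame, where (y pointing down) the ground
    # plane lies at y = cam_height below the camera center.
    rays_world = R_wc @ rays_cam

    # Ray-plane intersection: solve t * rays_world_y = cam_height.
    denom = np.where(np.abs(rays_world[1]) > eps, rays_world[1], eps)
    t = cam_height / denom
    depth = t * rays_cam[2]                 # z-component of t * ray
    depth = np.where(t > 0, depth, np.inf)  # rays above the horizon miss
    return depth.reshape(H, W)
```

For a level camera (R_wc equal to the identity), this reduces to the classical
relation Z = f_y * h / (v - c_y) for pixels below the principal point.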
Moreover, to improve Mono3D with the estimated ground depth, we design an
RGB-D feature fusion network based on the transformer architecture, where the
long-range self-attention mechanism is utilized to effectively identify
ground-contacting points and pin the corresponding ground depth to the image
feature map.
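As one plausible realization of such a fusion module (a hedged sketch, not the
paper's exact architecture; the class name and layer sizes are assumptions),
flattened image-feature tokens can attend over ground-depth tokens with
multi-head attention:

```python
import torch
import torch.nn as nn

class RGBDFusion(nn.Module):
    """Hypothetical transformer-style fusion of image features with
    ground-depth features via long-range attention."""

    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.ReLU(inplace=True),
            nn.Linear(4 * dim, dim),
        )

    def forward(self, rgb_feat, depth_feat):
        """rgb_feat, depth_feat: (B, C, H, W) feature maps of equal shape."""
        B, C, H, W = rgb_feat.shape
        q = rgb_feat.flatten(2).transpose(1, 2)     # (B, H*W, C) queries
        kv = depth_feat.flatten(2).transpose(1, 2)  # (B, H*W, C) keys/values
        # Long-range attention: every image token can look at every
        # ground-depth token, not only the one at its own pixel.
        fused, _ = self.attn(q, kv, kv)
        x = self.norm1(q + fused)
        x = self.norm2(x + self.ffn(x))
        return x.transpose(1, 2).reshape(B, C, H, W)
```

Attention (rather than per-pixel concatenation) allows a feature at, say, an
object's roof pixel to retrieve the ground depth at the object's contact point
lower in the image.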
We conduct extensive experiments on the real-world KITTI dataset. The results
demonstrate that MoGDE can effectively
improve the Mono3D accuracy and robustness for both near and far objects.
MoGDE outperforms state-of-the-art methods by a large margin and ranks first
on the KITTI 3D benchmark.

Comment: 36th Conference on Neural Information Processing Systems (NeurIPS), 2022.