3D occupancy prediction is an emerging task that aims to estimate the
occupancy states and semantics of 3D scenes using multi-view images. However,
image-based scene perception encounters significant challenges in achieving
accurate prediction due to the absence of geometric priors. In this paper, we
address this issue by exploring cross-modal knowledge distillation in this
task, i.e., we leverage a stronger multi-modal model to guide the visual model
during training. In practice, we observe that directly applying feature or
logit alignment, as proposed and widely used in bird's-eye-view (BEV) perception,
does not yield satisfactory results. To overcome this problem, we introduce
RadOcc, a Rendering assisted distillation paradigm for 3D Occupancy prediction.
By employing differentiable volume rendering, we generate depth and semantic
maps in perspective views and propose two novel consistency criteria between
the rendered outputs of teacher and student models. Specifically, the depth
consistency loss aligns the termination distributions of the rendered rays,
while the semantic consistency loss mimics the intra-segment similarity guided
by vision foundation models (VFMs). Experimental results on the nuScenes
dataset demonstrate the effectiveness of our proposed method in improving
various 3D occupancy prediction approaches, e.g., our method improves our
baseline by 2.2% mIoU and achieves 50% mIoU on the Occ3D
benchmark.
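
To make the two consistency criteria concrete, the following minimal PyTorch sketch illustrates one plausible form of the depth and semantic consistency losses. It assumes that per-ray termination weights and per-pixel features have already been obtained by rendering the teacher and student occupancy volumes, and that segment masks come from a vision foundation model; all function names, tensor shapes, and loss choices (KL divergence over ray termination distributions, MSE over intra-segment affinity matrices) are illustrative and not the authors' exact implementation.

# Minimal, self-contained sketch (not the authors' code) of the two
# rendering-based distillation losses described in the abstract.
import torch
import torch.nn.functional as F


def depth_consistency_loss(student_weights, teacher_weights, eps=1e-6):
    """KL divergence between per-ray termination distributions.

    Both tensors are assumed to have shape (num_rays, num_samples), where each
    row holds the volume-rendering weights of one camera ray; they are
    normalized here into probability distributions before comparison.
    """
    p_t = teacher_weights / (teacher_weights.sum(dim=-1, keepdim=True) + eps)
    p_s = student_weights / (student_weights.sum(dim=-1, keepdim=True) + eps)
    return F.kl_div((p_s + eps).log(), p_t, reduction="batchmean")


def semantic_consistency_loss(student_feats, teacher_feats, segment_ids):
    """Intra-segment similarity mimicking.

    `segment_ids` is a (num_pixels,) tensor of segment labels, assumed to come
    from a vision foundation model. Within each segment, the student's pairwise
    feature similarities are pushed toward the teacher's.
    """
    loss, count = 0.0, 0
    for seg in segment_ids.unique():
        idx = (segment_ids == seg).nonzero(as_tuple=True)[0]
        if idx.numel() < 2:
            continue
        s = F.normalize(student_feats[idx], dim=-1)
        t = F.normalize(teacher_feats[idx], dim=-1)
        loss = loss + F.mse_loss(s @ s.T, t @ t.T)
        count += 1
    return loss / max(count, 1)


if __name__ == "__main__":
    # Toy shapes: 1024 rays with 64 samples each; 500 pixels with 32-dim features.
    sw, tw = torch.rand(1024, 64), torch.rand(1024, 64)
    sf, tf = torch.randn(500, 32), torch.randn(500, 32)
    segs = torch.randint(0, 10, (500,))
    print(depth_consistency_loss(sw, tw).item())
    print(semantic_consistency_loss(sf, tf, segs).item())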