LiDAR-based 3D object detection, semantic segmentation, and panoptic
segmentation are usually implemented in specialized networks with distinctive
architectures that are difficult to adapt to each other. This paper presents
LidarMultiNet, a LiDAR-based multi-task network that unifies these three major
LiDAR perception tasks. Among its many benefits, a multi-task network can
reduce the overall cost by sharing weights and computation among multiple
tasks. However, it typically underperforms a combination of independently
trained single-task models. The proposed LidarMultiNet aims to bridge the performance
gap between the multi-task network and multiple single-task networks. At the
core of LidarMultiNet is a strong 3D voxel-based encoder-decoder architecture
with a Global Context Pooling (GCP) module that extracts global contextual
features from a LiDAR frame. Task-specific heads are added on top of the
network to perform the three LiDAR perception tasks. More tasks can be
implemented simply by adding new task-specific heads while introducing little
additional cost. A second stage is also proposed to refine the first-stage
segmentation and generate accurate panoptic segmentation results.
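As a minimal, illustrative sketch of this layout (a shared encoder-decoder, a GCP-style global-context bottleneck, and per-task heads), consider the following PyTorch code. It is a hedged approximation only: the dense (non-sparse) convolutions, channel sizes, height-mean BEV pooling, and toy detection head are simplifying assumptions, not the paper's sparse-voxel implementation.

```python
# Illustrative sketch of the multi-task layout described above.
# All module names, sizes, and the dense 3D convolutions are assumptions;
# the actual LidarMultiNet operates on sparse 3D voxel features.
import torch
import torch.nn as nn

class GlobalContextPooling(nn.Module):
    """Stand-in for GCP: collapse bottleneck features into a dense 2D BEV
    map, enlarge the receptive field with 2D convs, and fuse context back."""
    def __init__(self, channels):
        super().__init__()
        self.bev_convs = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):            # x: (B, C, D, H, W) dense voxel features
        bev = x.mean(dim=2)          # collapse height -> (B, C, H, W) BEV map
        bev = self.bev_convs(bev)    # 2D context over the whole frame
        return x + bev.unsqueeze(2)  # broadcast context back to every voxel

class LidarMultiNetSketch(nn.Module):
    def __init__(self, in_ch=4, ch=32, num_classes=22):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv3d(in_ch, ch, 3, padding=1), nn.ReLU())
        self.gcp = GlobalContextPooling(ch)
        self.decoder = nn.Sequential(nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU())
        # One head per task; supporting a new task is just one more head.
        self.seg_head = nn.Conv3d(ch, num_classes, 1)  # per-voxel semantics
        self.det_head = nn.Conv3d(ch, 7, 1)            # toy box regression

    def forward(self, voxels):
        feats = self.decoder(self.gcp(self.encoder(voxels)))
        return {"semantic": self.seg_head(feats), "detection": self.det_head(feats)}

x = torch.randn(1, 4, 8, 64, 64)  # (batch, point features, depth, height, width)
out = LidarMultiNetSketch()(x)
print(out["semantic"].shape, out["detection"].shape)
```

The sketch shows why extra tasks are cheap in this design: the encoder, GCP bottleneck, and decoder are computed once per frame, and each additional head is a single lightweight convolution over the shared features.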
LidarMultiNet is extensively tested on both the Waymo Open Dataset and the nuScenes dataset,
demonstrating for the first time that major LiDAR perception tasks can be
unified in a single strong network that is trained end-to-end and achieves
state-of-the-art performance. Notably, LidarMultiNet achieves the official 1st
place in the 2022 Waymo Open Dataset 3D semantic segmentation challenge with
the highest mIoU and the best accuracy for most of the 22 classes on the test
set, using only LiDAR points as input. It also sets the new state-of-the-art
for a single model on the Waymo 3D object detection benchmark and three
nuScenes benchmarks.

Comment: Full-length paper extending our previous technical report of the 1st
place solution of the 2022 Waymo Open Dataset 3D Semantic Segmentation
challenge, including evaluations on 5 major benchmarks.