We introduce a novel superpoint-based transformer architecture for efficient
semantic segmentation of large-scale 3D scenes. Our method incorporates a fast
algorithm to partition point clouds into a hierarchical superpoint structure,
which makes our preprocessing 7 times faster than existing superpoint-based
approaches. Additionally, we leverage a self-attention mechanism to capture the
relationships between superpoints at multiple scales, leading to
state-of-the-art performance on three challenging benchmark datasets: S3DIS
(76.0% mIoU, 6-fold validation), KITTI-360 (63.5% mIoU on Val), and DALES (79.6% mIoU).
With only 212k parameters, our approach is up to 200 times more compact than
other state-of-the-art models while maintaining similar performance.
Furthermore, our model can be trained on a single GPU in 3 hours for a fold of
the S3DIS dataset, which is 7x to 70x fewer GPU-hours than the best-performing
methods. Our code and models are accessible at
github.com/drprojects/superpoint_transformer.
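To make the hierarchical superpoint structure concrete, the following is a minimal, hypothetical sketch of partitioning a point cloud into progressively coarser groups. The paper's actual preprocessing relies on a fast graph-based partition rather than the simple voxel-grid grouping shown here; all function names and parameters below are illustrative assumptions.

```python
# Hypothetical sketch: a hierarchical partition built by voxel-grid
# clustering at increasing cell sizes. The paper's real algorithm is a
# graph-based partition; this only illustrates the nested structure.
import numpy as np

def voxel_partition(points, voxel_size):
    """Assign each point a superpoint id from its voxel cell."""
    cells = np.floor(points / voxel_size).astype(np.int64)
    # Each unique voxel cell becomes one superpoint
    _, ids = np.unique(cells, axis=0, return_inverse=True)
    return ids

def hierarchical_partition(points, voxel_sizes=(0.1, 0.5, 2.0)):
    """Return one partition per scale, from fine to coarse."""
    return [voxel_partition(points, s) for s in voxel_sizes]

points = np.random.rand(10000, 3) * 10.0  # toy cloud in a 10 m cube
for i, ids in enumerate(hierarchical_partition(points)):
    print(f"level {i}: {ids.max() + 1} superpoints")
```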
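Similarly, a minimal sketch of self-attention over superpoints at one scale is given below, using standard multi-head attention with superpoints as tokens. The module and shapes are assumptions for illustration, not the paper's implementation; in a multi-scale model, one such block would operate at each level of the hierarchy.

```python
# Minimal sketch: self-attention over superpoint features at one scale.
# Module names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class SuperpointAttention(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        # x: (1, num_superpoints, dim) -- one scene, superpoints as tokens
        out, _ = self.attn(x, x, x)
        return self.norm(x + out)  # residual connection

# Toy usage: 128 superpoints with 64-dim features
feats = torch.randn(1, 128, 64)
print(SuperpointAttention()(feats).shape)  # torch.Size([1, 128, 64])
```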