Anisotropic Convolutional Networks for 3D Semantic Scene Completion
As a voxel-wise labeling task, semantic scene completion (SSC) tries to
simultaneously infer the occupancy and semantic labels for a scene from a
single depth and/or RGB image. The key challenge for SSC is how to effectively
take advantage of the 3D context to model various objects and stuff with severe
variations in shape, layout, and visibility. To handle such variations, we
propose a novel module called anisotropic convolution, which offers flexibility
and power that competing methods such as standard 3D convolution and its
variants cannot match. In contrast to standard 3D convolution, which is limited
to a fixed 3D receptive field, our module models dimensional anisotropy on a
per-voxel basis. The basic idea is to enable an anisotropic 3D receptive field
by decomposing a 3D convolution into three consecutive 1D convolutions, where
the kernel size of each 1D
convolution is adaptively determined on the fly. By stacking multiple such
anisotropic convolution modules, the voxel-wise modeling capability can be
further enhanced while keeping the number of model parameters under control.
Extensive experiments on two SSC benchmarks, NYU-Depth-v2 and NYUCAD, show the
superior performance of the proposed method. Our code is available at
https://waterljwant.github.io/SSC
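
The decomposition described in this abstract can be illustrated with a short sketch. The following is a minimal PyTorch example, not the authors' released code: the class name AnisotropicConv3d, the candidate kernel sizes, and the per-voxel soft gating used to stand in for the adaptive kernel-size selection are all assumptions made for illustration.

# Minimal sketch (not the authors' code): a 3D convolution decomposed into
# three consecutive 1D convolutions, one per spatial axis, where the kernel
# size along each axis is chosen on the fly by a learned soft weighting over
# a small set of candidate kernel sizes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnisotropicConv3d(nn.Module):
    def __init__(self, channels, candidate_sizes=(3, 5, 7)):
        super().__init__()
        self.candidate_sizes = candidate_sizes
        # One 1D convolution per axis and per candidate kernel size.
        self.convs = nn.ModuleList()
        for axis in range(3):  # depth, height, width
            branch = nn.ModuleList()
            for k in candidate_sizes:
                kernel = [1, 1, 1]
                kernel[axis] = k
                pad = [0, 0, 0]
                pad[axis] = k // 2
                branch.append(nn.Conv3d(channels, channels,
                                        kernel_size=tuple(kernel),
                                        padding=tuple(pad)))
            self.convs.append(branch)
        # Per-voxel gates that softly select a kernel size for each axis.
        self.gates = nn.ModuleList(
            nn.Conv3d(channels, len(candidate_sizes), kernel_size=1)
            for _ in range(3)
        )

    def forward(self, x):
        # Apply the three 1D convolutions consecutively, one axis at a time.
        for axis in range(3):
            weights = F.softmax(self.gates[axis](x), dim=1)  # (B, K, D, H, W)
            out = 0
            for i, conv in enumerate(self.convs[axis]):
                out = out + weights[:, i:i + 1] * conv(x)
            x = out
        return x

# Usage: a batch of voxel features (B, C, D, H, W).
feat = torch.randn(1, 16, 32, 32, 32)
print(AnisotropicConv3d(16)(feat).shape)  # torch.Size([1, 16, 32, 32, 32])

Stacking several such modules, as the abstract suggests, deepens the per-voxel receptive-field modeling while each module only adds 1D (rather than full 3D) convolution weights.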
S3CNet: A Sparse Semantic Scene Completion Network for LiDAR Point Clouds
With the increasing reliance of self-driving and similar robotic systems on
robust 3D vision, the processing of LiDAR scans with deep convolutional neural
networks has become a trend in academia and industry alike. Prior attempts at
the challenging Semantic Scene Completion task - which entails the inference of
dense 3D structure and associated semantic labels from "sparse" representations
- have been, to a degree, successful in small indoor scenes when provided with
dense point clouds or dense depth maps often fused with semantic segmentation
maps from RGB images. However, the performance of these systems drops
drastically when applied to large outdoor scenes characterized by dynamic and
far sparser conditions. Likewise, processing the entire sparse volume becomes
infeasible due to memory limitations, and workarounds introduce computational
inefficiency: practitioners are forced to divide the overall volume into
multiple equal segments and run inference on each individually, rendering
real-time performance impossible. In this work, we formulate a method that
exploits the sparsity of large-scale environments and present S3CNet, a sparse
convolution-based neural network that predicts the semantically completed scene
from a single, unified LiDAR point cloud. We show that our proposed method
outperforms all counterparts on the 3D task, achieving state-of-the-art results
on the SemanticKITTI benchmark. Furthermore, we propose a 2D variant of S3CNet
with a multi-view fusion strategy to complement our 3D network, providing
robustness to occlusions and extreme sparsity in distant regions. We conduct
experiments for the 2D semantic scene completion task and compare the results
of our sparse 2D network against several leading LiDAR segmentation models
adapted for bird's eye view segmentation on two open-source datasets.
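
As a small illustration of the bird's-eye-view representation that a 2D variant like the one described above would consume, the following is a minimal NumPy sketch, not S3CNet itself: the helper name points_to_bev, the grid ranges, the resolution, and the two feature channels (occupancy and maximum height) are assumptions made for illustration.

# Minimal sketch (illustrative, not from the paper): project a sparse LiDAR
# point cloud onto a bird's-eye-view grid with occupancy and max-height
# channels, the kind of 2D input a BEV completion/segmentation network uses.
import numpy as np

def points_to_bev(points, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0),
                  resolution=0.5):
    """points: (N, 3) array of x, y, z coordinates in meters."""
    nx = int((x_range[1] - x_range[0]) / resolution)
    ny = int((y_range[1] - y_range[0]) / resolution)
    occupancy = np.zeros((nx, ny), dtype=np.float32)
    height = np.full((nx, ny), -np.inf, dtype=np.float32)

    # Keep only points inside the grid bounds.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]

    # Scatter points into cells; record occupancy and the tallest point.
    ix = ((pts[:, 0] - x_range[0]) / resolution).astype(int)
    iy = ((pts[:, 1] - y_range[0]) / resolution).astype(int)
    occupancy[ix, iy] = 1.0
    np.maximum.at(height, (ix, iy), pts[:, 2])

    height[occupancy == 0] = 0.0
    return np.stack([occupancy, height])  # (2, nx, ny) BEV feature map

# Usage with a random cloud standing in for a LiDAR scan.
cloud = np.random.uniform(-60, 60, size=(100000, 3))
print(points_to_bev(cloud).shape)  # (2, 200, 200)

A projection of this kind compresses the sparse 3D scan into a dense 2D map, which is one common way to let a 2D network complement a 3D sparse-convolution branch in distant, heavily occluded regions.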