Attention-based Multi-modal Fusion Network for Semantic Scene Completion
This paper presents an end-to-end 3D convolutional network named
attention-based multi-modal fusion network (AMFNet) for the semantic scene
completion (SSC) task of inferring the occupancy and semantic labels of a
volumetric 3D scene from single-view RGB-D images. Compared with previous
methods which use only the semantic features extracted from RGB-D images, the
proposed AMFNet learns to perform effective 3D scene completion and semantic
segmentation simultaneously via leveraging the experience of inferring 2D
semantic segmentation from RGB-D images as well as the reliable depth cues in
spatial dimension. It is achieved by employing a multi-modal fusion
architecture boosted from 2D semantic segmentation and a 3D semantic completion
network empowered by residual attention blocks. We validate our method on both
the synthetic SUNCG-RGBD dataset and the real NYUv2 dataset, and the results
show that our method achieves gains of 2.5% and 2.6% over the
state-of-the-art method on these two datasets, respectively. Comment: Accepted by AAAI 202
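The abstract mentions residual attention blocks in the 3D completion network without giving details. Purely as an illustration, the following minimal PyTorch sketch shows one plausible form of such a block: a 3D residual branch whose output is re-weighted by a voxel-wise sigmoid attention mask. The layer sizes and the gating form are assumptions for illustration, not AMFNet's actual design.

```python
# Hypothetical sketch of a 3D residual attention block of the kind AMFNet
# describes; layer sizes and the exact attention form are assumptions,
# not taken from the paper.
import torch
import torch.nn as nn


class ResidualAttentionBlock3D(nn.Module):
    """3D residual block whose output is re-weighted by a learned attention mask."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
        )
        # 1x1x1 convolution producing a voxel-wise attention mask in (0, 1).
        self.attention = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.body(x)
        mask = self.attention(residual)
        # Attention-modulated residual connection.
        return self.relu(x + mask * residual)


if __name__ == "__main__":
    block = ResidualAttentionBlock3D(channels=16)
    voxels = torch.randn(1, 16, 32, 32, 32)   # (batch, channels, D, H, W)
    print(block(voxels).shape)                # torch.Size([1, 16, 32, 32, 32])
```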
Real-time 3D Semantic Scene Completion Via Feature Aggregation and Conditioned Prediction
Semantic Scene Completion (SSC) aims to simultaneously predict the volumetric
occupancy and semantic category of a 3D scene. In this paper, we propose a
real-time semantic scene completion method with a feature aggregation strategy
and a conditioned prediction module. Feature aggregation fuses features with
different receptive fields and gathers context to improve scene completion
performance, while the conditioned prediction module adopts a two-step prediction
scheme that takes volumetric occupancy as a condition to enhance semantic
completion prediction. We conduct experiments on three recognized benchmarks:
NYU, NYUCAD, and SUNCG. Our method achieves competitive performance at a speed
of 110 FPS on one GTX 1080 Ti GPU. Comment: Accepted by ICI
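As a rough illustration of the two-step conditioned prediction idea, the sketch below first predicts volumetric occupancy and then concatenates it back as an extra conditioning channel for the semantic prediction. This is a minimal, assumption-laden PyTorch sketch; the concrete layers and channel widths in the paper may differ.

```python
# Illustrative two-step "conditioned prediction" head: occupancy is predicted
# first and then fed back as a condition for the semantic prediction.
# The layer choices are assumptions; the paper's exact design may differ.
import torch
import torch.nn as nn


class ConditionedPredictionHead(nn.Module):
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        # Step 1: binary occupancy prediction (occupied vs. empty).
        self.occupancy_head = nn.Conv3d(in_channels, 1, kernel_size=1)
        # Step 2: semantic prediction conditioned on features + occupancy.
        self.semantic_head = nn.Sequential(
            nn.Conv3d(in_channels + 1, in_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(in_channels, num_classes, kernel_size=1),
        )

    def forward(self, features: torch.Tensor):
        occupancy = torch.sigmoid(self.occupancy_head(features))
        # Concatenate the occupancy probability as an extra conditioning channel.
        semantics = self.semantic_head(torch.cat([features, occupancy], dim=1))
        return occupancy, semantics


if __name__ == "__main__":
    head = ConditionedPredictionHead(in_channels=32, num_classes=12)
    feats = torch.randn(1, 32, 16, 16, 16)
    occ, sem = head(feats)
    print(occ.shape, sem.shape)
```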
Anisotropic Convolutional Networks for 3D Semantic Scene Completion
As a voxel-wise labeling task, semantic scene completion (SSC) tries to
simultaneously infer the occupancy and semantic labels for a scene from a
single depth and/or RGB image. The key challenge for SSC is how to effectively
take advantage of the 3D context to model various objects or stuff with severe
variations in shapes, layouts and visibility. To handle such variations, we
propose a novel module called anisotropic convolution, which offers
flexibility and power that competing methods such as standard 3D
convolution and some of its variations cannot match. In contrast to the standard 3D
convolution that is limited to a fixed 3D receptive field, our module
models the dimensional anisotropy voxel by voxel. The basic idea is
to enable anisotropic 3D receptive field by decomposing a 3D convolution into
three consecutive 1D convolutions, and the kernel size for each such 1D
convolution is adaptively determined on the fly. By stacking multiple such
anisotropic convolution modules, the voxel-wise modeling capability can be
further enhanced while maintaining a controllable amount of model parameters.
Extensive experiments on two SSC benchmarks, NYU-Depth-v2 and NYUCAD, show the
superior performance of the proposed method. Our code is available at
https://waterljwant.github.io/SSC
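The authors' implementation is available at the link above. Purely to illustrate the decomposition described in the abstract, the following sketch replaces a 3D convolution with three consecutive axis-wise 1D convolutions and lets a per-voxel softmax choose among candidate kernel sizes for each axis. The kernel-size set and the gating layer are assumptions, not the paper's exact module.

```python
# Minimal sketch of an anisotropic convolution module in the spirit described
# above: a 3D convolution decomposed into three consecutive 1D convolutions
# (one per axis), with per-voxel softmax weights selecting among candidate
# kernel sizes. Kernel sizes and the selector are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AxisAdaptiveConv(nn.Module):
    """1D convolution along one spatial axis with a voxel-wise mixture of kernel sizes."""

    def __init__(self, channels: int, axis: int, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList()
        for k in kernel_sizes:
            size, pad = [1, 1, 1], [0, 0, 0]
            size[axis], pad[axis] = k, k // 2
            self.branches.append(
                nn.Conv3d(channels, channels, kernel_size=tuple(size), padding=tuple(pad))
            )
        # Per-voxel soft selection over the candidate kernel sizes.
        self.selector = nn.Conv3d(channels, len(kernel_sizes), kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.selector(x), dim=1)              # (B, K, D, H, W)
        outputs = torch.stack([b(x) for b in self.branches], 1)   # (B, K, C, D, H, W)
        return (weights.unsqueeze(2) * outputs).sum(dim=1)


class AnisotropicConvModule(nn.Module):
    """Three consecutive axis-wise adaptive 1D convolutions with a residual connection."""

    def __init__(self, channels: int):
        super().__init__()
        self.axes = nn.Sequential(
            AxisAdaptiveConv(channels, axis=0),  # depth
            AxisAdaptiveConv(channels, axis=1),  # height
            AxisAdaptiveConv(channels, axis=2),  # width
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.relu(x + self.axes(x))


if __name__ == "__main__":
    m = AnisotropicConvModule(channels=8)
    print(m(torch.randn(1, 8, 16, 16, 16)).shape)  # torch.Size([1, 8, 16, 16, 16])
```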
Deep Learning for 2D and 3D Scene Understanding
This thesis comprises a body of work that investigates the use of deep learning for 2D and 3D scene understanding. Although there has been significant progress in computer vision using deep learning, much of that progress has been relative to performance benchmarks, and for static images; good performance on one benchmark does not necessarily mean good generalization to the kind of viewing conditions that might be encountered by an autonomous robot or agent. In this thesis, we address a variety of problems motivated by the desire to see deep learning algorithms generalize better to robotic vision scenarios. Specifically, we span the topics of multi-object detection, unsupervised domain adaptation for semantic segmentation, video object segmentation, and semantic scene completion.

First, most modern object detectors use a final post-processing step known as non-maximum suppression (GreedyNMS), which suffers from an inevitable trade-off between precision and recall in crowded scenes. To overcome this limitation, we propose a Pairwise-NMS to cure GreedyNMS: a deep pairwise-relationship network is learned to predict whether two overlapping proposal boxes contain two objects or zero/one object, which handles multiple overlapping objects effectively.

A common issue in training deep neural networks is the need for large training sets. One approach is to use simulated image and video data, but this suffers from a domain gap wherein performance on real-world data is poor relative to performance on the simulated data. We explore several approaches to this so-called domain adaptation for semantic segmentation: (1) single and multiple exemplars are employed for each class in order to cluster the per-pixel features in the embedding space; (2) a class-balanced self-training strategy is utilized for generating pseudo labels in the target domain; and (3) a convolutional adaptor is adopted to enforce that the features in the source domain and the target domain are close to each other.

Next, we tackle video object segmentation by formulating it as a meta-learning problem, where the base learner aims to learn semantic scene understanding for general objects, and the meta learner quickly adapts to the appearance of the target object from a few examples. Our proposed meta-learning method uses a closed-form optimizer, the so-called "ridge regression", which is conducive to fast and better training convergence. One-shot video object segmentation (OSVOS) has the limitation of "overemphasizing" the generic semantic object information while "diluting" the instance cues of the object(s), which largely hampers the training process. By adding a common module, a video loss, which we formulate with various forms of constraints (including a weighted BCE loss, a high-dimensional triplet loss, as well as a novel mixed instance-aware video loss), to the training of the parent network, the network is better prepared for online fine-tuning.

Next, we introduce a lightweight Dimensional Decomposition Residual network (DDR) for 3D dense prediction tasks. The novel factorized convolution layer is effective for reducing the network parameters, and the proposed multi-scale fusion mechanism for depth and color images improves completion and segmentation accuracy simultaneously. Moreover, we propose PALNet, a novel hybrid network for Semantic Scene Completion (SSC) based on a single depth image. PALNet utilizes a two-stream network to extract both 2D and 3D features at multiple stages, using fine-grained depth information to efficiently capture the context as well as the geometric cues of the scene. A Position Aware Loss (PA-Loss) considers Local Geometric Anisotropy to determine the importance of different positions within the scene, which is beneficial for recovering key details such as the boundaries of objects and the corners of the scene. Finally, we propose a 3D gated recurrent fusion network (GRFNet), which learns to adaptively select and fuse the relevant information from depth and RGB by making use of gate and memory modules. Based on the single-stage fusion, we further propose a multi-stage fusion strategy, which models the correlations among different stages within the network. Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 202
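To make the dimensional decomposition idea behind DDR concrete, the following minimal sketch factorizes a 3x3x3 convolution into three 1D convolutions along the width, height and depth axes inside a residual block. Channel counts, dilation handling and the residual form are illustrative assumptions rather than the thesis's exact configuration.

```python
# Sketch of a DDR-style "dimensional decomposition" residual block: one 3x3x3
# convolution replaced by three cheaper axis-wise 1D convolutions.
# Details are assumptions for illustration, not the thesis's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DDRBlock(nn.Module):
    def __init__(self, channels: int, dilation: int = 1):
        super().__init__()
        d = dilation
        # Factorized replacement for a single 3x3x3 convolution.
        self.conv_w = nn.Conv3d(channels, channels, (1, 1, 3), padding=(0, 0, d), dilation=(1, 1, d))
        self.conv_h = nn.Conv3d(channels, channels, (1, 3, 1), padding=(0, d, 0), dilation=(1, d, 1))
        self.conv_d = nn.Conv3d(channels, channels, (3, 1, 1), padding=(d, 0, 0), dilation=(d, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = F.relu(self.conv_w(x))
        y = F.relu(self.conv_h(y))
        y = self.conv_d(y)
        return F.relu(x + y)  # residual connection


if __name__ == "__main__":
    block = DDRBlock(channels=16, dilation=2)
    print(block(torch.randn(1, 16, 32, 32, 32)).shape)  # torch.Size([1, 16, 32, 32, 32])
```

For C input and output channels, the three factorized convolutions use about 9*C^2 weights in place of the 27*C^2 of a full 3x3x3 kernel, which is where the parameter savings mentioned above come from.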
Sparsity Invariant CNNs
In this paper, we consider convolutional neural networks operating on sparse
inputs with an application to depth upsampling from sparse laser scan data.
First, we show that traditional convolutional networks perform poorly when
applied to sparse data even when the location of missing data is provided to
the network. To overcome this problem, we propose a simple yet effective sparse
convolution layer which explicitly considers the location of missing data
during the convolution operation. We demonstrate the benefits of the proposed
network architecture in synthetic and real experiments with respect to various
baseline approaches. Compared to dense baselines, the proposed sparse
convolution network generalizes well to novel datasets and is invariant to the
level of sparsity in the data. For our evaluation, we derive a novel dataset
from the KITTI benchmark, comprising 93k depth-annotated RGB images. Our
dataset allows for training and evaluating depth upsampling and depth
prediction techniques in challenging real-world settings and will be made
available upon publication.
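A sparsity-invariant convolution of the kind described above can be sketched compactly: mask the input, renormalize the convolution output by the number of observed pixels under the kernel, and propagate the validity mask by max-pooling. The sketch below is a hypothetical PyTorch rendition; the epsilon, bias handling and mask propagation details are assumptions rather than the paper's exact layer.

```python
# Compact sketch of a sparsity-invariant convolution: observed-value masking,
# normalization by the count of observed pixels, and mask propagation.
# Details are assumptions for illustration, not the paper's exact layer.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseConv2d(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, padding=pad, bias=False)
        self.bias = nn.Parameter(torch.zeros(out_channels))
        self.pool = nn.MaxPool2d(kernel_size, stride=1, padding=pad)
        self.kernel_size = kernel_size

    def forward(self, x: torch.Tensor, mask: torch.Tensor):
        # Convolve only observed values, then normalize by how many were observed.
        features = self.conv(x * mask)
        ones = torch.ones(1, 1, self.kernel_size, self.kernel_size, device=x.device)
        counts = F.conv2d(mask, ones, padding=self.kernel_size // 2)
        features = features / counts.clamp(min=1e-5) + self.bias.view(1, -1, 1, 1)
        # A pixel is valid downstream if any input pixel under the kernel was valid.
        return features, self.pool(mask)


if __name__ == "__main__":
    layer = SparseConv2d(1, 16)
    depth = torch.randn(1, 1, 64, 64)
    valid = (torch.rand(1, 1, 64, 64) > 0.95).float()  # ~5% of pixels observed
    out, new_mask = layer(depth, valid)
    print(out.shape, new_mask.shape)
```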