SEGCloud: Semantic Segmentation of 3D Point Clouds
3D semantic scene labeling is fundamental to agents operating in the real
world. In particular, labeling raw 3D point sets from sensors provides
fine-grained semantics. Recent works leverage the capabilities of Neural
Networks (NNs), but are limited to coarse voxel predictions and do not
explicitly enforce global consistency. We present SEGCloud, an end-to-end
framework to obtain 3D point-level segmentation that combines the advantages of
NNs, trilinear interpolation (TI), and fully connected Conditional Random Fields
(FC-CRF). Coarse voxel predictions from a 3D Fully Convolutional NN are
transferred back to the raw 3D points via trilinear interpolation. Then the
FC-CRF enforces global consistency and provides fine-grained semantics on the
points. We implement the latter as a differentiable Recurrent NN to allow joint
optimization. We evaluate the framework on two indoor and two outdoor 3D
datasets (NYU V2, S3DIS, KITTI, Semantic3D.net), and show performance
comparable or superior to the state-of-the-art on all datasets.

Comment: Accepted as a spotlight at the International Conference on 3D Vision
(3DV 2017).
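The transfer step the abstract describes, mapping coarse per-voxel class scores back to the raw points, can be sketched with plain trilinear interpolation. This is a minimal illustration, not the paper's implementation; the function name, array layout (a `(D, H, W, C)` score grid), and boundary clamping are assumptions.

```python
import numpy as np

def trilinear_interpolate(voxel_scores, points, voxel_size=1.0):
    """Transfer per-voxel class scores to raw 3D points.

    voxel_scores: (D, H, W, C) class scores on a regular voxel grid.
    points:       (N, 3) point coordinates in the grid's frame.
    Returns an (N, C) array of interpolated scores.
    """
    coords = points / voxel_size                 # continuous grid coordinates
    base = np.floor(coords).astype(int)          # lower-corner voxel index
    frac = coords - base                         # fractional offset in [0, 1)

    D, H, W, C = voxel_scores.shape
    out = np.zeros((points.shape[0], C))
    # Accumulate weighted contributions from the 8 surrounding voxels.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                idx = base + np.array([dx, dy, dz])
                # Clamp indices so boundary points stay inside the grid.
                idx = np.clip(idx, 0, np.array([D - 1, H - 1, W - 1]))
                w = ((frac[:, 0] if dx else 1 - frac[:, 0]) *
                     (frac[:, 1] if dy else 1 - frac[:, 1]) *
                     (frac[:, 2] if dz else 1 - frac[:, 2]))
                out += w[:, None] * voxel_scores[idx[:, 0], idx[:, 1], idx[:, 2]]
    return out
```

A point halfway between two voxel centers receives an equal blend of their scores, which is what lets the subsequent FC-CRF operate on point-level rather than voxel-level predictions.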
Learning Structured Inference Neural Networks with Label Relations
Images of scenes have various objects as well as abundant attributes, and
diverse levels of visual categorization are possible. A natural image could be
assigned with fine-grained labels that describe major components,
coarse-grained labels that depict high level abstraction or a set of labels
that reveal attributes. Such categorization at different concept layers can be
modeled with label graphs encoding label information. In this paper, we exploit
this rich information with a state-of-the-art deep learning framework, and propose
a generic structured model that leverages diverse label relations to improve
image classification performance. Our approach employs a novel stacked label
prediction neural network, capturing both inter-level and intra-level label
semantics. We evaluate our method on benchmark image datasets, and empirical
results illustrate the efficacy of our model.

Comment: Conference on Computer Vision and Pattern Recognition (CVPR) 201
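The inter-level label semantics the abstract mentions can be illustrated with a toy form of message passing on a label graph: coarse-level predictions are propagated to related fine-grained labels before the final softmax. This is a hedged sketch under assumptions of my own; the function, the `relation` matrix encoding, and the mixing weight `alpha` are hypothetical and not taken from the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def structured_inference(coarse_logits, fine_logits, relation, alpha=0.5):
    """Refine fine-grained label scores using coarse-level predictions.

    relation: (num_coarse, num_fine) matrix with relation[i, j] > 0 when
    coarse label i is linked to fine label j in the label graph.
    Returns (batch, num_fine) refined fine-label probabilities.
    """
    coarse_prob = softmax(coarse_logits)
    # Message from the coarse layer to the fine layer along label relations.
    message = coarse_prob @ relation
    refined = fine_logits + alpha * message
    return softmax(refined)
```

With a confident coarse prediction, fine labels connected to it in the graph receive a boost over unconnected ones, which is the intuition behind exploiting label relations across concept layers.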