Physics Inspired Optimization on Semantic Transfer Features: An Alternative Method for Room Layout Estimation
In this paper, we propose an alternative method to estimate room layouts of
cluttered indoor scenes. This method enjoys the benefits of two novel
techniques. The first is semantic transfer (ST), which provides: (1) a
formulation that integrates the relationship between scene clutter and room
layout into convolutional neural networks; (2) an architecture that can be
trained end-to-end; (3) a practical strategy for initializing the weights of
very deep networks under an unbalanced training data distribution. ST allows us
to extract highly robust features under various circumstances, and to address
the computational redundancy hidden in these features we develop a principled
and efficient inference scheme named physics inspired optimization (PIO). PIO's
basic idea is to formulate some phenomena observed in ST features into
mechanics concepts. Evaluations on public datasets LSUN and Hedau show that the
proposed method is more accurate than state-of-the-art methods.
Comment: To appear in CVPR 2017. Project Page:
https://sites.google.com/view/st-pio
Multi-View Deep Learning for Consistent Semantic Mapping with RGB-D Cameras
Visual scene understanding is an important capability that enables robots to
purposefully act in their environment. In this paper, we propose a novel
approach to object-class segmentation from multiple RGB-D views using deep
learning. We train a deep neural network to predict object-class semantics that
is consistent across several viewpoints in a semi-supervised way. At test time,
the semantic predictions of our network can be fused more consistently into
semantic keyframe maps than the predictions of a network trained on individual
views. We base our network architecture on a recent single-view deep learning
approach to RGB and depth fusion for semantic object-class segmentation and
enhance it with multi-scale loss minimization. We obtain the camera trajectory
using RGB-D SLAM and warp the predictions of RGB-D images into ground-truth
annotated frames in order to enforce multi-view consistency during training. At
test time, predictions from multiple views are fused into keyframes. We propose
and analyze several methods for enforcing multi-view consistency during
training and testing. We evaluate the benefit of multi-view consistency
training and demonstrate that pooling of deep features and fusion over multiple
views outperforms single-view baselines on the NYUDv2 benchmark for semantic
segmentation. Our end-to-end trained network achieves state-of-the-art
performance on the NYUDv2 dataset in single-view segmentation as well as
multi-view semantic fusion.
Comment: The 2017 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS 2017).
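As an illustration of the kind of view fusion described above, the sketch below averages per-pixel class probabilities from several views after they have been warped into a common keyframe. The paper analyzes several fusion and consistency variants, so the simple valid-pixel average, the array layout, and the variable names here are assumptions for illustration only.

    import numpy as np

    def fuse_views(prob_maps, valid_masks):
        # prob_maps:   list of (H, W, C) class-probability maps, already warped into the keyframe
        # valid_masks: list of (H, W) boolean maps, True where the warp produced a valid pixel
        acc = np.zeros_like(prob_maps[0])
        count = np.zeros(prob_maps[0].shape[:2] + (1,), dtype=prob_maps[0].dtype)
        for probs, mask in zip(prob_maps, valid_masks):
            m = mask[..., None].astype(probs.dtype)
            acc += probs * m
            count += m
        fused = acc / np.maximum(count, 1.0)             # average over valid views only
        norm = fused.sum(axis=-1, keepdims=True)
        fused = np.where(norm > 0, fused / norm, fused)  # renormalise to a distribution
        return fused

    # keyframe_labels = fuse_views(warped_probs, warped_valid).argmax(axis=-1)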
Discriminative Training of Deep Fully-connected Continuous CRF with Task-specific Loss
Recent works on deep conditional random fields (CRF) have set new records on
many vision tasks involving structured predictions. Here we propose a
fully-connected deep continuous CRF model for both discrete and continuous
labelling problems. We exemplify the usefulness of the proposed model on
multi-class semantic labelling (discrete) and robust depth estimation
(continuous) problems.
In our framework, we model both the unary and the pairwise potential
functions as deep convolutional neural networks (CNN), which are jointly
learned in an end-to-end fashion. The proposed method retains the main
advantage of continuously-valued CRFs: a closed-form solution for maximum a
posteriori (MAP) inference.
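To make the closed-form property concrete: in a commonly used form of such continuous CRFs (e.g. deep convolutional neural fields; the exact potentials of this paper may differ), the energy is quadratic in the continuous labels y, with unary CNN predictions z and symmetric CNN-predicted pairwise weights w_{ij}:

    E(y) = \sum_i (y_i - z_i)^2 + \frac{\lambda}{2} \sum_{i,j} w_{ij} (y_i - y_j)^2 .

Setting \nabla_y E = 0 gives the MAP estimate as the solution of a single linear system,

    y^* = \arg\min_y E(y) = (I + \lambda (D - W))^{-1} z, \qquad D = \mathrm{diag}\Big(\textstyle\sum_j w_{ij}\Big),

so inference requires no iterative approximation.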
To better adapt to different tasks, instead of using the commonly employed
maximum likelihood CRF parameter learning protocol, we propose task-specific
loss functions for learning the CRF parameters.
This enables direct optimization of the quality of the MAP estimates during
learning.
Specifically, we optimize the multi-class classification loss for the
semantic labelling task and Tukey's biweight loss for the robust depth
estimation problem.
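For reference, Tukey's biweight is the standard robust penalty whose influence vanishes for large residuals (the scaling and tuning constant used in the paper may differ); for a per-pixel depth residual r it reads

    \rho(r) = \begin{cases} \frac{c^2}{6}\left[1 - \left(1 - (r/c)^2\right)^3\right], & |r| \le c, \\ \frac{c^2}{6}, & |r| > c, \end{cases}

with c a tuning constant (c \approx 4.685 gives roughly 95% efficiency under Gaussian noise). Unlike the squared loss, \rho saturates for large |r|, so gross depth outliers contribute only a bounded penalty.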
Experimental results on the semantic labelling and robust depth estimation
tasks demonstrate that the proposed method compares favorably against both
baseline and state-of-the-art methods.
In particular, we show that although the proposed deep CRF model is
continuously valued, when equipped with the task-specific losses it achieves
impressive results even on discrete labelling tasks.
PetroSurf3D - A Dataset for High-Resolution 3D Surface Segmentation
The development of powerful 3D scanning hardware and reconstruction
algorithms has strongly promoted the generation of 3D surface reconstructions
in different domains. An area of special interest for such 3D reconstructions
is the cultural heritage domain, where surface reconstructions are generated to
digitally preserve historical artifacts. While reconstruction quality nowadays
is sufficient in many cases, the robust analysis (e.g. segmentation, matching,
and classification) of reconstructed 3D data is still an open topic. In this
paper, we target the automatic and interactive segmentation of high-resolution
3D surface reconstructions from the archaeological domain. To foster research
in this field, we introduce a fully annotated and publicly available
large-scale 3D surface dataset including high-resolution meshes, depth maps and
point clouds as a novel benchmark dataset to the community. We provide baseline
results for our existing random forest-based approach and for the first time
investigate segmentation with convolutional neural networks (CNNs) on the data.
Results show that both approaches have complementary strengths and weaknesses
and that the provided dataset represents a challenge for future research.
Comment: CBMI submission; dataset and more information can be found at
http://lrs.icg.tugraz.at/research/petroglyphsegmentation