General Dynamic Scene Reconstruction from Multiple View Video
This paper introduces a general approach to dynamic scene reconstruction from
multiple moving cameras without prior knowledge or limiting constraints on the
scene structure, appearance, or illumination. Existing techniques for dynamic
scene reconstruction from multiple wide-baseline camera views primarily focus
on accurate reconstruction in controlled environments, where the cameras are
fixed and calibrated and the background is known. These approaches are not robust
for general dynamic scenes captured with sparse moving cameras. Previous
approaches for outdoor dynamic scene reconstruction assume prior knowledge of
the static background appearance and structure. The primary contributions of
this paper are twofold: an automatic method for initial coarse dynamic scene
segmentation and reconstruction without prior knowledge of background
appearance or structure; and a general robust approach for joint segmentation
refinement and dense reconstruction of dynamic scenes from multiple
wide-baseline static or moving cameras. Evaluation is performed on a variety of
indoor and outdoor scenes with cluttered backgrounds and multiple dynamic
non-rigid objects such as people. Comparison with state-of-the-art approaches
demonstrates improved accuracy in both multiple view segmentation and dense
reconstruction. The proposed approach also eliminates the requirement for prior
knowledge of scene structure and appearance.
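Joint segmentation and reconstruction of this kind is commonly posed as an energy minimization over per-pixel segmentation labels l_p and depths d_p; a generic form (an illustrative sketch, not the exact energy used in the paper) is

    E(l, d) = \sum_{p} U(l_p, d_p) + \lambda \sum_{(p,q) \in \mathcal{N}} V(l_p, l_q, d_p, d_q),

where the unary term U scores the photo-consistency and appearance of a label/depth assignment, the pairwise term V enforces smoothness over neighbouring pixels \mathcal{N}, and the energy is minimized with graph-cut style discrete optimization.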
Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image
We consider the problem of dense depth prediction from a sparse set of depth
measurements and a single RGB image. Since depth estimation from monocular
images alone is inherently ambiguous and unreliable, to attain a higher level
of robustness and accuracy, we introduce additional sparse depth samples, which
are either acquired with a low-resolution depth sensor or computed via visual
Simultaneous Localization and Mapping (SLAM) algorithms. We propose the use of
a single deep regression network to learn directly from the RGB-D raw data, and
explore the impact of the number of depth samples on prediction accuracy. Our
experiments show that, compared to using only RGB images, the addition of 100
spatially random depth samples reduces the prediction root-mean-square error by
50% on the NYU-Depth-v2 indoor dataset. It also boosts the percentage of
reliable predictions from 59% to 92% on the KITTI dataset. We demonstrate two
applications of the proposed algorithm: a plug-in module in SLAM to convert
sparse maps to dense maps, and super-resolution for LiDARs. Software and video
demonstration are publicly available.

Comment: accepted to ICRA 2018. 8 pages, 8 figures, 3 tables. Video at https://www.youtube.com/watch?v=vNIIT_M7x7Y. Code at https://github.com/fangchangma/sparse-to-dens
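As a minimal sketch of the input encoding described above (helper names are hypothetical; the authors' actual implementation is in the linked repository), the sparse samples can be embedded in an otherwise-zero depth channel and stacked with the RGB image:

    import numpy as np

    def sample_sparse_depth(depth, num_samples, seed=0):
        # Keep only num_samples spatially random valid pixels,
        # mimicking the paper's random sampling protocol.
        rng = np.random.default_rng(seed)
        valid = np.flatnonzero(depth > 0)
        keep = rng.choice(valid, size=min(num_samples, valid.size), replace=False)
        sparse = np.zeros_like(depth)
        sparse.flat[keep] = depth.flat[keep]
        return sparse

    def make_rgbd_input(rgb, sparse_depth):
        # Stack RGB (H, W, 3) and sparse depth (H, W) into a 4-channel
        # input for a dense depth regression network.
        return np.concatenate([rgb, sparse_depth[..., None]], axis=-1)

    rgb = np.random.rand(240, 320, 3).astype(np.float32)      # stand-in image
    gt = np.random.rand(240, 320).astype(np.float32)          # stand-in depth
    x = make_rgbd_input(rgb, sample_sparse_depth(gt, 100))    # (240, 320, 4)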
Temporally coherent 4D reconstruction of complex dynamic scenes
This paper presents an approach for reconstruction of 4D temporally coherent
models of complex dynamic scenes. No prior knowledge is required of scene
structure or camera calibration allowing reconstruction from multiple moving
cameras. Sparse-to-dense temporal correspondence is integrated with joint
multi-view segmentation and reconstruction to obtain a complete 4D
representation of static and dynamic objects. Temporal coherence is exploited
to overcome visual ambiguities resulting in improved reconstruction of complex
scenes. Robust joint segmentation and reconstruction of dynamic objects is
achieved by introducing a geodesic star convexity constraint. Comparative
evaluation is performed on a variety of unstructured indoor and outdoor dynamic
scenes with hand-held cameras and multiple people. This demonstrates
reconstruction of complete temporally coherent 4D scene models with improved
nonrigid object segmentation and shape reconstruction.

Comment: To appear in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016. Video available at: https://www.youtube.com/watch?v=bm_P13_-Ds
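The sparse-to-dense correspondence step can be illustrated very loosely (this interpolation stand-in is an assumption; the paper's propagation scheme is considerably more sophisticated) by densifying sparse feature matches between frames into a displacement field:

    import numpy as np
    from scipy.interpolate import griddata

    def densify_matches(pts_t, pts_t1, shape):
        # pts_t, pts_t1: (n, 2) matched (x, y) positions in frames t, t+1.
        # Returns an (h, w, 2) dense displacement field, zero outside the
        # convex hull of the matches.
        h, w = shape
        flow = pts_t1 - pts_t
        ys, xs = np.mgrid[0:h, 0:w]
        dx = griddata(pts_t, flow[:, 0], (xs, ys), method='linear', fill_value=0.0)
        dy = griddata(pts_t, flow[:, 1], (xs, ys), method='linear', fill_value=0.0)
        return np.stack([dx, dy], axis=-1)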
Discovering useful parts for pose estimation in sparsely annotated datasets
Our work introduces a novel way to increase pose estimation accuracy by discovering parts from unannotated regions of training images. Discovered parts are used to generate more accurate appearance likelihoods for traditional part-based models like Pictorial Structures and its derivatives. Our experiments on images of a hawkmoth in flight show that our proposed approach significantly improves over existing work for this application, while also being more generally applicable. Our proposed approach localizes landmarks at least twice as accurately as a baseline based on a Mixture of Pictorial Structures (MPS) model. Our unique High-Resolution Moth Flight (HRMF) dataset is made publicly available with annotations.

https://arxiv.org/abs/1605.00707

Accepted manuscript
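For reference, the classical pictorial structures objective that such appearance likelihoods plug into scores a part configuration L = (l_1, ..., l_n) as

    E(L) = \sum_{i} m_i(l_i) + \sum_{(i,j) \in E} d_{ij}(l_i, l_j),

where m_i(l_i) is the appearance cost of placing part i at location l_i (the term the discovered parts are used to sharpen) and d_{ij} is a deformation cost over connected part pairs. This is the standard formulation from the pictorial structures literature, not a detail given in the abstract.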
Temporally Coherent General Dynamic Scene Reconstruction
Existing techniques for dynamic scene reconstruction from multiple
wide-baseline cameras primarily focus on reconstruction in controlled
environments, with fixed calibrated cameras and strong prior constraints. This
paper introduces a general approach to obtain a 4D representation of complex
dynamic scenes from multi-view wide-baseline static or moving cameras without
prior knowledge of the scene structure, appearance, or illumination.
Contributions of the work are: an automatic method for initial coarse reconstruction to initialize joint estimation; sparse-to-dense temporal correspondence integrated with joint multi-view segmentation and reconstruction to introduce temporal coherence; and a general, robust approach for joint segmentation refinement and dense reconstruction of dynamic scenes by introducing a shape constraint. Comparison with state-of-the-art approaches on a variety of complex indoor and outdoor scenes demonstrates improved accuracy in both multi-view segmentation and dense reconstruction. This paper demonstrates unsupervised reconstruction of complete temporally coherent 4D scene models with improved non-rigid object segmentation and shape reconstruction, and its application to free-viewpoint rendering and virtual reality.

Comment: Submitted to IJCV 2019. arXiv admin note: substantial text overlap with arXiv:1603.0338
Weakly supervised 3D Reconstruction with Adversarial Constraint
Supervised 3D reconstruction has witnessed significant progress through the
use of deep neural networks. However, this increase in performance requires
large scale annotations of 2D/3D data. In this paper, we explore inexpensive 2D
supervision as an alternative for expensive 3D CAD annotation. Specifically, we
use foreground masks as weak supervision through a raytrace pooling layer that
enables perspective projection and backpropagation. Additionally, since the 3D
reconstruction from masks is an ill-posed problem, we propose to constrain the
3D reconstruction to the manifold of unlabeled realistic 3D shapes that match
mask observations. We demonstrate that learning a log-barrier solution to this
constrained optimization problem resembles the GAN objective, enabling the use
of existing tools for training GANs. We evaluate and analyze the manifold
constrained reconstruction on various datasets for single and multi-view
reconstruction of both synthetic and real images.
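A minimal sketch of that log-barrier objective (the binary cross-entropy mask loss and all names here are assumptions; only the barrier-plus-reprojection structure is taken from the abstract):

    import torch
    import torch.nn.functional as F

    def constrained_loss(mask_pred, mask_gt, d_score, t=0.1):
        # Mask reprojection loss plus a log barrier that diverges as the
        # discriminator score D(shape) approaches 0, softly constraining
        # the reconstruction to the learned manifold of realistic shapes.
        reproj = F.binary_cross_entropy(mask_pred, mask_gt)
        barrier = -t * torch.log(d_score + 1e-8)
        return reproj + barrier.mean()

As t is annealed toward zero, the soft barrier approaches a hard membership constraint, which is the usual log-barrier interpretation.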
SEVEN: Deep Semi-supervised Verification Networks
Verification determines whether two samples belong to the same class. It has important applications such as face and fingerprint verification, where thousands or millions of categories are present but each category has only scarce labeled examples; these two properties pose major challenges for existing deep learning
models. We propose a deep semi-supervised model named SEmi-supervised
VErification Network (SEVEN) to address these challenges. The model consists of
two complementary components. The generative component addresses the lack of
supervision within each category by learning general salient structures from a
large amount of data across categories. The discriminative component exploits
the learned general features to mitigate the lack of supervision within
categories, and also directs the generative component to find more informative
structures of the whole data manifold. The two components are tied together in SEVEN to allow end-to-end training. Extensive
experiments on four verification tasks demonstrate that SEVEN significantly
outperforms other state-of-the-art deep semi-supervised techniques when labeled
data are in short supply. Furthermore, SEVEN is competitive with fully
supervised baselines trained with a larger amount of labeled data. This indicates the importance of the generative component in SEVEN.

Comment: 7 pages, 2 figures, accepted to the 2017 International Joint Conference on Artificial Intelligence (IJCAI-17)
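A rough sketch of that two-component objective (the specific contrastive and reconstruction losses and the weight lam are assumptions; the abstract fixes only the generative/discriminative split):

    import torch
    import torch.nn.functional as F

    def seven_style_loss(z1, z2, rec1, x1, rec2, x2, same, margin=1.0, lam=0.5):
        # Discriminative part: contrastive loss on embedded pairs, pulling
        # same-class pairs together and pushing different-class pairs apart.
        d = F.pairwise_distance(z1, z2)
        contrastive = same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)
        # Generative part: reconstruction loss, which also applies to
        # unlabeled pairs and supplies the semi-supervised signal.
        recon = F.mse_loss(rec1, x1) + F.mse_loss(rec2, x2)
        return contrastive.mean() + lam * recon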