SurfelWarp: Efficient Non-Volumetric Single View Dynamic Reconstruction
We contribute a dense SLAM system that takes a live stream of depth images as
input and reconstructs non-rigid deforming scenes in real time, without
templates or prior models. In contrast to existing approaches, we do not
maintain any volumetric data structures, such as truncated signed distance
function (TSDF) fields or deformation fields, which are performance and memory
intensive. Our system works with a flat point (surfel) based representation of
geometry, which can be directly acquired from commodity depth sensors. Standard
graphics pipelines and general-purpose GPU (GPGPU) computing are leveraged for
all central operations: nearest-neighbor maintenance, non-rigid
deformation field estimation, and fusion of depth measurements. Our pipeline
inherently avoids expensive volumetric operations such as marching cubes,
volumetric fusion and dense deformation field update, leading to significantly
improved performance. Furthermore, the explicit and flexible surfel based
geometry representation enables efficient tackling of topology changes and
tracking failures, which makes our reconstructions consistent with updated
depth observations. Our system allows robots to maintain a scene description
with non-rigidly deformed objects that potentially enables interactions with
dynamic working environments.
Comment: RSS 2018. The video and source code are available on
https://sites.google.com/view/surfelwarp/hom
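As a rough illustration of the surfel representation described above, the sketch below shows confidence-weighted fusion of a new depth measurement into an existing surfel. The `Surfel` fields and the weighting in `fuse_surfel` are illustrative assumptions in the spirit of standard surfel fusion, not the paper's exact scheme:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Surfel:
    position: np.ndarray   # 3D point
    normal: np.ndarray     # unit surface normal
    radius: float          # disc radius
    confidence: float      # accumulated fusion weight


def fuse_surfel(s: Surfel, obs_pos, obs_normal, obs_radius, w: float = 1.0) -> Surfel:
    """Merge one depth observation into a surfel by a confidence-weighted
    running average (a common surfel-fusion rule; details are assumptions)."""
    c = s.confidence
    s.position = (c * s.position + w * np.asarray(obs_pos, dtype=float)) / (c + w)
    n = c * s.normal + w * np.asarray(obs_normal, dtype=float)
    s.normal = n / np.linalg.norm(n)
    s.radius = min(s.radius, obs_radius)  # keep the finer radius estimate
    s.confidence = c + w
    return s
```

Because each surfel is updated independently, this kind of fusion maps naturally onto the GPGPU pipeline the abstract mentions, with no volumetric grid to maintain.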
MonoPerfCap: Human Performance Capture from Monocular Video
We present the first marker-less approach for temporally coherent 3D
performance capture of a human with general clothing from monocular video. Our
approach reconstructs articulated human skeleton motion as well as medium-scale
non-rigid surface deformations in general scenes. Human performance capture is
a challenging problem due to the large range of articulation, potentially fast
motion, and considerable non-rigid deformations, even from multi-view data.
Reconstruction from monocular video alone is drastically more challenging,
since strong occlusions and the inherent depth ambiguity lead to a highly
ill-posed reconstruction problem. We tackle these challenges by a novel
approach that employs sparse 2D and 3D human pose detections from a
convolutional neural network using a batch-based pose estimation strategy.
Joint recovery of per-batch motion allows us to resolve the ambiguities of the
monocular reconstruction problem based on a low-dimensional trajectory
subspace. In addition, we propose refinement of the surface geometry based on
fully automatically extracted silhouettes to enable medium-scale non-rigid
alignment. We demonstrate state-of-the-art performance capture results that
enable exciting applications such as video editing and free viewpoint video,
previously infeasible from monocular video. Our qualitative and quantitative
evaluation demonstrates that our approach significantly outperforms previous
monocular methods in terms of accuracy, robustness and scene complexity that
can be handled.
Comment: Accepted to ACM TOG 2018, to be presented at SIGGRAPH 201
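Low-dimensional trajectory subspaces of the kind mentioned above are often realized with a truncated DCT basis over the frames of a batch. The sketch below assumes such a basis; the function names and coefficient count are illustrative, not taken from the paper:

```python
import numpy as np


def dct_basis(n_frames: int, n_coeffs: int) -> np.ndarray:
    """Truncated DCT basis over a batch of frames, shape (n_frames, n_coeffs)."""
    t = np.arange(n_frames)
    return np.stack(
        [np.cos(np.pi * (t + 0.5) * k / n_frames) for k in range(n_coeffs)],
        axis=1,
    )


def project_trajectory(traj: np.ndarray, n_coeffs: int = 4) -> np.ndarray:
    """Least-squares projection of a per-frame trajectory (n_frames, dim)
    onto the low-dimensional DCT subspace; acts as a temporal regularizer."""
    B = dct_basis(len(traj), n_coeffs)
    coeffs, *_ = np.linalg.lstsq(B, traj, rcond=None)
    return B @ coeffs
```

Constraining per-batch joint trajectories to such a subspace is one way the depth ambiguity of single-view estimation can be tamed: noisy per-frame estimates are replaced by a smooth trajectory with far fewer degrees of freedom.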
Separating Reflection and Transmission Images in the Wild
The reflections caused by common semi-reflectors, such as glass windows, can
impact the performance of computer vision algorithms. State-of-the-art methods
can remove reflections on synthetic data and in controlled scenarios. However,
they are based on strong assumptions and do not generalize well to real-world
images. Contrary to a common misconception, real-world images are challenging
even when polarization information is used. We present a deep learning approach
to separate the reflected and the transmitted components of the recorded
irradiance, which explicitly uses the polarization properties of light. To
train it, we introduce an accurate synthetic data generation pipeline, which
simulates realistic reflections, including those generated by curved and
non-ideal surfaces, non-static scenes, and high-dynamic-range scenes.
Comment: Accepted at ECCV 201
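To see why polarization carries a useful signal here, a minimal sketch of standard polarimetry: from three intensities measured behind a linear polarizer at 0°, 45°, and 90°, one can recover the total intensity and the degree of linear polarization (DoLP), which tends to be higher where the reflected component dominates. The function name is illustrative; this is not the paper's pipeline:

```python
import numpy as np


def stokes_from_three(i0, i45, i90):
    """Linear Stokes parameters from polarizer measurements at 0/45/90 degrees.
    Returns total intensity S0 and the degree of linear polarization."""
    s0 = i0 + i90            # total intensity
    s1 = i0 - i90            # horizontal/vertical preference
    s2 = 2.0 * i45 - s0      # diagonal preference
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-8)
    return s0, dolp
```

A learned separation model can consume such per-pixel polarization cues alongside the raw irradiance, which is what makes the polarized input more informative than a single unpolarized image.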
DTF-Net: Category-Level Pose Estimation and Shape Reconstruction via Deformable Template Field
Estimating 6D poses and reconstructing 3D shapes of objects in open-world
scenes from RGB-depth image pairs is challenging. Many existing methods rely on
learning geometric features that correspond to specific templates while
disregarding shape variations and pose differences among objects in the same
category. As a result, these methods underperform when handling unseen object
instances in complex environments. In contrast, other approaches aim to achieve
category-level estimation and reconstruction by leveraging normalized geometric
structure priors, but the static prior-based reconstruction struggles with
substantial intra-class variations. To solve these problems, we propose the
DTF-Net, a novel framework for pose estimation and shape reconstruction based
on implicit neural fields of object categories. In DTF-Net, we design a
deformable template field to represent the general category-wise shape latent
features and intra-category geometric deformation features. The field
establishes continuous shape correspondences, deforming the category template
into arbitrary observed instances to accomplish shape reconstruction. We
introduce a pose regression module that shares the deformation features and
template codes from the fields to estimate the accurate 6D pose of each object
in the scene. We integrate a multi-modal representation extraction module to
extract object features and semantic masks, enabling end-to-end inference.
Moreover, during training, we implement a shape-invariant training strategy and
a viewpoint sampling method to further enhance the model's capability to
extract object pose features. Extensive experiments on the REAL275 and CAMERA25
datasets demonstrate the superiority of DTF-Net in both synthetic and real
scenes. Furthermore, we show that DTF-Net effectively supports grasping tasks
with a real robot arm.
Comment: The first two authors contributed equally. Paper accepted by
ACM MM 202
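A toy sketch of the template-field idea: the category template is an implicit SDF, and an instance shape is obtained by evaluating that template at deformed query points, which is what establishes the continuous template-to-instance correspondence. The sphere template and the `stretch_z` deformation below are hypothetical stand-ins for the learned fields:

```python
import numpy as np


def template_sdf(x, radius: float = 1.0):
    """Category-level template: a unit-sphere SDF standing in for the
    learned implicit template field (an assumption for illustration)."""
    return np.linalg.norm(np.asarray(x, dtype=float), axis=-1) - radius


def instance_sdf(x, deform):
    """Instance shape = template evaluated at deformed query points,
    mirroring the template-to-instance deformation idea."""
    return template_sdf(deform(np.asarray(x, dtype=float)))


# Hypothetical deformation: anisotropic stretch along z; a learned
# deformation field would replace this closed form.
stretch_z = lambda x: x * np.array([1.0, 1.0, 2.0])
```

Under this stretch, the sphere's surface moves inward along z in instance space: a query at z = 0.5 maps to the template surface, so its instance SDF is zero.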
Non-rigid Reconstruction with a Single Moving RGB-D Camera
We present a novel non-rigid reconstruction method using a moving RGB-D
camera. Current approaches use only the non-rigid part of the scene and completely
ignore the rigid background. Non-rigid parts often lack sufficient geometric
and photometric information for tracking large frame-to-frame motion. Our
approach uses camera pose estimated from the rigid background for foreground
tracking. This enables robust foreground tracking in situations where large
frame-to-frame motion occurs. Moreover, we propose a multi-scale
deformation graph which improves non-rigid tracking without compromising the
quality of the reconstruction. We also contribute a synthetic dataset,
made publicly available, for evaluating non-rigid reconstruction
methods. The dataset provides frame-by-frame ground-truth geometry of the
scene, the camera trajectory, and background/foreground masks. Experimental
results show that our approach is more robust in handling larger frame-to-frame
motions and provides better reconstruction compared to state-of-the-art
approaches.
Comment: Accepted at the International Conference on Pattern Recognition (ICPR
2018)
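The deformation graphs used in such non-rigid tracking pipelines are commonly of the embedded-deformation style: each graph node carries a local rigid transform, and a surface point is warped by a distance-weighted blend of nearby nodes. The sketch below assumes that formulation; node placement, the Gaussian weights, and `sigma` are illustrative choices, not the paper's exact parameters:

```python
import numpy as np


def warp_point(p, nodes, rotations, translations, sigma: float = 0.1):
    """Embedded-deformation-style blend: node j contributes its local rigid
    motion R_j (p - g_j) + g_j + t_j, weighted by distance to the point."""
    p = np.asarray(p, dtype=float)
    d = np.linalg.norm(nodes - p, axis=1)
    w = np.exp(-(d / sigma) ** 2)
    w = w / w.sum()  # normalized blending weights
    out = np.zeros(3)
    for wj, g, R, t in zip(w, nodes, rotations, translations):
        out += wj * (R @ (p - g) + g + t)
    return out
```

A multi-scale variant runs coarse, widely spaced node sets first and refines with denser ones, and a rigid camera pose estimated from the background can pre-align the foreground before this non-rigid step, which is what makes large frame-to-frame motion tractable.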