Capturing Deformations of Interacting Non-rigid Objects Using RGB-D Data
This paper presents a method for tracking multiple interacting deformable objects undergoing rigid motions, elastic deformations and contacts, using image and point cloud data provided by an RGB-D sensor. A joint registration framework is proposed, based on physical Finite Element Method (FEM) elastic and interaction models. It first relies on a visual segmentation of the considered objects in the RGB images. The segmented point clouds are then processed to estimate rigid transformations with an ICP algorithm and to determine geometrical point-to-point correspondences with the meshes. External forces resulting from these correspondences, and between the current and the rigidly transformed mesh, can then be derived, providing both non-rigid and rigid data cues. A classical collision detection and response model is also integrated, giving contact forces between the objects. The deformations of the objects are estimated by solving a dynamic system balancing these external and contact forces with the internal (regularization) forces computed through the FEM elastic model. This approach has been tested on different scenarios involving two or three interacting deformable objects of various shapes, with promising results.
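The rigid part of the registration above relies on ICP. Once point correspondences are fixed, each ICP iteration reduces to a closed-form least-squares rigid alignment; the following NumPy sketch of that inner step (illustrative only, not the paper's code) uses the standard SVD-based Kabsch solution:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst (Kabsch/SVD).

    src, dst: (N, 3) arrays of corresponding points.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Toy check: recover a known rotation and translation.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
Q = P @ R_true.T + t_true
R, t = rigid_transform(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

In a full ICP loop, correspondences would be re-estimated (e.g. by nearest-neighbour search) between successive calls to `rigid_transform`.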
MonoPerfCap: Human Performance Capture from Monocular Video
We present the first marker-less approach for temporally coherent 3D
performance capture of a human with general clothing from monocular video. Our
approach reconstructs articulated human skeleton motion as well as medium-scale
non-rigid surface deformations in general scenes. Human performance capture is
a challenging problem due to the large range of articulation, potentially fast
motion, and considerable non-rigid deformations, even from multi-view data.
Reconstruction from monocular video alone is drastically more challenging,
since strong occlusions and the inherent depth ambiguity lead to a highly
ill-posed reconstruction problem. We tackle these challenges by a novel
approach that employs sparse 2D and 3D human pose detections from a
convolutional neural network using a batch-based pose estimation strategy.
Joint recovery of per-batch motion makes it possible to resolve the ambiguities of
the monocular reconstruction problem based on a low-dimensional trajectory
subspace. In addition, we propose refinement of the surface geometry based on
fully automatically extracted silhouettes to enable medium-scale non-rigid
alignment. We demonstrate state-of-the-art performance capture results that
enable exciting applications such as video editing and free viewpoint video,
previously infeasible from monocular video. Our qualitative and quantitative
evaluation demonstrates that our approach significantly outperforms previous
monocular methods in terms of accuracy, robustness, and the scene complexity that
can be handled.
Comment: Accepted to ACM TOG 2018, to be presented at SIGGRAPH 2018
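The low-dimensional trajectory subspace mentioned above can be instantiated in several ways; a truncated DCT basis is a common choice for smooth joint trajectories. The sketch below is illustrative and does not reproduce the paper's exact basis:

```python
import numpy as np

def dct_basis(F, K):
    """First K DCT-II basis vectors over F frames, as an (F, K) matrix."""
    n = np.arange(F)
    B = np.stack([np.cos(np.pi * (n + 0.5) * k / F) for k in range(K)], axis=1)
    return B / np.linalg.norm(B, axis=0)       # orthonormal columns

F, K = 60, 6                                   # frames per batch, subspace size
B = dct_basis(F, K)

# Noisy 1D joint trajectory: smooth signal plus measurement noise.
t = np.linspace(0, 1, F)
rng = np.random.default_rng(1)
traj = np.sin(2 * np.pi * t) + 0.05 * rng.normal(size=F)

# Least-squares projection onto the trajectory subspace.
coeffs = B.T @ traj                            # valid because B is orthonormal
recon = B @ coeffs                             # smoothed, subspace-constrained trajectory
print(np.mean((recon - traj) ** 2))
```

Constraining all per-batch joint trajectories to such a subspace drastically reduces the number of unknowns, which is the mechanism the abstract credits for disambiguating monocular reconstruction.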
Capturing Hands in Action using Discriminative Salient Points and Physics Simulation
Hand motion capture is a popular research field, recently gaining more
attention due to the ubiquity of RGB-D sensors. However, even most recent
approaches focus on the case of a single isolated hand. In this work, we focus
on hands that interact with other hands or objects and present a framework that
successfully captures motion in such interaction scenarios for both rigid and
articulated objects. Our framework combines a generative model with
discriminatively trained salient points to achieve a low tracking error and
with collision detection and physics simulation to achieve physically plausible
estimates even in case of occlusions and missing visual data. Since all
components are unified in a single objective function which is almost
everywhere differentiable, it can be optimized with standard optimization
techniques. Our approach works for monocular RGB-D sequences as well as setups
with multiple synchronized RGB cameras. For a qualitative and quantitative
evaluation, we captured 29 sequences with a large variety of interactions and
up to 150 degrees of freedom.
Comment: Accepted for publication by the International Journal of Computer
Vision (IJCV) on 16.02.2016 (submitted on 17.10.14). A combination into a
single framework of an ECCV'12 multi-camera RGB and a monocular RGB-D GCPR'14
hand-tracking paper, with several extensions, additional experiments and details.
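The abstract emphasises a single, almost everywhere differentiable objective that combines a data term with collision handling. A toy stand-in for that idea (all quantities illustrative, not the paper's model) attracts two "finger" spheres to observed targets while a smooth one-sided penalty discourages interpenetration:

```python
import numpy as np

RADIUS = 1.0
targets = np.array([[0.0, 0.0], [1.2, 0.0]])   # observed targets closer than 2*RADIUS

def energy(x):
    """Data term pulls spheres to targets; a penalty resists interpenetration."""
    p = x.reshape(2, 2)
    data = np.sum((p - targets) ** 2)
    pen = max(0.0, 2 * RADIUS - np.linalg.norm(p[0] - p[1]))
    return data + 100.0 * pen ** 2             # C^1, differentiable almost everywhere

def grad(x, eps=1e-6):
    """Central-difference gradient (a sketch; analytic gradients would be used in practice)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (energy(x + e) - energy(x - e)) / (2 * eps)
    return g

x = np.array([-0.5, 0.0, 1.7, 0.0])            # initial sphere centres (x0, y0, x1, y1)
for _ in range(3000):                          # plain gradient descent on the objective
    x -= 0.002 * grad(x)

p = x.reshape(2, 2)
print(np.linalg.norm(p[0] - p[1]))             # close to 2*RADIUS: no interpenetration
```

Because all terms live in one objective, a standard descent method trades off image evidence against physical plausibility in a single solve, which is the design point the abstract highlights.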
Decaf: Monocular Deformation Capture for Face and Hand Interactions
Existing methods for 3D tracking from monocular RGB videos predominantly
consider articulated and rigid objects. Modelling dense non-rigid object
deformations in this setting has so far remained largely unaddressed, although
such effects can improve the realism of downstream applications such as AR/VR
and avatar communications. This is due to the severe ill-posedness of the
monocular view setting and the associated challenges. While it is possible to
naively track multiple non-rigid objects independently using 3D templates or
parametric 3D models, such an approach would suffer from multiple artefacts in
the resulting 3D estimates such as depth ambiguity, unnatural intra-object
collisions and missing or implausible deformations. Hence, this paper
introduces the first method that addresses the fundamental challenges described
above and allows tracking human hands interacting with human faces in 3D
from single monocular RGB videos. We model hands as articulated objects
inducing non-rigid face deformations during an active interaction. Our method
relies on a new hand-face motion and interaction capture dataset with realistic
face deformations acquired with a markerless multi-view camera system. As a
pivotal step in its creation, we process the reconstructed raw 3D shapes with
position-based dynamics and an approach for non-uniform stiffness estimation of
the head tissues, which results in plausible annotations of the surface
deformations, hand-face contact regions and head-hand positions. At the core of
our neural approach are a variational auto-encoder supplying the hand-face
depth prior and modules that guide the 3D tracking by estimating the contacts
and the deformations. Our final 3D hand and face reconstructions are realistic
and more plausible compared to several baselines applicable in our setting,
both quantitatively and qualitatively.
https://vcai.mpi-inf.mpg.de/projects/Deca
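Position-based dynamics, used above to process the raw reconstructed shapes, works by projecting particle positions directly onto constraints rather than integrating forces. A minimal sketch of its distance-constraint projection (generic PBD, not Decaf's implementation):

```python
import numpy as np

def project_distance(p, q, rest, w_p=1.0, w_q=1.0):
    """PBD projection of a single distance constraint |p - q| = rest.

    w_p, w_q are inverse masses; heavier particles move less.
    """
    d = p - q
    dist = np.linalg.norm(d)
    if dist < 1e-12:
        return p, q                            # degenerate pair: leave untouched
    corr = (dist - rest) * d / dist / (w_p + w_q)
    return p - w_p * corr, q + w_q * corr

# Three particles in a chain with rest length 1, started stretched.
pts = np.array([[0.0, 0.0], [1.5, 0.0], [3.0, 0.0]])
edges = [(0, 1), (1, 2)]

for _ in range(50):                            # Gauss-Seidel constraint sweeps
    for i, j in edges:
        pts[i], pts[j] = project_distance(pts[i], pts[j], rest=1.0)

print(np.linalg.norm(pts[0] - pts[1]))         # converges to the rest length
```

Real PBD pipelines add a prediction step from velocities and many constraint types (bending, volume, collisions); the projection loop above is the core that makes the method stable for processing noisy captured geometry.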
LiveCap: Real-time Human Performance Capture from Monocular Video
We present the first real-time human performance capture approach that
reconstructs dense, space-time coherent deforming geometry of entire humans in
general everyday clothing from just a single RGB video. We propose a novel
two-stage analysis-by-synthesis optimization whose formulation and
implementation are designed for high performance. In the first stage, a skinned
template model is jointly fitted to background subtracted input video, 2D and
3D skeleton joint positions found using a deep neural network, and a set of
sparse facial landmark detections. In the second stage, dense non-rigid 3D
deformations of skin and even loose apparel are captured based on a novel
real-time capable algorithm for non-rigid tracking using dense photometric and
silhouette constraints. Our novel energy formulation leverages automatically
identified material regions on the template to model the differing non-rigid
deformation behavior of skin and apparel. The two resulting non-linear
optimization problems per frame are solved with specially tailored
data-parallel Gauss-Newton solvers. In order to achieve real-time performance
of over 25Hz, we design a pipelined parallel architecture using the CPU and two
commodity GPUs. Our method is the first real-time monocular approach for
full-body performance capture. Our method yields accuracy comparable to
off-line performance capture techniques, while being orders of magnitude
faster.
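The per-frame energies above are minimised with data-parallel Gauss-Newton solvers. Stripped of the GPU parallelism and of the actual photometric and silhouette residuals, one Gauss-Newton iteration looks like the following sketch (the curve-fitting problem is only a stand-in for the paper's energies):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Generic Gauss-Newton: minimise 0.5 * ||r(x)||^2."""
    x = x0.astype(float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        # Normal equations J^T J dx = -J^T r; in LiveCap this linear solve
        # is what runs in a data-parallel fashion on the GPU.
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
    return x

# Toy problem: fit y = a * exp(b * t) to samples generated with a=2, b=-1.
t = np.linspace(0, 2, 30)
y = 2.0 * np.exp(-1.0 * t)

def residual(x):
    a, b = x
    return a * np.exp(b * t) - y

def jacobian(x):
    a, b = x
    return np.stack([np.exp(b * t), a * t * np.exp(b * t)], axis=1)

x_opt = gauss_newton(residual, jacobian, np.array([1.0, 0.0]))
print(x_opt)   # close to [2, -1]
```

Gauss-Newton suits such tracking energies because the residuals are smooth and near zero at the solution, so a few cheap linearised solves per frame suffice for real-time rates.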
State of the Art in Dense Monocular Non-Rigid 3D Reconstruction
3D reconstruction of deformable (or non-rigid) scenes from a set of monocular
2D image observations is a long-standing and actively researched area of
computer vision and graphics. It is an ill-posed inverse problem,
since--without additional prior assumptions--it permits infinitely many
solutions leading to accurate projection to the input 2D images. Non-rigid
reconstruction is a foundational building block for downstream applications
like robotics, AR/VR, or visual content creation. The key advantage of using
monocular cameras is their omnipresence and availability to the end users as
well as their ease of use compared to more sophisticated camera set-ups such as
stereo or multi-view systems. This survey focuses on state-of-the-art methods
for dense non-rigid 3D reconstruction of various deformable objects and
composite scenes from monocular videos or sets of monocular views. It reviews
the fundamentals of 3D reconstruction and deformation modeling from 2D image
observations. We then start from general methods--that handle arbitrary scenes
and make only a few prior assumptions--and proceed towards techniques making
stronger assumptions about the observed objects and types of deformations (e.g.
human faces, bodies, hands, and animals). A significant part of this STAR is
also devoted to classification and a high-level comparison of the methods, as
well as an overview of the datasets for training and evaluation of the
discussed techniques. We conclude by discussing open challenges in the field
and the social aspects associated with the usage of the reviewed methods.
Comment: 25 pages
Capturing Hand-Object Interaction and Reconstruction of Manipulated Objects
Hand motion capture with an RGB-D sensor has recently gained a lot of research attention; however, even the most recent approaches focus on the case of a single isolated hand. We focus instead on hands that interact with other hands or with a rigid or articulated object. Our framework successfully captures motion in such scenarios by combining a generative model with discriminatively trained salient points, collision detection and physics simulation to achieve a low tracking error with physically plausible poses. All components are unified in a single objective function that can be optimized with standard optimization techniques. We initially assume a priori knowledge of the object's shape and skeleton. In case of unknown object shape there are existing 3D reconstruction methods that capitalize on distinctive geometric or texture features. These methods, though, fail for textureless and highly symmetric objects like household articles, mechanical parts or toys. We show that extracting 3D hand motion for in-hand scanning effectively facilitates the reconstruction of such objects, and we fuse the rich additional information of hands into a 3D reconstruction pipeline. Finally, although shape reconstruction is enough for rigid objects, there is a lack of tools that build rigged models of articulated objects that deform realistically using RGB-D data. We propose a method that creates a fully rigged model consisting of a watertight mesh, embedded skeleton and skinning weights by employing a combination of deformable mesh tracking, motion segmentation based on spectral clustering and skeletonization based on mean curvature flow.
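The motion segmentation step based on spectral clustering can be sketched as follows: per-vertex motion trajectories define an affinity graph, and eigenvectors of its normalised Laplacian separate rigidly moving parts. The toy data and two-cluster Fiedler-vector split below are illustrative, not the thesis's pipeline:

```python
import numpy as np

# Vertices from two rigid parts: one part translates over 8 frames, the other
# stays still; each row is a per-vertex displacement trajectory (toy data).
rng = np.random.default_rng(2)
part_a = rng.normal(0.0, 0.05, size=(10, 8)) + np.linspace(0, 1, 8)   # moving
part_b = rng.normal(0.0, 0.05, size=(10, 8))                          # static
traj = np.vstack([part_a, part_b])             # (20 vertices, 8 frames)

# Gaussian affinity between trajectories, normalised graph Laplacian.
d2 = np.sum((traj[:, None] - traj[None, :]) ** 2, axis=2)
W = np.exp(-d2 / (2 * 0.5 ** 2))
D = np.diag(W.sum(axis=1))
L = np.eye(len(W)) - np.linalg.inv(np.sqrt(D)) @ W @ np.linalg.inv(np.sqrt(D))

# Second-smallest eigenvector (Fiedler vector) splits the graph in two.
vals, vecs = np.linalg.eigh(L)
labels = (vecs[:, 1] > 0).astype(int)
print(labels)
```

With more than two parts, one would cluster the first k eigenvectors (e.g. with k-means) instead of thresholding a single one; the resulting segments are what skeletonization and skinning-weight estimation then operate on.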