Learning to Reconstruct Texture-less Deformable Surfaces from a Single View
Recent years have seen the development of mature solutions for reconstructing
deformable surfaces from a single image, provided that they are relatively
well-textured. By contrast, recovering the 3D shape of texture-less surfaces
remains an open problem, and essentially relates to Shape-from-Shading. In this
paper, we introduce a data-driven approach to this problem. We propose a
general framework that can predict diverse 3D representations, such as meshes,
normals, and depth maps. Our experiments show that meshes are ill-suited to
handle texture-less 3D reconstruction in our context. Furthermore, we
demonstrate that our approach generalizes well to unseen objects, and that it
yields higher-quality reconstructions than a state-of-the-art SfS technique,
particularly in terms of normal estimates. Our reconstructions accurately model
the fine details of the surfaces, such as the creases of a T-shirt worn by a
person.
Comment: Accepted to 3DV 201
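The framework above can output either depth maps or normal maps, and the two representations are geometrically related: normals can be recovered from a predicted depth map by finite differences. A minimal sketch of that conversion (not the paper's network, just the underlying relationship), assuming an orthographic camera and unit pixel spacing:

```python
import math

def depth_to_normals(depth):
    """Estimate per-pixel surface normals from a depth map via central
    differences, assuming an orthographic camera and unit pixel spacing.
    Returns normals for interior pixels only."""
    h, w = len(depth), len(depth[0])
    normals = [[None] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dzdx = (depth[y][x + 1] - depth[y][x - 1]) / 2.0
            dzdy = (depth[y + 1][x] - depth[y - 1][x]) / 2.0
            # The surface z = d(x, y) has unnormalized normal (-dz/dx, -dz/dy, 1).
            n = (-dzdx, -dzdy, 1.0)
            length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
            normals[y][x] = tuple(c / length for c in n)
    return normals

# A plane tilted in x: z = 0.5 * x, whose normal is (-0.5, 0, 1) normalized.
plane = [[0.5 * x for x in range(5)] for _ in range(5)]
n = depth_to_normals(plane)[2][2]
```

This also shows why the two output types are evaluated differently: small depth errors can produce large normal errors once differentiated.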
A survey of real-time crowd rendering
In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
Peer Reviewed. Postprint (author's final draft)
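Runtime LoD selection of the kind surveyed here is often driven by camera distance (or projected screen size). A minimal sketch of threshold-based selection over a list of progressively simplified character representations (the thresholds and level names are illustrative, not taken from any particular system):

```python
def select_lod(distance, thresholds):
    """Pick an LoD index from a camera distance.

    thresholds[i] is the maximum distance at which LoD i (the i-th most
    detailed representation) is used; beyond the last threshold the
    coarsest level is chosen."""
    for lod, max_dist in enumerate(thresholds):
        if distance <= max_dist:
            return lod
    return len(thresholds)  # coarsest level (e.g. an image-based impostor)

# Illustrative levels: 0 full mesh, 1 reduced mesh, 2 point-based, 3 impostor.
THRESHOLDS = [10.0, 40.0, 120.0]
levels = [select_lod(d, THRESHOLDS) for d in (5.0, 25.0, 100.0, 500.0)]
```

In practice the selection criterion is usually projected screen-space size rather than raw distance, and hysteresis is added to avoid popping when a character hovers near a threshold.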
LiveCap: Real-time Human Performance Capture from Monocular Video
We present the first real-time human performance capture approach that
reconstructs dense, space-time coherent deforming geometry of entire humans in
general everyday clothing from just a single RGB video. We propose a novel
two-stage analysis-by-synthesis optimization whose formulation and
implementation are designed for high performance. In the first stage, a skinned
template model is jointly fitted to background-subtracted input video, 2D and
3D skeleton joint positions found using a deep neural network, and a set of
sparse facial landmark detections. In the second stage, dense non-rigid 3D
deformations of skin and even loose apparel are captured based on a novel
real-time capable algorithm for non-rigid tracking using dense photometric and
silhouette constraints. Our novel energy formulation leverages automatically
identified material regions on the template to model the differing non-rigid
deformation behavior of skin and apparel. The two resulting non-linear
optimization problems per frame are solved with specially tailored
data-parallel Gauss-Newton solvers. To achieve real-time performance of over
25 Hz, we design a pipelined parallel architecture using the CPU and two
commodity GPUs. Our method is the first real-time monocular approach for
full-body performance capture, and it yields accuracy comparable to off-line
performance capture techniques while being orders of magnitude faster.
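The core numerical tool in both stages is a Gauss-Newton solver for a non-linear least-squares energy. A minimal single-threaded sketch of the iteration on a toy two-parameter problem (fitting y = a*exp(b*x); the paper's solvers are data-parallel GPU implementations over far larger photometric and silhouette energies):

```python
import math

def gauss_newton_exp_fit(xs, ys, a, b, iters=10):
    """Fit y = a * exp(b * x) by Gauss-Newton: linearize the residuals,
    solve the 2x2 normal equations J^T J d = -J^T r, and update."""
    for _ in range(iters):
        # Residuals r_i and Jacobian columns (dr/da, dr/db).
        r = [a * math.exp(b * x) - y for x, y in zip(xs, ys)]
        Ja = [math.exp(b * x) for x in xs]
        Jb = [a * x * math.exp(b * x) for x in xs]
        # Accumulate the symmetric 2x2 system J^T J and the gradient J^T r.
        h11 = sum(j * j for j in Ja)
        h12 = sum(p * q for p, q in zip(Ja, Jb))
        h22 = sum(j * j for j in Jb)
        g1 = sum(j * ri for j, ri in zip(Ja, r))
        g2 = sum(j * ri for j, ri in zip(Jb, r))
        # Solve the normal equations by Cramer's rule and take a full step.
        det = h11 * h22 - h12 * h12
        da = (-g1 * h22 + g2 * h12) / det
        db = (-g2 * h11 + g1 * h12) / det
        a, b = a + da, b + db
    return a, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]  # exact data: a = 2, b = 0.5
a, b = gauss_newton_exp_fit(xs, ys, a=1.9, b=0.45)
```

The real-time constraint comes from solving such systems (with thousands of unknowns rather than two) within a frame budget, which is what motivates the data-parallel formulation.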
Three-dimensional CFD simulations with large displacement of the geometries using a connectivity-change moving mesh approach
This paper deals with three-dimensional (3D) numerical simulations involving 3D moving geometries with large displacements on unstructured meshes. Such simulations are of great value to industry, but remain very time-consuming. A robust moving mesh algorithm coupling an elasticity-like mesh deformation solution and mesh optimizations was proposed in previous works, which removes the need for global remeshing when performing large displacements. The optimizations, and in particular generalized edge/face swapping, preserve the initial quality of the mesh throughout the simulation. We propose to integrate an Arbitrary Lagrangian Eulerian compressible flow solver into this process to demonstrate its capabilities in a full CFD computation context. This solver relies on a local enforcement of the discrete geometric conservation law to preserve the order of accuracy of the time integration. The displacement of the geometries is either imposed, or driven by fluid–structure interaction (FSI). In the latter case, the six degrees of freedom approach for rigid bodies is considered. Finally, several 3D imposed-motion and FSI examples are given to validate the proposed approach, in both academic and industrial configurations.
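In the FSI-driven case, each geometry is advanced as a rigid body with six degrees of freedom (three translational, three rotational). A minimal sketch of one way to integrate such a state (semi-implicit Euler with a quaternion orientation; the time step, mass, and loads are illustrative, and the paper's fluid coupling is not reproduced here):

```python
import math

def quat_mul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def step(pos, vel, quat, omega, force, torque, mass, inertia, dt):
    """One semi-implicit Euler step for a 6-DOF rigid body. omega and
    torque are in world coordinates; inertia is a scalar (an isotropic
    tensor) to keep the sketch short."""
    vel = [v + f / mass * dt for v, f in zip(vel, force)]
    pos = [p + v * dt for p, v in zip(pos, vel)]
    omega = [w + t / inertia * dt for w, t in zip(omega, torque)]
    # Orientation update: qdot = 0.5 * (0, omega) * q, then renormalize.
    dq = quat_mul((0.0, *omega), quat)
    quat = [q + 0.5 * dt * d for q, d in zip(quat, dq)]
    norm = math.sqrt(sum(q * q for q in quat))
    quat = [q / norm for q in quat]
    return pos, vel, quat, omega

# Free fall under gravity with zero torque: orientation stays the identity.
pos, vel = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]
quat, omega = [1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0]
for _ in range(100):
    pos, vel, quat, omega = step(pos, vel, quat, omega,
                                 force=[0.0, 0.0, -9.81],
                                 torque=[0.0, 0.0, 0.0],
                                 mass=1.0, inertia=1.0, dt=0.01)
```

In the paper's setting the force and torque come from integrating fluid pressure and viscous stresses over the body surface at each coupling step.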
Estimation of Human Body Shape and Posture Under Clothing
Estimating the body shape and posture of a dressed human subject in motion
represented as a sequence of (possibly incomplete) 3D meshes is important for
virtual change rooms and security. To solve this problem, statistical shape
spaces encoding human body shape and posture variations are commonly used to
constrain the search space for the shape estimate. In this work, we propose a
novel method that uses a posture-invariant shape space to model body shape
variation combined with a skeleton-based deformation to model posture
variation. Our method can estimate the body shape and posture of both static
scans and motion sequences of dressed human bodies. In the case of motion
sequences, our method takes advantage of motion cues to solve for a single body
shape estimate along with a sequence of posture estimates. We apply our
approach to both static scans and motion sequences and demonstrate that our
method achieves higher fitting accuracy than a variant of the popular SCAPE
model used as the statistical model.
Comment: 23 pages, 11 figures
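A skeleton-based deformation of the kind used to model posture is typically linear blend skinning: each vertex is transformed by a weighted sum of bone transformations. A minimal 2D sketch with homogeneous 3x3 matrices (the bone, joint position, weights, and vertices are all illustrative):

```python
import math

def apply(m, v):
    """Apply a 3x3 homogeneous transform to a 2D point."""
    x, y = v
    return (m[0][0]*x + m[0][1]*y + m[0][2],
            m[1][0]*x + m[1][1]*y + m[1][2])

def rotation_about(angle, cx, cy):
    """Rotation by `angle` around the joint (cx, cy):
    translate(c) * rotate * translate(-c), as one 3x3 matrix."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, cx - c*cx + s*cy],
            [s,  c, cy - s*cx - c*cy],
            [0.0, 0.0, 1.0]]

def skin(vertex, bones, weights):
    """Linear blend skinning: v' = sum_j w_j * (T_j * v)."""
    out = [0.0, 0.0]
    for T, w in zip(bones, weights):
        tx, ty = apply(T, vertex)
        out[0] += w * tx
        out[1] += w * ty
    return tuple(out)

identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
bend = rotation_about(math.pi / 2, 1.0, 0.0)  # forearm bent 90 deg at elbow (1, 0)
tip = skin((2.0, 0.0), [identity, bend], [0.0, 1.0])  # fully on the forearm
mid = skin((2.0, 0.0), [identity, bend], [0.5, 0.5])  # blended near the joint
```

The blended vertex near the joint lands between the two bone transforms, which is exactly the smooth-transition behavior (and the well-known volume-loss artifact) of linear blend skinning.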
Finite Element Based Tracking of Deforming Surfaces
We present an approach to robustly track the geometry of an object that
deforms over time from a set of input point clouds captured from a single
viewpoint. The deformations we consider are caused by applying forces to known
locations on the object's surface. Our method combines the use of prior
information on the geometry of the object modeled by a smooth template and the
use of a linear finite element method to predict the deformation. This allows
the accurate reconstruction of both the observed and the unobserved sides of
the object. We present tracking results for noisy low-quality point clouds
acquired by either a stereo camera or a depth camera, and simulations with
point clouds corrupted by different error terms. We show that our method is
also applicable to large non-linear deformations.
Comment: additional experiment
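A linear finite element prediction of this kind amounts to solving K u = f for the displacements u given the known applied forces. A minimal 1D sketch (a bar clamped at one end, discretized into two-node elements; the stiffness EA and the load are illustrative, and the paper works with volumetric 3D elements):

```python
def solve_linear_system(K, f):
    """Solve K u = f by Gaussian elimination with partial pivoting."""
    n = len(f)
    A = [row[:] + [f[i]] for i, row in enumerate(K)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= factor * A[col][c]
    u = [0.0] * n
    for r in range(n - 1, -1, -1):
        u[r] = (A[r][n] - sum(A[r][c] * u[c] for c in range(r + 1, n))) / A[r][r]
    return u

def bar_stiffness(n_elems, EA, length):
    """Assemble the stiffness matrix of a 1D bar clamped at x = 0."""
    k = EA * n_elems / length  # element stiffness EA / h
    n = n_elems                # free nodes (node 0 is clamped and eliminated)
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_elems):
        i, j = e - 1, e        # free-node indices of element e's endpoints
        if i >= 0:
            K[i][i] += k
            K[i][j] -= k
            K[j][i] -= k
        K[j][j] += k
    return K

K = bar_stiffness(4, EA=1.0, length=1.0)
f = [0.0, 0.0, 0.0, 0.1]  # point load applied at the free tip
u = solve_linear_system(K, f)
# For a uniform bar this linear FE solution is exact: u(x) = F * x / (EA).
```

The tracking setting replaces the 1D stiffness matrix with one assembled from volumetric elements of the template, and the forces are the ones applied at known surface locations.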
Error estimation and adaptivity for incompressible, non-linear (hyper-)elasticity
A Galerkin finite element method is developed for non-linear, incompressible (hyper-)elasticity, and a posteriori error estimates are derived both for linear functionals of the solution and for linear functionals of the stress on a boundary where Dirichlet boundary conditions are applied. A second, higher-order method for calculating a linear functional of the stress on a Dirichlet boundary is also presented, together with an a posteriori error estimator for this approach. An implementation for a 2D model problem with known solution demonstrates the accuracy of the error estimators. Finally, the a posteriori error estimate is shown to provide a basis for effective mesh adaptivity.
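The idea behind a posteriori estimation of a functional's error can be illustrated with a much simpler stand-in: compute the functional on a mesh of size h and one of size h/2, and use the known convergence order to estimate the remaining error (Richardson extrapolation). This is not the paper's Galerkin estimator, only the generic principle that lets an estimate drive adaptivity:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoid rule on n subintervals (O(h^2) accurate)."""
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

def estimate_error(f, a, b, n, order=2):
    """A posteriori estimate of the fine-mesh error from two mesh sizes:
    err(h/2) ~= (I_h - I_{h/2}) / (2**order - 1)."""
    coarse = trapezoid(f, a, b, n)
    fine = trapezoid(f, a, b, 2 * n)
    return fine, (coarse - fine) / (2 ** order - 1)

# Functional J = integral of x^2 over [0, 1]; exact value 1/3.
fine, est = estimate_error(lambda x: x * x, 0.0, 1.0, n=2)
true_err = fine - 1.0 / 3.0
# For this quadratic integrand the estimate matches the true error exactly.
```

An adaptive loop would compute such indicators per element rather than globally, refine where they are largest, and repeat until the estimated functional error falls below a tolerance.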