Automatic 3D facial model and texture reconstruction from range scans
This paper presents a fully automatic approach to fitting a generic facial model to detailed range scans of human faces, reconstructing 3D facial models and textures without manual intervention (such as specifying landmarks). A Scaling Iterative Closest Points (SICP) algorithm is introduced to compute the optimal rigid registrations between the generic model and range scans of different sizes. A new template-fitting method, formulated in an optimization framework that minimizes a physically based elastic energy derived from thin shells, then faithfully reconstructs the surfaces and textures from the range scans and yields dense point correspondences across the reconstructed facial models. Finally, we demonstrate a facial expression transfer method that clones facial expressions from the generic model onto the reconstructed facial models using the deformation transfer technique.
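The abstract does not spell out the SICP update itself. As a rough sketch, one common formulation alternates closest-point matching with a closed-form similarity (scale, rotation, translation) alignment in the style of Umeyama; the function name `scaling_icp` and the brute-force matching below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def scaling_icp(src, dst, iters=20):
    """Sketch of a Scaling ICP variant: at each iteration, match each
    source point to its closest target point, then solve for the scale s,
    rotation R and translation t minimising ||s*R*x + t - y||^2 in
    closed form (Umeyama alignment)."""
    s, R, t = 1.0, np.eye(3), np.zeros(3)
    for _ in range(iters):
        cur = s * src @ R.T + t
        # Brute-force closest-point correspondences (a k-d tree in practice)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        corr = dst[d2.argmin(axis=1)]
        # Closed-form similarity alignment with scale
        mu_s, mu_c = src.mean(0), corr.mean(0)
        X, Y = src - mu_s, corr - mu_c
        U, S, Vt = np.linalg.svd(Y.T @ X / len(src))
        D = np.eye(3)
        if np.linalg.det(U @ Vt) < 0:
            D[2, 2] = -1.0          # guard against reflections
        R = U @ D @ Vt
        var = (X ** 2).sum() / len(src)
        s = (S * np.diag(D)).sum() / var
        t = mu_c - s * R @ mu_s
    return s, R, t
```

Estimating the scale jointly with the rigid motion is what lets a single generic model register against scans of different physical sizes.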
Self-correction of 3D reconstruction from multi-view stereo images
We present a self-correction approach to improving the 3D reconstruction of a multi-view 3D photogrammetry system. The self-correction approach has been able to repair the reconstructed 3D surface damaged by depth discontinuities. Due to self-occlusion, multi-view range images have to be acquired and integrated into a watertight, non-redundant mesh model in order to cover the extended surface of an imaged object. The integrated surface often suffers from “dent” artifacts produced by depth discontinuities in the multi-view range images. In this paper we propose a novel approach to correcting the integrated 3D surface such that the dent artifacts can be repaired automatically. We show examples of 3D reconstruction to demonstrate the improvement that can be achieved by the self-correction approach. This self-correction approach can be extended to integrate range images obtained from alternative range capture devices.
3D data modelling and processing using partial differential equations.
In this paper we discuss techniques for 3D data modelling and processing where the data are usually provided as point clouds arising from 3D scanning devices. The particular approaches we adopt in modelling 3D data involve the use of Partial Differential Equations (PDEs). In particular, we show how the continuous and discrete versions of elliptic PDEs can be used for data modelling. We show that using PDEs it is possible to model data corresponding to complex scenes intuitively. Furthermore, we show that data can be stored in compact format in the form of PDE boundary conditions. In order to demonstrate the methodology we utilise several examples of a practical nature.
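As a minimal illustration of the discrete elliptic case, the sketch below fills the interior of a height-field grid by solving the discrete Laplace equation with Jacobi iteration, so that the data are held only as boundary conditions. The function name and grid setup are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def solve_laplace(boundary, mask, iters=2000):
    """Solve the discrete Laplace equation on a 2D grid: each free sample
    is driven toward the average of its four neighbours, while samples
    where `mask` is True are held fixed at the `boundary` values."""
    u = boundary.copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(mask, boundary, avg)   # re-impose boundary conditions
    return u
```

Only the masked boundary samples need to be stored; the interior is regenerated on demand by the solver, which is the sense in which PDE boundary conditions give a compact representation.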
Three-dimensional reconstruction of faces from range images using compactly supported radial basis functions
This work shows the use of compactly supported radial basis functions for the three-dimensional reconstruction of faces. Previous work explored different techniques and different radial basis functions for surface reconstruction; here we present the algorithms and the results of using compactly supported radial basis functions, which offer comparative advantages in terms of the time needed to construct an interpolant for the reconstruction. Comparisons with techniques widely used in this field are presented, and the overall surface reconstruction process is described in detail.
Saliency-guided integration of multiple scans
We present a novel method…
Creating Simplified 3D Models with High Quality Textures
This paper presents an extension to the KinectFusion algorithm which allows creating simplified 3D models with high quality RGB textures. This is achieved through (i) creating model textures using images from an HD RGB camera that is calibrated with the Kinect depth camera, (ii) using a modified scheme to update model textures in an asymmetrical colour volume that contains a higher number of voxels than that of the geometry volume, (iii) simplifying the dense polygon mesh model using a quadric-based mesh decimation algorithm, and (iv) creating and mapping 2D textures to every polygon in the output 3D model. The proposed method is implemented in real time by means of GPU parallel processing. Visualization via ray casting of both geometry and colour volumes provides users with real-time feedback on the currently scanned 3D model. Experimental results show that the proposed method is capable of keeping the model texture quality even for a heavily decimated model and that, when reconstructing small objects, photorealistic RGB textures can still be reconstructed.
Comment: 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Page 1 -
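Step (iii) refers to quadric-based mesh decimation. A minimal sketch of the underlying error quadric (assuming the classic Garland–Heckbert formulation, which the abstract does not spell out) is:

```python
import numpy as np

def plane_quadric(p, n):
    """Fundamental error quadric of the plane through point p with unit
    normal n: Q = k k^T with k = [n, -n.p] in homogeneous coordinates."""
    k = np.append(n, -(n @ p))
    return np.outer(k, k)

def vertex_error(Q, v):
    """Cost of placing a vertex at v under quadric Q: the sum of squared
    distances to all planes accumulated into Q."""
    h = np.append(v, 1.0)
    return h @ Q @ h
```

Each vertex accumulates the quadrics of its incident triangles (quadrics simply add); edge collapses are then ordered by the cost of the best placement for the merged vertex, which is why heavily decimated meshes keep their overall shape.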