
    Active nonrigid ICP algorithm

    © 2015 IEEE. The problem of fitting a 3D facial model to a 3D mesh has received a lot of attention over the past 15-20 years. The majority of techniques fit a general model consisting of a simple parameterisable surface or a mean 3D facial shape. The drawback of this approach is that it is rather difficult to describe the non-rigid aspects of the face using just a single facial model. One way to capture 3D facial deformations is by means of a statistical 3D model of the face or its parts. This is particularly evident when we want to capture the deformations of the mouth region. Even though statistical models of the face are generally applied for modelling facial intensity, few approaches fit a statistical model of 3D faces. In this paper, in order to capture and describe the non-rigid nature of facial surfaces, we build a part-based statistical model of the 3D facial surface and combine it with non-rigid iterative closest point (ICP) algorithms. We show that the proposed algorithm largely outperforms state-of-the-art algorithms for 3D face fitting and alignment, especially in its description of the mouth region.
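The ICP family of methods the abstract builds on alternates closest-point matching with a closed-form alignment solve. Below is a minimal numpy sketch of one rigid ICP iteration (Kabsch/Procrustes via SVD); the paper's part-based statistical constraints and non-rigid terms are not modelled, and the function name is our own.

```python
import numpy as np

def icp_step(source, target):
    """One rigid ICP iteration: match each source point to its nearest
    target point, then solve the best rigid transform in closed form
    (Procrustes via SVD) and apply it to the source."""
    # brute-force closest-point correspondences
    d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(-1)
    matched = target[d2.argmin(1)]

    # closed-form rigid alignment of source onto its matches (Kabsch)
    mu_s, mu_t = source.mean(0), matched.mean(0)
    H = (source - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return source @ R.T + t
```

Iterating this step to convergence is the standard rigid baseline; non-rigid variants replace the single rigid transform with a regularised deformation model.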

    Nonrigid reconstruction of 3D breast surfaces with a low-cost RGBD camera for surgical planning and aesthetic evaluation

    Accounting for 26% of all new cancer cases worldwide, breast cancer remains the most common form of cancer in women. Although early breast cancer has a favourable long-term prognosis, roughly a third of patients suffer from a suboptimal aesthetic outcome despite breast-conserving cancer treatment. Clinical-quality 3D modelling of the breast surface therefore assumes an increasingly important role in advancing treatment planning, prediction and evaluation of breast cosmesis. Yet, existing 3D torso scanners are expensive and either infrastructure-heavy or subject to motion artefacts. In this paper, we employ a single consumer-grade RGBD camera and an ICP-based registration approach to jointly and non-rigidly align all points from a sequence of depth images. Subtle body deformation due to postural sway and respiration is successfully mitigated, leading to higher geometric accuracy through regularised locally affine transformations. We present results from 6 clinical cases where our method compares well with the gold standard and outperforms a previous approach. We show that our method produces better reconstructions qualitatively, by visual assessment, and quantitatively, by consistently obtaining lower landmark error scores and yielding more accurate breast volume estimates.
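The core idea of regularised non-rigid alignment is to pull each point toward its match while keeping neighbouring displacements similar. The sketch below uses a simple graph-Laplacian smoothness term as a stand-in for the paper's locally affine regulariser (the function name, `lam` weight, and edge format are our own assumptions):

```python
import numpy as np

def regularized_displacements(points, matches, edges, lam=10.0):
    """Smooth non-rigid displacement field: pull every point toward
    its matched position while penalising differences between
    neighbouring displacements. Solved as one linear system per
    coordinate (a Laplacian stand-in for locally affine stiffness)."""
    n = len(points)
    L = np.zeros((n, n))
    for i, j in edges:                  # graph Laplacian of the mesh
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    # minimise ||d - (matches - points)||^2 + lam * ||L d||^2
    A = np.eye(n) + lam * (L.T @ L)
    d = np.linalg.solve(A, matches - points)
    return points + d
```

Increasing `lam` trades data fidelity for smoothness, which is how subtle sway and respiration motion can be absorbed without distorting the surface.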

    Compact Model Representation for 3D Reconstruction

    3D reconstruction from 2D images is a central problem in computer vision. Recent works have focused on reconstruction directly from a single image. It is well known, however, that a single image cannot provide enough information for such a reconstruction. One form of prior knowledge that has been entertained is 3D CAD models, owing to their online ubiquity. A fundamental question is how to compactly represent millions of CAD models while allowing generalization to new, unseen objects with fine-scaled geometry. We introduce an approach to compactly represent a 3D mesh. Our method first selects a 3D model from a graph structure using a novel free-form deformation (FFD) 3D-2D registration, and then refines the selected 3D model to best fit the image silhouette. We perform a comprehensive quantitative and qualitative analysis that demonstrates impressive dense and realistic 3D reconstruction from single images.
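Free-form deformation, which the method above uses for registration, warps points embedded in a control lattice via trivariate Bernstein polynomials (the classic Sederberg-Parry formulation). A minimal sketch, with lattice layout and function name as our own assumptions:

```python
import numpy as np
from math import comb

def ffd(points, lattice, l=2, m=2, n=2):
    """Deform points with coordinates in the unit cube by a Bezier
    control lattice of shape (l+1, m+1, n+1, 3) using trivariate
    Bernstein polynomials (Sederberg-Parry free-form deformation)."""
    def bernstein(k, deg, t):
        return comb(deg, k) * t**k * (1 - t)**(deg - k)
    out = np.zeros_like(points)
    for p, (s, t, u) in enumerate(points):
        acc = np.zeros(3)
        for i in range(l + 1):
            for j in range(m + 1):
                for k in range(n + 1):
                    w = bernstein(i, l, s) * bernstein(j, m, t) * bernstein(k, n, u)
                    acc += w * lattice[i, j, k]
        out[p] = acc
    return out
```

Moving a handful of lattice control points smoothly deforms every embedded mesh vertex, which is what makes FFD parameters a compact registration target.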

    A statistical shape model for deformable surface

    This short paper presents a deformable surface registration scheme based on the statistical shape modelling technique. The method consists of two major processing stages: model building and model fitting. A statistical shape model is first built using a set of training data. The model is then deformed and matched to the new data by a modified iterative closest point (ICP) registration process. The proposed method is tested on real 3D facial data from the BU-3DFE database. It is shown that the proposed method achieves a reasonable result on surface registration, and it can be used for patient position monitoring in radiation therapy and, potentially, for monitoring radiation therapy progress in head and neck patients by analysis of facial articulation.
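The two stages above (model building, then fitting) have a standard PCA formulation: learn a mean shape plus principal modes of variation, then reconstruct new data as mean plus a least-squares combination of modes. A minimal numpy sketch (function names and shapes are our own; training shapes are assumed pre-aligned and flattened):

```python
import numpy as np

def build_shape_model(shapes, n_modes):
    """Stage 1 -- model building: PCA over training shapes, each
    flattened to a vector and assumed already rigidly aligned."""
    X = np.asarray(shapes, float)
    mean = X.mean(0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes]            # mean shape + principal modes

def fit_shape_model(mean, modes, observed):
    """Stage 2 -- model fitting: project an observed shape onto the
    model subspace (least-squares mode coefficients), reconstruct."""
    b = modes @ (observed - mean)        # modes are orthonormal rows
    return mean + modes.T @ b
```

In the full scheme this projection alternates with ICP correspondence search, so the model is fitted to unordered surface data rather than pre-corresponded vectors.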

    Symmetric Shape Morphing for 3D Face and Head Modelling

    We propose a shape template morphing approach suitable for any class of shapes that exhibits approximate reflective symmetry about some plane; the human face and full head are examples. A shape morphing algorithm that constrains all morphs to be symmetric is a form of deformation regularisation. This mitigates undesirable effects, such as tangential sliding, seen in standard morphing algorithms that are not symmetry-aware. Our method builds on the Coherent Point Drift (CPD) algorithm and is called Symmetry-aware CPD (SA-CPD). Global symmetric deformations are obtained by removing asymmetric shear from CPD's global affine transformations. Symmetrised local deformations are then used to improve the symmetric template fit. These symmetric deformations are followed by Laplace-Beltrami regularised projection, which allows the shape template to fit any asymmetries in the raw shape data. The pipeline facilitates the construction of statistical models that are readily factored into symmetric and asymmetric components. Evaluations demonstrate that SA-CPD mitigates the tangential sliding problem in CPD and outperforms competing shape morphing methods, in some cases substantially. 3D morphable models are constructed from over 1200 full head scans, and we evaluate the constructed models in terms of age and gender classification. The best performance, in the context of SVM classification, is achieved using the proposed SA-CPD deformation algorithm.
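One simple way to constrain a deformation to be symmetric, in the spirit of the symmetrised local deformations above, is to average each vertex's displacement with the reflected displacement of its mirror partner. This is a sketch of the idea only, not the SA-CPD pipeline; the mirror plane (x = 0) and function name are our own assumptions.

```python
import numpy as np

def symmetrize_displacements(disp, mirror_map):
    """Average each vertex's displacement with the reflected
    displacement of its mirror partner across the x = 0 plane,
    so the resulting deformation preserves bilateral symmetry.
    `mirror_map[i]` is the index of vertex i's mirror partner."""
    M = np.diag([-1.0, 1.0, 1.0])        # reflection across x = 0
    mirrored = disp[mirror_map] @ M      # partner displacement, reflected
    return 0.5 * (disp + mirrored)
```

Applied to a symmetric template, any raw displacement field then yields a deformed shape that is itself mirror-symmetric; residual asymmetries are recovered afterwards by an unconstrained projection step.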

    Modelling of Orthogonal Craniofacial Profiles

    We present a fully-automatic image processing pipeline to build a set of 2D morphable models of three craniofacial profiles from orthogonal viewpoints (side, front and top), using a set of 3D head surface images. Subjects in this dataset wear a close-fitting latex cap to reveal the overall skull shape. Texture-based 3D pose normalization and facial landmarking are applied to extract the profiles from the raw 3D scans. Fully-automatic profile annotation, subdivision and registration methods are used to establish dense correspondence among sagittal profiles. The collection of sagittal profiles in dense correspondence is scaled and aligned using Generalised Procrustes Analysis (GPA) before principal component analysis is applied to generate a morphable model. Additionally, we propose a new alternative alignment, the Ellipse Centre Nasion (ECN) method. Our model is used in a case study of craniosynostosis intervention outcome evaluation, which shows that the proposed model achieves state-of-the-art results. We make publicly available both the morphable models and the profile dataset used to construct them.
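Generalised Procrustes Analysis, used above to align the corresponded profiles before PCA, iteratively centres and scales each shape and rotates it to the evolving mean. A minimal 2D numpy sketch (function name and iteration count are our own assumptions):

```python
import numpy as np

def gpa(shapes, iters=10):
    """Generalised Procrustes Analysis on 2D shapes (each Nx2):
    centre, scale to unit norm, then iteratively rotate every shape
    to the current mean via orthogonal Procrustes (SVD)."""
    X = [s - s.mean(0) for s in shapes]          # remove translation
    X = [s / np.linalg.norm(s) for s in X]       # remove scale
    ref = X[0]
    for _ in range(iters):
        aligned = []
        for s in X:
            U, _, Vt = np.linalg.svd(s.T @ ref)  # best rotation s -> ref
            R = U @ Vt
            if np.linalg.det(R) < 0:             # forbid reflections
                U[:, -1] *= -1
                R = U @ Vt
            aligned.append(s @ R)
        X = aligned
        ref = np.mean(X, 0)
        ref /= np.linalg.norm(ref)               # keep the mean normalised
    return X
```

After GPA, only shape variation remains, so the subsequent PCA captures craniofacial form rather than pose or size.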

    A Data-augmented 3D Morphable Model of the Ear

    Morphable models are useful shape priors for biometric recognition tasks. Here we present an iterative refinement process for a 3D Morphable Model (3DMM) of the human ear that employs data augmentation. The process comprises the following stages: 1) landmark-based 3DMM fitting; 2) 3D template deformation to overcome noisy over-fitting; and 3) 3D mesh editing to improve the fit to manual 2D landmarks. These stages are wrapped in an iterative procedure that is able to bootstrap a weak, approximate model into a significantly better one. Evaluations using several performance metrics verify the improvement of our model under the proposed algorithm. We use this model-bootstrapping algorithm to generate a refined 3D morphable model of the human ear, and we make both the new model and our augmented training dataset publicly available.
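Landmark-based 3DMM fitting, stage 1 above, typically reduces to a regularised linear least-squares solve for the mode coefficients from a handful of known vertex positions. A minimal sketch, with function name, shapes, and the Tikhonov regulariser as our own assumptions:

```python
import numpy as np

def fit_3dmm_to_landmarks(mean, modes, lm_idx, lm_target, reg=1e-6):
    """Least-squares 3DMM coefficients from sparse 3D landmarks with
    Tikhonov regularisation. `mean` is (3N,), `modes` is (K, 3N),
    `lm_idx` indexes the N model vertices, `lm_target` is (L, 3)."""
    K = modes.shape[0]
    # keep only the basis rows belonging to the landmark vertices
    A = modes.reshape(K, -1, 3)[:, lm_idx, :].reshape(K, -1).T   # (3L, K)
    b = (lm_target - mean.reshape(-1, 3)[lm_idx]).ravel()
    # normal equations with a small ridge term for stability
    return np.linalg.solve(A.T @ A + reg * np.eye(K), A.T @ b)
```

The regulariser keeps the fit plausible when landmarks are few or noisy, which is exactly the regime the dense-deformation and mesh-editing stages are then meant to correct.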

    HIGH QUALITY HUMAN 3D BODY MODELING, TRACKING AND APPLICATION

    Geometric reconstruction of dynamic objects is a fundamental task in computer vision and graphics, and high-fidelity modeling of the human body is considered a core part of this problem. Traditional human shape and motion capture techniques require an array of surrounding cameras or require subjects to wear reflective markers, which limits working space and portability. In this dissertation, a complete process is designed, from geometric modeling of a detailed 3D human full body and capture of shape dynamics over time with a flexible setup, to guiding clothes/person re-targeting with such data-driven models. The mechanical movement of the human body can be considered an articulated motion, which readily drives skin animation but makes the reverse problem, recovering parameters from images without manual intervention, difficult. We therefore present a novel parametric model, GMM-BlendSCAPE, which jointly takes both the linear skinning model and the prior art of BlendSCAPE (Blend Shape Completion and Animation for PEople) into consideration, and we develop a Gaussian Mixture Model (GMM) to infer both body shape and pose from incomplete observations. We show increased accuracy of joint and skin surface estimation using our model compared to skeleton-based motion tracking. To model the detailed body, we start by capturing high-quality partial 3D scans using a single-view commercial depth camera. Based on GMM-BlendSCAPE, we then reconstruct multiple complete static models across large pose differences via our novel non-rigid registration algorithm. With vertex correspondences established, these models can be further converted into a personalized drivable template and used for robust pose tracking in a similar GMM framework. Moreover, we design a general-purpose real-time non-rigid deformation algorithm to accelerate this registration. Last but not least, we demonstrate a novel virtual clothes try-on application based on our personalized model, utilizing both image and depth cues to synthesize and re-target clothes for single-view videos of different people.
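The linear skinning half of the hybrid model above is standard linear blend skinning: each vertex is moved rigidly by every bone, and the results are blended with per-vertex weights. A minimal numpy sketch (BlendSCAPE's shape terms and the GMM inference are omitted; names and array shapes are our own assumptions):

```python
import numpy as np

def linear_blend_skinning(rest_verts, weights, bone_T):
    """Pose rest vertices by blending per-bone rigid transforms with
    per-vertex skinning weights. `bone_T` is (B, 3, 4) homogeneous
    transforms, `weights` is (N, B) with rows summing to 1."""
    V = np.c_[rest_verts, np.ones(len(rest_verts))]   # (N, 4) homogeneous
    posed = np.einsum('bij,nj->bni', bone_T, V)       # each bone moves all verts
    return np.einsum('nb,bni->ni', weights, posed)    # per-vertex blend
```

Because posed positions are linear in the transforms and weights, this forward model is easy to drive from a skeleton; the dissertation's point is that inverting it from images is the hard direction.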