
    3-D facial expression representation using statistical shape models

    This poster describes a methodology for facial expression representation using 3-D/4-D data, based on statistical shape modelling. The proposed method uses a shape space vector to model surface deformations, and a modified iterative closest point (ICP) method to calculate the point correspondence between surfaces. The shape space vector is constructed using principal component analysis (PCA) computed for typical surfaces represented in a training data set. It is shown that the calculated shape space vector can be used as a significant feature for subsequent facial expression classification. Comprehensive 3-D/4-D face data sets have been used for building the deformation models and for testing; these include 3-D synthetic data generated with the FaceGen Modeller® software, 3-D facial expression data captured by a static 3-D scanner from the BU-3DFE database, and 3-D video sequences captured at the ADSIP research centre using a 3dMD® dynamic 3-D scanner.
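    The shape-space construction described above (PCA over corresponded training surfaces, yielding one low-dimensional shape space vector per face) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names and the flattened per-vertex data layout are assumptions.

```python
import numpy as np

def build_shape_space(training_surfaces, n_components):
    # training_surfaces: (n_samples, n_points * 3) array of corresponded
    # surfaces (vertex correspondence established beforehand, e.g. by ICP).
    mean = training_surfaces.mean(axis=0)
    centered = training_surfaces - mean
    # SVD of the centred data yields the principal modes of deformation.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_components]              # (n_components, n_points * 3)
    return mean, modes

def shape_space_vector(surface, mean, modes):
    # Project one corresponded surface into the low-dimensional shape space.
    return modes @ (surface - mean)

# Toy example: 20 synthetic "faces", each with 100 vertices (300 coordinates).
rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 300))
mean, modes = build_shape_space(faces, n_components=5)
b = shape_space_vector(faces[0], n_components_mean := mean, modes) if False else shape_space_vector(faces[0], mean, modes)
```

    The resulting 5-element vector `b`, rather than the 300 raw coordinates, would serve as the feature for expression classification.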

    Three-dimensional morphanalysis of the face.

    The aim of the work reported in this thesis was to determine the extent to which orthogonal two-dimensional morphanalytic (universally relatable) craniofacial imaging methods can be extended into the realm of computer-based three-dimensional imaging. New methods are presented for capturing universally relatable laser-video surface data, for inter-relating facial surface scans and for constructing probabilistic facial averages. Universally relatable surface scans are captured using the fixed relations principle combined with a new laser-video scanner calibration method. Inter-subject comparison of facial surface scans is achieved using interactive feature labelling and warping methods. These methods have been extended to groups of subjects to allow the construction of three-dimensional probabilistic facial averages. The potential of universally relatable facial surface data for applications such as growth studies and patient assessment is demonstrated. In addition, new methods for scattered data interpolation, for controlling overlap in image warping and a fast, high-resolution method for simulating craniofacial surgery are described. The results demonstrate that it is not only possible to extend universally relatable imaging into three dimensions, but that the extension also enhances the established methods, providing a wide range of new applications.
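    The thesis mentions new methods for scattered data interpolation. As a generic stand-in (the thesis's actual methods are not specified here), inverse-distance weighting illustrates the basic problem: estimating surface height at an arbitrary point from irregularly placed samples.

```python
import numpy as np

def idw(points, values, query, power=2.0):
    # Inverse-distance-weighted interpolation of scattered data: a generic
    # illustration, not the interpolation scheme used in the thesis.
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d == 0.0):                 # query coincides with a data point
        return float(values[d.argmin()])
    w = d ** -power
    return float((w * values).sum() / w.sum())

# Toy example: interpolate a height field sampled at four scattered 2-D points.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([0.0, 1.0, 1.0, 2.0])
h = idw(pts, vals, np.array([0.5, 0.5]))   # equidistant -> mean of the values
```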

    The virtual human face – superimposing the simultaneously captured 3D photorealistic skin surface of the face on the untextured skin image of the CBCT Scan

    The aim of this study was to evaluate the impact of simultaneous capture of the three-dimensional (3D) surface of the face and a cone beam computed tomography (CBCT) scan of the skull on the accuracy of their registration and superimposition. 3D facial images were acquired from 14 patients using the Di3d (Dimensional Imaging, UK) imaging system and an i-CAT CBCT scanner. One stereophotogrammetry image was captured at the same time as the CBCT scan and another one hour later. The two stereophotographs were then individually superimposed over the CBCT using VRmesh. Seven patches were isolated on the final merged surfaces. For the whole face and for each individual patch, the maximum and minimum range of deviation between surfaces, the absolute average distance between surfaces, and the standard deviation for the 90th percentile of the distance errors were calculated. The superimposition errors for the whole face differed significantly between the two captures (P=0.00081). The absolute average distances for the separate and simultaneous captures were 0.47 mm and 0.27 mm, respectively. The superimposition accuracy in patches from separate captures ranged between 0.3 and 0.9 mm, while that of simultaneous captures was 0.4 mm. Simultaneous capture of Di3d and CBCT images significantly improved the accuracy of superimposition of these image modalities.
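    The error statistics reported above could be computed per patch roughly as follows. This is a sketch under assumptions: the interpretation of "standard deviation for the 90th percentile of the distance errors" as the SD of the errors at or below the 90th percentile is mine, and the function name is illustrative.

```python
import numpy as np

def superimposition_errors(distances):
    # distances: 1-D array of signed closest-point distances (mm) between
    # the superimposed stereophotograph and the CBCT skin surface, for one
    # patch or for the whole face.
    abs_d = np.abs(distances)
    cutoff = np.percentile(abs_d, 90)
    retained = abs_d[abs_d <= cutoff]        # errors up to the 90th percentile
    return {
        "max": float(distances.max()),       # maximum deviation
        "min": float(distances.min()),       # minimum deviation
        "abs_mean": float(abs_d.mean()),     # absolute average distance
        "sd_90": float(retained.std()),      # SD of the retained errors
    }

# Toy example with synthetic distances in millimetres.
rng = np.random.default_rng(1)
stats = superimposition_errors(rng.normal(0.0, 0.3, size=5000))
```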

    3D body scanning and healthcare applications

    Developed largely for the clothing industry, 3D body-surface scanners are transforming our ability to accurately measure and visualize a person's body size, shape, and skin-surface area. Advancements in 3D whole-body scanning seem to offer even greater potential for healthcare applications.

    Low dimensional Surface Parameterisation with application in biometrics

    This paper describes initial results from a novel low-dimensional surface parameterisation approach based on a modified iterative closest point (ICP) registration process, which uses vertex-based principal component analysis (PCA) to incorporate a deformable element into the registration. Using this method, a 3D surface is represented by a shape space vector of much smaller dimensionality than that of the original data space vector. The proposed method is tested on both simulated 3D faces with different facial expressions and real face data. It is shown that the proposed surface representation can potentially be used as a feature space for a facial expression recognition system.
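    One iteration of a deformable ICP of the kind described above might look like this. It is a simplified sketch, not the authors' algorithm: the rigid alignment step of full ICP is omitted, the correspondence search is brute force, and all names are illustrative.

```python
import numpy as np

def deformable_icp_step(target, mean, modes, b):
    # One iteration of a modified ICP with a deformable PCA component:
    # instantiate the model, match each model vertex to its nearest target
    # vertex, then re-estimate the shape space vector by least squares.
    model = (mean + modes.T @ b).reshape(-1, 3)
    tgt = target.reshape(-1, 3)
    d2 = ((model[:, None, :] - tgt[None, :, :]) ** 2).sum(axis=-1)
    matches = tgt[d2.argmin(axis=1)].ravel()   # closest-point correspondence
    b_new, *_ = np.linalg.lstsq(modes.T, matches - mean, rcond=None)
    return b_new

# Toy example: recover a known shape vector from a deformed target surface.
rng = np.random.default_rng(2)
mean = np.arange(30, dtype=float)              # 10 well-separated vertices
modes = rng.normal(size=(2, 30)) * 0.05        # two small deformation modes
b_true = np.array([1.0, -0.5])
target = mean + modes.T @ b_true
b_est = deformable_icp_step(target, mean, modes, np.zeros(2))
```

    Because the deformation is small relative to the vertex spacing, a single correspondence-and-refit step recovers the two-dimensional shape space vector from the 30-dimensional surface.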

    Multi-view passive 3D face acquisition device

    Approaches to the acquisition of 3D facial data include laser scanners, structured light devices and (passive) stereo vision. Laser scanner and structured light methods allow accurate reconstruction of the 3D surface, but strong light is projected onto the faces of subjects. Passive stereo vision approaches do not require strong light to be projected; however, it is hard to obtain comparable accuracy and robustness in the surface reconstruction. In this paper, a passive multiple-view approach using 5 cameras in a '+' configuration is proposed that significantly increases robustness and accuracy relative to traditional stereo vision approaches. The normalised cross-correlations of all 5 views are combined using direct projection of points instead of the traditionally used rectified images. In addition, errors caused by different perspective deformations of the surface in the different views are reduced by using an iterative reconstruction technique, in which the depth estimate from the previous iteration is used to warp the windows of the normalised cross-correlation for the different views.
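    The core scoring idea, combining normalised cross-correlation scores from several views for each candidate depth, can be sketched as below. The projector function is a hypothetical stand-in for projecting a 3-D point into each side view and extracting a patch; it is not the paper's implementation.

```python
import numpy as np

def ncc(a, b):
    # Normalised cross-correlation of two equally sized image patches.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_depth(centre_patch, project, views, depths):
    # For each candidate depth, project the 3-D point into every side view,
    # extract a patch there, and sum the NCC scores against the centre view;
    # the depth with the highest combined score wins.
    scores = [sum(ncc(centre_patch, project(v, d)) for v in views)
              for d in depths]
    return depths[int(np.argmax(scores))]

# Toy example with a fake projector: only depth 2 reproduces the centre patch.
rng = np.random.default_rng(3)
centre = rng.normal(size=(7, 7))
def project(view, depth):
    return centre if depth == 2 else rng.normal(size=(7, 7))
found = best_depth(centre, project, views=range(4), depths=[0, 1, 2, 3, 4])
```

    Summing the per-view scores means one occluded or foreshortened view cannot veto a depth that the remaining views agree on, which is one source of the robustness gain.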

    Reconstruction of 3D human facial images using partial differential equations.

    One of the challenging problems in geometric modeling and computer graphics is the construction of realistic human facial geometry. Such geometry is essential for a wide range of applications, such as 3D face recognition, virtual reality, facial expression simulation and computer-based plastic surgery. This paper addresses a method for the construction of the 3D geometry of human faces based on the use of elliptic partial differential equations (PDEs). Here the geometry corresponding to a human face is treated as a set of surface patches, whereby each surface patch is represented using four boundary curves in 3-space that formulate the appropriate boundary conditions for the chosen PDE. These boundary curves are extracted automatically from 3D data of human faces obtained using a 3D scanner. The solution of the PDE generates a continuous single surface patch describing the geometry of the original scanned data. In this study, through a number of experimental verifications, we have shown the efficiency of the PDE-based method for 3D facial surface reconstruction from scan data. In addition, we show that our approach provides an efficient way of representing faces using a small set of parameters that could be utilized for efficient facial data storage and verification purposes.

    Evaluation of Die Trim Morphology Made by CAD-CAM Technology

    Statement of problem: The die contour can affect the emergence profile of prosthetic restorations. However, little information is available regarding the congruency between a stereolithographic (SLA) die and its corresponding natural tooth. Purpose: The purpose of this in vitro study was to evaluate the shape of SLA dies in comparison with the subgingival contour of a prepared tooth to be restored with a ceramic crown. Material and methods: Twenty extracted human teeth (10 incisors and 10 molars) were disinfected and mounted in a typodont model. The teeth were prepared for a ceramic restoration. Definitive impressions were made using an intraoral scanner, from which 20 SLA casts with removable dies were fabricated. The removable dies and corresponding human teeth were digitized using a 3-dimensional desktop scanner and evaluated with computer-aided design software. The subgingival morphology with regard to angle, length, and volume at the buccolingual and mesiodistal surfaces and at zones A, B, C, and D was compared. Data were first analyzed with repeated measures analysis of variance (ANOVA), using location (buccolingual and mesiodistal), zone (A, B, C, and D), and model type (SLA and natural) as within-subject factors and tooth type (molar and incisor) as the between-subject factor. Post hoc analyses were performed to investigate the differences between natural teeth and corresponding SLA models, depending upon the interaction effect from the repeated measures ANOVA (α=.05). Results: For the angle analysis, the incisor group demonstrated a significant difference between the natural tooth and the SLA die on the buccolingual surfaces. Conclusions: For the comparison of angles, SLA dies did not replicate the subgingival contour of natural teeth on the buccolingual surfaces of the incisor group. For the comparison of length and volume, SLA dies were more concave and did not replicate the subgingival contour of natural teeth in the incisor and molar groups.

    Towards a comprehensive 3D dynamic facial expression database

    Human faces play an important role in everyday life, including the expression of person identity, emotion and intentionality, along with a range of biological functions. The human face has also become the subject of considerable research effort, and there has been a shift towards understanding it using stimuli of increasingly realistic formats. In the current work, we outline progress made in the production of a database of facial expressions in arguably the most realistic format, 3D dynamic. A suitable architecture for capturing such 3D dynamic image sequences is described and then used to record seven expressions (fear, disgust, anger, happiness, surprise, sadness and pain) by 10 actors at 3 levels of intensity (mild, normal and extreme). We also present details of a psychological experiment that was used to formally evaluate the accuracy of the expressions in a 2D dynamic format. The result is an initial, validated database for researchers and practitioners. The goal is to scale up the work with more actors and expression types.