
    Reconstruction of three-dimensional facial geometric features related to fetal alcohol syndrome using adult surrogates

    Fetal alcohol syndrome (FAS) is a condition caused by prenatal alcohol exposure. The diagnosis of FAS is based on the presence of central nervous system impairments, evidence of growth abnormalities and abnormal facial features. Direct anthropometry has traditionally been used to obtain facial data to assess the FAS facial features. Research efforts have focused on indirect anthropometry such as 3D surface imaging systems to collect facial data for facial analysis. However, 3D surface imaging systems are costly. As an alternative, approaches for 3D reconstruction from a single 2D image of the face using a 3D morphable model (3DMM) were explored in this research study. The research project was accomplished in several steps. 3D facial data were obtained from the publicly available BU-3DFE database, developed by the State University of New York. The 3D face scans in the training set were landmarked by different observers. The reliability and precision in selecting 3D landmarks were evaluated. The intraclass correlation coefficients for intra- and inter-observer reliability were greater than 0.95. The average intra-observer error was 0.26 mm and the average inter-observer error was 0.89 mm. A rigid registration was performed on the 3D face scans in the training set. Following rigid registration, a dense point-to-point correspondence across a set of aligned face scans was computed using the Gaussian process model fitting approach. A 3DMM of the face was constructed from the fully registered 3D face scans. The constructed 3DMM of the face was evaluated based on generalization, specificity, and compactness. The quantitative evaluations show that the constructed 3DMM achieves reliable results. 3D face reconstructions from single 2D images were estimated based on the 3DMM. The Metropolis-Hastings algorithm was used to fit the 3DMM features to 2D image features to generate the 3D face reconstruction.
Finally, the geometric accuracy of the reconstructed 3D faces was evaluated based on ground-truth 3D face scans. The average root mean square error for the surface-to-surface comparisons between the reconstructed faces and the ground-truth face scans was 2.99 mm. In conclusion, a framework to estimate 3D face reconstructions from single 2D facial images was developed and the reconstruction errors were evaluated. The geometric accuracy of the 3D face reconstructions was comparable to that found in the literature. However, future work should consider minimizing reconstruction errors to acceptable clinical standards in order for the framework to be useful for 3D-from-2D reconstruction in general, and also for developing FAS applications. Future work should also consider estimating a 3D face using multi-view 2D images to increase the information available for 3D-from-2D reconstruction.
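The surface-to-surface error reported above can be illustrated as a nearest-neighbour root mean square error between two point clouds. This is a minimal sketch on synthetic points, not the evaluation code used in the study; the point sets and noise level are invented:

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_rmse(reconstructed, ground_truth):
    """RMSE of nearest-neighbour distances from each reconstructed vertex
    to the ground-truth surface (both given as N x 3 point arrays, in mm)."""
    tree = cKDTree(ground_truth)
    dists, _ = tree.query(reconstructed)  # closest ground-truth point per vertex
    return float(np.sqrt(np.mean(dists ** 2)))

# Synthetic example: a "reconstruction" deviating from the ground truth
# by roughly 2 mm of Gaussian noise per coordinate
rng = np.random.default_rng(0)
truth = rng.uniform(-50.0, 50.0, size=(1000, 3))
recon = truth + rng.normal(0.0, 2.0, size=truth.shape)
print(round(surface_rmse(recon, truth), 2))
```

In practice the comparison would run over dense mesh vertices rather than random points, but the metric is the same.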

    Synergistic Visualization And Quantitative Analysis Of Volumetric Medical Images

    The medical diagnosis process starts with an interview with the patient and continues with the physical exam. In practice, the medical professional may require additional screenings to diagnose precisely. Medical imaging is one of the most frequently used non-invasive screening methods to acquire insight into the human body. Medical imaging is not only essential for accurate diagnosis, but it can also enable early prevention. Medical data visualization refers to projecting the medical data into a human-understandable format on mediums such as 2D or head-mounted displays, without introducing any misinterpretation that may lead to clinical intervention. In contrast to medical visualization, quantification refers to extracting the information in the medical scan to enable clinicians to make fast and accurate decisions. Despite the extraordinary progress in both medical visualization and quantitative radiology, efforts to improve these two complementary fields are often performed independently, and their synergistic combination is under-studied. Existing image-based software platforms mostly fail to be adopted in routine clinics due to the lack of a unified strategy that guides clinicians both visually and quantitatively. Hence, there is an urgent need for a bridge connecting medical visualization and automatic quantification algorithms in the same software platform. In this thesis, we aim to fill this research gap by visualizing medical images interactively from anywhere, and performing a fast, accurate and fully-automatic quantification of the medical imaging data. To this end, we propose several innovative and novel methods.
Specifically, we solve the following sub-problems of the ultimate goal: (1) direct web-based out-of-core volume rendering, (2) robust, accurate, and efficient learning-based algorithms to segment highly pathological medical data, (3) automatic landmarking for aiding diagnosis and surgical planning, and (4) novel artificial intelligence algorithms to determine the sufficient and necessary data to solve large-scale problems.
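As a toy illustration of the visualization/quantification pairing described above, the snippet below computes a maximum-intensity projection of a synthetic volume (a simple 2D view) and, on the same data, segments and measures a bright region. The array, threshold, and voxel spacing are all invented for the example and are not from the thesis software:

```python
import numpy as np

# Synthetic 3D "scan": a bright spherical lesion inside a noisy background volume
rng = np.random.default_rng(1)
vol = rng.normal(100.0, 10.0, size=(64, 64, 64))
z, y, x = np.ogrid[:64, :64, :64]
lesion = (z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 <= 8 ** 2
vol[lesion] += 200.0

# Visualization side: a maximum-intensity projection gives a 2D view of the volume
mip = vol.max(axis=0)

# Quantification side: threshold-segment the lesion and measure its volume,
# assuming isotropic 1 mm voxels (an invented spacing for this example)
mask = vol > 200.0
lesion_volume_mm3 = int(mask.sum())  # 1 voxel = 1 mm^3 under the assumption above
print(mip.shape, lesion_volume_mm3)
```

The point is that the projection (for viewing) and the segmentation (for measurement) operate on the same array, which is the synergy the thesis argues for.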

    Spatially dense 3D facial heritability and modules of co-heritability in a father-offspring design

    Introduction: The human face is a complex trait displaying a strong genetic component as illustrated by various studies on facial heritability. Most of these start from sparse descriptions of facial shape using a limited set of landmarks. Subsequently, facial features are preselected as univariate measurements or principal components and the heritability is estimated for each of these features separately. However, none of these studies investigated multivariate facial features, nor the co-heritability between different facial features. Here we report a spatially dense multivariate analysis of facial heritability and co-heritability starting from data from fathers and their children available within ALSPAC. Additionally, we provide an elaborate overview of related craniofacial heritability studies. Methods: In total, 3D facial images of 762 father-offspring pairs were retained after quality control. An anthropometric mask was applied to these images to establish spatially dense quasi-landmark configurations. Partial least squares regression was performed and the (co-)heritability for all quasi-landmarks (~7160) was computed as twice the regression coefficient. Subsequently, these were used as input to a hierarchical facial segmentation, resulting in the definition of facial modules that are internally integrated through the biological mechanisms of inheritance. Finally, multivariate heritability estimates were obtained for each of the resulting modules. Results: Nearly all modular estimates reached statistical significance under 1,000,000 permutations and after multiple testing correction (p ≤ 1.3889 × 10⁻³), displaying low to high heritability scores. Particular facial areas showing the greatest heritability were similar for both sons and daughters. However, higher estimates were obtained in the former.
These areas included the global face, upper facial part (encompassing the nasion, zygomas and forehead) and nose, with values reaching 82% in boys and 72% in girls. The lower parts of the face only showed low to moderate levels of heritability. Conclusion: In this work, we refrain from reducing facial variation to a series of individual measurements and analyze the heritability and co-heritability from spatially dense landmark configurations at multiple levels of organization. Finally, a multivariate estimation of heritability for global-to-local facial segments is reported. Knowledge of the genetic determination of facial shape is useful in the identification of genetic variants that underlie normal-range facial variation.
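The per-quasi-landmark estimator from the Methods above (heritability taken as twice the offspring-on-father regression coefficient) can be sketched on simulated univariate data. The additive-genetics simulation below is illustrative only and is not the ALSPAC analysis; the sample size and true heritability are invented:

```python
import numpy as np

def heritability(parent, offspring):
    """Narrow-sense heritability estimated as twice the slope of the
    offspring-on-one-parent regression (single-parent design)."""
    slope = np.cov(parent, offspring)[0, 1] / np.var(parent)
    return 2.0 * slope

# Simulate a trait with true h^2 = 0.6 under a simple additive model
rng = np.random.default_rng(42)
n, h2 = 200_000, 0.6
father_genetic = rng.normal(0.0, np.sqrt(h2), n)
father = father_genetic + rng.normal(0.0, np.sqrt(1 - h2), n)
# Offspring inherit half the father's genetic value on average
offspring_genetic = 0.5 * father_genetic + rng.normal(0.0, np.sqrt(h2 * 0.75), n)
offspring = offspring_genetic + rng.normal(0.0, np.sqrt(1 - h2), n)
print(round(heritability(father, offspring), 2))
```

In the study this estimate is computed per quasi-landmark (~7160 times) and the resulting values feed the hierarchical segmentation; the toy above shows only the core regression identity.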

    4D (3D Dynamic) statistical models of conversational expressions and the synthesis of highly-realistic 4D facial expression sequences

    In this thesis, a novel approach for modelling 4D (3D Dynamic) conversational interactions and synthesising highly-realistic expression sequences is described. To achieve these goals, a fully-automatic, fast, and robust pre-processing pipeline was developed, along with an approach for tracking and inter-subject registering 3D sequences (4D data). A method for modelling and representing sequences as single entities is also introduced. These sequences can be manipulated and used for synthesising new expression sequences. Classification experiments and perceptual studies were performed to validate the methods and models developed in this work. To achieve the goals described above, a 4D database of natural, synced, dyadic conversations was captured. This database is the first of its kind in the world. Another contribution of this thesis is the development of a novel method for modelling conversational interactions. Our approach takes into account the time-sequential nature of the interactions, and encompasses the characteristics of each expression in an interaction, as well as information about the interaction itself. Classification experiments were performed to evaluate the quality of our tracking, inter-subject registration, and modelling methods. To evaluate our ability to model, manipulate, and synthesise new expression sequences, we conducted perceptual experiments. For these perceptual studies, we manipulated modelled sequences by modifying their amplitudes, and had human observers evaluate the level of expression realism and image quality. To evaluate our coupled modelling approach for conversational facial expression interactions, we performed a classification experiment that differentiated predicted frontchannel and backchannel sequences, using the original sequences in the training set. We also used the predicted backchannel sequences in a perceptual study in which human observers rated the level of similarity of the predicted and original sequences. 
The results of these experiments support our methods and our claim that we can produce highly-realistic 4D expression sequences that compete with state-of-the-art methods.
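One simple way to realise the "sequences as single entities" idea mentioned above is to resample each variable-length sequence of per-frame features to a fixed temporal length and flatten it into one vector. This is a hedged sketch under that assumption, not the thesis's actual sequence model; the feature dimensions and sequence lengths are invented:

```python
import numpy as np

def sequence_to_vector(frames, n_samples=10):
    """Represent a variable-length sequence of per-frame feature vectors as a
    single fixed-length vector by uniform temporal resampling and flattening."""
    frames = np.asarray(frames, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(frames))
    t_new = np.linspace(0.0, 1.0, n_samples)
    # Linearly interpolate each feature channel onto the common time axis
    resampled = np.stack([np.interp(t_new, t_old, frames[:, j])
                          for j in range(frames.shape[1])], axis=1)
    return resampled.ravel()

# Two toy "expression sequences" of different lengths (3 features per frame)
rng = np.random.default_rng(5)
short_seq = rng.normal(0.0, 1.0, (18, 3))
long_seq = rng.normal(0.0, 1.0, (47, 3))
v1, v2 = sequence_to_vector(short_seq), sequence_to_vector(long_seq)
print(v1.shape, v2.shape)  # both (30,): comparable single entities
```

Once sequences share a fixed-length representation, they can be modelled, manipulated (e.g. amplitude scaling), and classified as single points in one vector space.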

    Ultra Close-Range Digital Photogrammetry as a Tool to Preserve, Study, and Share Skeletal Remains

    Skeletal collections around the world hold valuable and intriguing knowledge about humanity. Their potential value could be fully exploited by overcoming current limitations in documenting and sharing them. Virtual anthropology provides effective ways to study and value skeletal collections using three-dimensional (3D) data, e.g. allowing powerful comparative and evolutionary studies, along with specimen preservation and dissemination. CT and laser scanning are the most widely used techniques for three-dimensional reconstruction. However, they are resource-intensive and, therefore, difficult to apply to large samples or skeletal collections. Ultra close-range digital photogrammetry (UCR-DP) enables photorealistic 3D reconstructions from simple photographs of the specimen. However, it is the least used method in skeletal anthropology, and the lack of appropriate protocols often limits the quality of its outcomes. This Ph.D. thesis explored UCR-DP application in skeletal anthropology. The state-of-the-art of this technique was studied, and a new approach based on cloud computing was proposed and validated against current gold standards. This approach relies on the processing capabilities of remote servers and a free-for-academic-use software environment; it proved to produce measurements equivalent to those of osteometry and, in many cases, more precise than those of CT-scanning. Cloud-based UCR-DP allowed the processing of multiple 3D models at once, leading to low-cost, quick, and effective 3D production. The technique was successfully used to digitally preserve an initial sample of 534 crania from the skeletal collections of the Museo Sardo di Antropologia ed Etnografia (MuSAE, Università degli Studi di Cagliari).
Best practices in using the technique for skeletal collection dissemination were studied and several applications were developed, including MuSAE online virtual tours, virtual physical anthropology labs and distance learning, durable online dissemination, and values-led, participatorily designed interactive and immersive exhibitions at the MuSAE. The sample will be used in a future population study of Sardinian skeletal characteristics from the Neolithic to modern times. In conclusion, cloud-based UCR-DP offers many significant advantages over other 3D scanning techniques: greater versatility in terms of application range and technical implementation, scalability, photorealistic restitution, and reduced requirements relating to hardware, labour, time, and cost. It is, therefore, the best choice for documenting and valuing large skeletal samples and collections effectively.
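Validating photogrammetric measurements against osteometry, as described above, typically involves comparing paired interlandmark distances. A minimal sketch using the technical error of measurement (TEM), a standard anthropometric agreement statistic, on hypothetical calliper and 3D-model values (the numbers are invented, not from the thesis):

```python
import numpy as np

def tem(m1, m2):
    """Technical error of measurement between two sets of paired
    measurements (e.g. calliper vs. photogrammetric distances, in mm)."""
    d = np.asarray(m1, dtype=float) - np.asarray(m2, dtype=float)
    return float(np.sqrt(np.sum(d ** 2) / (2 * len(d))))

# Hypothetical cranial interlandmark distances (mm): osteometry vs. 3D model
calliper = [182.0, 175.5, 140.2, 131.8, 96.4]
photogrammetry = [181.6, 175.9, 140.0, 132.1, 96.2]
print(round(tem(calliper, photogrammetry), 2))  # → 0.22
```

A sub-millimetre TEM of this kind is what would support the claim that the 3D-model-derived measurements are equivalent to direct osteometry.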

    Three dimensional study to quantify the relationship between facial hard and soft tissue movement as a result of orthognathic surgery

    Introduction: Prediction of soft tissue changes following orthognathic surgery has been frequently attempted in the past decades. It has gradually progressed from the classic “cut and paste” of photographs to computer-assisted 2D surgical prediction planning; finally, comprehensive 3D surgical planning was introduced to help surgeons and patients decide on the magnitude and direction of surgical movements, as well as the type of surgery to be considered for the correction of facial dysmorphology. A wealth of experience was gained and numerous publications are available which have augmented the knowledge of facial soft tissue behaviour and helped to improve the ability to closely simulate facial changes following orthognathic surgery. This was particularly noticeable following the introduction of three-dimensional imaging into medical research and clinical applications. Several approaches have been considered to mathematically predict soft tissue changes in three dimensions following orthognathic surgery. The most common are the finite element model and the mass tensor model. These were developed into software packages which are currently used in clinical practice. In general, these methods produce an acceptable level of prediction accuracy of soft tissue changes following orthognathic surgery. Studies, however, have shown limited prediction accuracy at specific regions of the face, in particular the areas around the lips. Aims: The aim of this project is to conduct a comprehensive assessment of hard and soft tissue changes following orthognathic surgery and introduce a new method for prediction of facial soft tissue changes. Methodology: The study was carried out on the pre- and post-operative CBCT images of 100 patients who received their orthognathic surgery treatment at Glasgow Dental Hospital and School, Glasgow, UK.
Three groups of patients were included in the analysis: patients who underwent Le Fort I maxillary advancement surgery, bilateral sagittal split mandibular advancement surgery, or bimaxillary advancement surgery. A generic facial mesh was used to standardise the information obtained from individual patients’ facial images, and principal component analysis (PCA) was applied to interpolate the correlations between the skeletal surgical displacement and the resultant soft tissue changes. The identified relationship between hard tissue and soft tissue was then applied to a new set of preoperative 3D facial images and the predicted results were compared to the actual surgical changes measured from the post-operative 3D facial images. A set of validation studies was conducted:
• Comparison between voxel-based registration and surface registration to analyse changes following orthognathic surgery. The results showed no statistically significant difference between the two methods. Voxel-based registration, however, showed more reliability as it preserved the link between the soft tissue and skeletal structures of the face during the image registration process. Accordingly, voxel-based registration was the method of choice for superimposition of the pre- and post-operative images. The result of this study was published in a refereed journal.
• Direct DICOM slice landmarking: a novel technique to quantify the direction and magnitude of skeletal surgical movements. This method represents a new approach to quantifying maxillary and mandibular surgical displacement in three dimensions. The technique involves measuring the distance of corresponding landmarks digitized directly on DICOM image slices in relation to three-dimensional reference planes. The accuracy of the measurements was assessed against a set of “gold standard” measurements extracted from simulated model surgery. The results confirmed the accuracy of the method to within 0.34 mm. Therefore, the method was applied in this study. The results of this validation were published in a peer-refereed journal.
• The use of a generic mesh to assess soft tissue changes using stereophotogrammetry. The generic facial mesh played a major role in the soft tissue dense correspondence analysis. The conformed generic mesh represented the geometrical information of the individual facial mesh onto which it was conformed (elastically deformed). Therefore, the accuracy of generic mesh conformation is essential to guarantee an accurate replica of the individual facial characteristics. The results showed an acceptable overall mean conformation error of 1 mm. The results of this study were accepted for publication in a peer-refereed scientific journal.
Skeletal tissue analysis was performed using the validated direct DICOM slice landmarking method, while soft tissue analysis was performed using dense correspondence analysis. The analysis of soft tissue was novel and produced a comprehensive description of facial changes in response to orthognathic surgery. The results were accepted for publication in a refereed scientific journal. The main soft tissue changes associated with Le Fort I surgery were advancement at the midface region combined with widening of the paranasal, upper lip and nostril regions. Minor changes were noticed at the tip of the nose and oral commissures. The main soft tissue changes associated with mandibular advancement surgery were advancement and downward displacement of the chin and lower lip regions; limited widening of the lower lip and slight reversion of the lower lip vermilion, combined with minimal backward displacement of the upper lip, were recorded. Minimal changes were observed at the oral commissures. The main soft tissue changes associated with bimaxillary advancement surgery were generalized advancement of the middle and lower thirds of the face combined with widening of the paranasal, upper lip and nostril regions.
In the Le Fort I cases, the correlation between the changes of the facial soft tissue and the skeletal surgical movements was assessed using PCA. A statistical method known as leave-one-out cross-validation was applied to the 30 cases which had a Le Fort I osteotomy surgical procedure to utilise the data effectively for the prediction algorithm. The prediction of soft tissue changes showed a mean error ranging from 0.0006 mm ± 0.582 at the nose region to −0.0316 mm ± 2.1996 across the other facial regions.
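The leave-one-out scheme described above can be sketched with a simple linear hard-to-soft-tissue predictor on synthetic displacement data. Plain least squares stands in here for the PCA-based regression of the thesis, and all dimensions, noise levels, and the case count are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n_cases, n_hard, n_soft = 30, 12, 40

# Synthetic data: soft-tissue change is a (noisy) linear map of skeletal movement
hard = rng.normal(0.0, 1.0, (n_cases, n_hard))
true_map = rng.normal(0.0, 1.0, (n_hard, n_soft))
soft = hard @ true_map + rng.normal(0.0, 0.1, (n_cases, n_soft))

errors = []
for i in range(n_cases):                     # leave-one-out cross-validation
    train = np.arange(n_cases) != i
    # Least-squares linear predictor fitted on the remaining 29 cases
    coeffs, *_ = np.linalg.lstsq(hard[train], soft[train], rcond=None)
    pred = hard[i] @ coeffs
    errors.append(np.mean(np.abs(pred - soft[i])))
print(round(float(np.mean(errors)), 3))
```

Each case is predicted by a model that never saw it, so the averaged error is an honest estimate of how the predictor would perform on a new patient, which is the rationale for using the scheme on only 30 Le Fort I cases.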

    3D Shape Descriptor-Based Facial Landmark Detection: A Machine Learning Approach

    Facial landmark detection on 3D human faces has numerous applications in the literature, such as establishing point-to-point correspondence between 3D face models, which is itself a key step for a wide range of applications like 3D face detection and authentication, matching, reconstruction, and retrieval, to name a few. Two groups of approaches, namely knowledge-driven and data-driven approaches, have been employed for facial landmarking in the literature. Knowledge-driven techniques are the traditional approaches that have been widely used to locate landmarks on human faces. In these approaches, a user with sufficient knowledge and experience usually defines features to be extracted as the landmarks. Data-driven techniques, on the other hand, take advantage of machine learning algorithms to detect prominent features on 3D face models. Besides the key advantages, each category of these techniques has limitations that prevent it from generating the most reliable results. In this work we propose to combine the strengths of the two approaches to detect facial landmarks in a more efficient and precise way. The suggested approach consists of two phases. First, some salient features of the faces are extracted using expert systems. Afterwards, these points are used as the initial control points in the well-known Thin Plate Spline (TPS) technique to deform the input face towards a reference face model. Second, by exploring and utilizing multiple machine learning algorithms, another group of landmarks is extracted. The data-driven landmark detection step is performed in a supervised manner, providing an information-rich set of training data in which a set of local descriptors are computed and used to train the algorithm. We then use the detected landmarks for establishing point-to-point correspondence between the 3D human faces, mainly using an improved version of the Iterative Closest Point (ICP) algorithm.
Furthermore, we propose to use the detected landmarks for 3D face matching applications.
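A basic rigid point-to-point ICP of the kind referenced above can be sketched as follows, using SVD-based (Kabsch) alignment on toy point sets. The thesis uses an improved ICP variant; this sketch shows only the textbook iteration, and the rotation, translation, and point cloud are invented:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(source, target, iterations=20):
    """Basic point-to-point ICP: iteratively match nearest neighbours and
    solve for the best rigid transform via SVD (Kabsch algorithm)."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)            # correspondences by nearest neighbour
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:            # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_t     # apply the rigid update
    return src

# Toy example: recover a slightly rotated and translated copy of a point cloud
rng = np.random.default_rng(3)
target = rng.uniform(-1.0, 1.0, (500, 3))
angle = np.deg2rad(5.0)
rot = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                [np.sin(angle),  np.cos(angle), 0.0],
                [0.0, 0.0, 1.0]])
source = target @ rot.T + np.array([0.1, -0.05, 0.02])
aligned = icp_rigid(source, target)
print(round(float(np.abs(aligned - target).max()), 4))
```

Seeding ICP with detected landmarks, as the work proposes, gives the iteration a good initial alignment and keeps the nearest-neighbour matching away from poor local minima.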

    Facial Texture Super-Resolution by Fitting 3D Face Models

    This book proposes to solve the low-resolution (LR) facial analysis problem with 3D face super-resolution (FSR). A complete processing chain is presented towards effective 3D FSR in the real world. To deal with the extreme challenges of incorporating 3D modeling under the ill-posed LR condition, a novel workflow coupling automatic localization of 2D facial feature points and 3D shape reconstruction is developed, leading to a robust pipeline for pose-invariant hallucination of the 3D facial texture.
