
    Forensic Facial Reconstruction from Skeletal Remains

    The identity of a skull is of critical importance in forensics. Forensic facial reconstruction is the reproduction of the lost or unknown facial features of an individual. In this paper, we propose automating the reconstruction process. For a given skull, a data-driven 3D generative model of the face is constructed using a database of CT head scans. The reconstruction can be constrained by prior knowledge of parameters such as bone thickness measurements, cranial landmark distance measurements and demographics (age, weight, height and BMI). The CT scan slices are segmented and a 3D skull model is generated from the 2D slices using the Marching Cubes algorithm. Sixty-six landmark points are then computed using Active Shape Models and PCA and placed on the skull; these landmark points act as references for tissue generation. The facial soft-tissue thickness is measured and estimated at the 66 craniometric landmarks used in forensic facial reconstruction. The skin mesh is generated using Delaunay automatic triangulation. The performance of the model is measured using RMSE. The aim of this study is to develop a combination of techniques and algorithms that gives the most accurate and efficient results.
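    The abstract evaluates the reconstructed skin surface with RMSE. A minimal sketch of that metric over corresponding vertices, assuming the reconstructed and ground-truth meshes are already in point-wise correspondence (the function name and toy coordinates are illustrative, not from the paper):

```python
import numpy as np

def surface_rmse(reconstructed: np.ndarray, ground_truth: np.ndarray) -> float:
    """Root-mean-square error over per-vertex Euclidean distances (e.g. in mm)."""
    d = np.linalg.norm(reconstructed - ground_truth, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# Two corresponding vertices, off by 3 mm and 5 mm respectively.
recon = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
truth = np.array([[0.0, 0.0, 3.0], [1.0, 4.0, 3.0]])
print(surface_rmse(recon, truth))  # sqrt((9 + 25) / 2) ≈ 4.123
```

    In practice the correspondence itself (nearest-surface-point vs. landmark-to-landmark) dominates what such an error means, so the distance definition should be reported alongside the number.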

    Image processing for plastic surgery planning

    This thesis presents image processing tools for plastic surgery planning. In particular, it presents a novel method that combines local and global context in a probabilistic relaxation framework to identify cephalometric landmarks used in maxillofacial plastic surgery. It also presents a method that uses global and local symmetry to identify abnormalities in frontal CT images of the human body. The proposed methodologies are evaluated on clinical data supplied by collaborating plastic surgeons.

    Heritability maps of human face morphology through large-scale automated three-dimensional phenotyping

    The human face is a complex trait under strong genetic control, as evidenced by the striking visual similarity between twins. Nevertheless, heritability estimates of facial traits have often been surprisingly low or difficult to replicate. Furthermore, the construction of facial phenotypes that correspond to naturally perceived facial features remains largely a mystery. We present here a large-scale heritability study of face geometry that aims to address these issues. High-resolution, three-dimensional facial models were acquired from a cohort of 952 twins recruited from the TwinsUK registry and processed through a novel landmarking workflow, GESSA (Geodesic Ensemble Surface Sampling Algorithm). The algorithm places thousands of landmarks across the facial surface and automatically establishes point-wise correspondence between faces. These landmarks enabled us to characterize facial geometry intuitively and at a fine level of detail through curvature measurements, yielding accurate heritability maps of the human face (www.heritabilitymaps.info).

    A Survey on Artificial Intelligence Techniques for Biomedical Image Analysis in Skeleton-Based Forensic Human Identification

    This paper presents the first survey on the application of AI techniques to the analysis of biomedical images for forensic human identification. Human identification is of great relevance in today’s society and, in particular, in medico-legal contexts. As a consequence, all technological advances introduced in this field can contribute to the increasing need for accurate and robust tools for establishing and verifying human identity. We first describe the importance and applicability of forensic anthropology in many identification scenarios. We then present the main trends in the application of computer vision, machine learning and soft computing techniques to the estimation of the biological profile, identification through comparative radiography and craniofacial superimposition, trauma and pathology analysis, and facial reconstruction. The potentialities and limitations of the employed approaches are described, and we conclude with a discussion of methodological issues and future research.
    Funding: Spanish Ministry of Science, Innovation and Universities; European Union (EU) PGC2018-101216-B-I00; Regional Government of Andalusia grant EXAISFI P18-FR-4262; Instituto de Salud Carlos III; European Union (EU) DTS18/00136; European Commission H2020-MSCA-IF-2016 Skeleton-ID Marie Curie Individual Fellowship 746592; Spanish Ministry of Science, Innovation and Universities-CDTI, Neotec programme 2019 EXP-00122609/SNEO-20191236; European Union (EU); Xunta de Galicia ED431G 2019/01; European Union (EU) RTI2018-095894-B-I0

    Three-dimensional morphanalysis of the face.

    The aim of the work reported in this thesis was to determine the extent to which orthogonal two-dimensional morphanalytic (universally relatable) craniofacial imaging methods can be extended into the realm of computer-based three-dimensional imaging. New methods are presented for capturing universally relatable laser-video surface data, for inter-relating facial surface scans and for constructing probabilistic facial averages. Universally relatable surface scans are captured using the fixed relations principle combined with a new laser-video scanner calibration method. Inter-subject comparison of facial surface scans is achieved using interactive feature labelling and warping methods. These methods have been extended to groups of subjects to allow the construction of three-dimensional probabilistic facial averages. The potential of universally relatable facial surface data for applications such as growth studies and patient assessment is demonstrated. In addition, new methods for scattered data interpolation, for controlling overlap in image warping and a fast, high-resolution method for simulating craniofacial surgery are described. The results demonstrate that it is not only possible to extend universally relatable imaging into three dimensions, but that the extension also enhances the established methods, providing a wide range of new applications.
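    Constructing a probabilistic facial average presupposes that corresponding points from different scans are brought into a common frame. A minimal sketch of rigid landmark alignment via the Kabsch/ordinary Procrustes method; this is a standard registration step, not necessarily the thesis's own algorithm, and all names are illustrative:

```python
import numpy as np

def procrustes_align(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Rigidly align src landmarks onto dst (rotation + translation, Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, _, Vt = np.linalg.svd(A.T @ B)       # cross-covariance decomposition
    if np.linalg.det(U @ Vt) < 0:           # guard against reflections
        U[:, -1] *= -1
    return A @ (U @ Vt) + mu_d

# A unit square rotated by 90 degrees and shifted is recovered exactly.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
rot = np.array([[0.0, 1.0], [-1.0, 0.0]])
dst = src @ rot + 5.0
print(np.allclose(procrustes_align(src, dst), dst))  # True
```

    Once every scan's landmarks are aligned this way, averaging corresponding points gives the mean face, and the per-point scatter gives the "probabilistic" part of the average.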

    3D facial landmark localization for cephalometric analysis

    Cephalometric analysis is an important and routine task in the medical field to assess craniofacial development and to diagnose cranial deformities and midline facial abnormalities. The advance of 3D digital techniques has enabled the development of 3D cephalometry, which includes the localization of cephalometric landmarks in 3D models. However, manual labeling is still commonly applied; it is a tedious and time-consuming task, highly prone to intra-/inter-observer variability. In this paper, a framework to automatically locate cephalometric landmarks in 3D facial models is presented. The landmark detector has two stages: (i) creation of 2D maps representative of the 3D model; and (ii) landmark detection through a regression convolutional neural network (CNN). In the first stage, the 3D facial model is transformed into 2D maps derived from 3D shape descriptors. In the second stage, a CNN estimates a probability map for each landmark using the 2D representations as input. The detection method was evaluated on three different datasets of 3D facial models, namely the Texas 3DFR, BU3DFE and Bosphorus databases. Average distance errors of 2.3, 3.0 and 3.2 mm were obtained for the landmarks evaluated on each dataset. The results demonstrate the accuracy of the method on different 3D facial datasets, with performance competitive with state-of-the-art methods, and its versatility across different 3D models. Clinical relevance: overall, the performance of the landmark detector demonstrates its potential for use in 3D cephalometric analysis.
    Funding: FCT - Fundação para a Ciência e a Tecnologia (LASI-LA/P/0104/2020)
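    The second stage regresses a probability map per landmark; a minimal sketch of turning such a map into a coordinate and scoring it with the average distance error reported above (array shapes and function names are assumptions, not the paper's API):

```python
import numpy as np

def heatmap_to_landmark(heatmap: np.ndarray) -> tuple:
    """Take the (row, col) of the highest-probability pixel as the landmark."""
    r, c = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return int(r), int(c)

def mean_distance_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """Average Euclidean distance between predicted and reference landmarks."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=1)))

heat = np.zeros((8, 8))
heat[2, 5] = 1.0                      # a perfectly peaked probability map
print(heatmap_to_landmark(heat))      # (2, 5)
```

    Real systems usually refine the argmax sub-pixel (e.g. a local centroid around the peak) and map the 2D detection back onto the 3D surface before measuring the millimetre error.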

    Geometric Expression Invariant 3D Face Recognition using Statistical Discriminant Models

    Currently there is no complete face recognition system that is invariant to all facial expressions. Although humans find it easy to identify and recognise faces regardless of changes in illumination, pose and expression, producing a computer system with a similar capability has proved particularly difficult. Three-dimensional face models are geometric in nature and therefore have the advantage of being invariant to head pose and lighting. However, they are still susceptible to facial expressions. This can be seen in the decrease in recognition results using principal component analysis when expressions are added to a data set. In order to achieve expression-invariant face recognition, we have employed a tensor algebra framework to represent 3D face data with facial expressions in a parsimonious space. Face variation factors are organised into subject and facial expression modes. We manipulate this representation using singular value decomposition on sub-tensors representing one variation mode. This framework addresses the shortcomings of PCA in less constrained environments while preserving the integrity of the 3D data. The results show improved recognition rates for faces and facial expressions, even recognising high-intensity expressions that are not in the training datasets. We have determined, experimentally, a set of anatomical landmarks that describes facial expression most effectively. We found that the best placement of landmarks for distinguishing different facial expressions is in areas around the prominent features, such as the cheeks and eyebrows. Recognition results using landmark-based face recognition could be improved with better placement. We also looked into the possibility of achieving expression-invariant face recognition by reconstructing and manipulating realistic facial expressions.
    We proposed a tensor-based statistical discriminant analysis method to reconstruct facial expressions and, in particular, to neutralise facial expressions. The synthesised facial expressions are visually more realistic than those generated using conventional active shape modelling (ASM). We then used the reconstructed neutral faces in the sub-tensor framework for recognition purposes; the recognition results showed slight improvement. Besides biometric recognition, this novel tensor-based synthesis approach could be used in computer games and real-time animation applications.
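    The sub-tensor manipulation described above (SVD on a matricised variation mode, the building block of higher-order SVD) can be sketched as follows; the axis ordering subjects × expressions × features is an assumption for illustration:

```python
import numpy as np

def mode_unfold(T: np.ndarray, mode: int) -> np.ndarray:
    """Mode-n matricisation: bring `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Toy face tensor: 5 subjects x 3 expressions x 8 shape features.
T = np.random.default_rng(0).standard_normal((5, 3, 8))

# SVD of the expression-mode unfolding: columns of U span expression variation,
# separated from subject identity, which lives in the other modes.
U, s, Vt = np.linalg.svd(mode_unfold(T, 1), full_matrices=False)
print(U.shape, s.shape)  # (3, 3) (3,)
```

    Projecting a new face onto the expression-mode basis, zeroing its coefficients and mapping back is the essence of "neutralising" an expression in such a framework.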

    Pattern recognition to detect fetal alcohol syndrome using stereo facial images

    Fetal alcohol syndrome (FAS) is a condition caused by excessive consumption of alcohol by the mother during pregnancy. A FAS diagnosis depends on the presence of growth retardation, central nervous system and neurodevelopmental abnormalities, together with facial malformations. The main facial features which best distinguish children with and without FAS are a smooth philtrum, thin upper lip and short palpebral fissures. Diagnosis of the facial phenotype associated with FAS can be done using methods such as direct facial anthropometry and photogrammetry. The project described here used information obtained from stereo facial images and applied facial shape analysis and pattern recognition to distinguish between children with FAS and control children. Other researchers have reported on identifying FAS through the classification of 2D landmark coordinates and of 3D landmark information in the form of Procrustes residuals. This project built on that previous work by combining 3D information with texture as features for facial classification. Stereo facial images of children were used to obtain the 3D coordinates of the facial landmarks which play a role in defining the FAS facial phenotype. Two datasets were used: the first consisted of facial images of 34 children whose facial shapes had previously been analysed with respect to FAS; the second consisted of a new set of images from 40 subjects. Elastic bunch graph matching was used on the frontal facial images of the study population to obtain texture information, in the form of jets, around selected landmarks. Their 2D coordinates were also extracted during the process. Faces were classified using k-nearest neighbor (kNN), linear discriminant analysis (LDA) and support vector machine (SVM) classifiers. Principal component analysis was used for dimensionality reduction, while classification accuracy was assessed using leave-one-out cross-validation.
    For dataset 1, using 2D coordinates together with texture information as features produced best classification accuracies of 72.7% with kNN, 75.8% with LDA and 78.8% with SVM. When the 2D coordinates were replaced by Procrustes residuals (which encode 3D facial shape information), the best classification accuracies were 69.7% with kNN, 81.8% with LDA and 78.6% with SVM. LDA produced the most consistent classification results. The classification accuracies for dataset 2 were lower than for dataset 1; the different conditions during data collection and possible differences in the ethnic composition of the datasets were identified as likely causes of this decrease.
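    The classification protocol described above (PCA for dimensionality reduction; kNN, LDA and SVM classifiers; leave-one-out cross-validation) can be sketched with scikit-learn, using synthetic features in place of the actual jet and landmark data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.standard_normal((40, 20))   # stand-in for landmark coords + texture jets
y = rng.integers(0, 2, size=40)     # stand-in for FAS vs. control labels

results = {}
for name, clf in [("kNN", KNeighborsClassifier(3)),
                  ("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="linear"))]:
    # PCA is fitted inside each training fold, avoiding leakage into the held-out face.
    pipe = make_pipeline(PCA(n_components=10), clf)
    results[name] = cross_val_score(pipe, X, y, cv=LeaveOneOut()).mean()
    print(f"{name}: {results[name]:.3f}")
```

    With random features the accuracies hover around chance; the point of the sketch is the protocol shape, in particular that the PCA step lives inside the cross-validation pipeline.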

    3D statistical shape analysis of the face in Apert syndrome

    Timely diagnosis of craniofacial syndromes, as well as adequate timing and choice of surgical technique, is essential for proper care management. Statistical shape models and machine learning approaches play an increasing role in medicine and have proven their usefulness, and frameworks that automate processes have become more popular. The use of 2D photographs for automated syndromic identification has shown its potential with the Face2Gene application, yet the use of 3D shape information without texture has not been studied in such depth. Moreover, the use of these models to understand shape change during growth, and their applicability to surgical outcome measurement, have not been analysed at length. This thesis presents a framework using state-of-the-art machine learning and computer vision algorithms to explore automated syndrome identification based on shape information only. The purpose was to enhance understanding of the natural development of the Apert syndromic face and its abnormality compared to a normative group. An additional method was used to objectify changes resulting from facial bipartition distraction, a common surgical correction technique, providing information on its success and shortcomings in terms of facial normalisation. Growth curves were constructed to further quantify facial abnormalities in Apert syndrome over time, along with 3D shape models for intuitive visualisation of the shape variations. Post-operative models were built and compared with age-matched normative data to understand where normalisation falls short. The findings in this thesis provide markers for future translational research and may accelerate the adoption of next-generation diagnostics and surgical planning tools, further supplementing the clinical decision-making process and ultimately improving patients’ quality of life.
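    Statistical shape models of the kind discussed above are typically PCA models over aligned landmark coordinates: a mean shape plus a small number of variation modes. A minimal sketch, assuming shapes are pre-aligned and flattened to rows of (x, y, z) coordinates; all names are illustrative:

```python
import numpy as np

def build_shape_model(shapes: np.ndarray):
    """PCA shape model: mean shape plus principal modes of variation.

    shapes: (n_subjects, 3 * n_landmarks) pre-aligned, flattened coordinates.
    """
    mean = shapes.mean(axis=0)
    U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    variances = s ** 2 / (len(shapes) - 1)   # variance captured by each mode
    return mean, Vt, variances

def synthesise(mean: np.ndarray, modes: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Generate a shape from mode coefficients b (e.g. +/-3 sd along one mode)."""
    return mean + b @ modes[: len(b)]

shapes = np.random.default_rng(1).standard_normal((10, 12))  # 10 faces, 4 landmarks
mean, modes, var = build_shape_model(shapes)
print(np.allclose(synthesise(mean, modes, np.zeros(3)), mean))  # True
```

    Sweeping the coefficients of one mode while holding the rest at zero is how such models visualise a shape variation, and fitting coefficients per age group is one way to build the growth curves the abstract mentions.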