7 research outputs found

    3D facial landmark localization for cephalometric analysis

    Get PDF
    Cephalometric analysis is an important and routine task in the medical field, used to assess craniofacial development and to diagnose cranial deformities and midline facial abnormalities. The advance of 3D digital techniques has enabled 3D cephalometry, which includes the localization of cephalometric landmarks in 3D models. However, manual labeling is still common practice: a tedious and time-consuming task, highly prone to intra- and inter-observer variability. In this paper, a framework to automatically locate cephalometric landmarks in 3D facial models is presented. The landmark detector is divided into two stages: (i) creation of 2D maps representative of the 3D model; and (ii) landmark detection through a regression convolutional neural network (CNN). In the first stage, the 3D facial model is transformed into 2D maps derived from 3D shape descriptors. In the second stage, a CNN uses the 2D representations as input to estimate a probability map for each landmark. The detection method was evaluated on three different datasets of 3D facial models, namely the Texas 3DFR, BU3DFE, and Bosphorus databases. Average distance errors of 2.3, 3.0, and 3.2 mm were obtained for the landmarks evaluated on each dataset. These results demonstrate the accuracy of the method across different 3D facial datasets, with performance competitive with state-of-the-art methods, proving its versatility across different 3D models. Clinical Relevance - Overall, the performance of the landmark detector demonstrated its potential to be used for 3D cephalometric analysis. Funding: FCT - Fundação para a Ciência e a Tecnologia (LASI-LA/P/0104/2020).
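A minimal sketch of the decoding step implied by stage (ii): once a network outputs a probability map per landmark, the landmark position can be read off as the map's peak. The function name, shapes, and toy data below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def landmark_from_probability_map(prob_map):
    """Return the (row, col) of the highest-probability location.

    prob_map: 2D array, one per landmark, as a CNN might predict.
    (Hypothetical helper; the paper's exact decoding may differ.)
    """
    flat_idx = np.argmax(prob_map)
    return np.unravel_index(flat_idx, prob_map.shape)

# Toy 5x5 map with a single peak at row 2, column 3.
pm = np.zeros((5, 5))
pm[2, 3] = 1.0
print(landmark_from_probability_map(pm))  # (2, 3)
```

In practice, a soft-argmax or a weighted centroid around the peak is often used instead of a hard argmax, since it yields sub-pixel coordinates and is differentiable.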

    Large-scale geo-facial image analysis

    Get PDF
    While face analysis from images is a well-studied area, little work has explored the dependence of facial appearance on the geographic location from which the image was captured. To fill this gap, we constructed GeoFaces, a large dataset of geotagged face images, and used it to examine the geo-dependence of facial features and attributes, such as ethnicity, gender, or the presence of facial hair. Our analysis illuminates the relationship between raw facial appearance, facial attributes, and geographic location, both globally and in selected major urban areas. Some of our experiments, and the resulting visualizations, confirm prior expectations, such as the predominance of ethnically Asian faces in Asia, while others highlight novel information that can be obtained with this type of analysis, such as the major city with the highest percentage of people with a mustache.
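An illustrative sketch of the kind of aggregation behind such an analysis: group geotagged face records by city and rank cities by the rate of a binary attribute. All names and data below are hypothetical; this is not the GeoFaces pipeline itself.

```python
from collections import defaultdict

def attribute_rate_by_city(records, attribute):
    """Compute the fraction of records with a binary attribute, per city.

    records: iterable of dicts like {"city": str, attribute: 0 or 1}
    (Hypothetical record format for illustration.)
    """
    counts = defaultdict(lambda: [0, 0])  # city -> [with_attribute, total]
    for rec in records:
        c = counts[rec["city"]]
        c[0] += rec[attribute]
        c[1] += 1
    return {city: k / n for city, (k, n) in counts.items()}

# Toy data: city "B" has the higher mustache rate.
records = [
    {"city": "A", "mustache": 1},
    {"city": "A", "mustache": 0},
    {"city": "B", "mustache": 1},
]
rates = attribute_rate_by_city(records, "mustache")
print(max(rates, key=rates.get))  # city with the highest mustache rate
```

At real scale, the same grouping would typically run over attribute classifier outputs rather than hand labels, with per-city sample-size thresholds to keep the rates statistically meaningful.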

    Real time 3D face alignment with random forests-based active appearance models

    No full text
    Many desirable applications dealing with automatic face analysis rely on robust facial feature localization. While extensive research has been carried out on standard 2D imagery, recent technological advances have made the acquisition of 3D data both accurate and affordable, opening new ways to more accurate and robust algorithms. We present a model-based approach to real time face alignment, fitting a 3D model to depth and intensity images of unseen expressive faces. We use random regression forests to drive the fitting in an Active Appearance Model framework. We thoroughly evaluated the proposed approach on publicly available datasets and show how adding the depth channel boosts the robustness and accuracy of the algorithm. © 2013 IEEE. Fanelli G., Dantone M., Van Gool L., ''Real time 3D face alignment with random forests-based active appearance models'', 10th IEEE international conference on automatic face and gesture recognition - FG-2013, 8 pp., April 22-26, 2013, Shanghai, China. status: published
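A toy sketch of the ensemble-averaging idea behind regression forests: each "tree" here is reduced to a single threshold stump on one feature, and the forest averages the stump outputs. This only illustrates the general mechanism, not the paper's depth-driven AAM fitting; all names and data are hypothetical.

```python
def fit_stump(X, y, feature, threshold):
    """Fit a one-split regression stump: mean target on each side of a threshold."""
    left = [yi for xi, yi in zip(X, y) if xi[feature] <= threshold]
    right = [yi for xi, yi in zip(X, y) if xi[feature] > threshold]
    lmean = sum(left) / len(left) if left else 0.0
    rmean = sum(right) / len(right) if right else 0.0
    return lambda x: lmean if x[feature] <= threshold else rmean

def forest_predict(stumps, x):
    """Average the predictions of all stumps, as a forest averages its trees."""
    return sum(s(x) for s in stumps) / len(stumps)

# Toy 1D regression problem: targets jump from 0 to 1 between x=1 and x=2.
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0.0, 0.0, 1.0, 1.0]
stumps = [fit_stump(X, y, feature=0, threshold=t) for t in (0.5, 1.5)]
pred = forest_predict(stumps, [2.5])  # average of 2/3 and 1.0, i.e. 5/6
```

Real regression forests grow deep trees on random feature subsets and bootstrap samples; in the fitting framework described above, each tree would instead regress model-parameter updates from local image and depth patches.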
