2,071 research outputs found
Heritability maps of human face morphology through large-scale automated three-dimensional phenotyping
The human face is a complex trait under strong genetic control, as evidenced by the striking visual similarity between twins. Nevertheless, heritability estimates of facial traits have often been surprisingly low or difficult to replicate. Furthermore, the construction of facial phenotypes that correspond to naturally perceived facial features remains largely unresolved. We present here a large-scale heritability study of face geometry that aims to address these issues. High-resolution, three-dimensional facial models were acquired from a cohort of 952 twins recruited from the TwinsUK registry and processed through a novel landmarking workflow, GESSA (Geodesic Ensemble Surface Sampling Algorithm). The algorithm places thousands of landmarks across the facial surface and automatically establishes point-wise correspondence between faces. These landmarks enabled us to characterize facial geometry intuitively and at a fine level of detail through curvature measurements, yielding accurate heritability maps of the human face (www.heritabilitymaps.info).
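The abstract above does not state which heritability estimator produces the maps; for a classical twin cohort such as TwinsUK, a standard per-trait estimator is Falconer's formula, h² = 2(r_MZ − r_DZ). The sketch below is illustrative only — the function name, array layout and use of Pearson correlation are assumptions, not the paper's method:

```python
import numpy as np

def falconer_h2(mz_pairs, dz_pairs):
    """Per-landmark heritability via Falconer's formula: h2 = 2 * (r_MZ - r_DZ).

    mz_pairs, dz_pairs: arrays of shape (n_pairs, 2, n_landmarks), holding one
    scalar phenotype per landmark (e.g. a curvature value) for each twin in a pair.
    """
    def twin_corr(pairs):
        # Pearson correlation per landmark between twin 1 and twin 2.
        t1, t2 = pairs[:, 0, :], pairs[:, 1, :]
        t1c, t2c = t1 - t1.mean(axis=0), t2 - t2.mean(axis=0)
        return (t1c * t2c).sum(axis=0) / np.sqrt(
            (t1c ** 2).sum(axis=0) * (t2c ** 2).sum(axis=0))

    # Identical twins share all genes, fraternal twins half on average,
    # so the excess MZ correlation estimates half the heritability.
    return np.clip(2.0 * (twin_corr(mz_pairs) - twin_corr(dz_pairs)), 0.0, 1.0)
```

Plotting these per-landmark values over the facial mesh would yield a heritability map in the sense described above.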
Spatially dense 3D facial heritability and modules of co-heritability in a father-offspring design
Introduction: The human face is a complex trait with a strong genetic component, as illustrated by various studies of facial heritability. Most of these start from sparse descriptions of facial shape using a limited set of landmarks. Subsequently, facial features are preselected as univariate measurements or principal components, and the heritability is estimated for each of these features separately. However, none of these studies investigated multivariate facial features or the co-heritability between different facial features. Here we report a spatially dense multivariate analysis of facial heritability and co-heritability based on data from fathers and their children available within ALSPAC. Additionally, we provide an elaborate overview of related craniofacial heritability studies. Methods: In total, 3D facial images of 762 father-offspring pairs were retained after quality control. An anthropometric mask was applied to these images to establish spatially dense quasi-landmark configurations. Partial least squares regression was performed, and the (co-)heritability for all quasi-landmarks (∼7,160) was computed as twice the regression coefficient. Subsequently, these estimates were used as input to a hierarchical facial segmentation, resulting in the definition of facial modules that are internally integrated through the biological mechanisms of inheritance. Finally, multivariate heritability estimates were obtained for each of the resulting modules. Results: Nearly all modular estimates reached statistical significance under 1,000,000 permutations and after multiple-testing correction (p ≤ 1.3889 × 10⁻³), displaying low to high heritability. The facial areas showing the greatest heritability were similar for both sons and daughters, although higher estimates were obtained in the former. These areas included the global face, the upper facial part (encompassing the nasion, zygomas and forehead) and the nose, with values reaching 82% in boys and 72% in girls.
The lower parts of the face showed only low to moderate levels of heritability. Conclusion: In this work, we refrain from reducing facial variation to a series of individual measurements and instead analyze the heritability and co-heritability of spatially dense landmark configurations at multiple levels of organization. Finally, a multivariate estimation of heritability for global-to-local facial segments is reported. Knowledge of the genetic determination of facial shape is useful for identifying genetic variants that underlie normal-range facial variation.
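The estimator stated in the Methods — heritability as twice the father-offspring regression coefficient — can be sketched per feature as follows. The study itself applies partial least squares regression across all ∼7,160 quasi-landmarks jointly; this simple per-feature least-squares version (names and array layout are mine) only illustrates the h² = 2b relationship:

```python
import numpy as np

def parent_offspring_h2(father, child):
    """Narrow-sense heritability from father-offspring pairs.

    Regressing the offspring phenotype on one parent gives an expected
    slope of h2 / 2, hence h2 = 2 * b. `father` and `child` are arrays of
    shape (n_pairs, n_features), one row per father-offspring pair.
    """
    fc = father - father.mean(axis=0)
    cc = child - child.mean(axis=0)
    b = (fc * cc).sum(axis=0) / (fc ** 2).sum(axis=0)  # per-feature OLS slope
    return np.clip(2.0 * b, 0.0, 1.0)  # heritability is bounded to [0, 1]
```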
Face modeling for face recognition in the wild
Face understanding is considered one of the most important topics in computer vision, since the face is a rich source of information in social interaction. Not only does the face provide information about the identity of people, but also about their membership in broad demographic categories (including sex, race, and age) and about their current emotional state. Facial landmark extraction is the cornerstone of the success of various facial analysis and understanding applications. In this dissertation, a novel facial model is designed for facial landmark detection in unconstrained, real-life environments from different image modalities, including infrared and visible images. In the proposed facial landmark detector, a part-based model is combined with holistic face information. In the part-based model, the face is modeled by the appearance of different face parts (e.g., right eye, left eye, left eyebrow, nose, mouth) and their geometric relations. The appearance is described by a novel feature referred to as the pixel difference feature. This representation is three times faster to compute than state-of-the-art feature representations. To model the geometric relations between the face parts, the complex Bingham distribution is adapted from the statistics literature into computer vision. The global information is incorporated into the local part model using a regression model. The model outperforms the state of the art in detecting facial landmarks. The proposed facial landmark detector is tested on two computer vision problems: boosting the performance of face detectors by rejecting pseudo-faces, and camera steering in a multi-camera network.
To highlight the applicability of the proposed model to different image modalities, it is studied in two face understanding applications: face recognition from visible images and physiological measurement for autistic individuals from thermal images. Recognizing identities from faces under different poses, expressions and lighting conditions against a complex background is still an unsolved problem, even with accurate landmark detection. Therefore, a learned similarity measure is proposed. The proposed measure responds only to differences in identity and filters out illumination and pose variations; it makes use of statistical inference in the image plane. Additionally, the pose challenge is tackled by two new approaches: assigning different weights to different face parts based on their visibility in the image plane at different pose angles, and synthesizing virtual facial images for each subject at different poses from a single frontal image. The proposed framework is demonstrated to be competitive with top-performing state-of-the-art methods when evaluated on standard benchmarks for face recognition in the wild. The second face understanding application is physiological measurement for autistic individuals from infrared images. In this framework, accurately detecting and tracking the superficial temporal artery (STA) while the subject is moving, playing, and interacting in social communication is essential. Detecting and tracking the STA is very challenging, since the appearance of the STA region changes over time and is not discriminative enough from other areas of the face. A novel detection concept, called supporter collaboration, is introduced, in which the STA is detected and tracked with the help of face landmarks and geometric constraints. This research advances the field of emotion recognition.
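The exact definition of the pixel difference feature is not given in the abstract; one common reading of such features — and the one sketched here, purely as an assumption — is BRIEF-style intensity differences between pixel pairs sampled around a candidate part location, which needs only subtractions and no filtering, consistent with the speed claim:

```python
import numpy as np

def pixel_difference_features(img, center, offsets):
    """Intensity differences between pixel pairs placed around `center`.

    img: 2D grayscale array; center: (row, col); offsets: (k, 2, 2) array,
    offsets[i] = ((dr1, dc1), (dr2, dc2)) for the i-th pixel pair.
    """
    r, c = center
    p = offsets[:, 0, :] + (r, c)  # first pixel of each pair
    q = offsets[:, 1, :] + (r, c)  # second pixel of each pair
    # Widen to int32 first so uint8 images cannot wrap around on subtraction.
    return (img[p[:, 0], p[:, 1]].astype(np.int32)
            - img[q[:, 0], q[:, 1]].astype(np.int32))
```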
Robust signatures for 3D face registration and recognition
Biometric authentication through face recognition has been an active area of research for the last few decades, motivated by its application-driven demand. The popularity of face recognition, compared to other biometric methods, is largely due to its minimal requirement for subject co-operation, relative ease of data capture and similarity to the natural way humans distinguish each other.
3D face recognition has recently received particular interest, since three-dimensional face scans eliminate or reduce important limitations of 2D face images, such as illumination changes and pose variations. In fact, three-dimensional face scans are usually captured by scanners through the use of a constant structured-light source, making them invariant to environmental changes in illumination. Moreover, a single 3D scan also captures the entire face structure and allows for accurate pose normalisation.
However, one of the biggest challenges that still remain in three-dimensional face scans is their sensitivity to large local deformations due to, for example, facial expressions. Owing to the nature of the data, deformations bring about large changes in the 3D geometry of the scan. In addition, 3D scans are also characterised by noise and artefacts such as spikes and holes, which are uncommon in 2D images and require a pre-processing stage that is specific to the scanner used to capture the data.
The aim of this thesis is to devise a face signature that is compact in size and overcomes the above-mentioned limitations. We investigate the use of facial regions and landmarks towards a robust and compact face signature, and we study, implement and validate a region-based and a landmark-based face signature. Combinations of regions and landmarks are evaluated for their robustness to pose and expressions, while the matching scheme is evaluated for its robustness to noise and data artefacts.
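As an illustration of why a landmark-based signature can be both compact and pose-robust, the sketch below (an assumed form, not the thesis's actual signature) uses the vector of pairwise inter-landmark distances: rigid pose changes leave distances intact, and normalising removes scale:

```python
import numpy as np
from itertools import combinations

def landmark_signature(landmarks):
    """Scale-normalised vector of pairwise Euclidean distances between
    3D landmarks; invariant to rotation and translation by construction."""
    d = np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                  for i, j in combinations(range(len(landmarks)), 2)])
    return d / np.linalg.norm(d)

def signature_distance(a, b):
    # Smaller means more similar face geometry.
    return np.linalg.norm(landmark_signature(a) - landmark_signature(b))
```

For n landmarks the signature has only n(n−1)/2 entries, which is what makes such representations compact compared to storing the full scan.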
Novel algorithms for 3D human face recognition
Automated human face recognition is a computer vision problem of considerable practical significance. Existing two-dimensional (2D) face recognition techniques perform poorly for faces with uncontrolled poses, lighting and facial expressions. Face recognition technology based on three-dimensional (3D) facial models is now emerging. Geometric facial models can be easily corrected for pose variations; they are illumination invariant and provide structural information about the facial surface. Algorithms for 3D face recognition exist, but the area is far from being a mature technology. In this dissertation we address a number of open questions in the area of 3D human face recognition. First, we make available to qualified researchers in the field, at no cost, the large Texas 3D Face Recognition Database, which was acquired as part of this research. This database contains 1,149 2D and 3D images of 118 subjects. We also provide 25 manually located facial fiducial points for each face in the database. Our next contribution is a completely automatic, novel 3D face recognition algorithm that employs discriminatory anthropometric distances between carefully selected local facial features. This algorithm neither uses general-purpose pattern recognition approaches nor directly extends 2D face recognition techniques to the 3D domain. Instead, it is based on an understanding of the structurally diverse characteristics of human faces, which we isolate from the scientific discipline of facial anthropometry. We demonstrate the effectiveness and superior performance of the proposed algorithm relative to existing benchmark 3D face recognition algorithms. A related contribution is the development of highly accurate and reliable 2D+3D algorithms for automatically detecting 10 anthropometric facial fiducial points. While developing these algorithms, we identify unique structural and textural properties associated with the facial fiducial points.
Furthermore, unlike previous algorithms for detecting facial fiducial points, we systematically evaluate our algorithms against manually located facial fiducial points on a large database of images. Our third contribution is an effective algorithm for computing the structural dissimilarity of 3D facial surfaces, which uses a recently developed image similarity index called the complex-wavelet structural similarity index. This algorithm is unique in that, unlike existing approaches, it does not require the facial surfaces to be finely registered before they are compared. Furthermore, it is nearly an order of magnitude more accurate than existing facial-surface-matching approaches. Finally, we propose a simple method to combine the two new 3D face recognition algorithms, resulting in a 3D face recognition algorithm that is competitive with the existing state of the art.
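The abstract calls the final combination of the two recognisers "simple" without giving the rule; a typical score-level fusion — sketched here as an assumption, not the dissertation's method — min-max normalises each matcher's scores and takes a weighted sum:

```python
def fuse_scores(scores_a, scores_b, w=0.5):
    """Weighted sum of min-max-normalised match scores from two matchers.

    scores_a, scores_b: similarity scores of one probe against the same
    gallery, in the same order; w: weight given to the first matcher.
    """
    def minmax(s):
        lo, hi = min(s), max(s)
        # Map scores to [0, 1]; a constant score list maps to all zeros.
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in s]
    na, nb = minmax(scores_a), minmax(scores_b)
    return [w * x + (1 - w) * y for x, y in zip(na, nb)]
```

Normalising before summing matters because the two matchers (anthropometric distances vs. CW-SSIM) produce scores on incompatible scales.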
The reconstructed cranium of Pierolapithecus and the evolution of the great ape face
Pierolapithecus catalaunicus (~12 million years ago, northeastern Spain) is key to understanding the mosaic nature of hominid (great ape and human) evolution. Notably, its skeleton indicates that an orthograde (upright) body plan preceded suspensory adaptations in hominid evolution. However, there is ongoing debate about this species, partly because the sole known cranium, preserving a nearly complete face, suffers from taphonomic damage. We (1) carried out a micro-computed tomography (micro-CT)-based virtual reconstruction of the Pierolapithecus cranium, (2) assessed its morphological affinities using a series of two-dimensional (2D) and three-dimensional (3D) morphometric analyses, and (3) modeled the evolution of key aspects of ape face form. The reconstruction clarifies many aspects of the facial morphology of Pierolapithecus. Our results indicate that it is most similar to great apes (fossil and extant) in overall face shape and size and is morphologically distinct from other Middle Miocene apes. Crown great apes can be distinguished from other taxa by several facial metrics (e.g., low midfacial prognathism, relatively tall faces), and only some of these features are found in Pierolapithecus, which is most consistent with a stem (basal) hominid position. The inferred morphology at all ancestral nodes within the hominoid (ape and human) tree is closer to great apes than to hylobatids (gibbons and siamangs), which are convergent with other smaller anthropoids. Our analyses support a hominid ancestor that was distinct from all extant and fossil hominids in overall facial shape and shared many features with Pierolapithecus. This reconstructed ancestral morphotype represents a testable hypothesis that can be re-evaluated as new fossils are discovered.
Authors: Pugh, Kelsey D. (City University of New York, United States); Catalano, Santiago Andrés (Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Científico Tecnológico Tucumán, Unidad Ejecutora Lillo, Argentina; Universidad Nacional de Tucumán, Facultad de Ciencias Naturales e Instituto Miguel Lillo, Argentina); Pérez de los Ríos, Miriam (Universidad Complutense de Madrid, Facultad de Biología, Spain); Fortuny, Josep (Institut Català de Paleontologia Miquel Crusafont, Spain); Shearer, Brian M. (New York Consortium in Evolutionary Primatology; New York University Grossman School of Medicine, United States); Vecino Gazabón, Alessandra (American Museum of Natural History; New York Consortium in Evolutionary Primatology, United States); Hammond, Ashley S. (American Museum of Natural History; New York Consortium in Evolutionary Primatology, United States); Moyà-Solà, Salvador (Institut Català de Paleontologia Miquel Crusafont; Institució Catalana de Recerca i Estudis Avançats; Universitat Autònoma de Barcelona, Spain); Alba, David M. (Institut Català de Paleontologia Miquel Crusafont, Spain); Almécija, Sergio (American Museum of Natural History; New York Consortium in Evolutionary Primatology, United States; Institut Català de Paleontologia Miquel Crusafont, Spain)
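The 2D/3D morphometric analyses mentioned above rest on superimposing landmark configurations before shapes can be compared. A generic Procrustes/Kabsch superimposition step (a standard technique, not the authors' specific pipeline) looks like this:

```python
import numpy as np

def kabsch_align(src, dst):
    """Superimpose two landmark configurations (rows are points).

    Removes translation (centering) and scale (Frobenius norm), then finds
    the least-squares rotation via SVD; returns the two aligned shapes.
    """
    a = src - src.mean(axis=0)
    b = dst - dst.mean(axis=0)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    u, _, vt = np.linalg.svd(a.T @ b)
    d = np.sign(np.linalg.det(u @ vt))  # flip to avoid an improper reflection
    rot = u @ np.diag([1.0] * (a.shape[1] - 1) + [d]) @ vt
    return a @ rot, b
```

After superimposition, the remaining per-landmark residuals are the shape differences that morphometric analyses quantify and compare across taxa.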