530 research outputs found

    Modeling of Facial Aging and Kinship: A Survey

    Computational facial models that capture properties of facial cues related to aging and kinship increasingly attract the attention of the research community, enabling the development of reliable methods for age progression, age estimation, age-invariant facial characterization, and kinship verification from visual data. In this paper, we review recent advances in the modeling of facial aging and kinship. In particular, we provide an up-to-date, complete list of available annotated datasets and an in-depth analysis of the geometric, hand-crafted, and learned facial representations used for facial aging and kinship characterization. Moreover, we review evaluation protocols and metrics and analyze notable experimental results for each surveyed task. This survey allows us to identify challenges and discuss future research directions for the development of robust facial models in real-world conditions.

    Geometric Expression Invariant 3D Face Recognition using Statistical Discriminant Models

    Currently there is no complete face recognition system that is invariant to all facial expressions. Although humans find it easy to identify and recognise faces regardless of changes in illumination, pose and expression, producing a computer system with a similar capability has proved to be particularly difficult. Three-dimensional face models are geometric in nature and therefore have the advantage of being invariant to head pose and lighting. However, they are still susceptible to facial expressions. This can be seen in the decrease in recognition results when principal component analysis is used and expressions are added to a data set. In order to achieve expression-invariant face recognition, we have employed a tensor algebra framework to represent 3D face data with facial expressions in a parsimonious space. Face variation factors are organised into particular subject and facial expression modes. We manipulate this representation using singular value decomposition on sub-tensors representing one variation mode. This framework possesses the ability to deal with the shortcomings of PCA in less constrained environments and still preserves the integrity of the 3D data. The results show improved recognition rates for faces and facial expressions, even recognising high-intensity expressions that are not in the training datasets. We have determined, experimentally, a set of anatomical landmarks that most effectively describe facial expression. We found that the best placement of landmarks for distinguishing different facial expressions is in areas around the prominent features, such as the cheeks and eyebrows. Recognition results using landmark-based face recognition could be improved with better placement. We looked into the possibility of achieving expression-invariant face recognition by reconstructing and manipulating realistic facial expressions.
We proposed a tensor-based statistical discriminant analysis method to reconstruct facial expressions and, in particular, to neutralise facial expressions. The synthesised facial expressions are visually more realistic than facial expressions generated using conventional active shape modelling (ASM). We then used the reconstructed neutral faces in the sub-tensor framework for recognition purposes. The recognition results showed a slight improvement. Besides biometric recognition, this novel tensor-based synthesis approach could be used in computer games and real-time animation applications.
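The sub-tensor manipulation described above can be sketched with NumPy: a minimal Tucker/HOSVD-style decomposition that factors each mode of a face-data tensor with an SVD of its unfolding. The tensor here is toy random data standing in for real 3D face scans; dimensions and contents are illustrative, not the thesis's.

```python
import numpy as np

# Toy data tensor: (subjects, expressions, features)
rng = np.random.default_rng(0)
T = rng.standard_normal((5, 4, 30))

def unfold(tensor, mode):
    """Mode-n unfolding: bring `mode` to the front, flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# An SVD of each mode unfolding yields the factor matrices of a
# Tucker/HOSVD decomposition; the subject-mode and expression-mode
# factors play the roles of the identity and expression subspaces.
factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0]
           for m in range(T.ndim)]

# Core tensor: contract each mode of T with the transpose of its
# factor (tensordot moves the contracted mode to the end, so after
# ndim contractions the original mode order is restored).
core = T
for U in factors:
    core = np.tensordot(core, U.T, axes=([0], [1]))

# Multiplying the core back by the factors reconstructs T exactly,
# since no rank truncation was applied.
T_rec = core
for U in factors:
    T_rec = np.tensordot(T_rec, U, axes=([0], [1]))
```

Truncating the columns of a factor matrix (e.g., the expression-mode factor) before forming the core is the analogue of projecting out a variation mode while preserving the rest of the 3D data.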

    Recovering joint and individual components in facial data

    A set of images depicting faces with different expressions or at various ages consists of components that are shared across all images (i.e., joint components), which impart to the depicted object the properties of human faces, and individual components that are related to different expressions or age groups. Discovering the common (joint) and individual components in facial images is crucial for applications such as facial expression transfer. The problem is rather challenging when dealing with images captured in unconstrained conditions, which are possibly contaminated by sparse non-Gaussian errors of large magnitude (i.e., sparse gross errors) and contain missing data. In this paper, we investigate the use of a method recently introduced in statistics, the so-called Joint and Individual Variance Explained (JIVE) method, for the robust recovery of joint and individual components in visual facial data consisting of an arbitrary number of views. Since JIVE is not robust to sparse gross errors, we propose alternatives that are 1) robust to sparse gross, non-Gaussian noise, 2) able to automatically find the rank of the individual components, and 3) able to handle missing data. We demonstrate the effectiveness of the proposed methods in several computer vision applications, namely facial expression synthesis and 2D and 3D face age progression in-the-wild.
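A one-pass sketch of the joint/individual split behind JIVE, assuming synthetic views and a plain truncated SVD in place of the robust, rank-selecting variants the paper proposes (the published algorithm also alternates these steps to convergence and enforces orthogonality between the parts):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r_joint, r_ind = 50, 2, 2
S = rng.standard_normal((n, r_joint))            # shared (joint) scores

# Two synthetic "views" (e.g., two expression sets over the same
# subjects) built from a joint part, a view-specific individual part,
# and small noise.
views = []
for d in (20, 30):
    W_joint = rng.standard_normal((r_joint, d))
    A_scores = rng.standard_normal((n, r_ind))
    W_ind = rng.standard_normal((r_ind, d))
    views.append(S @ W_joint + A_scores @ W_ind
                 + 0.01 * rng.standard_normal((n, d)))

def low_rank(X, r):
    """Best rank-r approximation via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# Joint structure from the concatenated views, individual structure
# from each view's residual.
X = np.hstack(views)
J = low_rank(X, r_joint)
individuals, col = [], 0
for V in views:
    J_v = J[:, col:col + V.shape[1]]          # joint block for view V
    individuals.append(low_rank(V - J_v, r_ind))
    col += V.shape[1]
```

Replacing the SVD here with a robust low-rank solver (and masking missing entries) is, in spirit, what the proposed alternatives add on top of JIVE.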

    DICTIONARIES AND MANIFOLDS FOR FACE RECOGNITION ACROSS ILLUMINATION, AGING AND QUANTIZATION

    Over the past decades, many face recognition algorithms have been proposed. The face recognition problem under controlled environments has been well studied and is almost solved. However, in unconstrained environments, the performance of face recognition methods can still be significantly affected by factors such as illumination, pose, resolution, occlusion, and aging. In this thesis, we look into the problem of face recognition across these variations and across quantization. We present a face recognition algorithm based on simultaneous sparse approximations under varying illumination and pose, with dictionaries learned for each class. A novel test image is projected onto the span of the atoms in each learned dictionary. The resulting residual vectors are then used for classification. An image relighting technique based on pose-robust albedo estimation is used to generate multiple frontal images of the same person with variable lighting. As a result, the proposed algorithm can recognize human faces with high accuracy even when only a single image, or very few images, per person are provided for training. The efficiency of the proposed method is demonstrated using publicly available databases, and it is shown that the method can perform significantly better than many competitive face recognition algorithms. The problem of recognizing facial images across aging remains open. We look into this problem by studying the growth in facial shapes. Building on recent advances in landmark extraction and statistical techniques for landmark-based shape analysis, we show that using well-defined shape spaces and their associated geometry, one can obtain significant performance improvements in face verification. Toward this end, we propose to model facial shapes as points on a Grassmann manifold. The face verification problem is then formulated as a classification problem on this manifold.
We then propose a relative craniofacial growth model, based on the science of craniofacial anthropometry, and integrate it with the Grassmann manifold and an SVM classifier. Experiments show that the proposed method is able to mitigate the variations caused by the aging process and thus effectively improve the performance of open-set face verification across aging. In applications such as document understanding, only binary face images may be available as inputs to a face recognition algorithm. We investigate the effects of quantization on several classical face recognition algorithms. We study the performance of PCA and multiple exemplar discriminant analysis (MEDA) with quantized images and with binary images modified by distance and Box-Cox transforms. We propose a dictionary-based method for reconstructing grey-scale facial images from quantized facial images. Two dictionaries with low mutual coherence are learned for the grey-scale and quantized training images, respectively, using a modified K-SVD method. A linear transform between the sparse vectors of quantized images and the sparse vectors of grey-scale images is estimated from the training data. In the testing stage, a grey-scale image is reconstructed from the quantized image using the transform matrix and the normalized dictionaries. The identities of the reconstructed grey-scale images are then determined using the dictionary-based face recognition (DFR) algorithm. Experimental results show that the reconstructed images are similar to the original grey-scale images and that face recognition performance on quantized images is comparable to performance on grey-scale images. Online social networks and social media are growing rapidly, and it is interesting to study their impact on computer vision algorithms. We address the problem of automated face recognition on a social network using a loopy belief propagation framework.
The proposed approach propagates the identities of faces in photos across social graphs. We characterize its performance in terms of structural properties of the given social network. We propose a distance metric, defined using face recognition results, for detecting hidden connections. The performance of the proposed method is analyzed with respect to graph structure, scalability, node degree, labeling-error correction, and hidden-connection discovery. The results demonstrate that the constraints imposed by the social network have the potential to improve the performance of face recognition methods. They also show that it is possible to discover hidden connections in a social network based on face recognition.
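The residual-based classification step described at the start of this abstract can be sketched as follows. The random "dictionaries" here are hypothetical stand-ins for the learned, relighting-augmented class dictionaries, and a least-squares projection replaces the sparse coding step:

```python
import numpy as np

rng = np.random.default_rng(2)
dim, atoms_per_class, n_classes = 64, 5, 3

# Hypothetical per-class dictionaries (columns = atoms); in the thesis
# these would come from dictionary learning on relit training images.
dicts = [rng.standard_normal((dim, atoms_per_class))
         for _ in range(n_classes)]

def classify(y, dicts):
    """Project y onto the span of each class dictionary's atoms;
    the class with the smallest residual wins."""
    residuals = []
    for D in dicts:
        coef, *_ = np.linalg.lstsq(D, y, rcond=None)
        residuals.append(np.linalg.norm(y - D @ coef))
    return int(np.argmin(residuals)), residuals

# A probe lying (almost) in class 1's span should be labeled 1.
y = dicts[1] @ rng.standard_normal(atoms_per_class) \
    + 0.01 * rng.standard_normal(dim)
label, residuals = classify(y, dicts)
print(label)  # 1
```

The intuition: a face of class c is well approximated by class c's atoms, so its residual against that dictionary is small, while residuals against other classes' dictionaries stay large.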

    Facial Analysis: Looking at Biometric Recognition and Genome-Wide Association


    Cross-Age Face Verification Using Generative Adversarial Networks (GAN) with Landmark Feature

    Cross-age face verification is a complex problem in biometric recognition owing to aging, naturally changing face structure, and face landmark configurations that shift over time. In this paper, a new cross-age face verification method is proposed that combines a Generative Adversarial Network (GAN) with a mix of landmark-based features. Realistic aging of a face with identity-specific landmarks, such as the eyes, nose, and mouth, is generated for effective face recognition across a range of age groups. Performance testing with an in-house collected dataset of 200 face images demonstrated effectiveness in handling face configuration changes and face shape transformations, such as a fuller face thinning and a thin face becoming fuller. Comparison with direct face verification showed increased similarity values (e.g., from 32.57% to 63.80%), reduced feature distances (e.g., from 0.6743 to 0.3620), and improved accuracy for the ArcFace, VGG-Face, and Facenet architectures: ArcFace's accuracy improved from 82.64% to 86.02%, VGG-Face's from 76.23% to 80.57%, and Facenet's from 67.54% to 74.48%. These observations validate the effectiveness of the proposed method in overcoming age-related complications and improving cross-age face verification performance. In future work, we plan to investigate a larger dataset and model refinement to further improve performance and real-life biometric suitability.
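Verification with embedding models such as ArcFace, VGG-Face, and Facenet typically reduces to thresholded cosine similarity between feature vectors. A minimal sketch with toy embeddings; the vectors and the threshold value are illustrative, not actual model outputs:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(emb_a, emb_b, threshold=0.5):
    """Declare 'same person' when similarity clears the threshold.
    The threshold is hypothetical; in practice it is tuned per model
    on a validation set."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy embeddings standing in for model outputs.
young = np.array([0.9, 0.1, 0.4])
aged  = np.array([0.8, 0.2, 0.5])     # same identity, aged appearance
other = np.array([-0.3, 0.9, -0.1])   # different identity

print(verify(young, aged))   # True
print(verify(young, other))  # False
```

Age progression shrinks the similarity between genuine pairs, which is why the reported gains in similarity and reductions in feature distance translate directly into verification accuracy.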

    Characterization and Classification of Faces across Age Progression

    Facial aging, a new dimension that has recently been added to the problem of face recognition, poses interesting theoretical and practical challenges to the research community. How do humans perceive age? What constitutes an age-invariant signature for faces? How do we model facial growth across different ages? How do facial aging effects impact recognition performance? This thesis provides a thorough overview of the problem of facial aging and addresses the aforementioned questions. We propose a craniofacial growth model that characterizes growth-related shape variations observed in human faces during the formative years (0-18 years). The craniofacial growth model draws inspiration from the `revised' cardioidal strain transformation model proposed in psychophysics and, further, incorporates age-based anthropometric evidence collected on facial growth during the formative years. Identifying a set of fiducial features on faces, we characterize facial growth by means of growth parameters estimated on the fiducial features. We illustrate how the growth-related transformations observed on facial proportions can be studied by means of linear and non-linear equations in the facial growth parameters, which subsequently help in computing the growth parameters. The proposed growth model implicitly accounts for factors such as gender, ethnicity, and the individual's age group. Predicting a person's appearance across ages and performing face verification across ages are some of the intended applications of the model. Next, we propose a two-fold approach to modeling facial aging in adults. First, we develop a shape transformation model, formulated as a physically-based parametric muscle model, that captures the subtle deformations facial features undergo with age. The model implicitly accounts for the physical properties and geometric orientations of the individual facial muscles.
Next, we develop an image-gradient-based texture transformation function that characterizes facial wrinkles and other skin artifacts often observed at different ages. Facial growth statistics (both in terms of shape and texture) play a crucial role in developing the aforementioned transformation models. From a database that comprises pairs of age-separated face images of many individuals, we extract age-based facial measurements across key fiducial features and, further, study textural variations across ages. We present experimental results that illustrate the applications of the proposed facial aging model in tasks such as face verification and facial appearance prediction across aging. How sensitive are face verification systems to facial aging effects? How does age progression affect the similarity between a pair of face images of an individual? We develop a Bayesian age-difference classifier that classifies face images of individuals based on age differences and performs face verification across age progression. Further, we study the similarity of faces across age progression. Since age-separated face images invariably differ in illumination and pose, we propose pre-processing methods for minimizing such variations. Experimental results are presented using a database comprising pairs of face images retrieved from the passports of 465 individuals. The verification system attains an equal error rate of 8.5% for faces separated by as many as 9 years.
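The cardioidal strain idea referenced above can be illustrated in a few lines. The form R' = R(1 + k(1 - cos θ)), with θ measured from the upward vertical, is the common textbook version from psychophysics; the parameter value and landmarks here are toy assumptions, not the thesis's fitted, anthropometry-calibrated model:

```python
import numpy as np

def cardioidal_strain(points, k, origin=(0.0, 0.0)):
    """Apply the cardioidal strain growth transform
    R' = R * (1 + k * (1 - cos(theta))) in polar coordinates about
    `origin`, with theta measured from the upward vertical axis."""
    p = np.asarray(points, float) - origin
    r = np.hypot(p[:, 0], p[:, 1])
    theta = np.arctan2(p[:, 0], p[:, 1])   # angle from +y (vertical)
    r_new = r * (1.0 + k * (1.0 - np.cos(theta)))
    return np.column_stack((r_new * np.sin(theta),
                            r_new * np.cos(theta))) + origin

# The crown (theta = 0) stays fixed; lower-face landmarks are pushed
# outward and downward, mimicking mandible growth during childhood.
landmarks = np.array([[0.0, 1.0],    # crown
                      [0.0, -1.0],   # chin
                      [0.8, 0.0]])   # cheek
grown = cardioidal_strain(landmarks, k=0.2)
```

With k = 0.2 the chin moves from (0, -1) to (0, -1.4) while the crown is unchanged, which is the qualitative growth pattern the model is meant to capture.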

    Facial phenotypes in subgroups of prepubertal boys with autism spectrum disorders are correlated with clinical phenotypes

    Background: The brain develops in concert and in coordination with the developing facial tissues, with each influencing the development of the other and sharing genetic signaling pathways. Autism spectrum disorders (ASDs) result from alterations in the embryological brain, suggesting that the development of the faces of children with ASD may result in subtle facial differences compared to typically developing children. In this study, we tested two hypotheses. First, we asked whether children with ASD display a subtle but distinct facial phenotype compared to typically developing children. Second, we sought to determine whether there are subgroups of facial phenotypes within the population of children with ASD that denote biologically discrete subgroups. Methods: The 3dMD cranial System was used to acquire three-dimensional stereophotogrammetric images for our study sample of 8- to 12-year-old boys diagnosed with essential ASD (n = 65) and typically developing boys (n = 41), following approved Institutional Review Board protocols. Three-dimensional coordinates were recorded for 17 facial anthropometric landmarks using the 3dMD Patient software. Statistical comparisons of facial phenotypes were completed using Euclidean Distance Matrix Analysis and Principal Coordinates Analysis. Data representing clinical and behavioral traits were statistically compared among groups by using χ² tests, Fisher's exact tests, Kolmogorov-Smirnov tests and Student's t-tests where appropriate. Results: First, we found that there are significant differences in facial morphology in boys with ASD compared to typically developing boys. Second, we also found two subgroups of boys with ASD with facial morphology that differed from the majority of the boys with ASD and from the typically developing boys. Furthermore, membership in each of these distinct subgroups was correlated with particular clinical and behavioral traits. Conclusions: Boys with ASD display a facial phenotype distinct from that of typically developing boys, which may reflect alterations in the prenatal development of the brain. Subgroups of boys with ASD defined by distinct facial morphologies correlated with clinical and behavioral traits, suggesting potentially different etiologies and genetic differences compared to the larger group of boys with ASD. Further investigations into genes involved in neurodevelopment and craniofacial development in these subgroups will help to elucidate the causes and significance of these subtle facial differences.
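The Euclidean Distance Matrix Analysis (EDMA) comparison named in the Methods can be sketched as follows. The landmark configurations are toy 2D data, and the bootstrap significance testing of the published method is omitted:

```python
import numpy as np
from itertools import combinations

def form_matrix(landmarks):
    """All pairwise Euclidean distances between landmarks: EDMA's
    coordinate-free 'form matrix' (here flattened to a vector)."""
    L = np.asarray(landmarks, float)
    return np.array([np.linalg.norm(L[i] - L[j])
                     for i, j in combinations(range(len(L)), 2)])

def form_difference(mean_a, mean_b):
    """EDMA-style form-difference: elementwise ratios of the two
    groups' form matrices. Ratios far from 1 flag the landmark pairs
    where the groups differ most."""
    return form_matrix(mean_a) / form_matrix(mean_b)

# Toy mean configurations: group B has a "wider" mid-face.
group_a = [[0, 1], [-1, 0], [1, 0], [0, -1]]
group_b = [[0, 1], [-1.2, 0], [1.2, 0], [0, -1]]
ratios = form_difference(group_a, group_b)
```

Because form matrices depend only on inter-landmark distances, the comparison is invariant to the position and orientation of each head in the scanner, which is why EDMA suits stereophotogrammetric landmark data.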