
    Biological landmarks vs quasi-landmarks for 3D face recognition and gender classification

    Face recognition and gender classification are vital topics in the fields of computer graphics and pattern recognition. We draw on two growing ideas in computer vision, biological landmarks and quasi-landmarks (dense mesh), to propose a novel approach and compare their performance in face recognition and gender classification. The experiments are conducted on the FRGCv2 dataset, achieving face recognition accuracies of 98% with quasi-landmarks and 94% with biological landmarks. The gender classification accuracies are 92% for quasi-landmarks and 90% for biological landmarks.
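The core matching step behind landmark-based recognition can be illustrated with a simple nearest-neighbour comparison of registered 3D landmark sets. This is a minimal sketch of that idea, not the authors' pipeline; the toy gallery, identities, and landmark values are invented for illustration.

```python
import math

def landmark_distance(a, b):
    """Euclidean distance between two registered landmark sets,
    each a list of (x, y, z) tuples in corresponding order."""
    return math.sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                         for (ax, ay, az), (bx, by, bz) in zip(a, b)))

def recognise(probe, gallery):
    """Return the gallery identity whose landmarks are nearest to the probe."""
    return min(gallery, key=lambda name: landmark_distance(probe, gallery[name]))

# Toy gallery: two identities, three 3D landmarks each (hypothetical values).
gallery = {
    "alice": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, 1.0, 0.2)],
    "bob":   [(0.0, 0.0, 0.5), (1.2, 0.1, 0.4), (0.6, 1.1, 0.9)],
}
probe = [(0.05, 0.0, 0.0), (1.0, 0.05, 0.0), (0.5, 1.0, 0.25)]
print(recognise(probe, gallery))  # prints "alice"
```

A dense quasi-landmark mesh would use the same matcher with thousands of corresponding points instead of a handful of anatomical ones, which is one plausible reason for its accuracy advantage.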

    3D Face Reconstruction And Emotion Analytics With Part-Based Morphable Models

    3D face reconstruction and facial expression analytics using 3D facial data are active research topics in computer graphics and computer vision. In this proposal, we first review the background for emotion analytics using 3D morphable face models, including geometry feature-based methods, statistical model-based methods, and more advanced deep learning-based methods. Then, we introduce a novel 3D face modeling and reconstruction solution that robustly and accurately acquires 3D face models from a couple of images captured by a single smartphone camera. Two selfie photos of a subject, taken from the front and side, are used to guide our Non-Negative Matrix Factorization (NMF) induced part-based face model to iteratively reconstruct an initial 3D face of the subject. An iterative detail-updating method is then applied to the initial 3D face to reconstruct facial details by optimizing lighting parameters and local depths. Our iterative 3D face reconstruction method permits fully automatic registration of a part-based face representation to the acquired face data and uses the detailed 2D/3D features to build a high-quality 3D face model. The NMF part-based face representation, learned from a 3D face database, facilitates effective global fitting and adaptive local detail fitting in alternation. Our system is flexible and allows users to conduct the capture in any uncontrolled environment. We demonstrate the capability of our method by allowing users to capture and reconstruct their 3D faces by themselves. Based on the reconstructed 3D face model, we can analyze the facial expression and the related emotion in 3D space. We present a novel approach to analyzing facial expressions from images and a quantitative information visualization scheme for exploring this type of visual data.
    From the reconstructed result using the NMF part-based morphable 3D face model, basis parameters and a displacement map are extracted as features for facial emotion analysis and visualization. Based upon these features, two Support Vector Regressions (SVRs) are trained to determine the fuzzy Valence-Arousal (VA) values that quantify the emotions. The continuously changing emotion status can be intuitively analyzed by visualizing the VA values in VA-space. Our emotion analysis and visualization system, based on the 3D NMF morphable face model, detects expressions robustly across various head poses, face sizes, and lighting conditions, and is fully automatic in computing the VA values from images or a video sequence with various facial expressions. To evaluate our novel method, we test our system on publicly available databases and evaluate the emotion analysis and visualization results. We also apply our method to quantifying emotion changes during motivational interviews. These experiments and applications demonstrate the effectiveness and accuracy of our method. To improve the expression recognition accuracy further, we present a facial expression recognition approach with a 3D Mesh Convolutional Neural Network (3DMCNN) and a visual analytics guided 3DMCNN design and optimization scheme. The geometric properties of the surface are computed using the 3D face model of a subject with facial expressions. Instead of using a regular Convolutional Neural Network (CNN) to learn intensities of the facial images, we convolve the geometric properties on the surface of the 3D model using the 3DMCNN. We design a geodesic distance-based convolution method to overcome the difficulties arising from the irregular sampling of the face surface mesh. We further present an interactive visual analytics tool for designing and modifying the networks, analyzing the learned features, and clustering similar nodes in the 3DMCNN.
    By removing low-activity nodes in the network, the performance of the network is greatly improved. We compare our method with the regular CNN-based method by interactively visualizing each layer of the networks, and we analyze the effectiveness of our method by studying representative cases. Testing on public datasets, our method achieves a higher recognition accuracy than traditional image-based CNNs and other 3D CNNs. The presented framework, including the 3DMCNN and the interactive visual analytics of the CNN, can be extended to other applications.
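The geodesic distance-based convolution described above needs, for every vertex, the set of mesh neighbours within some geodesic radius. A common way to approximate geodesic distances is to run Dijkstra's algorithm over the mesh edge graph with 3D edge lengths as weights. The sketch below illustrates that approximation under stated assumptions (a tiny hand-built mesh); it is not the authors' implementation.

```python
import heapq

def geodesic_distances(vertices, edges, source):
    """Approximate geodesic distances from `source` to all reachable
    vertices by running Dijkstra over the mesh edge graph.
    vertices: list of (x, y, z); edges: list of (i, j) index pairs."""
    adj = {i: [] for i in range(len(vertices))}
    for i, j in edges:
        # Edge weight = 3D Euclidean length of the mesh edge.
        d = sum((a - b) ** 2 for a, b in zip(vertices[i], vertices[j])) ** 0.5
        adj[i].append((j, d))
        adj[j].append((i, d))
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy mesh: a unit square split into two triangles by the diagonal (0, 2).
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(geodesic_distances(verts, edges, 0))
```

Thresholding these distances then yields the irregular "receptive field" over which geometric properties are convolved, sidestepping the lack of a regular pixel grid on the mesh.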

    Principles and methods for face recognition and face modelling

    This chapter focuses on the principles behind methods currently used for face recognition, which has a wide variety of uses, from biometrics and surveillance to forensics. After a brief description of how faces can be detected in images, we describe 2D feature extraction methods that operate on all the image pixels in the detected face region: Eigenfaces and Fisherfaces, first proposed in the early 1990s. Although Eigenfaces can be made to work reasonably well for faces captured in controlled conditions, such as frontal faces under the same illumination, recognition rates are otherwise poor. We discuss how greater accuracy can be achieved by extracting features from the boundaries of the faces using Active Shape Models and from the skin textures using Active Appearance Models, originally proposed by Cootes and Taylor. The remainder of the chapter on face recognition is dedicated to such shape models, their implementation and use, and their extension to 3D. We show that if multiple cameras are used, the 3D geometry of the captured faces can be recovered without the use of range scanning or structured light. 3D face models make recognition systems better at dealing with pose and lighting variation.
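The Eigenfaces method mentioned above is, at its core, principal component analysis of vectorised face images. A minimal sketch of that computation via SVD, on random stand-in data rather than real faces, might look like this:

```python
import numpy as np

def eigenfaces(images, k):
    """Compute the top-k eigenfaces of a stack of vectorised face images.
    images: (n_samples, n_pixels) array. Returns (mean, components),
    where components has shape (k, n_pixels)."""
    mean = images.mean(axis=0)
    centred = images - mean
    # Rows of Vt are the principal directions (the "eigenfaces").
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:k]

def project(image, mean, components):
    """Project a vectorised face into the eigenface subspace; recognition
    then compares these low-dimensional coefficient vectors."""
    return components @ (image - mean)

rng = np.random.default_rng(0)
faces = rng.normal(size=(10, 64))      # 10 toy "faces" of 64 pixels each
mean, comps = eigenfaces(faces, k=3)
coeffs = project(faces[0], mean, comps)
print(coeffs.shape)  # (3,)
```

Fisherfaces replace the unsupervised PCA projection with a discriminant projection that uses class (identity) labels, which is what buys the extra robustness to illumination noted in the chapter.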

    Gender recognition from facial images: Two or three dimensions?

    © 2016 Optical Society of America. This paper compares encoded features from both two-dimensional (2D) and three-dimensional (3D) face images in order to achieve automatic gender recognition with high accuracy and robustness. The Fisher vector encoding method is employed to produce 2D, 3D, and fused features with escalated discriminative power. For 3D face analysis, a two-source photometric stereo (PS) method is introduced that enables 3D surface reconstructions with accurate details as well as desirable efficiency. Moreover, a 2D + 3D imaging device, taking the two-source PS method as its core, has been developed that can simultaneously gather color images for 2D evaluations and PS images for 3D analysis. This system inherits the superior reconstruction accuracy of the standard (three or more light) PS method but simplifies both the reconstruction algorithm and the hardware design by requiring only two light sources. It also offers great potential for facilitating human-computer interaction by being accurate, cheap, efficient, and nonintrusive. Ten types of low-level 2D and 3D features have been experimented with and encoded for Fisher vector gender recognition. Evaluations of the Fisher vector encoding method have been performed on the FERET, Color FERET, LFW, and FRGCv2 databases, yielding 97.7%, 98.0%, 92.5%, and 96.7% accuracy, respectively. In addition, the comparison of 2D and 3D features has been drawn from a self-collected dataset, constructed with the aid of the 2D + 3D imaging device in a series of data capture experiments. A variety of experiments and evaluations show that the Fisher vector encoding method outperforms most state-of-the-art gender recognition methods. It has also been observed that 3D features reconstructed by the two-source PS method further boost the Fisher vector gender recognition performance, i.e., by up to a 6% increase on the self-collected database.
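Fisher vector encoding aggregates local descriptors into one fixed-length vector of gradients with respect to a Gaussian mixture model. The sketch below shows only the first-order (mean-deviation) part for a diagonal-covariance GMM whose parameters are assumed given; the toy data and GMM values are invented, and a full implementation would also include the weight and variance gradients plus power and L2 normalisation.

```python
import numpy as np

def fisher_vector_means(X, weights, means, sigmas):
    """Mean-deviation part of a Fisher vector under a diagonal-covariance
    GMM. X: (N, D) descriptors; weights: (K,); means, sigmas: (K, D).
    Returns a flat vector of length K * D."""
    N, D = X.shape
    diff = X[:, None, :] - means[None, :, :]                 # (N, K, D)
    # Log-density of each descriptor under each diagonal Gaussian.
    log_p = (-0.5 * np.sum((diff / sigmas) ** 2
                           + np.log(2 * np.pi * sigmas ** 2), axis=2)
             + np.log(weights))                              # (N, K)
    # Soft-assignment posteriors gamma_t(k), computed stably.
    log_p -= log_p.max(axis=1, keepdims=True)
    gamma = np.exp(log_p)
    gamma /= gamma.sum(axis=1, keepdims=True)
    # Accumulate normalised first-order statistics per component.
    fv = (gamma[:, :, None] * diff / sigmas).sum(axis=0)     # (K, D)
    fv /= N * np.sqrt(weights)[:, None]
    return fv.ravel()

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))                # 50 toy 2D descriptors
w = np.array([0.5, 0.5])                    # hypothetical GMM weights
mu = np.array([[0.0, 0.0], [1.0, 1.0]])     # hypothetical GMM means
sg = np.ones((2, 2))                        # hypothetical GMM std devs
print(fisher_vector_means(X, w, mu, sg).shape)  # (4,)
```

The fixed K * D length is what allows 2D and 3D feature channels to be encoded separately and then fused by simple concatenation, as the paper does.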

    Feature extraction based face recognition, gender and age classification

    A face recognition system for personal identification normally attains good accuracy with large training sets. In this paper, we propose a Feature Extraction based Face Recognition, Gender and Age Classification (FEBFRGAC) algorithm that uses only small training sets and yields good results even with one image per person. The process involves three stages: pre-processing, feature extraction, and classification. The geometric features of facial images, such as the eyes, nose, and mouth, are located using the Canny edge operator, and face recognition is performed. Based on texture and shape information, gender and age classification are carried out using posteriori class probability and an artificial neural network, respectively. The observed face recognition accuracy is 100%, while the gender and age classification accuracies are around 98% and 94%, respectively.
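Once the eyes, nose, and mouth have been located, geometric methods of this kind typically build a small feature vector from distances between the located points, normalised so that image scale does not matter. This is a hedged illustration of that step with hypothetical landmark coordinates, not the FEBFRGAC algorithm itself:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def geometric_features(lm):
    """Build a scale-invariant feature vector from 2D landmark positions.
    lm: dict with keys 'left_eye', 'right_eye', 'nose', 'mouth'.
    All distances are divided by the inter-ocular distance, so the
    features are unchanged if the face image is uniformly rescaled."""
    iod = dist(lm["left_eye"], lm["right_eye"])   # inter-ocular distance
    return [
        dist(lm["nose"], lm["mouth"]) / iod,
        dist(lm["left_eye"], lm["nose"]) / iod,
        dist(lm["right_eye"], lm["nose"]) / iod,
    ]

# Hypothetical pixel coordinates for a detected face.
lm = {"left_eye": (30, 40), "right_eye": (70, 40),
      "nose": (50, 60), "mouth": (50, 80)}
print(geometric_features(lm))
```

Such a vector can then feed a probabilistic classifier or a small neural network, as the abstract describes for gender and age respectively.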

    3D face morphology classification for medical applications

    Classification of facial morphology traits is an important problem for many medical applications, especially with regard to determining associations between facial morphological traits or facial abnormalities and genetic variants. A modern approach to the classification of facial characteristics (traits) is to use three-dimensional facial images. In clinical practice, classification is usually performed manually, which makes the process very tedious, time-consuming, and prone to operator error. Moreover, simple landmark-to-landmark facial measurements may not accurately represent the underlying complex three-dimensional facial shape. This thesis presents the first automatic approach for classification and categorisation of facial morphological traits, with application to lip and nose traits. It also introduces new 3D geodesic curvature features obtained along the geodesic paths between 3D facial anthropometric landmarks. These geometric features were used for lip and nose trait classification and categorisation. Finally, the influence of the discovered categories on facial physical appearance is analysed using a new visualisation method, in order to gain insight into the suitability of the categories for describing the underlying facial traits. The proposed approach was tested on the ALSPAC (Avon Longitudinal Study of Parents and Children) dataset, consisting of 4747 3D full-face meshes. The classification accuracy obtained using expert manual categories was not very high, in the region of 72%-79%, indicating that the manual categories may be unreliable. In an attempt to improve these accuracies, an automatic categorisation method was applied. In general, the classification accuracies based on the automatic lip categories were higher than those obtained using the manual categories by at least 8%, and the automatic categories were found to be statistically more significant in the lip area than the manual categories.
    The same approach was used to categorise the nose traits, the results indicating that the proposed categorisation approach is capable of categorising any facial morphological trait without ground truth about its trait categories. Additionally, to test the robustness of the proposed features, they were applied to the popular problem of gender classification and analysis. The results demonstrated superior classification accuracy to that of comparable methods. Finally, a discovery phase of a genome-wide association study (GWAS) was carried out for 11 automatic lip and nose trait categories. As a result, statistically significant associations were found between four traits and six single nucleotide polymorphisms (SNPs). This is a very good result considering that, for the 27 manual lip trait categories provided by medical experts, associations were found between only two traits and two SNPs. This result testifies that the method proposed in this thesis for automatic categorisation of 3D facial morphology has considerable potential for application to GWAS.
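The trait-SNP associations in a GWAS discovery phase are often screened with a contingency-table test between trait categories and genotype counts. As a minimal illustration of that statistical step (not the thesis's analysis pipeline, and with an invented toy table), a Pearson chi-square statistic can be computed as follows:

```python
def chi_square(table):
    """Pearson chi-square statistic for an R x C contingency table,
    e.g. rows = trait categories, columns = SNP genotype counts
    (AA / Aa / aa). Larger values indicate stronger association."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    stat = 0.0
    for i, r in enumerate(table):
        for j, obs in enumerate(r):
            exp = row[i] * col[j] / total   # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# Toy 2 x 3 table: two lip-trait categories against three genotypes.
table = [[30, 40, 30],
         [10, 40, 50]]
print(round(chi_square(table), 3))  # prints 15.0
```

In practice the statistic is converted to a p-value against a chi-square distribution with (R-1)(C-1) degrees of freedom, and the genome-wide significance threshold is adjusted for the millions of SNPs tested.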