11 research outputs found

    GP-GAN: Gender Preserving GAN for Synthesizing Faces from Landmarks

    Full text link
    Facial landmarks constitute the most compressed representation of faces and are known to preserve information such as pose, gender and facial structure. Several works attempt to perform high-level face-related analysis tasks based on landmarks. In contrast, this work tackles the inverse problem of synthesizing faces from their respective landmarks. The primary aim is to demonstrate that information preserved by landmarks (gender in particular) can be further accentuated by leveraging generative models to synthesize the corresponding faces. Though the problem is particularly challenging due to its ill-posed nature, we believe that successful synthesis will enable several applications, such as boosting the performance of high-level face-related tasks that use landmark points and performing dataset augmentation. To this end, a novel face-synthesis method known as the Gender Preserving Generative Adversarial Network (GP-GAN), guided by an adversarial loss, a perceptual loss and a gender-preserving loss, is presented. Further, we propose a novel generator sub-network, UDeNet, for GP-GAN that leverages the advantages of the U-Net and DenseNet architectures. Extensive experiments and comparisons with recent methods verify the effectiveness of the proposed method. Comment: 6 pages, 5 figures; accepted at the 2018 24th International Conference on Pattern Recognition (ICPR 2018).
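
    The paper's exact formulation isn't reproduced here, but a minimal PyTorch sketch of a composite generator objective of this kind (adversarial + perceptual + gender-preserving terms, with hypothetical loss weights and stand-in sub-networks) could look as follows:

    ```python
    # Illustrative sketch only: the loss weights (lambda_p, lambda_g) and the
    # fixed perceptual/gender networks are assumptions, not the authors' code.
    import torch
    import torch.nn.functional as F

    def gpgan_generator_loss(d_fake_logits, feat_fake, feat_real,
                             gender_logits, gender_labels,
                             lambda_p=1.0, lambda_g=0.5):
        """Adversarial + perceptual + gender-preserving generator loss.

        d_fake_logits       : discriminator output on synthesized faces
        feat_fake/feat_real : features of fake/real faces from a fixed
                              perceptual network (e.g. a pretrained VGG)
        gender_logits       : output of a fixed gender classifier on fakes
        """
        # Non-saturating adversarial term: try to fool the discriminator.
        adv = F.binary_cross_entropy_with_logits(
            d_fake_logits, torch.ones_like(d_fake_logits))
        # Perceptual term: match deep features of real and synthesized faces.
        perc = F.l1_loss(feat_fake, feat_real)
        # Gender-preserving term: synthesized face keeps the input's gender.
        gender = F.cross_entropy(gender_logits, gender_labels)
        return adv + lambda_p * perc + lambda_g * gender
    ```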

    Show me your face and I will tell you your height, weight and body mass index

    Get PDF
    Body height, weight, as well as the associated composite body mass index (BMI), are human attributes of pertinence due to their use in a number of applications including surveillance, re-identification, image retrieval systems, and healthcare. Previous work on automated estimation of height, weight and BMI has predominantly focused on 2D and 3D full-body images and videos. Little attention has been given to the use of the face for estimating such traits. Motivated by the above, we explore the possibility of estimating height, weight and BMI from single-shot facial images by proposing a regression method based on the 50-layer ResNet architecture. In addition, we present a novel dataset consisting of 1026 subjects and show results which suggest that facial images contain discriminatory information pertaining to height, weight and BMI comparable to that of body images and videos. Finally, we perform a gender-based analysis of the prediction of height, weight and BMI.
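
    A sketch of the kind of ResNet-50 regressor the abstract describes is shown below: the classification head is swapped for a 3-unit linear layer predicting height, weight and BMI. The loss, target normalization and batch shapes are illustrative assumptions, not the paper's exact setup.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from an ImageNet-pretrained ResNet-50 and replace its head.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, 3)  # [height, weight, BMI]

    criterion = nn.MSELoss()  # plain L2 regression as an illustrative choice

    # One hypothetical training step on a batch of aligned face crops.
    images = torch.randn(8, 3, 224, 224)  # dummy single-shot face images
    targets = torch.randn(8, 3)           # normalized (height, weight, BMI)
    loss = criterion(model(images), targets)
    loss.backward()
    ```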

    From clothing to identity: manual and automatic soft biometrics

    No full text
    Soft biometrics have increasingly attracted research interest and are often considered major cues for identity, especially in the absence of valid traditional biometrics, as in surveillance. In everyday life, several incidents and forensic scenarios highlight the usefulness and capability of identity information that can be deduced from clothing. Semantic clothing attributes have recently been introduced as a new form of soft biometrics. Although clothing traits can be naturally described and compared by humans for operable and successful use, it is desirable to exploit computer vision to enrich clothing descriptions with more objective and discriminative information. This allows automatic extraction, semantic description, and comparison of visually detectable clothing traits in a manner similar to recognition from eyewitness statements. This study proposes a novel set of soft clothing attributes, described using small groups of high-level semantic labels and automatically extracted using computer vision techniques. In this way we can explore the capability of human attributes vis-à-vis those inferred automatically by computer vision. Categorical and comparative soft clothing traits are derived and used for identification/re-identification, either to supplement soft body traits or alone; a toy retrieval sketch follows below. The automatically- and manually-derived soft clothing biometrics are employed in challenging invariant person retrieval. The experimental results highlight promising potential for use in various applications.
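
    As a toy illustration of attribute-based retrieval of this kind: each subject is a vector of categorical clothing labels plus comparative traits, and a query (e.g. built from a verbal description) ranks the gallery by distance. The attribute set, encodings and weights here are hypothetical.

    ```python
    import numpy as np

    # Columns: sleeve length (0-2), leg wear type (0-3), shoe type (0-3),
    # comparative darkness of upper clothing (1-5), neckline height (1-5).
    gallery = np.array([[2, 1, 0, 4, 3],
                        [0, 3, 2, 1, 2],
                        [1, 1, 1, 5, 4]])
    query = np.array([2, 1, 0, 5, 3])  # description of the sought person

    CAT, CMP = slice(0, 3), slice(3, 5)
    # Categorical traits: 0/1 mismatch; comparative traits: absolute gap,
    # down-weighted since human relative judgments are noisier.
    dist = (gallery[:, CAT] != query[CAT]).sum(axis=1) \
         + 0.5 * np.abs(gallery[:, CMP] - query[CMP]).sum(axis=1)
    print(np.argsort(dist))  # best-matching subjects first -> [0 2 1]
    ```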

    Human metrology for person classification and recognition

    Get PDF
    Human metrological features generally refer to geometric measurements extracted from humans, such as height, chest circumference or foot length. Human metrology provides an important soft biometric that can be used in challenging situations, such as person classification and recognition at a distance, where hard biometric traits such as fingerprints and iris information cannot easily be acquired. In this work, we first study the question of predictability and correlation in human metrology. We show that partial or available measurements can be used to predict other missing measurements. We then investigate the use of human metrology for the prediction of other soft biometrics, viz. gender and weight. The experimental results based on our proposed copula-based model suggest that human body metrology contains enough information for reliable prediction of gender and weight. Also, the proposed copula-based technique is observed to reduce the impact of noise on prediction performance. We then study the question of whether face metrology can be exploited for reliable gender prediction. A new method based solely on metrological information from facial landmarks is developed. The performance of the proposed metrology-based method is compared with that of a state-of-the-art appearance-based method for gender classification. Results on several face databases show that the metrology-based approach achieves accuracy comparable to that of the appearance-based method. Furthermore, we study the question of person recognition (classification and identification) via whole-body metrology. Using the CAESAR 1D database as a baseline, we simulate intra-class variation with various noise models. The experimental results indicate that, given a sufficient number of features, our metrology-based recognition system can achieve promising performance comparable to several recent state-of-the-art recognition systems. We propose a non-parametric feature selection methodology, called the adapted k-nearest neighbor estimator, which does not rely on the intra-class distribution of the query set. This leads to improved results over other nearest neighbor estimators (as feature selection criteria) for a moderate number of features. Finally, we quantify the discrimination capability of human metrology from both individuality and capacity perspectives. Generally, a biometric-based recognition technique relies on the assumption that the given biometric is unique to an individual. However, the validity of this assumption is not yet generally confirmed for most soft biometrics, such as human metrology. In this work, we first develop two schemes that can be used to quantify the individuality of a given soft-biometric system. Then, a Poisson channel model is proposed to analyze the recognition capacity of human metrology. Our study suggests that the performance of such a system depends more on the accuracy of the ground truth or training set.
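
    To make the copula idea concrete, here is a generic Gaussian-copula sketch (not the thesis's actual model): map each measurement to normal scores through its empirical CDF, fit the correlation there, and predict a missing measurement from the conditional mean. The synthetic data and the choice of measurements are assumptions.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Synthetic (height, chest circumference, weight) for 500 subjects.
    X = rng.multivariate_normal(
        [175, 95, 70],
        [[49, 20, 30], [20, 36, 25], [30, 25, 64]], size=500)

    # 1. Transform each marginal to standard-normal scores (copula space).
    U = (stats.rankdata(X, axis=0) - 0.5) / len(X)  # empirical CDF values
    Z = stats.norm.ppf(U)

    # 2. Fit the Gaussian copula: the correlation matrix of the scores.
    R = np.corrcoef(Z, rowvar=False)

    # 3. Predict weight (col 2) from height and chest (cols 0, 1) for one
    # subject: conditional mean in copula space, then map back through the
    # empirical quantile function of the observed weights.
    obs, tgt = [0, 1], 2
    z_cond = R[tgt, obs] @ np.linalg.solve(R[np.ix_(obs, obs)], Z[0, obs])
    weight_pred = np.quantile(X[:, tgt], stats.norm.cdf(z_cond))
    print(f"predicted weight: {weight_pred:.1f} (true: {X[0, tgt]:.1f})")
    ```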

    Gender and Ethnicity Classification Using Partial Face in Biometric Applications

    Get PDF
    As the number of biometric applications increases, the use of non-ideal information such as images which are not strictly controlled, images taken covertly, or images where the main subject is partially occluded, also increases. Face images are a specific example of this. In these non-ideal instances, other information, such as gender and ethnicity, can be determined to narrow the search space and/or improve the recognition results. Some research exists for gender classification using partial-face images, but there is little research involving ethnic classification on such images. Few datasets have had the ethnic diversity and sufficient subjects per ethnicity needed to perform this evaluation. Research is also lacking on how gender and ethnicity classifications on partial faces are impacted by age. If the extracted gender and ethnicity information is to be integrated into a larger system, some measure of the reliability of the extracted information is needed. This study provides an analysis of gender and ethnicity classification on large datasets captured by non-researchers under day-to-day operations, using texture, color, and shape features extracted from partial-face regions. This analysis allows for a greater understanding of the limitations of various facial regions for gender and ethnicity classification. These limitations guide the integration of automatically extracted partial-face gender and ethnicity information with a biometric face application in order to improve recognition under non-ideal circumstances. Overall, the results from this work showed that reliable gender and ethnic classification can be achieved from partial-face images. Different regions of the face hold varying amounts of gender and ethnicity information. For machine classification, the upper face regions hold more ethnicity information while the lower face regions hold more gender information. All regions were impacted by age, but the eyes were impacted the most in texture and color. The shape of the nose changed more with respect to age than any of the other regions.
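
    An illustrative pipeline for this kind of region-based analysis: crop a facial region, extract a texture descriptor (here uniform LBP histograms, one plausible texture feature), and train a per-region classifier to compare how much information each region carries. The region boxes, LBP settings and SVM choice are assumptions for demonstration.

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    def region_lbp_histogram(gray_face, box, P=8, R=1.0):
        """Uniform-LBP histogram of one facial region (box = y0, y1, x0, x1)."""
        y0, y1, x0, x1 = box
        lbp = local_binary_pattern(gray_face[y0:y1, x0:x1], P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        return hist

    # Hypothetical regions of a 128x128 aligned face crop.
    REGIONS = {"eyes": (30, 60, 10, 118), "nose": (55, 90, 40, 88),
               "mouth": (85, 115, 30, 98)}

    rng = np.random.default_rng(0)
    faces = rng.integers(0, 256, size=(40, 128, 128)).astype(np.uint8)  # dummy
    labels = rng.integers(0, 2, size=40)                                # dummy

    # One classifier per region to compare the information each holds.
    for name, box in REGIONS.items():
        feats = np.stack([region_lbp_histogram(f, box) for f in faces])
        acc = SVC(kernel="rbf").fit(feats, labels).score(feats, labels)
        print(f"{name}: train accuracy {acc:.2f}")  # real use needs a held-out split
    ```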

    What else does your biometric data reveal? A survey on soft biometrics

    Get PDF
    Recent research has explored the possibility of extracting ancillary information from primary biometric traits, viz., face, fingerprints, hand geometry and iris. This ancillary information includes personal attributes such as gender, age, ethnicity, hair color, height, weight, etc. Such attributes are known as soft biometrics and have applications in surveillance and indexing biometric databases. These attributes can be used in a fusion framework to improve the matching accuracy of a primary biometric system (e.g., fusing face with gender information), or can be used to generate qualitative descriptions of an individual (e.g., "young Asian female with dark eyes and brown hair"). The latter is particularly useful in bridging the semantic gap between human and machine descriptions of biometric data. In this paper, we provide an overview of soft biometrics and discuss some of the techniques that have been proposed to extract them from image and video data. We also introduce a taxonomy for organizing and classifying soft biometric attributes, and enumerate the strengths and limitations of these attributes in the context of an operational biometric system. Finally, we discuss open research problems in this field. This survey is intended for researchers and practitioners in the field of biometrics.

    Machine Learning Approaches to Human Body Shape Analysis

    Get PDF
    Soft biometrics, the biomedical sciences, and many other fields pay particular attention to the geometric description of the human body and its variations. Despite numerous contributions, interest remains particularly high given the non-rigid nature of the human body, which can assume different poses and numerous shapes due to variable body composition. Unfortunately, a well-known costly requirement in data-driven machine learning, and particularly in human-based analysis, is the availability of data in the form of geometric information (body measurements) with related vision information (natural images, 3D meshes, etc.). We introduce a computer graphics framework able to generate thousands of synthetic human body meshes, representing a population of individuals with stratified information: gender, Body Fat Percentage (BFP), anthropometric measurements, and pose. This contribution permits an extensive analysis of different bodies in different poses, avoiding a demanding and expensive acquisition process. We design a virtual environment that takes advantage of the generated bodies to infer the body surface area (BSA) from a single view. The framework can simulate the acquisition process of newly introduced RGB-D devices, disentangling different noise components (sensor noise, optical distortion, body part occlusions). Common geometric descriptors in soft biometrics, as well as in the biomedical sciences, are based on body measurements. Unfortunately, as we prove, these descriptors are not pose invariant, constraining their usability to controlled scenarios. We introduce a differential geometry approach that treats body pose variations as isometric transformations of the body surface, and body composition changes as covariant with the body surface area. This setting permits the use of the Laplace-Beltrami operator on the 2D body manifold, describing the body with a compact, efficient, and pose-invariant representation. We design a neural network architecture able to infer important body semantics from spectral descriptors, closing the gap between abstract spectral features and traditional measurement-based indices. Studying the manifold of body shapes, we propose an innovative generative adversarial model able to learn body shapes. The method can generate new bodies with unseen geometries as a walk on the latent space, constituting a significant advantage over traditional generative methods.
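
    To illustrate the spectral idea (in the spirit of the well-known "shape-DNA" descriptor, not necessarily the thesis's exact recipe): the first eigenvalues of the Laplace-Beltrami operator on the body mesh give a compact descriptor that is nearly unchanged under isometric pose changes. This sketch assumes the robust_laplacian package is available and that the mesh is already loaded as vertex/face arrays; the value of k is arbitrary.

    ```python
    import numpy as np
    import robust_laplacian
    from scipy.sparse.linalg import eigsh

    def shape_dna(verts, faces, k=30):
        """First k nonzero Laplace-Beltrami eigenvalues of a triangle mesh.

        verts: (V, 3) float array, faces: (F, 3) int array.
        """
        L, M = robust_laplacian.mesh_laplacian(verts, faces)  # stiffness, mass
        # Generalized eigenproblem L x = lambda M x; smallest eigenvalues
        # via shift-invert around zero.
        evals = eigsh(L, k=k + 1, M=M, sigma=1e-8, which="LM")[0]
        evals = np.sort(evals)[1:]  # drop the trivial ~0 eigenvalue
        return evals / evals[0]     # scale-normalize the spectrum

    # Two meshes of the same body in different poses should yield nearly
    # identical descriptors; different body compositions should not.
    ```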

    Deep Learning Based Face Image Synthesis

    Get PDF
    Face image synthesis is an important problem in the biometrics and computer vision communities due to its applications in law enforcement and entertainment. In this thesis, we develop novel deep neural network models and associated loss functions for two face image synthesis problems, namely thermal-to-visible face synthesis and visual attribute-to-face synthesis. In particular, for thermal-to-visible face synthesis, we propose a model which makes use of facial attributes to obtain better synthesis. We use attributes extracted from visible images to synthesize attribute-preserved visible images from thermal imagery. A pre-trained attribute predictor network is used to extract attributes from the visible image. Then, a novel multi-scale generator is proposed to synthesize the visible image from the thermal image guided by the extracted attributes. Finally, a pre-trained VGG-Face network is leveraged to extract features from the synthesized image and the input visible image for verification. In addition, we propose another thermal-to-visible face synthesis method based on the self-attention generative adversarial network (SAGAN), which allows efficient attention-guided image synthesis. Rather than focusing only on synthesizing visible faces from thermal faces, we also propose to synthesize thermal faces from visible faces. Our intuition is based on the fact that thermal images also contain some discriminative information about the person for verification. Deep features from a pre-trained Convolutional Neural Network (CNN) are extracted from the original as well as the synthesized images. These features are then fused to generate a template which is used for cross-modal face verification. Regarding attribute-to-face image synthesis, we propose the Att2SK2Face model for face image synthesis from visual attributes via sketch. In this approach, we first synthesize a facial sketch corresponding to the visual attributes and then generate the face image based on the synthesized sketch. The proposed framework is based on a combination of two different Generative Adversarial Networks (GANs) – (1) a sketch generator network which synthesizes a realistic sketch from the input attributes, and (2) a face generator network which synthesizes facial images from the synthesized sketch images with the help of facial attributes. Finally, we propose another synthesis model, called Att2MFace, which can simultaneously synthesize multimodal faces from visual attributes without requiring paired data in different domains for training the network. We introduce a novel generator with multimodal stretch-out modules to simultaneously synthesize multimodal face images. Additionally, multimodal stretch-in modules are introduced in the discriminator which discriminate between real and fake images.
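
    A minimal sketch of the cross-modal verification step described above: deep features from the original image and its cross-domain synthesis are fused into one template, and verification reduces to a cosine-similarity test. The fusion rule (concatenation) and the threshold are illustrative assumptions, not the thesis's exact design.

    ```python
    import torch
    import torch.nn.functional as F

    def make_template(feat_original, feat_synthesized):
        """Fuse CNN features of an image and its synthesized counterpart."""
        t = torch.cat([feat_original, feat_synthesized], dim=-1)  # simple concat
        return F.normalize(t, dim=-1)

    def verify(template_a, template_b, threshold=0.5):
        """Cosine-similarity verification between two fused templates."""
        score = (template_a * template_b).sum(dim=-1)  # cosine (unit-norm inputs)
        return score, score > threshold

    # E.g. thermal probe: features of the thermal face + its visible synthesis;
    # visible gallery: features of the visible face + its thermal synthesis.
    ```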

    Characterization of normal facial features and their association with genes

    Get PDF
    Background: Craniofacial morphology has been reported to be highly heritable, but little is known about which genetic variants influence normal facial variation in the general population. Aim: To identify facial variation and explore phenotype-genotype associations in a 15-year-old population (2514 females and 2233 males). Subjects and Methods: The subjects involved in this study were recruited from the Avon Longitudinal Study of Parents and Children (ALSPAC). Three-dimensional (3D) facial images were obtained for each subject using two high-resolution Konica Minolta laser scanners. Twenty-one reproducible facial soft tissue landmarks and one constructed mid-endocanthion point (men) were identified and their coordinates recorded. The 3D facial images were registered using Procrustes analysis (with and without scaling). Principal Component Analysis (PCA) was then employed to identify independent groups of correlated landmark coordinates (principal components, PCs) that represent key facial features contributing to normal facial variation. A novel surface-based method of facial averaging was employed to visualize facial variation. Facial parameters (distances, angles, and ratios) were also generated from the facial landmarks. Sex prediction based on facial parameters was explored using discriminant function analysis. A discovery-phase genome-wide association study (GWAS) was carried out for 2185 ALSPAC subjects, and replication was undertaken in a further 1622 ALSPAC individuals. Results: 14 (unscaled) and 17 (scaled) PCs were identified, explaining 82% of the total variance in facial form and shape. 250 facial parameters were derived (90 distances, 118 angles, 42 ratios). 24 facial parameters were found to provide a sex prediction efficiency of over 70%; 23 of these parameters are distances describing variation in face height, nose width, and the prominence of various facial structures. 54 distances associated with previously reported high heritability and the 14 (unscaled) PCs were included in the discovery-phase GWAS. Four genetic associations with the distances were identified in the discovery analysis, and one of these, the association between the common intronic SNP rs7559271 in the PAX3 gene on chromosome 2 and the nasion to mid-endocanthion 3D distance (n-men), was replicated strongly (p = 4 x 10^-7). The PAX3 gene encodes a transcription factor that plays a crucial role in fetal development, including that of the craniofacial bones. PAX3 contains two DNA-binding domains, a paired-box domain and a homeodomain. The protein made from the PAX3 gene directs the activity of other genes that signal neural crest cells to form specialized tissues such as craniofacial bones. Different PAX3 mutations may lead to non-functional PAX3 polypeptides and destroy the ability of PAX3 proteins to bind to DNA and regulate the activity of other genes. Conclusions: Variation in facial form and shape can be accurately quantified and visualized as a multidimensional statistical continuum with respect to the principal components. The derived PCs may be useful to identify and classify faces according to a scale of normality. A strong genetic association was identified between the common SNP rs7559271 in the PAX3 gene on chromosome 2 and the nasion to mid-endocanthion 3D distance (n-men); variation in this distance leads to differences in nasal bridge prominence.
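
    A sketch of the registration-and-PCA pipeline described above: generalized Procrustes alignment of each subject's 22 3D landmarks (21 placed plus the constructed men point), then PCA on the flattened coordinates to obtain the facial-variation PCs. The iteration scheme and dummy data are illustrative; reflections are not handled.

    ```python
    import numpy as np

    def generalized_procrustes(shapes, iters=5, scale=True):
        """Align (n_subjects, n_landmarks, 3) landmark sets to their mean."""
        aligned = shapes - shapes.mean(axis=1, keepdims=True)  # center each face
        if scale:
            aligned /= np.linalg.norm(aligned, axis=(1, 2), keepdims=True)
        for _ in range(iters):
            mean = aligned.mean(axis=0)
            for i, s in enumerate(aligned):
                u, _, vt = np.linalg.svd(s.T @ mean)  # orthogonal Procrustes
                aligned[i] = s @ (u @ vt)             # rotate onto the mean
        return aligned

    # Dummy stand-in for the real landmark data (100 subjects, 22 x 3D points).
    shapes = np.random.default_rng(0).normal(size=(100, 22, 3))
    aligned = generalized_procrustes(shapes)

    flat = aligned.reshape(len(aligned), -1)
    flat -= flat.mean(axis=0)
    # PCA via SVD: rows of vt are the principal components of facial shape.
    _, svals, vt = np.linalg.svd(flat, full_matrices=False)
    explained = svals**2 / (svals**2).sum()
    n_pcs = int(np.searchsorted(np.cumsum(explained), 0.82)) + 1  # PCs for ~82%
    ```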