
    The effects of scarring on face recognition

    The focus of this research is the effect of scarring on face recognition. Face recognition is a common biometric modality used in access-control operations such as customs and border crossings. A recent report from the Special Group on Issues Affecting Facial Recognition and Best Practices for their Mitigation highlighted scarring as one of the emerging challenges. The significance of this problem extends to the ISO/IEC, and national agencies are researching it to enhance their intelligence capabilities. Data was collected on face images with and without scars, using theatrical special effects to simulate scarring on the face and also from subjects who have developed scarring within their lifetime. A total of 60 subjects participated in this data collection: 30 without scarring of any kind and 30 with preexisting scars. Controlled data on scarring is problematic for face recognition research because scarring manifests differently among individuals, yet it is universal in that all individuals develop some degree of scarring. Effect analysis was done with controlled scarring, to observe the factor in isolation, and with wild scarring as encountered in operations, for realistic contextualization. Two environments were included in this study: a controlled studio representing an ideal face-capture setting and a mock border control booth simulating an operational use case.

    Facial Makeup Detection Using HSV Color Space and Texture Analysis

    In recent decades, 2D and 3D face analysis in digital systems has become increasingly important because of its vast applications in security systems and in any digital system that interacts with humans. The human face expresses many of an individual's characteristics, such as gender, ethnicity, emotion, age, beauty and health. Makeup is one of the common techniques people use to alter the appearance of their faces, and analyzing facial beauty by computer is valuable to aestheticians and computer scientists alike. The objective of this research is to detect makeup in images of human faces using image processing and pattern recognition techniques; detecting changes to the face caused by cosmetics such as eye shadow, lipstick and liquid foundation is the target of this study. A proper facial database containing makeup-related information is necessary, and collecting the first facial makeup database was a valuable achievement of this research. The database consists of almost 1,290 frontal pictures of 21 individuals before and after makeup. Along with the images, metadata such as ethnicity, country of origin, smoking habits, drinking habits, age and occupation is provided. The uniqueness of this database stems, first, from being the only database with images of women both before and after makeup, and second, from having light sources at different angles as well as metadata collected during the process. Selecting the features that lead to the best classification result is challenging, since any variation in head pose, lighting conditions or face orientation adds complexity to evaluating whether makeup has been applied. In addition, the similarity of a cosmetic's color to the skin color adds another level of difficulty.
In this effort, the problem was addressed by choosing the best possible features related to edge information, color specification and texture characteristics. The HSV (Hue, Saturation, Value) color space was selected for this application because hue, saturation and intensity can be studied separately in it. The proposed technique was tested on 120 selected images from our new database. A supervised learning model, the SVM (Support Vector Machine) classifier, was used, and the accuracy obtained is 90.62% for eye-shadow detection, 93.33% for lipstick detection and 52.5% for liquid-foundation detection. A main highlight of this technique is that it specifies where makeup has been applied on the face, which can be used to identify the proper makeup style for an individual. This application could be a great improvement in the aesthetic field: aestheticians can facilitate their work by identifying the type of makeup appropriate for each person and reducing the number of trials needed to give proper suggestions.
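The colour-space step described above can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the region sampling, texture features and SVM classifier are omitted, the toy pixel values are ours, and the helper name is hypothetical. It only shows why HSV is convenient here: saturation can be inspected independently of hue and value, and a saturated lipstick tone stands out against bare skin.

```python
import colorsys  # stdlib RGB<->HSV conversion

def region_hsv_features(pixels):
    """Mean hue, saturation and value over an RGB pixel list (0-255 ints).

    In HSV, chroma (S) and lightness (V) can be studied separately from
    hue, which is why that colour space is selected in the paper.
    """
    hs, ss, vs = [], [], []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        hs.append(h)
        ss.append(s)
        vs.append(v)
    n = len(pixels)
    return (sum(hs) / n, sum(ss) / n, sum(vs) / n)

# Toy lip patches: bare skin vs. a saturated red lipstick tone.
bare_lip = [(205, 160, 150)] * 16
lipstick = [(190, 40, 60)] * 16

_, s_bare, _ = region_hsv_features(bare_lip)
_, s_made_up, _ = region_hsv_features(lipstick)
print(s_made_up > s_bare)  # lipstick raises mean saturation -> True
```

In a full system such per-region HSV statistics (together with edge and texture descriptors) would form the feature vector fed to the SVM.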

    Impact and Detection of Facial Beautification in Face Recognition: An Overview

    Facial beautification induced by plastic surgery, cosmetics or retouching can substantially alter the appearance of face images. Such types of beautification can negatively affect the accuracy of face recognition systems. In this work, a conceptual categorisation of beautification is presented, relevant scenarios with respect to face recognition are discussed, and related publications are revisited. Additionally, technical considerations and trade-offs of the surveyed methods are summarised, along with open issues and challenges in the field. This survey aims to provide a comprehensive point of reference for biometric researchers and practitioners working in the field of face recognition who want to tackle challenges caused by facial beautification.

    Hyperbolic Face Anti-Spoofing

    Learning generalized face anti-spoofing (FAS) models against presentation attacks is essential for the security of face recognition systems. Previous FAS methods usually encourage models to extract discriminative features, such that distances within the same class (bonafide or attack) are pushed close while those between bonafide and attack are pulled apart. However, these methods are designed around Euclidean distance, which lacks generalization ability for unseen attack detection due to its poor hierarchy embedding ability. Motivated by the evidence that different spoofing attacks are intrinsically hierarchical, we propose to learn richer hierarchical and discriminative spoofing cues in hyperbolic space. Specifically, for unimodal FAS learning, the feature embeddings are projected into the Poincaré ball, and a hyperbolic binary logistic regression layer is cascaded for classification. To further improve generalization, we conduct hyperbolic contrastive learning for the bonafide class only, while relaxing the constraints on diverse spoofing attacks. To alleviate the vanishing gradient problem in hyperbolic space, a new feature clipping method is proposed to enhance the training stability of hyperbolic models. Besides, we further design a multimodal FAS framework with Euclidean multimodal feature decomposition and hyperbolic multimodal feature fusion and classification. Extensive experiments on three benchmark datasets (i.e., WMCA, PADISI-Face and SiW-M) with diverse attack types demonstrate that the proposed method brings significant improvement over the Euclidean baselines on unseen attack detection. In addition, the proposed framework also generalizes well on four benchmark datasets (i.e., MSU-MFSD, IDIAP REPLAY-ATTACK, CASIA-FASD and OULU-NPU) with a limited number of attack types.
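The two geometric ingredients mentioned in the abstract, projection into the Poincaré ball and norm-based feature clipping, can be sketched in a few lines. This is a minimal sketch under our own assumptions (the clipping threshold and curvature value are illustrative, not the paper's hyperparameters); it shows the standard exponential map at the origin and why clipping keeps embeddings away from the ball boundary, where gradients vanish.

```python
import math

def clip_norm(v, max_norm=1.0):
    """Feature clipping: bound the Euclidean norm before projection so
    embeddings stay away from the ball boundary, where hyperbolic
    distances blow up and gradients vanish. The threshold is illustrative."""
    n = math.sqrt(sum(x * x for x in v))
    if n <= max_norm:
        return list(v)
    return [x * max_norm / n for x in v]

def exp_map_origin(v, c=1.0):
    """Exponential map at the origin of the Poincare ball with curvature -c:
    exp_0(v) = tanh(sqrt(c) * ||v||) * v / (sqrt(c) * ||v||)."""
    n = math.sqrt(sum(x * x for x in v))
    if n == 0:
        return list(v)
    scale = math.tanh(math.sqrt(c) * n) / (math.sqrt(c) * n)
    return [scale * x for x in v]

feat = clip_norm([3.0, 4.0], max_norm=2.0)  # norm 5 -> clipped to 2
z = exp_map_origin(feat)                    # lands strictly inside the unit ball
print(math.sqrt(sum(x * x for x in z)) < 1.0)  # True
```

In the paper's setting, a hyperbolic logistic-regression layer would then classify `z` inside the ball rather than in Euclidean space.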

    Style transfer for headshot portraits

    Headshot portraits are a popular subject in photography, but achieving a compelling visual style requires advanced skills that a casual photographer will not have. Further, algorithms that automate or assist the stylization of generic photographs do not perform well on headshots, due to the feature-specific, local retouching that a professional photographer typically applies to such portraits. We introduce a technique to transfer the style of an example headshot photo onto a new one, allowing one to easily reproduce the look of renowned artists. At the core of our approach is a new multiscale technique that robustly transfers the local statistics of an example portrait onto a new one. This technique matches properties such as local contrast and overall lighting direction while being tolerant to the unavoidable differences between the faces of two different people. Additionally, because artists sometimes produce entire headshot collections in a common style, we show how to automatically find a good example to use as a reference for a given portrait, enabling style transfer without the user having to search for a suitable example for each input. We demonstrate our approach on data taken in a controlled environment as well as on a large set of photos downloaded from the Internet, and show that we can successfully handle styles by a variety of different artists.
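The core idea of matching local statistics band by band can be illustrated on a single 1-D band. This is a rough sketch under our own assumptions, not the paper's method: the multiscale decomposition, face alignment and regularization are omitted, the window size and epsilon are ours, and the function names are hypothetical. It only shows the gain-map principle: scale each sample so the input band's local energy matches the example's.

```python
import math

def local_energy(band, radius=2):
    """Local second moment of a 1-D band over a box window -- the
    'local statistic' whose ratio defines the transfer gain."""
    out = []
    for i in range(len(band)):
        lo, hi = max(0, i - radius), min(len(band), i + radius + 1)
        win = band[lo:hi]
        out.append(sum(x * x for x in win) / len(win))
    return out

def transfer_band(inp, example, eps=1e-4):
    """Scale the input band so its local energy matches the example's:
    gain_i = sqrt(E_example_i / (E_input_i + eps))."""
    e_in, e_ex = local_energy(inp), local_energy(example)
    return [x * math.sqrt(e / (ei + eps)) for x, ei, e in zip(inp, e_in, e_ex)]

# Low-contrast input band vs. a punchier example band of the same length.
inp = [0.1, -0.1, 0.1, -0.1, 0.1, -0.1]
example = [0.5, -0.5, 0.5, -0.5, 0.5, -0.5]
out = transfer_band(inp, example)  # amplitudes boosted toward ~0.5
```

Applying this per level of a Laplacian-style pyramid, rather than to one band, is what makes the transfer multiscale.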

    Facial Beauty Prediction and Analysis based on Deep Convolutional Neural Network: A Review

    Facial attractiveness or facial beauty prediction (FBP) is a current research area with several potential applications. It is a difficult problem in the computer vision domain because of the few public databases related to FBP and the small scale of the databases used in experimental trials. Moreover, the evaluation of facial beauty is subjective in nature, with people having personal preferences in beauty. Deep learning techniques have displayed significant ability in analysis and feature representation. Previous studies focused on scattered portions of facial beauty, with few comparisons between diverse techniques. Thus, this article reviews recent research on computer prediction and analysis of facial beauty based on deep convolutional neural networks (DCNNs). Furthermore, the possible lines of research and challenges provided in this article can help researchers advance the state of the art in future work.

    3D Human Face Reconstruction and 2D Appearance Synthesis

    3D human face reconstruction has been an active research area for decades due to its wide applications, such as animation, recognition and 3D-driven appearance synthesis. Although commodity depth sensors have become widely available in recent years, image-based face reconstruction remains significantly valuable, as images are much easier to acquire and store. In this dissertation, we first propose three image-based face reconstruction approaches with different assumptions about the input. In the first approach, face geometry is extracted from multiple key frames of a video sequence with different head poses; the camera must be calibrated under this assumption. As the first approach is limited to videos, our second approach focuses on a single image. This approach also refines the geometry by adding fine detail using shading cues, and we propose a novel albedo estimation and linear optimization algorithm. In the third approach, we further relax the constraints on the input to arbitrary in-the-wild images; the proposed approach can robustly reconstruct high-quality models even with extreme expressions and large poses. We then explore the applicability of our face reconstructions in four applications: video face beautification, generating personalized facial blendshapes from image sequences, face video stylization and video face replacement, and we demonstrate the great potential of our reconstruction approaches in these real-world settings. In particular, with the recent surge of interest in VR/AR, it is increasingly common to see people wearing head-mounted displays (HMDs); however, the large occlusion of the face is a major obstacle to face-to-face communication. In a further application, we explore hardware/software solutions for synthesizing the face image in the presence of HMDs. We design two setups (experimental and mobile) that integrate two near-IR cameras and one color camera to solve this problem.
With our algorithm and prototype, we can achieve photo-realistic results. We further propose a deep neural network that treats HMD removal as a face inpainting problem. This approach doesn't need special hardware and runs in real time with satisfying results.

    UFPR-Periocular: A Periocular Dataset Collected by Mobile Devices in Unconstrained Scenarios

    Recently, ocular biometrics in unconstrained environments using images obtained at visible wavelengths have gained researchers' attention, especially with images captured by mobile devices. Periocular recognition has been demonstrated to be an alternative when the iris trait is not available due to occlusion or low image resolution. However, the periocular trait does not have the high uniqueness of the iris trait. Thus, datasets containing many subjects are essential to assess a biometric system's capacity to extract discriminating information from the periocular region. Also, to address the within-class variability caused by lighting and attributes in the periocular region, it is of paramount importance to use datasets with images of the same subject captured in distinct sessions. As the datasets available in the literature do not present all these factors, in this work we present a new periocular dataset containing samples from 1,122 subjects, acquired in 3 sessions by 196 different mobile devices. The images were captured in unconstrained environments with just a single instruction to the participants: to place their eyes in a region of interest. We also performed an extensive benchmark with several Convolutional Neural Network (CNN) architectures and models that have been employed in state-of-the-art approaches based on Multi-class Classification, Multitask Learning, Pairwise Filters Networks and Siamese Networks. The results achieved in the closed- and open-world protocols, considering the identification and verification tasks, show that this area still needs research and development.
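The verification task evaluated in such benchmarks reduces to comparing two embeddings against a threshold. The sketch below shows that decision step only, under our own assumptions: the embeddings would come from a CNN (e.g. a Siamese branch), the similarity measure and the threshold value are illustrative choices, and the function names are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

def verify(emb_probe, emb_gallery, threshold=0.5):
    """Verification decision: accept the claimed identity when the
    embedding similarity clears a threshold (value is illustrative;
    in practice it is tuned on a validation set, e.g. at a target FAR)."""
    return cosine_similarity(emb_probe, emb_gallery) >= threshold

# Toy 3-D embeddings standing in for CNN features.
same = verify([0.9, 0.1, 0.4], [0.8, 0.2, 0.5])    # similar -> True
diff = verify([0.9, 0.1, 0.4], [-0.7, 0.6, -0.2])  # dissimilar -> False
print(same, diff)
```

Identification, by contrast, would compare the probe embedding against every gallery identity and rank the similarities.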

    Towards Decrypting Attractiveness via Multi-Modality Cue

    Decrypting the secret of beauty or attractiveness has been the pursuit of artists and philosophers for centuries. To date, computational models for attractiveness estimation have been actively explored in the computer vision and multimedia communities, yet with the focus mainly on facial features. In this article, we conduct a comprehensive study of female attractiveness conveyed by single or multiple modalities of cues, that is, face, dressing and/or voice; the aim is to discover how different modalities individually and collectively affect the human sense of beauty. To extensively investigate the problem, we collect the Multi-Modality Beauty (M2B) dataset, which is annotated with attractiveness levels converted from manual k-wise ratings and with semantic attributes of the different modalities. Inspired by the common consensus that mid-level attribute prediction can assist higher-level computer vision tasks, we manually labeled many attributes for each modality. Next, a tri-layer Dual-supervised Feature-Attribute-Task (DFAT) network is proposed to jointly learn the attribute model and the attractiveness model of single/multiple modalities. To remedy the possible loss of information caused by incomplete manual attributes, we also propose a novel Latent Dual-supervised Feature-Attribute-Task (LDFAT) network, where latent attributes are combined with manual attributes to contribute to the final attractiveness estimation. Extensive experimental evaluations on the collected M2B dataset demonstrate the effectiveness of the proposed DFAT and LDFAT networks for female attractiveness prediction.
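The feature-attribute-task idea, attributes per modality feeding a final attractiveness score, can be caricatured as a late-fusion computation. This is a toy sketch under our own assumptions: the weights below are illustrative constants, not the learned DFAT parameters, and the function name is hypothetical; in the actual networks all three layers are learned jointly with dual supervision.

```python
def fuse_attractiveness(modality_attrs, attr_weights, modality_weights):
    """Late-fusion sketch: each modality's attribute scores are combined
    into a per-modality attractiveness estimate, then the modalities are
    weighted into a final score."""
    per_modality = {}
    for mod, attrs in modality_attrs.items():
        w = attr_weights[mod]
        per_modality[mod] = sum(a * wi for a, wi in zip(attrs, w))
    return sum(per_modality[m] * modality_weights[m] for m in per_modality)

# Toy attribute scores for the three modalities studied in the paper.
attrs = {"face": [0.8, 0.6], "dressing": [0.7], "voice": [0.5]}
attr_w = {"face": [0.5, 0.5], "dressing": [1.0], "voice": [1.0]}
mod_w = {"face": 0.5, "dressing": 0.3, "voice": 0.2}
score = fuse_attractiveness(attrs, attr_w, mod_w)  # ~0.66
```

The LDFAT extension would add latent attribute dimensions alongside the manual ones before this final combination.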