
    Beyond Identity: What Information Is Stored in Biometric Face Templates?

    Deeply-learned face representations enable the success of current face recognition systems. Despite the ability of these representations to encode the identity of an individual, recent works have shown that more information is stored within them, such as demographics, image characteristics, and social traits. This threatens the user's privacy, since for many applications these templates are expected to be used solely for recognition purposes. Knowing which information is encoded in face templates helps in developing bias-mitigating and privacy-preserving face recognition technologies. This work aims to support the development of these two branches by analysing face templates with respect to 113 attributes. Experiments were conducted on two publicly available face embeddings. To evaluate the predictability of the attributes, we trained a massive attribute classifier that can additionally state its prediction confidence accurately. This allows us to make more nuanced statements about attribute predictability. The results demonstrate that up to 74 attributes can be accurately predicted from face templates. In particular, non-permanent attributes, such as age, hairstyles, hair colors, beards, and various accessories, were found to be easily predictable. Since face recognition systems aim to be robust against these variations, future research might build on this work to develop more explainable privacy-preserving solutions and to build robust and fair face templates.
    Comment: To appear in IJCB 202
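
    As a rough illustration of the approach described in this abstract, the sketch below trains a small multi-attribute classifier head on frozen face embeddings and derives a simple confidence from its outputs. The architecture, embedding dimension, and confidence measure are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: predict N binary attributes from fixed face embeddings
# and attach a per-prediction confidence. Not the paper's actual architecture.
import torch
import torch.nn as nn

EMB_DIM, N_ATTRS = 512, 113  # assumed embedding size; 113 attributes as in the paper

class AttributeClassifier(nn.Module):
    def __init__(self, emb_dim=EMB_DIM, n_attrs=N_ATTRS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim, 256), nn.ReLU(),
            nn.Linear(256, n_attrs),  # one logit per attribute
        )

    def forward(self, emb):
        return self.net(emb)

model = AttributeClassifier()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# toy batch: embeddings and binary attribute labels
emb = torch.randn(32, EMB_DIM)
labels = torch.randint(0, 2, (32, N_ATTRS)).float()

optimizer.zero_grad()
logits = model(emb)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()

# a simple confidence: distance of the sigmoid output from 0.5
probs = torch.sigmoid(logits)
confidence = (probs - 0.5).abs() * 2  # 0 = unsure, 1 = fully confident
```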

    Deepfake detection: humans vs. machines

    Deepfake videos, where a person's face is automatically swapped with the face of someone else, are becoming easier to generate, with more realistic results. In response to the threat such manipulations pose to our trust in video evidence, several large datasets of deepfake videos and many methods to detect them have been proposed recently. However, it is still unclear how realistic deepfake videos are for an average person and whether detection algorithms are significantly better than humans at spotting them. In this paper, we present a subjective study conducted in a crowdsourcing-like scenario, which systematically evaluates how hard it is for humans to tell whether a video is a deepfake or not. For the evaluation, we used 120 different videos (60 deepfakes and 60 originals) manually pre-selected from the Facebook deepfake database, which was provided for Kaggle's Deepfake Detection Challenge 2020. For each video, a simple question, "Is the face of the person in the video real or fake?", was answered on average by 19 naïve subjects. The results of the subjective evaluation were compared with the performance of two different state-of-the-art deepfake detection methods, based on Xception and EfficientNet (B4 variant) neural networks, which were pre-trained on two other large public databases: the Google subset of FaceForensics++ and the recent Celeb-DF dataset. The evaluation demonstrates that while human perception is very different from machine perception, both are successfully, but in different ways, fooled by deepfakes. Specifically, algorithms struggle to detect those deepfake videos that human subjects found very easy to spot.
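
    The comparison at the heart of this study can be pictured with a short sketch: per-video majority votes from human subjects against a detector's thresholded scores. The data here are random stand-ins; only the 120-video, 19-subject shape follows the abstract.

```python
# Illustrative sketch of the human-vs-machine comparison described above:
# per-video majority vote from human answers versus a detector's thresholded
# score. Data and the 0.5 thresholds are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_videos = 120
is_fake = np.repeat([1, 0], 60)                     # 60 deepfakes, 60 originals

# human_answers[v, s] = 1 if subject s called video v fake (~19 subjects/video)
human_answers = rng.integers(0, 2, size=(n_videos, 19))
human_vote = (human_answers.mean(axis=1) > 0.5).astype(int)

# detector_scores[v] in [0, 1]: higher means "more likely fake"
detector_scores = rng.random(n_videos)
machine_vote = (detector_scores > 0.5).astype(int)

human_acc = (human_vote == is_fake).mean()
machine_acc = (machine_vote == is_fake).mean()
# videos humans got right but the detector missed (the paper's key observation)
easy_for_humans_hard_for_machine = ((human_vote == is_fake) & (machine_vote != is_fake)).sum()
print(f"human acc={human_acc:.2f}, machine acc={machine_acc:.2f}, "
      f"disagreement cases={easy_for_humans_hard_for_machine}")
```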

    Demographic Bias in Presentation Attack Detection of Iris Recognition Systems

    With the widespread use of biometric systems, the problem of demographic bias is attracting more attention. Although many studies have addressed bias issues in biometric verification, no works analyze bias in presentation attack detection (PAD) decisions. Hence, in this paper we investigate and analyze the demographic bias in iris PAD algorithms. To enable a clear discussion, we adapt the notions of differential performance and differential outcome to the PAD problem. We study the bias in iris PAD using three baselines (hand-crafted, transfer-learning, and training from scratch) on the NDCLD-2013 database. The experimental results point out that female users will be significantly less protected by the PAD in comparison to males.
    Comment: accepted for publication at EUSIPCO 202
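
    A differential-outcome style analysis like the one described can be sketched as PAD error rates computed separately per demographic group. The score convention and threshold below are assumptions, and the scores are synthetic.

```python
# Hedged sketch of a differential-outcome analysis: PAD error rates computed
# separately per demographic group. Score convention (higher = more likely
# bona fide) and the fixed threshold are illustrative assumptions.
import numpy as np

def pad_error_rates(scores, is_attack, threshold):
    """APCER: attacks accepted as bona fide; BPCER: bona fides rejected."""
    scores, is_attack = np.asarray(scores), np.asarray(is_attack, bool)
    apcer = (scores[is_attack] >= threshold).mean()   # attacks passing PAD
    bpcer = (scores[~is_attack] < threshold).mean()   # bona fides rejected
    return apcer, bpcer

rng = np.random.default_rng(1)
for group in ["female", "male"]:
    is_attack = rng.integers(0, 2, 1000).astype(bool)
    # toy scores: bona fides around 0.7, attacks around 0.3
    scores = np.where(is_attack,
                      rng.normal(0.3, 0.15, 1000),
                      rng.normal(0.7, 0.15, 1000))
    apcer, bpcer = pad_error_rates(scores, is_attack, threshold=0.5)
    print(f"{group}: APCER={apcer:.3f}, BPCER={bpcer:.3f}")
```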

    SER-FIQ: Unsupervised Estimation of Face Image Quality Based on Stochastic Embedding Robustness

    Face image quality is an important factor in enabling high-performance face recognition systems. Face quality assessment aims at estimating the suitability of a face image for recognition. Previous work proposed supervised solutions that require artificially or human-labelled quality values. However, both labelling mechanisms are error-prone, as they do not rely on a clear definition of quality and may not know the best characteristics for the utilized face recognition system. Avoiding the use of inaccurate quality labels, we propose a novel concept for measuring face quality based on an arbitrary face recognition model. By determining the embedding variations generated from random subnetworks of a face model, the robustness of a sample representation, and thus its quality, is estimated. The experiments are conducted in a cross-database evaluation setting on three publicly available databases. We compare our proposed solution on two face embeddings against six state-of-the-art approaches from academia and industry. The results show that our unsupervised solution outperforms all other approaches in the majority of the investigated scenarios. In contrast to previous works, the proposed solution shows stable performance over all scenarios. Utilizing the deployed face recognition model for our face quality assessment methodology avoids the training phase completely and further outperforms all baseline approaches by a large margin. Our solution can be easily integrated into current face recognition systems and can be adapted to tasks beyond face recognition.
    Comment: Accepted at CVPR 202
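
    The core idea, as the abstract describes it, is to compare embeddings produced by random subnetworks of the deployed model. A minimal sketch, assuming a PyTorch model with dropout near the embedding layer: run m stochastic forward passes with dropout kept active and map the mean pairwise embedding distance to a quality value. The toy backbone and the exact sigmoid mapping are illustrative, not the paper's code.

```python
# Hedged sketch of the stochastic-embedding-robustness idea: run the same
# input through the network several times with dropout kept active, and map
# the variation among the resulting embeddings to a quality score.
import itertools
import torch
import torch.nn as nn
import torch.nn.functional as F

# stand-in for a face recognition backbone with dropout near the embedding
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(),
                      nn.Dropout(0.5), nn.Linear(512, 128))

def stochastic_quality(model, x, m=100):
    model.train()  # keep dropout active so each pass samples a random subnetwork
    with torch.no_grad():
        embs = torch.stack([F.normalize(model(x), dim=-1) for _ in range(m)])
    # mean pairwise Euclidean distance between the m stochastic embeddings
    dists = [torch.dist(embs[i], embs[j])
             for i, j in itertools.combinations(range(m), 2)]
    mean_dist = torch.stack(dists).mean()
    return 2 * torch.sigmoid(-mean_dist)  # low variation -> quality near 1

x = torch.randn(1, 512)  # a precomputed intermediate feature, for illustration
print(float(stochastic_quality(model, x, m=20)))
```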

    Fingerprint presentation attack detection utilizing spatio-temporal features

    This paper presents a novel mechanism for fingerprint dynamic presentation attack detection. We utilize five spatio-temporal feature extractors to efficiently eliminate and mitigate different presentation attack species. The feature extractors are selected such that the fingerprint ridge/valley pattern is consolidated with the temporal variations within the pattern in fingerprint videos. An SVM classification scheme with a second-degree polynomial kernel is used in our presentation attack detection subsystem to classify bona fide and attack presentations. The experimental protocol and evaluation follow the ISO/IEC 30107-3:2017 standard. Our proposed approach demonstrates an efficient capability of detecting presentation attacks with a significantly low BPCER: 1.11% for an optical sensor and 3.89% for a thermal sensor, at 5% APCER for both.
    This work was supported by the European Union's Horizon 2020 Research and Innovation Programme under Grant 675087 (AMBER).
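
    The classification stage maps onto a few lines of scikit-learn; the sketch below fits an SVM with a second-degree polynomial kernel, as named in the abstract, on stand-in feature vectors (the 64-dimension size is an assumption).

```python
# Illustrative sketch of the classification stage: an SVM with a
# second-degree polynomial kernel separating bona fide and attack
# presentations. The feature vectors here are random stand-ins for the
# spatio-temporal descriptors extracted from fingerprint videos.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X_train = rng.normal(size=(200, 64))          # assumed 64-dim fused features
y_train = rng.integers(0, 2, 200)             # 1 = bona fide, 0 = attack

clf = SVC(kernel="poly", degree=2, probability=True)
clf.fit(X_train, y_train)

X_test = rng.normal(size=(10, 64))
scores = clf.predict_proba(X_test)[:, 1]      # bona fide likelihood per sample
print(scores.round(3))
```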

    Vulnerability of Automatic Identity Recognition to Audio-Visual Deepfakes

    The task of deepfake detection is far from being solved by speech or vision researchers. Several publicly available databases of fake synthetic video and speech have been built to aid the development of detection methods. However, existing databases typically focus on visual or voice modalities and provide no proof that their deepfakes can in fact impersonate any real person. In this paper, we present the first realistic audio-visual database of deepfakes, SWAN-DF, where lips and speech are well synchronized and the videos have high visual and audio quality. We took the publicly available SWAN dataset of real videos with different identities and created audio-visual deepfakes using several models from DeepFaceLab and blending techniques for face swapping, and the HiFiVC, DiffVC, YourTTS, and FreeVC models for voice conversion. From the publicly available speech dataset LibriTTS, we also created a separate database of audio-only deepfakes, LibriTTS-DF, using several recent text-to-speech methods: YourTTS, Adaspeech, and TorToiSe. We demonstrate the vulnerability of a state-of-the-art speaker recognition system, the ECAPA-TDNN-based model from SpeechBrain, to the synthetic voices. Similarly, we tested a face recognition system based on the MobileFaceNet architecture against several variants of our visual deepfakes. The vulnerability assessment shows that by tuning existing pretrained deepfake models to specific identities, one can successfully spoof the face and speaker recognition systems more than 90% of the time and achieve a very realistic-looking and realistic-sounding fake video of a given person.
    Comment: 10 pages, 3 figures, 3 tables
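
    A vulnerability assessment of the kind reported here can be sketched as follows: fix a verification threshold from genuine and zero-effort impostor scores, then measure the fraction of deepfake trials accepted at that threshold. All score distributions below are synthetic placeholders.

```python
# Hedged sketch of a vulnerability assessment: set a verification threshold
# from genuine/zero-effort-impostor scores, then measure how often deepfake
# trials exceed it (the fraction of successful spoofs). The synthetic score
# distributions and the 0.1% FAR operating point are assumptions.
import numpy as np

rng = np.random.default_rng(3)
genuine = rng.normal(0.75, 0.08, 2000)     # same-person comparison scores
impostor = rng.normal(0.20, 0.10, 2000)    # zero-effort impostor scores
deepfake = rng.normal(0.70, 0.10, 2000)    # deepfake-vs-target scores

# threshold accepting at most 0.1% of zero-effort impostors
threshold = np.quantile(impostor, 0.999)
spoof_acceptance = (deepfake >= threshold).mean()
genuine_acceptance = (genuine >= threshold).mean()
print(f"threshold={threshold:.3f}, genuine accept={genuine_acceptance:.1%}, "
      f"deepfakes accepted={spoof_acceptance:.1%}")
```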

    Face Quality Estimation and Its Correlation to Demographic and Non-Demographic Bias in Face Recognition

    Face quality assessment aims at estimating the utility of a face image for the purpose of recognition. It is a key factor in achieving high face recognition performance. Currently, the high performance of these face recognition systems comes at the cost of a strong bias against demographic and non-demographic sub-groups. Recent work has shown that face quality assessment algorithms should adapt to the deployed face recognition system in order to achieve highly accurate and robust quality estimations. However, this could transfer the bias to the face quality assessment, leading to discriminatory effects, e.g., during enrolment. In this work, we present an in-depth analysis of the correlation between bias in face recognition and in face quality assessment. Experiments were conducted on two publicly available datasets, captured under controlled and uncontrolled circumstances, with two popular face embeddings. We evaluated four state-of-the-art face quality assessment solutions with respect to biases related to pose, ethnicity, and age. The experiments showed that the face quality assessment solutions assign significantly lower quality values to subgroups affected by the recognition bias, demonstrating that these approaches are biased as well. This raises ethical questions about fairness and discrimination which future works have to address.
    Comment: Accepted at IJCB 202
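
    One way to picture the reported correlation analysis: aggregate quality scores per subgroup and relate them to each subgroup's recognition error, e.g. with a Pearson correlation. Subgroup names and all numbers in this sketch are toy assumptions.

```python
# Illustrative sketch of a quality-vs-bias correlation analysis: mean quality
# score per subgroup versus that subgroup's recognition error, summarized
# with a Pearson correlation. Subgroup labels and numbers are toy assumptions.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
subgroups = ["frontal", "profile", "young", "senior"]
quality = {g: rng.normal(loc, 0.05, 500) for g, loc in
           zip(subgroups, [0.80, 0.55, 0.75, 0.60])}   # toy quality scores
error = {"frontal": 0.02, "profile": 0.11, "young": 0.03, "senior": 0.08}

mean_q = np.array([quality[g].mean() for g in subgroups])
err = np.array([error[g] for g in subgroups])
r, p = pearsonr(mean_q, err)
print(f"correlation between subgroup quality and error: r={r:.2f} (p={p:.3f})")
```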

    The Impact of Pressure on the Fingerprint Impression: Presentation Attack Detection Scheme

    Fingerprint recognition systems have been widely deployed in authentication and verification applications, ranging from personal smartphones to border control systems. Recently, the biometrics community has raised concerns about presentation attacks that aim to manipulate the biometric system's final decision by presenting artificial fingerprint traits to the sensor. In this paper, we propose a presentation attack detection scheme that exploits a natural fingerprint phenomenon: it analyzes the dynamic variation of a fingerprint's impression when the user applies additional pressure during the presentation. For that purpose, we collected a novel dynamic dataset with an instructed acquisition scenario. Two sensing technologies were used in the data collection, thermal and optical. Additionally, we collected attack presentations using seven presentation attack instrument species under the same acquisition circumstances. The proposed mechanism is evaluated following the directives of the ISO/IEC 30107 standard. The comparison between ordinary and pressure presentations shows higher accuracy and generalizability for the latter. The proposed approach demonstrates an efficient capability of detecting presentation attacks with a low bona fide presentation classification error rate (BPCER): 0% for an optical sensor and 1.66% for a thermal sensor, at a 5% attack presentation classification error rate (APCER) for both.
    This work was supported by the European Union's Horizon 2020 Research and Innovation Programme under Grant 675087 (AMBER).
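
    The BPCER-at-5%-APCER operating point used in the evaluation can be computed as in the following sketch, which assumes higher PAD scores indicate bona fide presentations; the score distributions are synthetic.

```python
# Hedged sketch of the ISO/IEC 30107-3 style operating point used above:
# choose the decision threshold that yields 5% APCER, then report the BPCER
# at that threshold. Score convention (higher = more bona fide) is assumed.
import numpy as np

def bpcer_at_apcer(bona_fide_scores, attack_scores, target_apcer=0.05):
    # threshold accepting at most target_apcer of attacks as bona fide
    threshold = np.quantile(np.asarray(attack_scores), 1.0 - target_apcer)
    bpcer = (np.asarray(bona_fide_scores) < threshold).mean()
    return bpcer, threshold

rng = np.random.default_rng(5)
bona_fide = rng.normal(0.8, 0.1, 1000)   # toy PAD scores for real fingers
attacks = rng.normal(0.3, 0.1, 1000)     # toy PAD scores for attack instruments
bpcer, thr = bpcer_at_apcer(bona_fide, attacks, 0.05)
print(f"BPCER at 5% APCER: {bpcer:.2%} (threshold={thr:.3f})")
```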