
    Gender Privacy Angular Constraints for Face Recognition

    Deep learning-based face recognition systems produce templates that encode sensitive information alongside identity, such as gender and ethnicity. This poses legal and ethical problems, as the collection of biometric data should be minimized and specific to a designated task. We propose two privacy constraints that hide the gender attribute and can be added to a recognition loss. The first constraint minimizes the angle between the gender-centroid embeddings. The second constraint minimizes the angle between gender-specific embeddings and their opposing gender-centroid weight vectors. Both constraints enforce overlap between the gender-specific distributions of the embeddings. Furthermore, they have a direct interpretation in the embedding space and do not require a large number of trainable parameters, as two fully connected layers are sufficient to achieve satisfactory results. We provide extensive evaluation results across several datasets and face recognition networks, and we compare our method to three state-of-the-art methods. Our method maintains high verification performance while significantly improving privacy in a cross-database setting, without increasing the computational load for template comparison. We also show that different training data can result in varying levels of effectiveness for privacy-enhancing methods that implement data minimization.
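
    The two constraints lend themselves to a compact formulation as auxiliary loss terms. The sketch below is a minimal PyTorch illustration under assumptions, not the authors' code: the centroid representation, the gender encoding, and the way the terms are weighted are placeholders chosen for clarity.

```python
import torch
import torch.nn.functional as F

def gender_privacy_losses(embeddings, genders, centroid_f, centroid_m):
    """Sketch of the two angular privacy constraints (illustrative only).

    embeddings : (B, D) face embeddings
    genders    : (B,) tensor, 0 = female, 1 = male (assumed encoding)
    centroid_f, centroid_m : (D,) learnable gender-centroid vectors
    """
    emb = F.normalize(embeddings, dim=1)
    c_f = F.normalize(centroid_f, dim=0)
    c_m = F.normalize(centroid_m, dim=0)

    # Constraint 1: minimise the angle between the two gender centroids,
    # i.e. maximise their cosine similarity (pull the centroids together).
    loss_centroids = 1.0 - torch.dot(c_f, c_m)

    # Constraint 2: minimise the angle between each embedding and the
    # *opposite* gender centroid (push samples toward the other class).
    opposite = torch.where(genders.unsqueeze(1) == 0, c_m, c_f)  # (B, D)
    cos_to_opposite = (emb * opposite).sum(dim=1)
    loss_opposite = (1.0 - cos_to_opposite).mean()

    return loss_centroids, loss_opposite
```

    In training, both terms would be scaled and added to the identity loss, e.g. loss = loss_id + lambda1 * loss_centroids + lambda2 * loss_opposite; the weights and the two-fully-connected-layer head mentioned in the abstract are not fixed by this sketch.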

    Toward Face Biometric De-identification using Adversarial Examples

    The remarkable success of face recognition (FR) has endangered the privacy of internet users, particularly on social media. Recently, researchers have turned to adversarial examples as a countermeasure to privacy attacks. In this paper, we assess the effectiveness of two widely known adversarial methods (BIM and ILLC) for de-identifying personal images. We find, contrary to previous claims in the literature, that it is not easy to achieve a high protection success rate (suppressed identification rate) with adversarial perturbations that remain imperceptible to the human visual system. Finally, we find that the transferability of adversarial examples is highly affected by the training parameters of the network with which they are generated.
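
    For reference, the Basic Iterative Method (BIM) is iterative FGSM with a per-step projection back into the epsilon-ball. The following is a minimal sketch, assuming a placeholder classifier, pixel values in [0, 1], and illustrative step sizes; it is not the paper's implementation.

```python
import torch

def bim_attack(model, image, label, eps=8/255, alpha=2/255, steps=10):
    """Basic Iterative Method (iterative FGSM), illustrative defaults.

    model : classifier returning logits
    image : (1, C, H, W) tensor in [0, 1]
    label : (1,) tensor holding the true class index
    """
    loss_fn = torch.nn.CrossEntropyLoss()
    adv = image.clone().detach()

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), label)
        grad, = torch.autograd.grad(loss, adv)

        # Ascend the loss, then project back into the eps-ball and [0, 1].
        adv = adv.detach() + alpha * grad.sign()
        adv = torch.min(torch.max(adv, image - eps), image + eps)
        adv = adv.clamp(0.0, 1.0)

    return adv
```

    The ILLC variant is the targeted counterpart: it descends the loss toward the least-likely class instead of ascending it for the true label.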

    Transferability Analysis of an Adversarial Attack on Gender Classification to Face Recognition

    Modern biometric systems base their decisions on the outcome of machine learning (ML) classifiers trained to make accurate predictions. Such classifiers are vulnerable to diverse adversarial attacks that alter the classifiers' predictions by adding a crafted perturbation. According to the ML literature, those attacks are transferable among models that perform the same task. However, models performing different tasks but sharing the same input space and the same model architecture have not previously been included in transferability scenarios. In this paper, we analyze this phenomenon for the special case of VGG16-based biometric classifiers. Concretely, we study the effect of the white-box FGSM attack on a gender classifier and compare several defense methods as countermeasures. Then, in a black-box manner, we attack a pre-trained face recognition classifier using adversarial images generated by FGSM. Our experiments show that this attack is transferable from a gender classifier to a face recognition classifier, where the two were trained independently.
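
    The transfer setup described here can be illustrated in a few lines: craft a single-step FGSM perturbation against a gender classifier, then check whether the same perturbed image also breaks face verification. The sketch below assumes placeholder gender and FR models, an illustrative epsilon, and an arbitrary 0.5 cosine threshold; none of these come from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm(gender_model, image, gender_label, eps=4/255):
    """Single-step FGSM against a (placeholder) gender classifier."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(gender_model(image), gender_label)
    grad, = torch.autograd.grad(loss, image)
    return (image + eps * grad.sign()).clamp(0.0, 1.0).detach()

def transfers_to_fr(fr_model, clean, adversarial, threshold=0.5):
    """Black-box check: does the perturbation also break FR verification?

    The attack is considered transferable if the similarity between the
    clean and adversarial templates of the same face drops below threshold.
    """
    with torch.no_grad():
        t_clean = F.normalize(fr_model(clean), dim=1)
        t_adv = F.normalize(fr_model(adversarial), dim=1)
    similarity = (t_clean * t_adv).sum(dim=1)
    return similarity < threshold
```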

    Enhancing Soft Biometric Face Template Privacy with Mutual Information-Based Image Attacks

    The features learned by deep learning-based face recognition networks pose privacy risks because they encode sensitive information that can be used to infer demographic attributes. In this paper, we propose an image-based solution that enhances the soft biometric privacy of the templates generated by face recognition networks. The method uses a reliable mutual information estimator and simulates a minimization step of the mutual information between the features and the target variable. We comprehensively assess the effectiveness of our approach on the gender classification task with two distinct evaluation settings: one evaluating the approach's ability to fool a given gender classifier, and another evaluating its ability to hinder the separability of the gender distributions. We conduct an extensive analysis considering varying levels of perturbation. We show the potential of our method both as a privacy-enhancing method that preserves verification performance and as a strong single-step adversarial attack.
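
    The abstract does not name the estimator, so the sketch below stands in a generic Donsker-Varadhan (MINE-style) lower bound and a single FGSM-like image update that lowers it. The encoder, critic, and step size are assumptions made for illustration, not the paper's components.

```python
import torch

def mi_lower_bound(critic, features, labels):
    """Donsker-Varadhan style lower bound on I(features; labels).

    critic : placeholder network scoring (feature, label) pairs,
    returning one scalar score per sample.
    """
    joint = critic(features, labels).mean()
    shuffled = labels[torch.randperm(labels.size(0))]
    marginal = torch.logsumexp(critic(features, shuffled), dim=0) \
        - torch.log(torch.tensor(float(labels.size(0))))
    return joint - marginal

def mi_perturbation_step(encoder, critic, images, labels, step=1/255):
    """One image-space step that lowers the estimated mutual information
    between the templates and the gender label (single-step update)."""
    images = images.clone().detach().requires_grad_(True)
    mi = mi_lower_bound(critic, encoder(images), labels)
    grad, = torch.autograd.grad(mi, images)
    # Move against the MI gradient and stay in the valid pixel range.
    return (images - step * grad.sign()).clamp(0.0, 1.0).detach()
```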

    Transferability analysis of adversarial attacks on gender classification to face recognition: Fixed and variable attack perturbation

    Most deep learning-based image classification models are vulnerable to adversarial attacks that introduce imperceptible changes to the input images with the goal of causing misclassification. It has been demonstrated that these attacks, targeting a specific model, are transferable among models performing the same task. However, models performing different tasks but sharing the same input space and model architecture had not previously been considered in the transferability scenarios presented in the literature. In this paper, this phenomenon is analysed in the context of VGG16-based and ResNet50-based biometric classifiers. The authors investigate the impact of two white-box attacks on a gender classifier and evaluate a defence method as a countermeasure. Then, using adversarial images generated by the attacks, a pre-trained face recognition classifier is attacked in a black-box fashion. Two verification comparison settings are employed, in which images perturbed with the same and with different magnitudes of perturbation are compared. The authors' results indicate transferability in the fixed-perturbation setting for the Fast Gradient Sign Method (FGSM) attack and non-transferability in the pixel-guided denoiser attack setting. This non-transferability supports the use of fast, training-free adversarial attacks targeting soft biometric classifiers as a means of achieving soft biometric privacy protection while maintaining facial identity as utility.
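
    The two comparison settings can be spelled out as follows: in the fixed setting both mated images are perturbed with the same magnitude, in the variable setting with different magnitudes, and the FR similarity is measured in each case. The FR model, the perturbation routine, and the epsilon values in this sketch are placeholders, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

def verification_score(fr_model, img_a, img_b):
    """Cosine similarity between two face templates (placeholder FR model)."""
    with torch.no_grad():
        a = F.normalize(fr_model(img_a), dim=1)
        b = F.normalize(fr_model(img_b), dim=1)
    return (a * b).sum(dim=1)

def compare_settings(fr_model, perturb, mated_pairs, eps_fixed=4/255,
                     eps_variable=(2/255, 8/255)):
    """Fixed vs. variable perturbation comparison (illustrative).

    perturb(image, eps) applies the chosen attack; mated_pairs is an
    iterable of (img_a, img_b) images of the same subject.
    """
    fixed, variable = [], []
    for img_a, img_b in mated_pairs:
        # Fixed setting: both images perturbed with the same magnitude.
        fixed.append(verification_score(
            fr_model, perturb(img_a, eps_fixed), perturb(img_b, eps_fixed)))
        # Variable setting: each image perturbed with a different magnitude.
        variable.append(verification_score(
            fr_model, perturb(img_a, eps_variable[0]),
            perturb(img_b, eps_variable[1])))
    return torch.cat(fixed), torch.cat(variable)
```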

    Describing Gender Equality in French Audiovisual Streams with a Deep Learning Approach

    A large-scale description of men's and women's speaking time in the media is presented, based on the analysis of about 700,000 hours of French audiovisual documents broadcast from 2001 to 2018 on 22 TV channels and 21 radio stations. Speaking time is described using the Women Speaking Time Percentage (WSTP), which is estimated using automatic speaker gender detection algorithms based on acoustic machine learning models. WSTP variations are presented across channels, years, hours, and regions. Results show that men spoke twice as long as women on TV and on radio in 2018, and that they used to speak three times longer than women in 2004. We also show that only one radio station out of the 43 channels considered is associated with a WSTP larger than 50%. Lastly, we show that WSTP is lower during high-audience time slots on private channels. This work constitutes a massive gender equality study based on the automatic analysis of audiovisual material and offers concrete perspectives for monitoring gender equality in the media. The software used for the analysis has been released as open source, and the detailed results obtained have been released as open data.
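
    WSTP reduces to the share of detected female speaking time over the total detected speaking time. A minimal sketch, assuming the gender detector outputs (start, end, gender) segments; the tuple format is an illustrative assumption, not the released software's interface.

```python
def wstp(segments):
    """Women Speaking Time Percentage from gender-labelled speech segments.

    segments : iterable of (start_seconds, end_seconds, gender) tuples,
    where gender is 'female' or 'male'.
    """
    female = sum(end - start for start, end, g in segments if g == 'female')
    male = sum(end - start for start, end, g in segments if g == 'male')
    total = female + male
    return 100.0 * female / total if total > 0 else float('nan')

# Example: 40 s of female speech and 80 s of male speech -> WSTP ~ 33.3
print(wstp([(0, 40, 'female'), (40, 120, 'male')]))
```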

    Template Recovery Attack on Homomorphically Encrypted Biometric Recognition Systems with Unprotected Threshold Comparison

    Privacy-preserving biometric template protection schemes (BTPs) protect biometric data by hiding biometric representations behind a privacy-preserving mechanism (such as homomorphic encryption) and comparing the protected templates while preserving the recognition scores obtained in the embedding space. However, revealing these scores after a biometric comparison is often tolerated for efficiency, so that the score comparison against the threshold is performed directly on cleartext data. In this work, we demonstrate that this cleartext score tolerance can lead to privacy breaches and to bypassing recognition systems, threatening such BTPs in the case of inner product-based facial template comparisons. We propose a template recovery attack that requires no training and only a few random fake templates with their corresponding scores, from which we recover the unprotected target template using the Lagrange multiplier optimization method. We evaluate the attack by verifying whether the recovered template is deemed similar to the target template by recognition systems set to accept at 0.1%, 0.01%, and 0.001% FMR. We estimate that between 60 and 165 revealed scores and fake templates are enough to recover a template with a 100% success rate. We also analyze the impact of the recovered templates by measuring the amount of gender information they contain, as well as their resemblance to the reconstructed images of their target templates.
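
    When the protected comparison is an inner product and the scores are revealed in cleartext, each (fake template, score) pair is one linear measurement of the target template. The sketch below recovers the template by norm-constrained least squares, with the Lagrange multiplier found by bisection; it is a simplified illustration under those assumptions, not the authors' exact procedure, and the template dimensionality, normalisation, and FMR thresholds of a real system are left out.

```python
import numpy as np

def recover_template(fake_templates, scores, target_norm=1.0):
    """Norm-constrained least-squares recovery of an unprotected template.

    Solves  min_t ||F t - s||^2  subject to  ||t|| = target_norm  via the
    Lagrange-multiplier condition t(lam) = (F^T F + lam I)^{-1} F^T s,
    with lam found by bisection.
    """
    F = np.asarray(fake_templates, dtype=float)   # (n, d) fake templates
    s = np.asarray(scores, dtype=float)           # (n,) revealed scores
    FtF, Fts = F.T @ F, F.T @ s

    def t_of(lam):
        return np.linalg.solve(FtF + lam * np.eye(F.shape[1]), Fts)

    # ||t(lam)|| decreases as lam grows; bisect over lam > 0 until the
    # norm constraint is met (the small lower bound avoids a singular
    # solve when fewer scores than template dimensions are available).
    lo, hi = 1e-8, 1.0
    while np.linalg.norm(t_of(hi)) > target_norm:
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(t_of(mid)) > target_norm:
            lo = mid
        else:
            hi = mid
    return t_of(hi)
```

    The recovered vector would then be compared to the target template under the recognition system's own FMR thresholds, as described in the abstract.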