
    Epistemic Uncertainty-Weighted Loss for Visual Bias Mitigation

    Deep neural networks are highly susceptible to learning biases in visual data. While various methods have been proposed to mitigate such bias, most require explicit knowledge of the biases present in the training data. We argue for exploring methods that are completely agnostic to the presence of bias yet are still capable of identifying and mitigating it. Furthermore, we propose using Bayesian neural networks with an epistemic uncertainty-weighted loss function to dynamically identify potential bias in individual training samples and to weight them accordingly during training. We find a positive correlation between samples subject to bias and higher epistemic uncertainties. Finally, we show that the method has potential to mitigate visual bias on a bias benchmark dataset and on a real-world face detection problem, and we discuss the merits and weaknesses of our approach.
    Comment: To be published in the 2022 IEEE CVPR Workshop on Fair, Data Efficient and Trusted Computer Vision
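    The abstract does not give implementation details; the following is a minimal sketch (not the authors' code) of the general idea, assuming epistemic uncertainty is approximated with MC-dropout and the per-sample cross-entropy is re-weighted by that uncertainty. The weighting rule and function names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def mc_dropout_uncertainty(model, x, n_passes=10):
    """Approximate epistemic uncertainty as the variance of softmax outputs
    across stochastic forward passes with dropout kept active."""
    model.train()  # keep dropout layers sampling
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_passes)])
    # Variance across passes, averaged over classes -> one scalar per sample.
    return probs.var(dim=0).mean(dim=-1)

def uncertainty_weighted_loss(model, x, y, n_passes=10):
    """Hypothetical weighting rule: samples with higher epistemic uncertainty
    (more likely bias-affected) receive larger loss weights."""
    u = mc_dropout_uncertainty(model, x, n_passes)
    w = 1.0 + u / (u.mean() + 1e-8)  # one of several plausible weighting choices
    per_sample = F.cross_entropy(model(x), y, reduction="none")
    return (w.detach() * per_sample).mean()
```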

    MixFairFace: Towards Ultimate Fairness via MixFair Adapter in Face Recognition

    Although significant progress has been made in face recognition, demographic bias still exists in face recognition systems: the recognition performance for certain demographic groups is often lower than for others. In this paper, we propose the MixFairFace framework to improve fairness in face recognition models. First, we argue that the commonly used attribute-based fairness metric is not appropriate for face recognition; a face recognition system can only be considered fair when every person achieves similar performance. Hence, we propose a new evaluation protocol to fairly evaluate the fairness performance of different approaches. Unlike previous approaches that require sensitive attribute labels, such as race and gender, to reduce demographic bias, we aim to address the identity bias in face representations, i.e., the performance inconsistency between different identities, without the need for sensitive attribute labels. To this end, we propose the MixFair Adapter to determine and reduce the identity bias of training samples. Our extensive experiments demonstrate that our MixFairFace approach achieves state-of-the-art fairness performance on all benchmark datasets.
    Comment: Accepted in AAAI-23; Code: https://github.com/fuenwang/MixFairFac
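    As a rough illustration of the identity-level (rather than attribute-level) view of fairness described above, the sketch below computes a per-identity verification rate and reports its spread. The specific statistic (standard deviation of per-identity TPR at a fixed threshold) is an assumption for illustration, not the paper's exact evaluation protocol.

```python
import numpy as np

def per_identity_tpr(genuine_scores, identities, threshold):
    """One TPR per identity: the fraction of that identity's genuine pairs
    accepted at the given similarity threshold."""
    tprs = np.array([
        (genuine_scores[identities == ident] >= threshold).mean()
        for ident in np.unique(identities)
    ])
    # A system that is fair in this identity-level sense has a small spread.
    return tprs.mean(), tprs.std()
```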

    Gradient Attention Balance Network: Mitigating Face Recognition Racial Bias via Gradient Attention

    Although face recognition has made impressive progress in recent years, the racial bias of recognition systems is often overlooked in the pursuit of high accuracy. Previous work found that face recognition networks focus on different facial regions for different races, and that the sensitive regions of darker-skinned people are much smaller. Based on this finding, we propose a new de-biasing method based on gradient attention, called the Gradient Attention Balance Network (GABN). Specifically, we use the gradient attention map (GAM) of the face recognition network to track the sensitive facial regions and make the GAMs of different races consistent through adversarial learning. This mitigates bias by making the network focus on similar facial regions across races. In addition, we use masks to erase the Top-N sensitive facial regions, forcing the network to allocate its attention to a larger facial region. This expands the sensitive region of darker-skinned people and further reduces the gap between the GAMs of darker-skinned people and those of Caucasians. Extensive experiments show that GABN successfully mitigates racial bias in face recognition and learns more balanced performance for people of different races.
    Comment: Accepted by CVPR 2023 workshop
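    The following sketch illustrates, under assumptions, two ingredients the abstract mentions: a gradient-based attention map over the input face, and erasing the Top-N most sensitive patches so the network must spread its attention over a larger region. It is not the GABN implementation; the saliency formulation and patch size are hypothetical.

```python
import torch
import torch.nn.functional as F

def gradient_attention_map(model, x, patch=16):
    """Saliency-style attention: |d score / d input|, pooled to a patch grid."""
    x = x.clone().requires_grad_(True)
    score = model(x).max(dim=-1).values.sum()
    grad, = torch.autograd.grad(score, x)
    gam = grad.abs().mean(dim=1, keepdim=True)   # (B, 1, H, W)
    return F.avg_pool2d(gam, patch)              # (B, 1, H/patch, W/patch)

def erase_top_n_patches(x, gam, n=4, patch=16):
    """Zero out the N patches with the highest attention before re-training."""
    b, _, gh, gw = gam.shape
    flat = gam.flatten(1)
    idx = flat.topk(n, dim=1).indices
    mask = torch.ones_like(flat)
    mask.scatter_(1, idx, 0.0)
    mask = mask.view(b, 1, gh, gw)
    mask = F.interpolate(mask, scale_factor=patch, mode="nearest")
    return x * mask
```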

    Post-Comparison Mitigation of Demographic Bias in Face Recognition Using Fair Score Normalization

    Current face recognition systems achieve strong performance on several benchmark tests. Despite this progress, recent works have shown that these systems are strongly biased against certain demographic sub-groups. Consequently, an easily integrable solution is needed to reduce the discriminatory effect of these biased systems. Previous work mainly focused on learning less biased face representations, which comes at the cost of strongly degraded overall recognition performance. In this work, we propose a novel unsupervised fair score normalization approach that is specifically designed to reduce the effect of bias in face recognition and subsequently leads to a significant overall performance boost. Our hypothesis builds on the notion of individual fairness: we design a normalization approach that treats similar individuals similarly. Experiments were conducted on three publicly available datasets captured under controlled and in-the-wild conditions. Results demonstrate that our solution reduces demographic biases, e.g. by up to 82.7% when gender is considered. Moreover, it mitigates bias more consistently than existing works. In contrast to previous works, our fair normalization approach enhances overall performance by up to 53.2% at a false match rate of 0.001 and up to 82.9% at a false match rate of 0.00001. Additionally, it is easily integrable into existing recognition systems and is not limited to face biometrics.
    Comment: Accepted in Pattern Recognition Letters
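    The sketch below conveys the general flavor of unsupervised score normalization without attribute labels, assuming embeddings are clustered and comparison scores are shifted by a per-cluster offset derived from impostor-score statistics. The cluster count, offset rule, and function names are assumptions, not the exact method proposed in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_cluster_offsets(embeddings, impostor_scores, k=8):
    """Cluster embeddings without attribute labels and compute, per cluster,
    a score offset relative to the global impostor-score mean."""
    cluster_ids = KMeans(n_clusters=k, n_init=10).fit_predict(embeddings)
    global_mean = impostor_scores.mean()
    offsets = {c: global_mean - impostor_scores[cluster_ids == c].mean()
               for c in np.unique(cluster_ids)}
    return cluster_ids, offsets

def normalize_score(score, cluster_a, cluster_b, offsets):
    """Shift a comparison score by the average offset of the two samples'
    clusters, so every cluster operates at a comparable threshold."""
    return score + 0.5 * (offsets[cluster_a] + offsets[cluster_b])
```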

    Proposing a Roadmap for Designing Non-Discriminatory ML Services: Preliminary Results from a Design Science Research Project

    Artificial Intelligence (AI) and Machine Learning (ML) algorithms are being developed with ever higher accuracy. However, the use of ML also has its dark side: in the recent past, examples have repeatedly emerged of ML systems learning discriminatory, even racist or sexist, patterns and acting accordingly. As ML systems become an integral part of both private and economic spheres of life, academia and practice must address the question of how non-discriminatory ML algorithms can be developed to benefit everyone. This is where our research-in-progress paper contributes. Using a real-world smart-living case study, we investigated discrimination in terms of ethnicity and gender within state-of-the-art pre-trained ML models for face recognition and quantified it using an F1 metric. Building on these empirical findings as well as on the state of the scientific literature, we propose a roadmap for further research on the development of non-discriminatory ML services.
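    A small sketch of how discrimination can be quantified with an F1 metric in the spirit the study describes: compute F1 per demographic group and report the largest gap. The grouping variables and metric aggregation are illustrative assumptions, not the study's exact procedure.

```python
import numpy as np
from sklearn.metrics import f1_score

def groupwise_f1(y_true, y_pred, groups):
    """Return per-group F1 scores and the maximum disparity between groups."""
    scores = {g: f1_score(y_true[groups == g], y_pred[groups == g])
              for g in np.unique(groups)}
    disparity = max(scores.values()) - min(scores.values())
    return scores, disparity
```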