
    Gradient Attention Balance Network: Mitigating Face Recognition Racial Bias via Gradient Attention

    Although face recognition has made impressive progress in recent years, the racial bias of recognition systems is often overlooked in the pursuit of high accuracy. Previous work found that face recognition networks focus on different facial regions for different races, and that the sensitive regions of darker-skinned people are much smaller. Based on this discovery, we propose a new de-bias method based on gradient attention, called the Gradient Attention Balance Network (GABN). Specifically, we use the gradient attention map (GAM) of the face recognition network to track the sensitive facial regions and make the GAMs of different races consistent through adversarial learning. This mitigates the bias by making the network focus on similar facial regions. In addition, we use masks to erase the Top-N sensitive facial regions, forcing the network to allocate its attention to a larger facial area. This expands the sensitive region of darker-skinned people and further reduces the gap between their GAMs and those of Caucasians. Extensive experiments show that GABN successfully mitigates racial bias in face recognition and yields more balanced performance across races. Comment: Accepted by CVPR 2023 workshop.
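
    The gradient attention map at the core of this approach can be illustrated with a short, hypothetical PyTorch sketch. The class below computes a Grad-CAM-style attention map from an intermediate feature map, and the helper erases the Top-N most sensitive patches; the hooked layer, patch size, and masking scheme are assumptions for illustration, not the authors' released code, and the adversarial race discriminator that aligns GAMs across races is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradientAttention(nn.Module):
    """Grad-CAM-style gradient attention map (GAM) over an intermediate feature map."""

    def __init__(self, backbone: nn.Module, feat_layer: nn.Module):
        super().__init__()
        self.backbone = backbone
        self.activations = None
        feat_layer.register_forward_hook(self._save_act)  # capture (B, C, H, W) features

    def _save_act(self, module, inp, out):
        self.activations = out

    def forward(self, images, labels):
        logits = self.backbone(images)                         # identity logits
        score = logits.gather(1, labels.view(-1, 1)).sum()     # score of the true identities
        grads = torch.autograd.grad(score, self.activations, create_graph=True)[0]
        weights = grads.mean(dim=(2, 3), keepdim=True)         # channel weights (GAP of grads)
        gam = F.relu((weights * self.activations).sum(dim=1))  # (B, H, W) attention map
        return logits, gam


def mask_top_n_regions(images, gam, n=3, patch=16):
    """Erase the Top-N most sensitive patches so the network must widen its attention."""
    B, _, H, W = images.shape
    heat = F.interpolate(gam.detach().unsqueeze(1), size=(H, W),
                         mode="bilinear", align_corners=False)
    coarse = F.avg_pool2d(heat, patch)                         # per-patch sensitivity
    masked = images.clone()
    for b in range(B):
        for i in coarse[b, 0].flatten().topk(n).indices.tolist():
            r = (i // coarse.shape[-1]) * patch
            c = (i % coarse.shape[-1]) * patch
            masked[b, :, r:r + patch, c:c + patch] = 0.0
    return masked
```

    In the paper's framing, the GAMs of different races would additionally be fed to an adversarially trained discriminator so that the maps become indistinguishable across races; that component is left out here for brevity.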

    Balancing Biases and Preserving Privacy on Balanced Faces in the Wild

    Demographic biases exist in current models used for facial recognition (FR). Our Balanced Faces in the Wild (BFW) dataset is a proxy to measure bias across ethnicity and gender subgroups, allowing one to characterize FR performance per subgroup. We show that results are non-optimal when a single score threshold determines whether sample pairs are genuine or imposters. Furthermore, within subgroups, performance often varies significantly from the global average. Thus, specific error rates only hold for populations matching the validation data. We mitigate the imbalanced performance using a novel domain adaptation learning scheme on facial features extracted from state-of-the-art neural networks, boosting the average performance. The proposed method also preserves identity information while removing demographic knowledge. The removal of demographic knowledge prevents potential biases from being injected into decision-making and protects privacy, since demographic information is no longer available. We explore the proposed method and show that subgroup classifiers can no longer learn from the features projected using our domain adaptation scheme. For source code and data, see https://github.com/visionjo/facerec-bias-bfw. Comment: arXiv admin note: text overlap with arXiv:2102.0894
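
    One common way to realize "identity preserved, demographics removed" on pre-extracted embeddings is an adversarially trained projection head. The sketch below uses a gradient reversal layer and is only a hedged illustration of that general idea; the dimensions, heads, and losses are assumptions, not the paper's scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, sign-flipped gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DebiasProjector(nn.Module):
    """Projects frozen FR embeddings so identity survives but subgroup labels do not."""
    def __init__(self, dim=512, n_ids=1000, n_groups=8, lambd=1.0):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.id_head = nn.Linear(dim, n_ids)        # keeps identity information
        self.group_head = nn.Linear(dim, n_groups)  # adversary: ethnicity/gender classifier
        self.lambd = lambd

    def forward(self, feats):
        z = self.proj(feats)
        return z, self.id_head(z), self.group_head(GradReverse.apply(z, self.lambd))

# One illustrative training step: the identity loss pulls information into z,
# while the reversed subgroup loss pushes demographic information out of it.
model = DebiasProjector()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
feats = torch.randn(32, 512)                 # embeddings from a frozen FR backbone
ids = torch.randint(0, 1000, (32,))
groups = torch.randint(0, 8, (32,))
z, id_logits, grp_logits = model(feats)
loss = F.cross_entropy(id_logits, ids) + F.cross_entropy(grp_logits, groups)
opt.zero_grad()
loss.backward()
opt.step()
```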

    Improving fairness in machine learning systems: What do industry practitioners need?

    The potential for machine learning (ML) systems to amplify social inequities and unfairness is receiving increasing popular and academic attention. A surge of recent work has focused on the development of algorithmic tools to assess and mitigate such unfairness. If these tools are to have a positive impact on industry practice, however, it is crucial that their design be informed by an understanding of real-world needs. Through 35 semi-structured interviews and an anonymous survey of 267 ML practitioners, we conduct the first systematic investigation of commercial product teams' challenges and needs for support in developing fairer ML systems. We identify areas of alignment and disconnect between the challenges faced by industry practitioners and solutions proposed in the fair ML research literature. Based on these findings, we highlight directions for future ML and HCI research that will better address industry practitioners' needs. Comment: To appear in the 2019 ACM CHI Conference on Human Factors in Computing Systems (CHI 2019).

    Survey of Social Bias in Vision-Language Models

    In recent years, the rapid advancement of machine learning (ML) models, particularly transformer-based pre-trained models, has revolutionized Natural Language Processing (NLP) and Computer Vision (CV). However, researchers have discovered that these models can inadvertently capture and reinforce social biases present in their training datasets, leading to potential social harms such as uneven resource allocation and unfair representation of specific social groups. Addressing these biases and ensuring fairness in artificial intelligence (AI) systems has become a critical concern in the ML community. The recent introduction of pre-trained vision-and-language (VL) models in the emerging multimodal field demands attention to the potential social biases present in these models as well. Although VL models are susceptible to social bias, our understanding of it remains limited compared to the extensive work on bias in NLP and CV. This survey aims to provide researchers with a high-level insight into the similarities and differences of social bias studies in pre-trained models across NLP, CV, and VL. By examining these perspectives, the survey aims to offer valuable guidelines on how to approach and mitigate social bias in both unimodal and multimodal settings. The findings and recommendations presented here can benefit the ML community, fostering the development of fairer and less biased AI models in various applications and research endeavors.

    KFC: Kinship Verification with Fair Contrastive Loss and Multi-Task Learning

    Kinship verification is an emerging task in computer vision with multiple potential applications. However, there is no sufficiently large kinship dataset for training a representative and robust model, which limits achievable performance. Moreover, face verification is known to exhibit bias, which previous kinship verification works have not addressed and which can sometimes lead to serious issues. We therefore first combine existing kinship datasets and label each identity with the correct race in order to take race information into consideration, providing a larger and more complete dataset called the KinRace dataset. Secondly, we propose a multi-task learning model with an attention module to enhance accuracy, surpassing state-of-the-art performance. Lastly, our fairness-aware contrastive loss function with adversarial learning greatly mitigates racial bias. We introduce a debias term into the traditional contrastive loss and apply gradient reversal in the race classification task, combining two fairness methods to alleviate bias. Exhaustive experimental evaluation demonstrates the effectiveness and superior performance of the proposed KFC in terms of both standard deviation and accuracy. Comment: Accepted by BMVC 202
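
    The abstract does not spell out the exact form of the debias term, so the following is only a hedged sketch: a standard contrastive loss plus a penalty on the spread of the mean loss across racial groups. The function name, margin, and weighting are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def fair_contrastive_loss(emb1, emb2, kin_label, race, margin=1.0, fair_weight=0.5):
    """Contrastive loss plus a hedged 'debias' term that penalizes the spread of the
    mean loss across racial groups (illustrative only, not the paper's exact form)."""
    d = F.pairwise_distance(emb1, emb2)
    kin_label = kin_label.float()                  # 1 = kin pair, 0 = non-kin pair
    per_pair = kin_label * d.pow(2) + (1 - kin_label) * F.relu(margin - d).pow(2)
    base = per_pair.mean()
    group_means = torch.stack([per_pair[race == g].mean() for g in race.unique()])
    debias = group_means.std() if group_means.numel() > 1 else torch.zeros((), device=d.device)
    return base + fair_weight * debias

# Usage with embeddings from a (hypothetical) siamese kinship network:
emb1, emb2 = torch.randn(16, 128), torch.randn(16, 128)
kin = torch.randint(0, 2, (16,))
race = torch.randint(0, 4, (16,))
loss = fair_contrastive_loss(emb1, emb2, kin, race)
```

    The gradient-reversal branch on the race classification task would sit alongside this loss, pushing the embeddings to be uninformative about race while the debias term equalizes the loss across groups.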

    MixFairFace: Towards Ultimate Fairness via MixFair Adapter in Face Recognition

    Although significant progress has been made in face recognition, demographic bias still exists in face recognition systems. For instance, recognition performance for a certain demographic group is often lower than for others. In this paper, we propose the MixFairFace framework to improve the fairness of face recognition models. First of all, we argue that the commonly used attribute-based fairness metric is not appropriate for face recognition: a face recognition system can only be considered fair when every person achieves comparable performance. Hence, we propose a new evaluation protocol to fairly compare the fairness of different approaches. Unlike previous approaches that require sensitive attribute labels such as race and gender to reduce demographic bias, we aim to address the identity bias in face representations, i.e., the performance inconsistency between different identities, without the need for sensitive attribute labels. To this end, we propose the MixFair Adapter to determine and reduce the identity bias of training samples. Our extensive experiments demonstrate that MixFairFace achieves state-of-the-art fairness performance on all benchmark datasets. Comment: Accepted in AAAI-23; Code: https://github.com/fuenwang/MixFairFac
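
    The intuition behind an identity-level fairness protocol, measuring how consistent verification performance is across individual identities rather than across attribute groups, can be sketched as follows. This is an illustrative metric, not the paper's exact protocol, and the inputs (pair scores, genuine/imposter labels, anchor identities, a fixed threshold) are assumptions.

```python
import numpy as np

def identity_level_fairness(scores, labels, identities, threshold):
    """Hedged sketch: verification accuracy per identity and its spread across identities.
    A lower spread means more consistent (fairer) treatment of individual identities."""
    scores, labels, identities = map(np.asarray, (scores, labels, identities))
    preds = scores >= threshold                   # similarity above threshold -> genuine
    accs = np.array([
        (preds[identities == ident] == labels[identities == ident]).mean()
        for ident in np.unique(identities)
    ])
    return {"mean_acc": float(accs.mean()), "std_acc": float(accs.std())}

# scores: pair similarity scores, labels: 1 genuine / 0 imposter,
# identities: identity of each pair's anchor image (all hypothetical inputs).
report = identity_level_fairness(
    scores=[0.9, 0.2, 0.7, 0.4], labels=[1, 0, 1, 0], identities=[0, 0, 1, 1], threshold=0.5)
```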

    Partition-and-Debias: Agnostic Biases Mitigation via A Mixture of Biases-Specific Experts

    Bias mitigation in image classification has been widely researched, and existing methods have yielded notable results. However, most of these methods implicitly assume that a given image contains only one type of known or unknown bias, failing to capture the complexity of real-world biases. We introduce a more challenging scenario, agnostic biases mitigation, which aims to remove bias even when neither the type of bias nor the number of bias types in the dataset is known. To address this difficult task, we present the Partition-and-Debias (PnD) method, which uses a mixture of bias-specific experts to implicitly divide the bias space into multiple subspaces and a gating module to find a consensus among experts and achieve debiased classification. Experiments on both public and constructed benchmarks demonstrate the efficacy of PnD. Code is available at: https://github.com/Jiaxuan-Li/PnD. Comment: ICCV 202
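
    The expert-plus-gating structure described here can be sketched in a few lines of PyTorch; this is a minimal illustration of a mixture of bias-specific experts with a gating module, with the expert count, widths, and backbone as assumptions rather than the released PnD code.

```python
import torch
import torch.nn as nn

class PartitionAndDebiasHead(nn.Module):
    """Hedged sketch of a mixture of bias-specific experts with a gating module
    that forms a consensus prediction."""
    def __init__(self, feat_dim=512, n_classes=10, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, n_classes))
            for _ in range(n_experts)
        ])
        self.gate = nn.Sequential(nn.Linear(feat_dim, n_experts), nn.Softmax(dim=-1))

    def forward(self, feats):
        expert_logits = torch.stack([e(feats) for e in self.experts], dim=1)  # (B, E, C)
        weights = self.gate(feats).unsqueeze(-1)                              # (B, E, 1)
        return (weights * expert_logits).sum(dim=1)                           # consensus logits

head = PartitionAndDebiasHead()
logits = head(torch.randn(8, 512))   # features from any frozen or jointly trained backbone
```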

    Bias in Deep Learning and Applications to Face Analysis

    Deep learning has fostered progress in the field of face analysis, resulting in the integration of these models into multiple aspects of society. Even though the majority of research has focused on optimizing standard evaluation metrics, recent work has exposed the bias of such algorithms as well as the dangers of their unaccountable use. In this thesis, we explore the bias of deep learning models in both the discriminative and the generative setting.

    We begin by investigating the bias of face analysis models with regard to different demographics. To this end, we collect KANFace, a large-scale video and image dataset of faces captured "in-the-wild". The rich set of annotations allows us to expose the demographic bias of deep learning models, which we mitigate by using adversarial learning to debias the deep representations. Furthermore, we explore neural augmentation as a strategy for training fair classifiers. We propose a style-based multi-attribute transfer framework that can synthesize photo-realistic faces of underrepresented demographics. This is achieved by introducing a multi-attribute extension to Adaptive Instance Normalisation that captures the multiplicative interactions between the representations of different attributes. Focusing on bias in gender recognition, we showcase the efficacy of the framework in training classifiers that are fairer than generative and fairness-aware baselines.

    In the second part, we focus on bias in deep generative models. In particular, we start by studying the generalization of generative models to images of unseen attribute combinations. To this end, we extend the conditional Variational Autoencoder with a multilinear conditioning framework. The proposed method is able to synthesize unseen attribute combinations by modeling the multiplicative interactions between attributes. Lastly, in order to control protected attributes, we investigate controlled image generation without training on a labelled dataset. We leverage pre-trained Generative Adversarial Networks trained in an unsupervised fashion and exploit the clustering that occurs in the representation space of intermediate layers of the generator. We show that these clusters capture semantic attribute information, and we condition image synthesis on the cluster assignment using Implicit Maximum Likelihood Estimation.
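
    The "multi-attribute extension to Adaptive Instance Normalisation" can be pictured as an AdaIN layer whose scale and shift come from a multiplicative combination of per-attribute embeddings. The sketch below is a hedged illustration of that idea only; the layer structure, dimensions, and the elementwise-product interaction are assumptions, not the thesis' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiAttributeAdaIN(nn.Module):
    """Hedged sketch of a multi-attribute AdaIN: modulation parameters are produced
    from a multiplicative (elementwise-product) interaction of attribute embeddings."""
    def __init__(self, n_attrs, attr_dim, channels):
        super().__init__()
        self.embed = nn.ModuleList([nn.Linear(attr_dim, channels) for _ in range(n_attrs)])
        self.to_scale = nn.Linear(channels, channels)
        self.to_shift = nn.Linear(channels, channels)

    def forward(self, x, attrs):
        # x: (B, C, H, W) feature map; attrs: list of (B, attr_dim) attribute codes
        h = torch.ones(x.size(0), x.size(1), device=x.device)
        for emb, a in zip(self.embed, attrs):
            h = h * emb(a)                               # multiplicative interaction
        scale = self.to_scale(h).unsqueeze(-1).unsqueeze(-1)
        shift = self.to_shift(h).unsqueeze(-1).unsqueeze(-1)
        return (1 + scale) * F.instance_norm(x) + shift  # modulate normalized features

layer = MultiAttributeAdaIN(n_attrs=2, attr_dim=16, channels=64)
out = layer(torch.randn(4, 64, 32, 32), [torch.randn(4, 16), torch.randn(4, 16)])
```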