
    Gender Artifacts in Visual Datasets

    Gender biases are known to exist within large-scale visual datasets and can be reflected or even amplified in downstream models. Many prior works have proposed methods for mitigating gender biases, often by attempting to remove gender expression information from images. To understand the feasibility and practicality of these approaches, we investigate what gender artifacts exist within large-scale visual datasets. We define a gender artifact as a visual cue that is correlated with gender, focusing specifically on those cues that are learnable by a modern image classifier and have an interpretable human corollary. Through our analyses, we find that gender artifacts are ubiquitous in the COCO and OpenImages datasets, occurring everywhere from low-level information (e.g., the mean value of the color channels) to the higher-level composition of the image (e.g., the pose and location of people). Given the prevalence of gender artifacts, we claim that attempts to remove gender artifacts from such datasets are largely infeasible. Instead, the responsibility lies with researchers and practitioners to be aware that the distribution of images within datasets is highly gendered, and hence to develop methods which are robust to these distributional shifts across groups.
    Comment: ICCV 202
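
    The abstract's claim that even low-level cues are learnable can be probed directly. The following is a minimal, hypothetical Python sketch, not the authors' code: it fits a logistic-regression classifier on per-image mean RGB values, and the images and labels inputs are assumed stand-ins for a COCO-style dataset with binary gender annotations.

        # Hypothetical probe for low-level gender artifacts: can a classifier
        # predict the annotated gender label from mean color-channel values alone?
        # Assumes `images` is a list of HxWx3 uint8 arrays and `labels` a matching
        # list of binary gender annotations (stand-ins, not a real release).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # One 3-dimensional feature vector per image: the mean of each color channel.
        features = np.array([img.reshape(-1, 3).mean(axis=0) for img in images])

        X_train, X_test, y_train, y_test = train_test_split(
            features, np.array(labels), test_size=0.2, random_state=0)

        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])

        # An AUC well above 0.5 would indicate that mean channel values alone
        # carry gender-correlated signal, consistent with the paper's finding.
        print(f"mean-RGB probe AUC: {auc:.3f}")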

    Balancing Biases and Preserving Privacy on Balanced Faces in the Wild

    Demographic biases exist in current models used for facial recognition (FR). Our Balanced Faces in the Wild (BFW) dataset serves as a proxy to measure bias across ethnicity and gender subgroups, allowing one to characterize FR performance per subgroup. We show that results are non-optimal when a single score threshold determines whether sample pairs are genuine or imposters. Furthermore, within subgroups, performance often varies significantly from the global average. Thus, specific error rates only hold for populations matching the validation data. We mitigate the imbalanced performance using a novel domain adaptation learning scheme on the facial features extracted from state-of-the-art neural networks, boosting the average performance. The proposed method also preserves identity information while removing demographic knowledge. The removal of demographic knowledge prevents potential biases from being injected into decision-making and protects privacy, since demographic information is no longer available. We explore the proposed method and show that subgroup classifiers can no longer learn from the features projected using our domain adaptation scheme. For source code and data, see https://github.com/visionjo/facerec-bias-bfw.
    Comment: arXiv admin note: text overlap with arXiv:2102.0894
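
    The single-threshold issue raised in the abstract can be made concrete: with one global threshold on pair-similarity scores, false-match and false-non-match rates can differ widely across subgroups. The sketch below is illustrative only; it assumes precomputed, L2-normalized embedding pairs tagged with a subgroup label, stand-ins rather than the BFW release itself.

        # Illustrative only: one global similarity threshold can yield different
        # error rates per subgroup. Assumes `pairs` is a list of tuples
        # (emb_a, emb_b, same_identity, subgroup) with L2-normalized embeddings;
        # these inputs are stand-ins, not the BFW data.
        from collections import defaultdict
        import numpy as np

        GLOBAL_THRESHOLD = 0.5  # a single threshold applied to every subgroup

        scores_by_group = defaultdict(list)
        for emb_a, emb_b, same_identity, subgroup in pairs:
            score = float(np.dot(emb_a, emb_b))  # cosine similarity of unit vectors
            scores_by_group[subgroup].append((score, same_identity))

        for subgroup, scored in scores_by_group.items():
            imposter_scores = [s for s, same in scored if not same]
            genuine_scores = [s for s, same in scored if same]
            # False-match rate: imposter pairs accepted by the global threshold.
            fmr = np.mean([s >= GLOBAL_THRESHOLD for s in imposter_scores])
            # False-non-match rate: genuine pairs rejected by it.
            fnmr = np.mean([s < GLOBAL_THRESHOLD for s in genuine_scores])
            print(f"{subgroup}: FMR={fmr:.3f}, FNMR={fnmr:.3f}")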

    AAAI Workshop on Artificial Intelligence with Biased or Scarce Data (AIBSD)

    This book is a collection of the accepted papers presented at the Workshop on Artificial Intelligence with Biased or Scarce Data (AIBSD), held in conjunction with the 36th AAAI Conference on Artificial Intelligence in 2022. During AIBSD 2022, attendees addressed existing issues of data bias and scarcity in Artificial Intelligence and discussed potential solutions for real-world scenarios. A set of papers presented at AIBSD 2022 was selected for further publication and is included in this book.

    Fairness Testing: A Comprehensive Survey and Analysis of Trends

    Unfair behaviors of Machine Learning (ML) software have garnered increasing attention and concern among software engineers. To tackle this issue, extensive research has been dedicated to fairness testing of ML software, and this paper offers a comprehensive survey of existing studies in this field. We collect 100 papers and organize them based on the testing workflow (i.e., how to test) and testing components (i.e., what to test). Furthermore, we analyze the research focus, trends, and promising directions in the realm of fairness testing. We also identify widely adopted datasets and open-source tools for fairness testing.
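
    To give a concrete flavor of what fairness testing looks like in practice, one widely adopted style is a metamorphic test for individual discrimination: flip only a protected attribute in an input and check that the model's decision is unchanged. The sketch below is a hypothetical illustration, not a method taken from the survey; the model interface, the feature layout, and the attribute values are assumptions.

        # Hypothetical metamorphic fairness test: flipping only the protected
        # attribute should not change the model's decision. The model.predict
        # interface and dict-based feature layout are illustrative assumptions.
        import copy

        PROTECTED_KEY = "gender"               # protected attribute under test
        PROTECTED_VALUES = ("male", "female")  # assumed value set

        def find_discriminatory_inputs(model, test_inputs):
            """Return (original, flipped) pairs whose predictions differ when
            only the protected attribute is changed."""
            failures = []
            for x in test_inputs:
                baseline = model.predict(x)
                for value in PROTECTED_VALUES:
                    if value == x[PROTECTED_KEY]:
                        continue
                    flipped = copy.deepcopy(x)
                    flipped[PROTECTED_KEY] = value
                    if model.predict(flipped) != baseline:
                        failures.append((x, flipped))
            return failures

    In a real test suite, test_inputs would come from a test-input generation strategy, which is exactly the kind of component such surveys catalog under "how to test".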

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al, 2003, Vision Research 43, 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.