86 research outputs found

    On the Apparent Conflict Between Individual and Group Fairness

    A distinction has been drawn in fair machine learning research between `group' and `individual' fairness measures. Many technical research papers assume that both are important but conflicting, and propose ways to minimise the trade-offs between them. This paper argues that this apparent conflict is based on a misconception. It draws on theoretical discussions from within fair machine learning research, and from political and legal philosophy, to argue that individual and group fairness are not fundamentally in conflict. First, it outlines accounts of egalitarian fairness which encompass plausible motivations for both group and individual fairness, thereby suggesting that there need be no conflict in principle. Second, it considers the concept of individual justice from legal philosophy and jurisprudence, which seems similar to, but actually contradicts, the notion of individual fairness as proposed in the fair machine learning literature. The conclusion is that the apparent conflict between individual and group fairness is an artifact of the blunt application of fairness measures rather than a matter of conflicting principles. In practice, the conflict may be resolved by a nuanced consideration of the sources of `unfairness' in a particular deployment context, and the carefully justified application of measures to mitigate it.
    Comment: Conference on Fairness, Accountability, and Transparency (FAT* '20), January 27--30, 2020, Barcelona, Spain
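    For orientation, the measures the abstract refers to are usually formalised along the following lines (standard textbook definitions, not taken from this paper): group fairness as statistical parity across protected groups, and individual fairness as the Lipschitz condition of Dwork et al.

```latex
% Group fairness (statistical parity) with respect to a protected attribute A:
\Pr[\hat{Y} = 1 \mid A = a] = \Pr[\hat{Y} = 1 \mid A = b] \quad \text{for all groups } a, b.

% Individual fairness: similar individuals receive similar distributions over outcomes,
D\big(M(x), M(y)\big) \le d(x, y),
% where d is a task-specific similarity metric and D a distance on output distributions.
```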

    BiasRV: Uncovering Biased Sentiment Predictions at Runtime

    Sentiment analysis (SA) systems, though widely applied in many domains, have been demonstrated to produce biased results. Some research has been done on automatically generating test cases to reveal unfairness in SA systems, but the community still lacks tools that can monitor and uncover biased predictions at runtime. This paper fills this gap by proposing BiasRV, the first tool to raise an alarm when a deployed SA system makes a biased prediction on a given input text. To implement this feature, BiasRV dynamically extracts a template from an input text and from the template generates gender-discriminatory mutants (semantically equivalent texts that differ only in gender information). Based on popular metrics used to evaluate the overall fairness of an SA system, we define a distributional fairness property for an individual prediction of an SA system. This property requires that, for one piece of text, mutants from different gender classes should be treated similarly as a whole. Verifying the distributional fairness property imposes considerable overhead on the running system. To reduce this overhead, BiasRV adopts a two-step heuristic: (1) sample several mutants from each gender and check whether the system predicts the same sentiment for all of them; (2) check distributional fairness only when the sampled mutants yield conflicting results. Experiments show that, compared to directly checking the distributional fairness property for each input text, our two-step heuristic decreases the overhead of analyzing mutants by 73.81% while missing only 6.7% of biased predictions. Moreover, BiasRV can be used conveniently without knowing the implementation of SA systems. Future researchers can easily extend BiasRV to detect more types of bias, e.g., race and occupation.
    Comment: Accepted to appear in the Demonstrations track of ESEC/FSE 2021
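    A minimal sketch of the two-step heuristic described above, in Python. The mutant generator, the name lists, the toy predict_sentiment stub, and the 0.1 decision threshold are illustrative assumptions; BiasRV's actual template extraction and fairness metric are more involved.

```python
import random

MALE_NAMES = ["John", "Michael", "David"]
FEMALE_NAMES = ["Mary", "Linda", "Susan"]

def predict_sentiment(text: str) -> str:
    """Stand-in for the deployed SA system (replace with the real model).
    This toy version is deliberately biased so the check has something to catch."""
    return "negative" if any(name in text for name in FEMALE_NAMES) else "positive"

def make_mutants(template: str, names: list[str], k: int) -> list[str]:
    """Fill the {name} slot with k names from one gender class.
    BiasRV extracts such templates automatically from the incoming text."""
    return [template.format(name=random.choice(names)) for _ in range(k)]

def is_prediction_biased(template: str, n_sample: int = 2,
                         n_full: int = 50, threshold: float = 0.1) -> bool:
    # Step 1: cheap check on a few mutants from each gender class.
    sampled = (make_mutants(template, MALE_NAMES, n_sample)
               + make_mutants(template, FEMALE_NAMES, n_sample))
    if len({predict_sentiment(t) for t in sampled}) == 1:
        return False  # all sampled mutants agree -> skip the expensive check

    # Step 2: full distributional check over many mutants per class.
    male_pos = sum(predict_sentiment(t) == "positive"
                   for t in make_mutants(template, MALE_NAMES, n_full))
    female_pos = sum(predict_sentiment(t) == "positive"
                     for t in make_mutants(template, FEMALE_NAMES, n_full))
    return abs(male_pos - female_pos) / n_full > threshold

print(is_prediction_biased("{name} said the service was fine."))  # True for the toy model
```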

    Making Fair ML Software using Trustworthy Explanation

    Machine learning software is being used in many applications with huge social impact (finance, hiring, admissions, criminal justice). But sometimes the behavior of this software is biased, showing discrimination based on sensitive attributes such as sex and race. Prior work has concentrated on finding and mitigating bias in ML models. A recent trend is to use instance-based, model-agnostic explanation methods such as LIME to find bias in model predictions. Our work concentrates on the shortcomings of current bias measures and explanation methods. We show how our proposed method, based on K nearest neighbors, can overcome those shortcomings and find the underlying bias of black-box models. Our results are more trustworthy and helpful for practitioners. Finally, we describe our future framework combining explanation and planning to build fair software.
    Comment: New Ideas and Emerging Results (NIER) track; the 35th IEEE/ACM International Conference on Automated Software Engineering; Melbourne, Australia
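    The abstract does not spell out the method, so the following is only an illustrative sketch of the general idea of probing a black-box model's local bias with real nearest neighbors rather than LIME-style random perturbations; the function name, the flip-rate measure, and the scikit-learn-based neighbor search are assumptions, not the paper's algorithm.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_flip_rate(model, X_train, x, sensitive_idx, k=10):
    """Illustrative bias probe: take the k nearest real training points around x
    (instead of LIME's synthetic perturbations), flip the binary sensitive
    attribute on each, and report how often the black-box prediction changes."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(x.reshape(1, -1))
    neighbors = X_train[idx[0]].copy()

    flipped = neighbors.copy()
    flipped[:, sensitive_idx] = 1 - flipped[:, sensitive_idx]  # assumes a 0/1 attribute

    changed = model.predict(neighbors) != model.predict(flipped)
    return float(changed.mean())  # fraction of neighbors whose label depends on the attribute
```

    A flip rate well above zero suggests that decisions in the neighborhood of x hinge on the sensitive attribute; how the paper actually measures and explains such bias differs in its details.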

    Towards clinical AI fairness: A translational perspective

    Artificial intelligence (AI) has demonstrated the ability to extract insights from data, but fairness remains a concern in high-stakes fields such as healthcare. Despite extensive discussion and algorithm-development efforts, AI fairness and clinical concerns have not been adequately addressed. In this paper, we discuss the misalignment between technical and clinical perspectives on AI fairness, highlight the barriers to translating AI fairness into healthcare, advocate multidisciplinary collaboration to bridge the knowledge gap, and provide possible solutions to the clinical concerns pertaining to AI fairness.

    Multiplicative Metric Fairness Under Composition

    Dwork, Hardt, Pitassi, Reingold, & Zemel [Dwork et al., 2012] introduced two notions of fairness, each of which is meant to formalize the notion of similar treatment for similarly qualified individuals. The first of these notions, which we call additive metric fairness, has received much attention in subsequent work studying the fairness of a system composed of classifiers that are fair when considered in isolation [Chawla and Jagadeesan, 2020; Chawla et al., 2022; Dwork and Ilvento, 2018; Dwork et al., 2020; Ilvento et al., 2020] and in work studying the relationship between fair treatment of individuals and fair treatment of groups [Dwork et al., 2012; Dwork and Ilvento, 2018; Kim et al., 2018]. Here, we extend these lines of research to the second, less-studied notion, which we call multiplicative metric fairness. In particular, we exactly characterize the fairness of conjunctions and disjunctions of multiplicative metric fair classifiers, and the extent to which a classifier satisfying multiplicative metric fairness also treats groups fairly. This characterization reveals that whereas additive metric fairness becomes easier to satisfy when probabilities of acceptance are small, leading to unfairness under functional and group compositions, multiplicative metric fairness is better behaved, owing to its scale-invariance.
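    As a rough reference point (this paraphrases the standard Dwork et al.-style conditions; the paper's exact formalism may differ), for a randomized classifier with acceptance probabilities p_x and p_y on individuals x and y, and a similarity metric d:

```latex
% Additive metric fairness: the absolute gap is bounded by the metric.
|p_x - p_y| \le d(x, y)

% Multiplicative metric fairness: the (log-)ratio is bounded by the metric,
|\ln p_x - \ln p_y| \le d(x, y)
\quad\Longleftrightarrow\quad
e^{-d(x,y)} \le \frac{p_x}{p_y} \le e^{d(x,y)}
```

    Scaling both acceptance probabilities by the same factor leaves the ratio unchanged but shrinks the absolute gap, which is the scale-invariance the abstract credits for the better behavior of the multiplicative notion under composition.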

    Inherent Limitations of AI Fairness

    As the real-world impact of Artificial Intelligence (AI) systems has steadily grown, these systems have come under increasing scrutiny. In particular, the study of AI fairness has rapidly developed into a rich field of research with links to computer science, social science, law, and philosophy. Though many technical solutions for measuring and achieving AI fairness have been proposed, the model of AI fairness underlying them has been widely criticized in recent years as misleading and unrealistic. In our paper, we survey these criticisms of AI fairness and identify key limitations that are inherent to the prototypical paradigm of AI fairness. By carefully outlining the extent to which technical solutions can realistically help in achieving AI fairness, we aim to provide readers with the background necessary to form a nuanced opinion on developments in the field of fair AI. This delineation also points to research opportunities for non-AI solutions, peripheral to AI systems, that support fair decision processes.

    Responding to Paradoxical Organisational Demands for AI-Powered Systems considering Fairness

    Developing and maintaining fair AI is increasingly in demand as unintended ethical issues undermine the benefits of AI and create negative consequences for individuals and society. Organizations are challenged to simultaneously manage the divergent needs that derive from the instrumental and humanistic goals of employing AI. In response to this challenge, this paper draws on paradox theory through a sociotechnical lens, first to explore the contradictory organizational needs salient in the lifecycle of AI-powered systems. It then unpacks how the case company responds, to illuminate the roles of social agents and technical artefacts in managing these paradoxical needs. To this end, we conduct an in-depth case study of an AI-powered talent recruitment system deployed in an IT company. This study will contribute to research and practice on how the organizational use of digital technologies can generate positive ethical implications for individuals and society.