
    Non-empirical problems in fair machine learning

    The problem of fair machine learning has drawn much attention over the last few years, and the bulk of the solutions on offer are, in principle, empirical. However, algorithmic fairness also raises important conceptual issues that would go unaddressed if one relied entirely on empirical considerations. Herein, I will argue that the current debate has developed an empirical framework that has brought important contributions to the development of algorithmic decision-making, such as new techniques to discover and prevent discrimination, additional assessment criteria, and analyses of the interaction between fairness and predictive accuracy. However, the same framework has also raised higher-order issues regarding the translation of fairness into metrics and quantifiable trade-offs. Although the (empirical) tools developed so far are essential for addressing discrimination encoded in data and algorithms, their integration into society elicits key (conceptual) questions, such as: What kinds of assumptions and decisions underlie the empirical framework? How do the results of the empirical approach enter public debate? What kind of reflection and deliberation should stakeholders have over the available fairness metrics? I will outline the empirical approach to fair machine learning, i.e. how the problem is framed and addressed, and suggest that there are important non-empirical issues that should be tackled. While this work focuses on the problem of algorithmic fairness, the lesson extends to other conceptual problems in the analysis of algorithmic decision-making, such as privacy and explainability.
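    To ground the phrase "translation of fairness into metrics", here is a minimal, hypothetical sketch (not taken from the article) of one widely used group fairness metric: the demographic parity difference, i.e. the gap in favourable-decision rates between two groups. All decisions and group labels below are invented for illustration.

```python
# Minimal illustrative sketch: demographic parity difference, i.e. the gap
# in favourable-decision rates between two groups. Hypothetical data only.

def demographic_parity_difference(decisions, groups, group_a, group_b):
    """Difference in favourable-decision rates between group_a and group_b."""
    def rate(g):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(members) / max(1, len(members))
    return rate(group_a) - rate(group_b)

# Hypothetical decisions: 1 = favourable, 0 = unfavourable.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(decisions, groups, "a", "b"))  # 0.5
```

    A value of zero would indicate equal rates; the conceptual questions raised above concern what choosing such a metric, and deciding how much deviation counts as unfair, presupposes.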

    Escaping the Impossibility of Fairness: From Formal to Substantive Algorithmic Fairness

    As governments embrace algorithms, the burgeoning field of algorithmic fairness provides an influential methodology for promoting equality-enhancing reforms. However, even algorithms that satisfy mathematical fairness standards can exacerbate oppression, causing critics to call for the field to shift its focus from "fairness" to "justice." Yet any efforts to achieve algorithmic justice in practice are constrained by a fundamental technical limitation: the "impossibility of fairness" (an incompatibility between mathematical definitions of fairness). The impossibility of fairness thus raises a central question about algorithmic fairness: How can computer scientists support equitable policy reforms with algorithms? In this article, I argue that promoting justice with algorithms requires reforming the methodology of algorithmic fairness. First, I diagnose why the current methodology for algorithmic fairness--which I call "formal algorithmic fairness"--is flawed. I demonstrate that the problems of algorithmic fairness--including the impossibility of fairness--result from the methodology of the field, which restricts analysis to isolated decision-making procedures. Second, I draw on theories of substantive equality from law and philosophy to propose an alternative methodology: "substantive algorithmic fairness." Because substantive algorithmic fairness takes a more expansive view of fairness, it enables an escape from the impossibility of fairness and provides a rigorous guide for alleviating injustice with algorithms. In sum, substantive algorithmic fairness presents a new direction for algorithmic fairness: away from formal mathematical models of "fairness" and toward substantive evaluations of how algorithms can (and cannot) promote justice.
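    The "impossibility of fairness" invoked here refers to the formal result, often associated with Chouldechova (2017) and Kleinberg et al. (2017), that common fairness criteria cannot all hold when base rates differ across groups. The sketch below, with invented numbers, uses the standard identity linking prevalence, positive predictive value (PPV), true positive rate (TPR) and false positive rate (FPR) to illustrate one face of this result: equalising PPV and TPR across groups with different base rates forces their FPRs apart. It is an illustration of the known result, not an implementation from the article.

```python
# Illustration of the "impossibility of fairness" with hypothetical numbers:
# if two groups have different base rates, equalising PPV and TPR forces
# unequal false positive rates.

def implied_fpr(prevalence, ppv, tpr):
    """FPR implied by the identity FPR = (p / (1 - p)) * ((1 - PPV) / PPV) * TPR."""
    return (prevalence / (1 - prevalence)) * ((1 - ppv) / ppv) * tpr

ppv, tpr = 0.6, 0.7                                      # equal across groups
fpr_a = implied_fpr(prevalence=0.5, ppv=ppv, tpr=tpr)    # base rate 50%
fpr_b = implied_fpr(prevalence=0.2, ppv=ppv, tpr=tpr)    # base rate 20%
print(round(fpr_a, 3), round(fpr_b, 3))                  # 0.467 0.117 -- unequal
```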

    Impact Remediation: Optimal Interventions to Reduce Inequality

    A significant body of research in the data sciences considers unfair discrimination against social categories such as race or gender that could occur or be amplified as a result of algorithmic decisions. Simultaneously, real-world disparities continue to exist, even before algorithmic decisions are made. In this work, we draw on insights from the social sciences and humanistic studies, bring them into the realm of causal modeling and constrained optimization, and develop a novel algorithmic framework for tackling pre-existing real-world disparities. The purpose of our framework, which we call the "impact remediation framework," is to measure real-world disparities and discover the optimal intervention policies that could help improve equity or access to opportunity for those who are underserved with respect to an outcome of interest. We develop a disaggregated approach to tackling pre-existing disparities that relaxes the typical set of assumptions required for the use of social categories in structural causal models. Our approach flexibly incorporates counterfactuals and is compatible with various ontological assumptions about the nature of social categories. We demonstrate impact remediation with a real-world case study and compare our disaggregated approach to an existing state-of-the-art approach, examining their structures and the resulting policy recommendations. In contrast to most work on optimal policy learning, we explore disparity reduction itself as an objective, explicitly focusing the power of algorithms on reducing inequality.
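    As a purely illustrative companion to the idea of "optimal interventions to reduce inequality" (and not the authors' actual framework, which works with structural causal models and counterfactuals), the toy sketch below allocates a fixed budget of interventions across hypothetical units so as to shrink their gaps to a target outcome rate. All unit names, baseline rates and effect sizes are invented.

```python
# Toy budgeted-intervention sketch (not the impact remediation framework):
# greedily treat the units whose gap to a target outcome rate shrinks most.

def greedy_remediation(baseline_rates, effects, budget, target=0.8):
    """Assign at most `budget` one-off interventions, at most one per unit."""
    rates = dict(baseline_rates)
    untreated = set(rates)
    plan = []
    for _ in range(budget):
        candidates = [u for u in untreated if rates[u] < target]
        if not candidates:
            break
        # Treat the unit whose gap to the target shrinks the most.
        unit = max(candidates, key=lambda u: min(effects[u], target - rates[u]))
        rates[unit] = round(rates[unit] + effects[unit], 2)
        untreated.remove(unit)
        plan.append(unit)
    return plan, rates

baseline = {"unit_1": 0.55, "unit_2": 0.70, "unit_3": 0.40}
effects = {"unit_1": 0.10, "unit_2": 0.05, "unit_3": 0.15}
print(greedy_remediation(baseline, effects, budget=2))
# (['unit_3', 'unit_1'], {'unit_1': 0.65, 'unit_2': 0.7, 'unit_3': 0.55})
```

    The actual framework differs in measuring disparities through a causal model and optimising interventions under explicit constraints; the sketch only conveys the shape of treating disparity reduction itself as the objective.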

    On the Apparent Conflict Between Individual and Group Fairness

    A distinction has been drawn in fair machine learning research between 'group' and 'individual' fairness measures. Many technical research papers assume that both are important, but conflicting, and propose ways to minimise the trade-offs between these measures. This paper argues that this apparent conflict is based on a misconception. It draws on theoretical discussions from within fair machine learning research, and from political and legal philosophy, to argue that individual and group fairness are not fundamentally in conflict. First, it outlines accounts of egalitarian fairness which encompass plausible motivations for both group and individual fairness, thereby suggesting that there need be no conflict in principle. Second, it considers the concept of individual justice from legal philosophy and jurisprudence, which seems similar to, but actually contradicts, the notion of individual fairness as proposed in the fair machine learning literature. The conclusion is that the apparent conflict between individual and group fairness is more an artifact of the blunt application of fairness measures than a matter of conflicting principles. In practice, this conflict may be resolved by a nuanced consideration of the sources of 'unfairness' in a particular deployment context, and the carefully justified application of measures to mitigate it. (Conference on Fairness, Accountability, and Transparency (FAT* '20), January 27-30, 2020, Barcelona, Spain.)
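    For readers unfamiliar with the technical usage, the sketch below illustrates the 'individual' side of the distinction with hypothetical data: the Lipschitz-style requirement, associated with Dwork et al., that similar individuals receive similar scores, here with a toy Euclidean similarity metric. It is an illustration of the kind of measure being discussed, not a contribution of the paper.

```python
# Toy individual-fairness check: flag pairs whose score difference exceeds
# their feature-space distance (a Lipschitz-style condition). Hypothetical data.
import math

def individual_fairness_violations(features, scores, lipschitz=1.0):
    """Return index pairs whose score gap exceeds lipschitz * distance."""
    violations = []
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            distance = math.dist(features[i], features[j])
            if abs(scores[i] - scores[j]) > lipschitz * distance:
                violations.append((i, j))
    return violations

# Two near-identical applicants (0 and 1) receive very different scores.
features = [(0.50, 0.90), (0.52, 0.88), (0.10, 0.20)]
scores = [0.95, 0.30, 0.40]
print(individual_fairness_violations(features, scores))  # [(0, 1)]
```

    Group measures, by contrast, compare aggregate rates across protected groups; the paper's point is that the two kinds of measure answer different questions rather than expressing conflicting principles.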

    How I Would have been Differently Treated: Discrimination Through the Lens of Counterfactual Fairness

    The widespread use of algorithms for prediction-based decisions urges us to consider the question of what it means for a given act or practice to be discriminatory. Building upon work by Kusner and colleagues in the field of machine learning, we propose a counterfactual condition as a necessary requirement on discrimination. To demonstrate the philosophical relevance of the proposed condition, we consider two prominent accounts of discrimination in the recent literature, by Lippert-Rasmussen and Hellman respectively, that do not logically imply our condition, and show that they face important objections. Specifically, Lippert-Rasmussen's definition proves to be over-inclusive, as it classifies some acts or practices as discriminatory when they are not, whereas Hellman's account turns out to lack explanatory power precisely insofar as it does not countenance a counterfactual condition on discrimination. By defending the necessity of our counterfactual condition, we set the conceptual limits for justified claims about the occurrence of discriminatory acts or practices in society, with immediate applications to the ethics of algorithmic decision-making.
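    The counterfactual condition builds on Kusner and colleagues' notion of counterfactual fairness. As a hedged illustration (a toy structural causal model with invented equations, not the authors' formalism), the sketch below checks whether an individual's decision would have differed had their protected attribute been different, holding background factors fixed.

```python
# Toy structural causal model: the decision depends on the protected
# attribute via a mediator. Equations and numbers are hypothetical.

def decide(protected, background):
    """Hiring decision in a tiny SCM where a referral depends on group membership."""
    qualification = background["talent"]                 # independent of group
    referral = 0.3 * qualification + (0.4 if protected == "a" else 0.0)
    score = 0.5 * qualification + 0.5 * referral
    return score >= 0.5

def counterfactually_affected(protected, background):
    """True if intervening on the protected attribute flips the decision."""
    other = "b" if protected == "a" else "a"
    return decide(protected, background) != decide(other, background)

print(counterfactually_affected("a", {"talent": 0.6}))  # True: the decision flips
```

    On the proposed condition, such a counterfactual difference in treatment is necessary for the act or practice to count as discriminatory.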

    Tragic Choices and the Virtue of Techno-Responsibility Gaps

    There is a concern that the widespread deployment of autonomous machines will open up a number of 'responsibility gaps' throughout society. Various articulations of such techno-responsibility gaps have been proposed over the years, along with several potential solutions. Most of these solutions focus on 'plugging' or 'dissolving' the gaps. This paper offers an alternative perspective. It argues that techno-responsibility gaps are, sometimes, to be welcomed and that one of the advantages of autonomous machines is that they enable us to embrace certain kinds of responsibility gap. The argument is based on the idea that human morality is often tragic. We frequently confront situations in which competing moral considerations pull in different directions, and it is impossible to perfectly balance these considerations. This heightens the burden of responsibility associated with our choices. We cope with the tragedy of moral choice in different ways. Sometimes we delude ourselves into thinking the choices we make were not tragic; sometimes we delegate the tragic choice to others; sometimes we make the choice ourselves and bear the psychological consequences. Each of these strategies has its benefits and costs. One potential advantage of autonomous machines is that they enable a reduced-cost form of delegation. However, we only gain the advantage of this reduced cost if we accept that some techno-responsibility gaps are virtuous.