
    Algorithmic Fairness from a Non-ideal Perspective

    Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world and offered a variety of algorithms in attempts to satisfy subsets of these parities or to trade off the degree to which they are satisfied against utility. In this paper, we connect this approach to fair machine learning to the literature on ideal and non-ideal methodological approaches in political philosophy. The ideal approach requires positing the principles according to which a just world would operate. In the most straightforward application of ideal theory, one supports a proposed policy by arguing that it closes a discrepancy between the real and the perfectly just world. However, by failing to account for the mechanisms by which our non-ideal world arose, the responsibilities of various decision-makers, and the impacts of proposed policies, naive applications of ideal thinking can lead to misguided interventions. In this paper, we demonstrate a connection between the fair machine learning literature and the ideal approach in political philosophy, and argue that the increasingly apparent shortcomings of proposed fair machine learning algorithms reflect broader troubles faced by the ideal approach. We conclude with a critical discussion of the harms of misguided solutions, a reinterpretation of impossibility results, and directions for future research.
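
    The "statistical parities" referred to above are typically operationalized as simple group-rate comparisons. As a purely illustrative sketch (not drawn from the paper, and with hypothetical variable names), the following computes a demographic-parity gap, the absolute difference in positive-decision rates between two groups, for a toy decision rule:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-decision rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive-decision rate in group 0
    rate_1 = y_pred[group == 1].mean()  # positive-decision rate in group 1
    return abs(rate_0 - rate_1)

# Toy decision rule that approves 60% of group 0 but only 20% of group 1.
decisions = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
groups    = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(demographic_parity_gap(decisions, groups))  # ~0.4 deviation from statistical parity
```

    Fairness-constrained algorithms of the kind discussed above then attempt to drive such a gap toward zero, typically at some cost in predictive utility.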

    Responding to Paradoxical Organisational Demands for AI-Powered Systems considering Fairness

    Developing and maintaining fair AI is increasingly in demand as unintended ethical issues contaminate the benefits of AI and cause negative implications for individuals and society. Organizations are challenged to simultaneously manage the divergent needs arising from the instrumental and humanistic goals of employing AI. In responding to this challenge, this paper draws on paradox theory through a sociotechnical lens to first explore the contradictory organizational needs salient in the lifecycle of AI-powered systems. Moreover, we unpack the company's response process to illuminate the role of social agents and technical artefacts in managing paradoxical needs. To this end, we conduct an in-depth case study of an AI-powered talent recruitment system deployed in an IT company. This study will contribute to research and practice regarding how organizational use of digital technologies can generate positive ethical implications for individuals and society.

    Using Fairness Metrics as Decision-Making Procedures: Algorithmic Fairness and the Problem of Action-Guidance

    Frameworks for fair machine learning are envisioned to play an important practical role in the evaluation, training, and selection of machine learning models. In particular, fairness metrics are meant to provide responsible agents with actionable standards for evaluating ML models and conditions which those models should achieve. However, recent studies suggest that fair ML frameworks and metrics do not provide sufficient and actionable guidance for agents. This short paper outlines the main content of a working paper wherein I draw lessons from philosophical debates concerning action-guidance to build a conceptual account that can be applied to analyze whether and when fair ML frameworks and metrics can generate determinate evaluations of fairness and actionable prescriptions for model selection.
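
    To make the action-guidance worry concrete, here is a minimal sketch (my own illustration, not taken from the paper) of a fairness metric used as a decision procedure for model selection: candidate models are filtered by a demographic-parity threshold and the most accurate survivor is chosen. The metric, the threshold `epsilon`, and the `models` interface are all assumptions:

```python
import numpy as np

def parity_gap(y_pred, group):
    """Demographic-parity gap: |P(prediction=1 | group 0) - P(prediction=1 | group 1)|."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def select_model(models, X, y, group, epsilon=0.05):
    """Choose the most accurate model whose parity gap stays below epsilon.

    models: dict mapping a model name to a predict(X) -> 0/1 array callable.
    Returns None when no candidate meets the threshold, i.e. the case in which
    the metric alone yields no actionable prescription.
    """
    y = np.asarray(y)
    admissible = []
    for name, predict in models.items():
        y_hat = np.asarray(predict(X))
        if parity_gap(y_hat, group) <= epsilon:
            admissible.append((name, (y_hat == y).mean()))  # accuracy of this candidate
    return max(admissible, key=lambda item: item[1])[0] if admissible else None
```

    When no candidate clears the threshold, the procedure returns None, which is exactly the kind of case in which the metric by itself yields no determinate, actionable prescription.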

    Certification of Distributional Individual Fairness

    Providing formal guarantees of algorithmic fairness is of paramount importance to socially responsible deployment of machine learning algorithms. In this work, we study formal guarantees, i.e., certificates, for individual fairness (IF) of neural networks. We start by introducing a novel convex approximation of IF constraints that exponentially decreases the computational cost of providing formal guarantees of local individual fairness. We highlight that prior methods are constrained by their focus on global IF certification and can therefore only scale to models with a few dozen hidden neurons, thus limiting their practical impact. We propose to certify distributional individual fairness, which ensures that for a given empirical distribution and all distributions within a γ-Wasserstein ball, the neural network has guaranteed individually fair predictions. Leveraging developments in quasi-convex optimization, we provide novel and efficient certified bounds on distributional individual fairness and show that our method allows us to certify and regularize neural networks that are several orders of magnitude larger than those considered by prior works. Moreover, we study real-world distribution shifts and find our bounds to be a scalable, practical, and sound source of IF guarantees.
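
    For orientation, the local individual-fairness property at issue can be stated operationally: all inputs within a small ball (under a task-appropriate fairness metric) around a given individual should receive nearly the same prediction. The sketch below is only a sampling-based falsification check on a toy network with random weights; it is not the convex local certificate or the γ-Wasserstein distributional certificate developed in the paper, and every name in it is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed network standing in for the model under audit (weights are arbitrary).
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def model(x):
    h = np.maximum(x @ W1 + b1, 0.0)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid score

def empirical_if_violation(x, delta=0.1, n_samples=1000):
    """Sample-based check of local individual fairness around a single input x.

    Draws points x' with ||x' - x||_inf <= delta (a stand-in for the fairness
    metric) and reports the largest observed change in the model's score.
    A large value is evidence of an IF violation; a small value is NOT a
    formal guarantee, unlike the certificates studied in the paper.
    """
    perturbed = x + rng.uniform(-delta, delta, size=(n_samples, x.shape[0]))
    return float(np.max(np.abs(model(perturbed) - model(x[None, :]))))

x0 = rng.normal(size=4)
print(empirical_if_violation(x0))
```

    Replacing this sampling check with a sound over-approximation of the network's output over the whole ball is what turns such a check into a certificate, which is the direction the paper's convex approximation pursues.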

    Fair equality of chances for prediction-based decisions

    This article presents a fairness principle for evaluating decision-making based on predictions: a decision rule is unfair when individuals who are directly impacted by the decisions and who are equal with respect to the features that justify inequalities in outcomes do not have the same statistical prospects of being benefited or harmed by them, irrespective of their socially salient, morally arbitrary traits. The principle can be used to evaluate prediction-based decision-making from the point of view of a wide range of antecedently specified substantive views about justice in outcome distributions.
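
    As a rough illustration of how the principle could be audited in data (my sketch, with hypothetical column names, not a procedure from the article), one can stratify individuals by the features taken to justify unequal outcomes and compare the rate of benefit across socially salient groups within each stratum:

```python
import pandas as pd

def chance_gaps_by_stratum(df, justifier_cols, group_col, benefit_col):
    """Within each stratum of the justifying features, compute the spread in
    benefit rates across socially salient groups.

    A nonzero gap means individuals who are equal on the features that are
    supposed to justify unequal outcomes do not share the same statistical
    prospect of being benefited.
    """
    rates = (
        df.groupby(justifier_cols + [group_col])[benefit_col]
          .mean()
          .unstack(group_col)
    )
    return (rates.max(axis=1) - rates.min(axis=1)).rename("chance_gap")

# Toy data: 'qualified' is the justifying feature, 'group' the socially salient
# trait, 'benefited' whether the prediction-based decision benefited the person.
df = pd.DataFrame({
    "qualified": [1, 1, 1, 1, 0, 0, 0, 0],
    "group":     ["a", "a", "b", "b", "a", "a", "b", "b"],
    "benefited": [1, 1, 1, 0, 0, 0, 1, 0],
})
print(chance_gaps_by_stratum(df, ["qualified"], "group", "benefited"))
```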

    Are Two Heads the Same as One? Identifying Disparate Treatment in Fair Neural Networks

    We show that deep neural networks that satisfy demographic parity do so through a form of race or gender awareness, and that the more we force a network to be fair, the more accurately we can recover race or gender from its internal state. Based on this observation, we propose a simple two-stage solution for enforcing fairness. First, we train a two-headed network to predict the protected attribute (such as race or gender) alongside the original task, and second, we enforce demographic parity by taking a weighted sum of the heads. In the end, this approach creates a single-headed network with the same backbone architecture as the original network. Our approach has nearly identical performance compared to existing regularization-based or preprocessing methods, but greater stability and higher accuracy where near-exact demographic parity is required. To cement the relationship between these two approaches, we show that an unfair and optimally accurate classifier can be recovered by taking a weighted sum of a fair classifier and a classifier predicting the protected attribute. We use this to argue that both the fairness approaches and our explicit formulation demonstrate disparate treatment and that, consequently, they are likely to be unlawful in a wide range of scenarios under US law.
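
    The head-combination step described above can be sketched schematically. The snippet below uses random stand-in weights and an arbitrary combination weight `alpha` (both assumptions; the abstract does not specify how the weight is chosen, and no training loop is shown) to illustrate why a weighted sum of two linear heads over a shared backbone collapses back into a single linear head on the same backbone:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared backbone (random stand-in; in the described approach it would be trained).
W_backbone = rng.normal(size=(10, 16))

def backbone(x):
    return np.maximum(x @ W_backbone, 0.0)   # ReLU feature extractor

# Stage one: two linear heads on the shared representation -- one for the
# original task, one predicting the protected attribute.
w_task = rng.normal(size=16)
w_attr = rng.normal(size=16)

# Stage two: combine the heads with a weighted sum. Because both heads are
# linear in the shared features, the combination is itself a single linear
# head, so the result is a single-headed network with the same backbone.
alpha = 0.5                      # illustrative weight trading accuracy for parity
w_combined = w_task - alpha * w_attr

def combined_score(x):
    return backbone(x) @ w_combined

x = rng.normal(size=(5, 10))
print(combined_score(x))
```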

    Human supremacy as posthuman risk

    Human supremacy is the widely held view that human interests ought to be privileged over other interests as a matter of ethics and public policy. Posthumanism is the historical situation characterized by a critical reevaluation of anthropocentrist theory and practice. This paper draws on animal studies, critical posthumanism, and the critique of ideal theory in Charles Mills and Serene Khader to address the appeal to human supremacist rhetoric in AI ethics and policy discussions, particularly in the work of Joanna Bryson. This analysis identifies a specific risk posed by human supremacist policy in a posthuman context, namely the classification of agents by type.