Inherent Trade-Offs in the Fair Determination of Risk Scores
Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them.
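The incompatibility this abstract describes can be seen with a toy numeric check. The sketch below (all numbers invented for illustration) constructs a score that is perfectly calibrated within each of two groups; because the groups' base rates differ, thresholding it produces wildly different false-positive and false-negative rates across the groups:

```python
# Toy illustration (invented numbers): within-group calibration does not
# imply equal error rates across groups when base rates differ.

def error_rates(records, threshold=0.5):
    """records: list of (score, true_label). Returns (FPR, FNR)."""
    fp = sum(1 for s, y in records if s >= threshold and y == 0)
    fn = sum(1 for s, y in records if s < threshold and y == 1)
    negatives = sum(1 for _, y in records if y == 0)
    positives = sum(1 for _, y in records if y == 1)
    return fp / negatives, fn / positives

# Group A: everyone scored 0.6, and 60 of 100 are truly positive (calibrated).
group_a = [(0.6, 1)] * 60 + [(0.6, 0)] * 40
# Group B: everyone scored 0.3, and 30 of 100 are truly positive (calibrated).
group_b = [(0.3, 1)] * 30 + [(0.3, 0)] * 70

fpr_a, fnr_a = error_rates(group_a)  # (1.0, 0.0): all of A predicted positive
fpr_b, fnr_b = error_rates(group_b)  # (0.0, 1.0): all of B predicted negative
```

Both groups' scores are calibrated, yet every negative in group A is a false positive and every positive in group B is a false negative, so the error-rate fairness conditions fail completely.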
Operationalizing Individual Fairness with Pairwise Fair Representations
We revisit the notion of individual fairness proposed by Dwork et al. A central challenge in operationalizing their approach is the difficulty of eliciting a human specification of a similarity metric. In this paper, we propose an operationalization of individual fairness that does not rely on a human specification of a distance metric. Instead, we propose novel approaches to elicit and leverage side-information on equally deserving individuals to counter subordination between social groups. We model this knowledge as a fairness graph, and learn a unified Pairwise Fair Representation (PFR) of the data that captures both data-driven similarity between individuals and the pairwise side-information in the fairness graph. We elicit fairness judgments from a variety of sources, including human judgments, for two real-world datasets on recidivism prediction (COMPAS) and violent neighborhood prediction (Crime & Communities). Our experiments show that the PFR model for operationalizing individual fairness is practically viable.
Comment: To be published in the proceedings of the VLDB Endowment, Vol. 13, Issue.
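The two ingredients the abstract combines, data-driven similarity and pairwise side-information from a fairness graph, can be sketched as terms of a single objective. This is not the paper's actual formulation; the loss, weights, and data below are invented for illustration:

```python
import numpy as np

# Hypothetical sketch (not PFR's actual objective): score a low-dimensional
# representation Z by (1) keeping representations of similar inputs close and
# (2) pulling together pairs linked in a fairness graph.

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))        # 6 individuals, 4 attributes (invented)
fair_pairs = [(0, 3), (1, 4)]      # "equally deserving" pairs (side-information)

def loss(Z, X, fair_pairs, lam=1.0):
    # Data term: pairs that are close in input space should stay close in Z.
    data_term = sum(
        np.linalg.norm(Z[i] - Z[j]) ** 2
        * np.exp(-np.linalg.norm(X[i] - X[j]) ** 2)
        for i in range(len(X)) for j in range(i + 1, len(X))
    )
    # Fairness term: edges of the fairness graph get similar representations.
    fair_term = sum(np.linalg.norm(Z[i] - Z[j]) ** 2 for i, j in fair_pairs)
    return data_term + lam * fair_term
```

A learning procedure would then choose Z (subject to some rank or utility constraint) to make this loss small; the weight `lam` trades off fidelity to the data against the pairwise fairness side-information.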
iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making
People are rated and ranked for purposes of algorithmic decision making in an increasing number of applications, typically based on machine learning. Research on how to incorporate fairness into such tasks has prevalently pursued the paradigm of group fairness: giving adequate success rates to specifically protected groups. In contrast, the alternative paradigm of individual fairness has received relatively little attention, and this paper advances this less explored direction. The paper introduces a method for probabilistically mapping user records into a low-rank representation that reconciles individual fairness and the utility of classifiers and rankings in downstream applications. Our notion of individual fairness requires that users who are similar in all task-relevant attributes (such as job qualifications), disregarding all potentially discriminating attributes (such as gender), should have similar outcomes. We demonstrate the versatility of our method by applying it to classification and learning-to-rank tasks on a variety of real-world datasets. Our experiments show substantial improvements over the best prior work for this setting.
Comment: Accepted at ICDE 2019. Please cite the ICDE 2019 proceedings version.
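The individual-fairness requirement stated in this abstract can be made concrete as a Lipschitz-style check: the gap between two users' outcomes should be bounded by their distance on task-relevant attributes alone. The attribute names, models, and tolerance below are invented for this sketch:

```python
# Hedged sketch of the individual-fairness notion above: users who agree on
# task-relevant attributes but differ on a protected attribute should get
# (near-)identical outcomes. All names and data are hypothetical.

TASK_RELEVANT = ("experience_years", "qualification_score")

def task_distance(u, v):
    # Distance over task-relevant attributes only; protected ones are ignored.
    return sum(abs(u[k] - v[k]) for k in TASK_RELEVANT)

def is_individually_fair(u, v, outcome, tolerance=0.05):
    # Lipschitz-style condition: outcome gap bounded by task-relevant distance.
    return abs(outcome(u) - outcome(v)) <= task_distance(u, v) + tolerance

# Alice and Bob agree on every task-relevant attribute.
alice = {"experience_years": 5, "qualification_score": 0.8, "gender": 1}
bob   = {"experience_years": 5, "qualification_score": 0.8, "gender": 0}

fair_model = lambda u: 0.1 * u["experience_years"] + u["qualification_score"]
biased_model = lambda u: fair_model(u) + 0.5 * u["gender"]

is_individually_fair(alice, bob, fair_model)    # True
is_individually_fair(alice, bob, biased_model)  # False: gender shifts the outcome
```

A representation-learning method in this spirit would transform the records so that downstream models, trained on the transformed data, pass checks like this by construction.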
Fairness in algorithmic decision-making: trade-offs, policy choices, and procedural protections
This article discusses conceptions of fairness in algorithmic decision-making, within the context of the UK’s legal system. Using practical operational examples of algorithmic tools, it argues that such practices involve inherent technical trade-offs over multiple, competing notions of fairness, which are further exacerbated by policy choices made by those public authorities who use them. This raises major concerns regarding the ability of such choices to affect legal issues in decision-making, and to transform legal protections, without adequate legal oversight or a clear legal framework. This is not to say that the law does not have the capacity to regulate and ensure fairness, but that a more expansive idea of its function is required.