From Parity to Preference-based Notions of Fairness in Classification
The adoption of automated, data-driven decision making in an ever-expanding range of applications has raised concerns about its potential unfairness towards certain social groups. In this context, a number of recent studies have focused on defining, detecting, and removing unfairness from data-driven decision systems. However, the existing notions of fairness, based on parity (equality) in treatment or outcomes for different social groups, tend to be quite stringent, limiting the overall decision-making accuracy. In this paper, we draw inspiration from the fair-division and envy-freeness literature in economics and game theory and propose preference-based notions of fairness -- given the choice between various sets of decision treatments or outcomes, any group of users would collectively prefer its own treatment or outcomes, regardless of the (dis)parity relative to the other groups. We then introduce tractable proxies to design margin-based classifiers that satisfy these preference-based notions of fairness. Finally, we experiment with a variety of synthetic and real-world datasets and show that preference-based fairness allows for greater decision accuracy than parity-based fairness.
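For concreteness, the sketch below checks the kind of group-preference condition this abstract describes: each group must derive at least as much benefit from its own decision rule as it would from the other group's rule. The benefit proxy (rate of positive decisions), the classifiers, and all names are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def group_benefit(classifier, X):
    """Benefit proxy (assumption): fraction of positive decisions a group receives."""
    return float(np.mean(classifier(X)))

def prefers_own_treatment(clf_a, clf_b, X_a, X_b):
    """Envy-freeness-style check: neither group would trade decision rules.

    Group A (features X_a) must benefit at least as much under its own
    classifier clf_a as under group B's classifier clf_b, and vice versa.
    """
    a_own, a_other = group_benefit(clf_a, X_a), group_benefit(clf_b, X_a)
    b_own, b_other = group_benefit(clf_b, X_b), group_benefit(clf_a, X_b)
    return a_own >= a_other and b_own >= b_other

# Toy usage: two linear decision rules over random features.
rng = np.random.default_rng(0)
X_a, X_b = rng.normal(size=(100, 3)), rng.normal(size=(100, 3))
clf_a = lambda X: (X @ np.array([1.0, 0.5, -0.2]) > 0).astype(int)
clf_b = lambda X: (X @ np.array([0.3, -1.0, 0.8]) > 0).astype(int)
print(prefers_own_treatment(clf_a, clf_b, X_a, X_b))
```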
Preference-Informed Fairness
We study notions of fairness in decision-making systems when individuals have
diverse preferences over the possible outcomes of the decisions. Our starting
point is the seminal work of Dwork et al. which introduced a notion of
individual fairness (IF): given a task-specific similarity metric, every pair
of individuals who are similarly qualified according to the metric should
receive similar outcomes. We show that when individuals have diverse
preferences over outcomes, requiring IF may unintentionally lead to
less-preferred outcomes for the very individuals that IF aims to protect. A
natural alternative to IF is the classic notion of fair division, envy-freeness
(EF): no individual should prefer another individual's outcome over their own.
Although EF allows for solutions where all individuals receive a
highly preferred outcome, EF may also be overly restrictive. For instance, if
many individuals agree on the best outcome, then if any individual receives
this outcome, they all must receive it, regardless of each individual's
underlying qualifications for the outcome.
We introduce and study a new notion of preference-informed individual
fairness (PIIF) that is a relaxation of both individual fairness and
envy-freeness. At a high level, PIIF requires that outcomes satisfy IF-style
constraints, but allows for deviations provided they are in line with
individuals' preferences. We show that PIIF can permit outcomes that are more
favorable to individuals than any IF solution, while providing considerably
more flexibility to the decision-maker than EF. In addition, we show how to
efficiently optimize any convex objective over the outcomes subject to PIIF for
a rich class of individual preferences. Finally, we demonstrate the broad
applicability of the PIIF framework by extending our definitions and algorithms
to the multiple-task targeted advertising setting introduced by Dwork and
Ilvento.
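For readers who want the shape of the definition, here is a hedged paraphrase in notation of our choosing (not the paper's verbatim statement): write d for the task-specific similarity metric, D for a distance on (possibly randomized) outcomes, p_i for individual i's outcome, and \succeq_i for i's preference order. Then, roughly:

```latex
\[
\textbf{IF:}\quad \forall i,j:\; D(p_i, p_j) \le d(i,j).
\]
\[
\textbf{PIIF (paraphrase):}\quad \forall i,j \;\exists z:\;
D(z, p_j) \le d(i,j) \ \text{ and } \ p_i \succeq_i z.
\]
```

That is, i's actual outcome need not be metrically close to j's, as long as i weakly prefers it to some alternative outcome that would have satisfied the IF constraint.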
Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making
We draw attention to an important, yet largely overlooked aspect of
evaluating fairness for automated decision making systems---namely risk and
welfare considerations. Our proposed family of measures corresponds to the
long-established formulations of cardinal social welfare in economics, and is
justified by the Rawlsian conception of fairness behind a veil of ignorance.
The convex formulation of our welfare-based measures of fairness allows us to
integrate them as a constraint into any convex loss minimization pipeline. Our
empirical analysis reveals interesting trade-offs between our proposal and (a)
prediction accuracy, (b) group discrimination, and (c) Dwork et al.'s notion of
individual fairness. Furthermore, and perhaps most importantly, our work
provides both heuristic justification and empirical evidence suggesting that a
lower bound on our measures often leads to bounded inequality in algorithmic
outcomes, hence presenting the first computationally feasible mechanism for
bounding individual-level inequality.
Comment: Thirty-second Conference on Neural Information Processing Systems (NIPS 2018).
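The "convex constraint" claim can be illustrated with a small sketch: below, a logistic loss is minimized subject to a floor on an average concave utility of per-individual benefits. The benefit definition (an affine score shifted to be positive), the square-root utility, and the threshold tau are illustrative assumptions standing in for the paper's welfare measures.

```python
import cvxpy as cp
import numpy as np

# Toy data: linear classifier, labels in {-1, +1}.
rng = np.random.default_rng(1)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d))

w = cp.Variable(d)
# Standard convex objective: average logistic loss.
loss = cp.sum(cp.logistic(cp.multiply(-y, X @ w))) / n

# Hypothetical per-individual benefit: the affine score, shifted so the
# concave utility below is defined (an assumption for illustration).
benefit = X @ w + 3.0
# Cardinal social welfare: average of a concave (risk-averse) utility.
# welfare is concave in w, so "welfare >= tau" keeps the problem convex.
welfare = cp.sum(cp.sqrt(benefit)) / n

tau = 1.0  # welfare floor; raising it trades accuracy for welfare
problem = cp.Problem(cp.Minimize(loss), [welfare >= tau])
problem.solve()
print("loss:", problem.value, "welfare:", welfare.value)
```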
A Moral Framework for Understanding Fair ML through Economic Models of Equality of Opportunity
We map the recently proposed notions of algorithmic fairness to economic
models of equality of opportunity (EOP)---an extensively studied ideal of
fairness in political philosophy. We formally show that, through our conceptual
mapping, many existing definitions of algorithmic fairness, such as predictive
value parity and equality of odds, can be interpreted as special cases of EOP.
In this respect, our work serves as a unifying moral framework for
understanding existing notions of algorithmic fairness. Most importantly, this
framework allows us to explicitly spell out the moral assumptions underlying
each notion of fairness, and interpret recent fairness impossibility results in
a new light. Last but not least, inspired by luck-egalitarian models of EOP,
we propose a new family of measures for algorithmic fairness. We illustrate our
proposal empirically and show that employing a measure of algorithmic
(un)fairness when its underlying moral assumptions are not satisfied can have
devastating consequences for the disadvantaged group's welfare.
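As a concrete anchor for the EOP mapping, economic models treat an individual's utility as driven by circumstance c (morally arbitrary attributes, such as a sensitive feature) and effort e (factors the individual is held accountable for). A hedged paraphrase of the luck-egalitarian ideal invoked above, in notation of our choosing:

```latex
\[
F(u \mid c, e) \;=\; F(u \mid c', e)
\qquad \text{for all circumstances } c, c' \text{ and all effort levels } e,
\]
```

where F(u | c, e) denotes the distribution of utility the decision system induces for individuals with circumstance c and effort e. Different fairness notions then correspond to different ways of carving features into circumstance and effort.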
Multiwinner Voting with Fairness Constraints
Multiwinner voting rules are used to select a small representative subset of
candidates or items from a larger set given the preferences of voters. However,
if candidates have sensitive attributes such as gender or ethnicity (when
selecting a committee), or specified types such as political leaning (when
selecting a subset of news items), an algorithm that chooses a subset by
optimizing a multiwinner voting rule may be unbalanced in its selection -- it
may under- or over-represent a particular gender or political orientation in the
examples above. We introduce an algorithmic framework for multiwinner voting
problems when there is an additional requirement that the selected subset
should be "fair" with respect to a given set of attributes. Our framework
provides the flexibility to (1) specify fairness with respect to multiple,
non-disjoint attributes (e.g., ethnicity and gender) and (2) specify a score
function. We study the computational complexity of this constrained multiwinner
voting problem for monotone and submodular score functions and present several
approximation algorithms and matching hardness-of-approximation results for
various attribute-group structures and types of score functions. We also present
simulations suggesting that adding fairness constraints may not affect the
scores significantly when compared to the unconstrained case.
Comment: The conference version of this paper appears in IJCAI-ECAI 2018.
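To make the constrained-selection setup concrete, here is a minimal brute-force sketch (illustrative only; the paper studies complexity and approximation algorithms, not exhaustive search): pick a size-k committee maximizing a given score subject to lower and upper bounds on how many selected candidates fall in each attribute group. Groups may overlap, matching the non-disjoint-attributes setting. All names and the toy score are assumptions.

```python
from itertools import combinations

def fair_committee(candidates, k, score, groups, bounds):
    """Brute-force fairness-constrained multiwinner selection.

    candidates: list of candidate ids
    k:          committee size
    score:      function mapping a frozenset of candidates to a number
    groups:     dict name -> set of candidate ids (may overlap)
    bounds:     dict name -> (lower, upper) bounds on |S & group|
    Returns the best feasible committee, or None if none is feasible.
    """
    best, best_val = None, float("-inf")
    for committee in combinations(candidates, k):
        S = frozenset(committee)
        feasible = all(lo <= len(S & groups[g]) <= hi
                       for g, (lo, hi) in bounds.items())
        if feasible and score(S) > best_val:
            best, best_val = S, score(S)
    return best

# Toy usage: approval-style score (total approvals), two overlapping groups.
approvals = {1: 9, 2: 7, 3: 6, 4: 5, 5: 3}
score = lambda S: sum(approvals[c] for c in S)
groups = {"group_a": {1, 2, 3}, "group_b": {3, 4, 5}}
bounds = {"group_a": (1, 2), "group_b": (1, 2)}
print(fair_committee(list(approvals), k=3, score=score,
                     groups=groups, bounds=bounds))
```

Note that the unconstrained optimum {1, 2, 3} is infeasible here (three members of group_a exceeds its upper bound), so the routine returns {1, 2, 4} instead, echoing the trade-off the simulations in the paper measure.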
The Invisible Power of Fairness. How Machine Learning Shapes Democracy
Many machine learning systems make extensive use of large amounts of data
regarding human behaviors. Several researchers have found various
discriminatory practices related to the use of human-related machine learning
systems, for example in the fields of criminal justice, credit scoring, and
advertising. Fair machine learning is therefore emerging as a new field of
advertising. Fair machine learning is therefore emerging as a new field of
study to mitigate biases that are inadvertently incorporated into algorithms.
Data scientists and computer engineers are making various efforts to provide
definitions of fairness. In this paper, we provide an overview of the most
widespread definitions of fairness in the field of machine learning, arguing
that the ideas underlying each formalization are closely related to different
ideas of justice and to different interpretations of democracy embedded in our
culture. This work analyzes the definitions of fairness proposed to date in
order to interpret their underlying criteria and to relate them to different
ideas of democracy.
Comment: 12 pages, 1 figure, preprint version, submitted to the 32nd Canadian
Conference on Artificial Intelligence, to be held in Kingston, Ontario,
May 28 to May 31, 2019.