A Moral Framework for Understanding Fair ML through Economic Models of Equality of Opportunity
We map the recently proposed notions of algorithmic fairness to economic models of equality of opportunity (EOP), an extensively studied ideal of fairness in political philosophy. We formally show that through our conceptual mapping, many existing definitions of algorithmic fairness, such as predictive value parity and equality of odds, can be interpreted as special cases of EOP. In this respect, our work serves as a unifying moral framework for understanding existing notions of algorithmic fairness. Most importantly, this framework allows us to explicitly spell out the moral assumptions underlying each notion of fairness, and to interpret recent fairness impossibility results in a new light. Last but not least, inspired by luck egalitarian models of EOP, we propose a new family of measures for algorithmic fairness. We illustrate our proposal empirically and show that employing a measure of algorithmic (un)fairness when its underlying moral assumptions are not satisfied can have devastating consequences for the disadvantaged group's welfare.
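As a rough illustration of how a luck-egalitarian EOP-style measure might be computed, the sketch below compares groups' mean benefit at matched effort levels and averages the between-group gaps; the binning scheme, the gap statistic, and all variable names are assumptions for this sketch, not the family of measures defined in the paper.

```python
import numpy as np

def eop_unfairness(benefit, group, effort, n_bins=10):
    """Illustrative luck-egalitarian-style unfairness measure: compare
    mean benefit across groups among individuals at comparable effort
    levels, then average the between-group gaps over effort bins.
    (The binning and the max-min gap statistic are assumptions for this
    sketch, not the measures proposed in the paper.)"""
    edges = np.quantile(effort, np.linspace(0, 1, n_bins + 1))
    gaps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (effort >= lo) & (effort <= hi)
        means = [benefit[in_bin & (group == g)].mean()
                 for g in np.unique(group)
                 if np.any(in_bin & (group == g))]
        if len(means) > 1:
            gaps.append(max(means) - min(means))
    return float(np.mean(gaps)) if gaps else 0.0

# Toy data: benefit tracks effort, plus a small group-based offset that
# a luck-egalitarian measure should flag as unfair.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 1000)
effort = rng.random(1000)
benefit = effort + 0.1 * group + 0.05 * rng.standard_normal(1000)
print(eop_unfairness(benefit, group, effort))
```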
When users control the algorithms: Values expressed in practices on the Twitter platform
Recent interest in ethical AI has brought a slew of values, including fairness, into conversations about technology design. Research in the area of algorithmic fairness tends to be rooted in questions of distribution that can be subject to precise formalism and technical implementation. We seek to expand this conversation to include the experiences of people subject to algorithmic classification and decision-making. By examining tweets about the “Twitter algorithm”, we consider the wide range of concerns and desires Twitter users express. We find a concern with fairness (narrowly construed) is present, particularly in the ways users complain that the platform enacts a political bias against conservatives. However, we find another important category of concern, evident in attempts to exert control over the algorithm. Twitter users who seek control do so for a variety of reasons, many well justified. We argue for the need for better and clearer definitions of what constitutes legitimate and illegitimate control over algorithmic processes, and for support for users who wish to enact their own collective choices.
Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction
As algorithms are increasingly used to make important decisions that affect
human lives, ranging from social benefit assignment to predicting risk of
criminal recidivism, concerns have been raised about the fairness of
algorithmic decision making. Most prior works on algorithmic fairness
normatively prescribe how fair decisions ought to be made. In contrast, here,
we descriptively survey users about how they perceive and reason about fairness
in algorithmic decision making.
A key contribution of this work is the framework we propose for understanding why people perceive the use of certain features in algorithms as fair or unfair.
Our framework identifies eight properties of features, such as relevance,
volitionality and reliability, as latent considerations that inform people's
moral judgments about the fairness of feature use in decision-making
algorithms. We validate our framework through a series of scenario-based
surveys with 576 people. We find that, based on a person's assessment of the
eight latent properties of a feature in our exemplar scenario, we can
accurately (> 85%) predict if the person will judge the use of the feature as
fair.
Our findings have important implications. At a high level, we show that people's unfairness concerns are multi-dimensional and argue that future studies need to address unfairness concerns beyond discrimination. At a low level, we find considerable disagreements in people's fairness judgments. We identify root causes of the disagreements, and note possible pathways to resolve them.
Comment: To appear in the Proceedings of the Web Conference (WWW 2018). Code available at https://fate-computing.mpi-sws.org/procedural_fairness
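The prediction step lends itself to a simple sketch: regress fairness judgments on per-respondent ratings of the eight latent properties. The property names, rating scale, synthetic data, and logistic-regression model below are assumptions for illustration; the paper's actual survey items and model may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Illustrative names for the eight latent properties; the paper's exact
# list and scales may differ.
PROPERTIES = ["relevance", "volitionality", "reliability", "privacy",
              "causes_outcome", "causes_vicious_cycle",
              "causes_disparity", "caused_by_sensitive_group"]

# Hypothetical data: each row holds one respondent's ratings of a
# feature on the eight properties; the label is a fairness judgment.
rng = np.random.default_rng(0)
X = rng.integers(1, 8, size=(576, len(PROPERTIES)))  # 7-point ratings
y = (X[:, 0] + X[:, 2] - X[:, 3] + rng.normal(0, 2, 576) > 5).astype(int)

# A plain logistic regression stands in for whatever classifier the
# authors actually used to reach their reported accuracy.
clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```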
A Confidence-Based Approach for Balancing Fairness and Accuracy
We study three classical machine learning algorithms in the context of
algorithmic fairness: adaptive boosting, support vector machines, and logistic
regression. Our goal is to maintain the high accuracy of these learning
algorithms while reducing the degree to which they discriminate against
individuals because of their membership in a protected group.
Our first contribution is a method for achieving fairness by shifting the
decision boundary for the protected group. The method is based on the theory of
margins for boosting. Our method performs comparably to or outperforms previous
algorithms in the fairness literature in terms of accuracy and low
discrimination, while simultaneously allowing for a fast and transparent
quantification of the trade-off between bias and error.
Our second contribution addresses the shortcomings of the bias-error
trade-off studied in most of the algorithmic fairness literature. We
demonstrate that even hopelessly naive modifications of a biased algorithm,
which cannot be reasonably said to be fair, can still achieve low bias and high
accuracy. To help distinguish between these naive algorithms and more sensible ones, we propose a new measure of fairness, called resilience to random bias (RRB). We demonstrate that RRB distinguishes well between our naive and sensible fairness algorithms. RRB, together with bias and accuracy, provides a more complete picture of the fairness of an algorithm.
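A minimal sketch of the boundary-shifting idea, assuming a generic real-valued score in place of the paper's boosting margins: shift the protected group's decision threshold until a disparity statistic falls under a cap. The disparity measure, the sweep, and all names below are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def shifted_predict(scores, protected, theta):
    """Predict positive where the score exceeds the threshold; the
    protected group's threshold is shifted down by theta. A per-group
    shift on a generic score is a simplification of the paper's
    boosting-margin formulation."""
    thresh = np.where(protected, -theta, 0.0)
    return (scores >= thresh).astype(int)

def statistical_disparity(pred, protected):
    """Difference in positive rates between the two groups."""
    return pred[~protected].mean() - pred[protected].mean()

def pick_theta(scores, protected, cap=0.02, grid=np.linspace(0, 2, 201)):
    """Smallest shift on the grid that brings disparity under the cap."""
    for theta in grid:
        pred = shifted_predict(scores, protected, theta)
        if abs(statistical_disparity(pred, protected)) <= cap:
            return theta
    return grid[-1]

# Toy usage: protected group has systematically lower scores.
rng = np.random.default_rng(1)
protected = rng.random(1000) < 0.4
scores = rng.normal(loc=np.where(protected, -0.3, 0.2), scale=1.0)
theta = pick_theta(scores, protected)
pred = shifted_predict(scores, protected, theta)
print(theta, statistical_disparity(pred, protected))
```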
Matching Code and Law: Achieving Algorithmic Fairness with Optimal Transport
Increasingly, discrimination by algorithms is perceived as a societal and
legal problem. As a response, a number of criteria for implementing algorithmic
fairness in machine learning have been developed in the literature. This paper
proposes the Continuous Fairness Algorithm (CFA), which enables a continuous interpolation between different fairness definitions. More specifically, we make three main contributions to the existing literature. First, our approach allows the decision maker to continuously vary between specific concepts of individual and group fairness. As a consequence, the algorithm enables the decision maker to adopt intermediate "worldviews" on the degree of discrimination encoded in algorithmic processes, adding nuance to the extreme cases of "we're all equal" (WAE) and "what you see is what you get" (WYSIWYG) proposed so far in the literature. Second, we use optimal
transport theory, and specifically the concept of the barycenter, to maximize
decision maker utility under the chosen fairness constraints. Third, the
algorithm is able to handle cases of intersectionality, i.e., of
multi-dimensional discrimination of certain groups on grounds of several
criteria. We discuss three main examples (credit applications; college
admissions; insurance contracts) and map out the legal and policy implications
of our approach. The explicit formalization of the trade-off between individual
and group fairness allows this post-processing approach to be tailored to
different situational contexts in which one or the other fairness criterion may
take precedence. Finally, we evaluate our model experimentally.
Comment: Vastly extended new version, now including computational experiment
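In one dimension, the Wasserstein barycenter of a set of score distributions can be obtained by averaging their quantile functions, which suggests the following sketch of a CFA-style interpolation between raw (individual-preserving) scores and fully group-fair scores. The parameter t, the unweighted barycenter, and the function names are simplifying assumptions, not the paper's exact construction.

```python
import numpy as np

def cfa_interpolate(scores, group, t, n_q=100):
    """1-D sketch of continuous interpolation toward group fairness:
    move each individual's score toward the Wasserstein barycenter of
    the group score distributions. t=0 keeps raw scores; t=1 gives all
    groups a common distribution. Quantile averaging is the standard
    1-D barycenter construction; the CFA's details differ."""
    qs = np.linspace(0, 1, n_q)
    # Barycenter quantile function = average of the groups' quantile
    # functions (unweighted here for simplicity; a barycenter would
    # typically weight by group size).
    bary = np.mean([np.quantile(scores[group == g], qs)
                    for g in np.unique(group)], axis=0)
    out = np.empty_like(scores, dtype=float)
    for g in np.unique(group):
        idx = group == g
        # Each score's quantile level within its own group.
        ranks = np.argsort(np.argsort(scores[idx])) / max(idx.sum() - 1, 1)
        target = np.interp(ranks, qs, bary)
        out[idx] = (1 - t) * scores[idx] + t * target
    return out

# Toy usage: two groups with shifted score distributions.
rng = np.random.default_rng(2)
group = rng.integers(0, 2, 500)
scores = rng.normal(np.where(group == 1, -0.5, 0.5), 1.0)
half_fair = cfa_interpolate(scores, group, t=0.5)
```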
Fair assignment of indivisible objects under ordinal preferences
We consider the discrete assignment problem in which agents express ordinal
preferences over objects and these objects are allocated to the agents in a
fair manner. We use the stochastic dominance relation between fractional or
randomized allocations to systematically define varying notions of
proportionality and envy-freeness for discrete assignments. The computational
complexity of checking whether a fair assignment exists is studied for these
fairness notions. We also characterize the conditions under which a fair
assignment is guaranteed to exist. For a number of fairness concepts,
polynomial-time algorithms are presented to check whether a fair assignment
exists. Our algorithmic results also extend to the case of unequal entitlements
of agents. Our NP-hardness result, which holds for several variants of
envy-freeness, answers an open question posed by Bouveret, Endriss, and Lang
(ECAI 2010). We also propose fairness concepts that always suggest a non-empty
set of assignments with meaningful fairness properties. Among these concepts,
optimal proportionality and optimal weak proportionality appear to be desirable
fairness concepts.
Comment: extended version of a paper presented at AAMAS 201
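For the stochastic-dominance notions, SD proportionality of a discrete assignment admits a simple prefix test: for every agent and every top-k prefix of that agent's ranking, the agent must hold at least k/n of the prefix's total worth, since the proportional benchmark gives each agent 1/n of every object. A minimal sketch, assuming complete strict rankings and equal entitlements (the function and variable names are illustrative):

```python
def sd_proportional(assignment, prefs):
    """Check SD proportionality of a discrete assignment: for every
    agent i and every prefix top_k of i's ranking, require
    |A_i ∩ top_k| >= k/n, i.e. i's allocation stochastically dominates
    the uniform 1/n share with respect to i's ordinal preferences."""
    n = len(prefs)
    for agent, ranking in enumerate(prefs):
        held = set(assignment[agent])
        count = 0
        for k, obj in enumerate(ranking, start=1):
            count += obj in held
            if count * n < k:  # |A_i ∩ top_k| < k/n: violation
                return False
    return True

# Toy usage: 2 agents, 4 objects, complete strict rankings.
prefs = [["a", "b", "c", "d"], ["b", "a", "d", "c"]]
print(sd_proportional({0: ["a", "c"], 1: ["b", "d"]}, prefs))  # True
print(sd_proportional({0: ["c", "d"], 1: ["a", "b"]}, prefs))  # False
```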
