Enhancing the Accuracy and Fairness of Human Decision Making
Societies often rely on human experts to take a wide variety of decisions
affecting their members, from jail-or-release decisions taken by judges and
stop-and-frisk decisions taken by police officers to accept-or-reject decisions
taken by academics. In this context, each decision is taken by an expert who is
typically chosen uniformly at random from a pool of experts. However, these
decisions may be imperfect due to limited experience, implicit biases, or
faulty probabilistic reasoning. Can we improve the accuracy and fairness of the
overall decision making process by optimizing the assignment between experts
and decisions?
In this paper, we address the above problem from the perspective of
sequential decision making and show that, for different fairness notions from
the literature, it reduces to a sequence of (constrained) weighted bipartite
matchings, which can be solved efficiently using algorithms with approximation
guarantees. Moreover, these algorithms also benefit from posterior sampling to
actively trade off exploitation---selecting expert assignments which lead to
accurate and fair decisions---and exploration---selecting expert assignments to
learn about the experts' preferences and biases. We demonstrate the
effectiveness of our algorithms on both synthetic and real-world data and show
that they can significantly improve both the accuracy and fairness of the
decisions taken by pools of experts.
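To make the reduction concrete, the sketch below assigns experts to decisions by solving a maximum-weight bipartite matching over posterior-sampled expert accuracies, in the spirit of the posterior-sampling approach described above. It is a minimal illustration, not the paper's algorithm: the Beta-posterior model and all variable names are assumptions made for this example.

```python
# A minimal sketch (not the paper's exact algorithm) of the matching step:
# assign experts to decisions via maximum-weight bipartite matching over
# posterior-sampled accuracies. The Beta-posterior model and all names
# here are illustrative assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_experts, n_decisions = 5, 5

# Hypothetical counts of past correct/incorrect decisions per
# (expert, decision-type) pair, giving Beta posteriors over accuracy.
successes = rng.integers(1, 20, size=(n_experts, n_decisions))
failures = rng.integers(1, 20, size=(n_experts, n_decisions))

# Posterior sampling (Thompson-style): draw one plausible accuracy
# matrix, which trades off exploitation against exploration.
sampled_accuracy = rng.beta(successes + 1, failures + 1)

# The Hungarian algorithm maximizes total sampled accuracy
# (implemented as minimizing its negation).
rows, cols = linear_sum_assignment(-sampled_accuracy)
for e, d in zip(rows, cols):
    print(f"expert {e} -> decision {d} "
          f"(sampled accuracy {sampled_accuracy[e, d]:.2f})")
```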
The invisible power of fairness. How machine learning shapes democracy
Many machine learning systems make extensive use of large amounts of data
regarding human behaviors. Several researchers have found various
discriminatory practices related to the use of human-related machine learning
systems, for example in the field of criminal justice, credit scoring and
advertising. Fair machine learning is therefore emerging as a new field of
study to mitigate biases that are inadvertently incorporated into algorithms.
Data scientists and computer engineers are making various efforts to provide
definitions of fairness. In this paper, we provide an overview of the most
widespread definitions of fairness in the field of machine learning, arguing
that the ideas underlying each formalization are closely related to different
ideas of justice and to different interpretations of democracy embedded in our
culture. This work intends to analyze the definitions of fairness that have
been proposed to date to interpret the underlying criteria and to relate them
to different ideas of democracy.
Comment: 12 pages, 1 figure, preprint version, submitted to The 32nd Canadian Conference on Artificial Intelligence, taking place in Kingston, Ontario, May 28 to May 31, 2019
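As a concrete companion to the survey, the snippet below computes two of the most widespread formal fairness definitions it discusses, demographic parity and equal opportunity, on toy data. The data, threshold, and function names are illustrative only.

```python
# A toy illustration of two widely used fairness definitions;
# the data here are made up for demonstration.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # ground-truth labels
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])   # classifier decisions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected attribute

def demographic_parity_gap(y_pred, group):
    """|P(Yhat=1 | A=0) - P(Yhat=1 | A=1)|: equal positive rates."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """|P(Yhat=1 | Y=1, A=0) - P(Yhat=1 | Y=1, A=1)|: equal TPR."""
    tpr = [y_pred[(group == a) & (y_true == 1)].mean() for a in (0, 1)]
    return abs(tpr[0] - tpr[1])

print("demographic parity gap:", demographic_parity_gap(y_pred, group))
print("equal opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))
```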
Matching Code and Law: Achieving Algorithmic Fairness with Optimal Transport
Increasingly, discrimination by algorithms is perceived as a societal and
legal problem. As a response, a number of criteria for implementing algorithmic
fairness in machine learning have been developed in the literature. This paper
proposes the Continuous Fairness Algorithm (CFA), which enables a
continuous interpolation between different fairness definitions. More
specifically, we make three main contributions to the existing literature.
First, our approach allows the decision maker to continuously vary between
specific concepts of individual and group fairness. As a consequence, the
algorithm enables the decision maker to adopt intermediate "worldviews" on
the degree of discrimination encoded in algorithmic processes, adding nuance to
the extreme cases of "we're all equal" (WAE) and "what you see is what you
get" (WYSIWYG) proposed so far in the literature. Second, we use optimal
transport theory, and specifically the concept of the barycenter, to maximize
decision maker utility under the chosen fairness constraints. Third, the
algorithm is able to handle cases of intersectionality, i.e., of
multi-dimensional discrimination of certain groups on grounds of several
criteria. We discuss three main examples (credit applications; college
admissions; insurance contracts) and map out the legal and policy implications
of our approach. The explicit formalization of the trade-off between individual
and group fairness allows this post-processing approach to be tailored to
different situational contexts in which one or the other fairness criterion may
take precedence. Finally, we evaluate our model experimentally.
Comment: Vastly extended new version, now including computational experiments
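A rough intuition for the barycenter-based interpolation can be given in one dimension, where the Wasserstein barycenter of two score distributions averages their quantile functions. The sketch below implements this 1-D special case with an interpolation parameter theta between the WYSIWYG extreme (theta = 0, no repair) and the WAE extreme (theta = 1, full repair to the barycenter); it is an illustrative simplification, not the CFA itself.

```python
# A minimal 1-D sketch of the barycenter idea: repair each group's score
# distribution a fraction theta of the way toward the Wasserstein
# barycenter. The quantile construction and all names are assumptions
# for this illustration, not the paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(1)
scores_a = rng.normal(0.4, 0.10, size=1000)  # group A raw scores
scores_b = rng.normal(0.6, 0.15, size=1000)  # group B raw scores

def partial_repair(scores, other, theta):
    """Move scores a fraction theta of the way to the 1-D barycenter.

    In one dimension the W2 barycenter of two equally weighted
    distributions averages their quantile functions, so each score is
    shifted toward the midpoint of itself and the other group's score
    at the same empirical quantile.
    """
    q = np.argsort(np.argsort(scores)) / (len(scores) - 1)
    other_at_q = np.quantile(other, q)
    barycenter = 0.5 * (scores + other_at_q)
    return (1 - theta) * scores + theta * barycenter

for theta in (0.0, 0.5, 1.0):
    ra = partial_repair(scores_a, scores_b, theta)
    rb = partial_repair(scores_b, scores_a, theta)
    print(f"theta={theta:.1f}: group means {ra.mean():.3f} vs {rb.mean():.3f}")
```

At theta = 1 both groups' repaired scores follow the same distribution, recovering group fairness; at theta = 0 the raw scores are kept, preserving individual ranking within each group.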
Fair Inputs and Fair Outputs: The Incompatibility of Fairness in Privacy and Accuracy
Fairness concerns about algorithmic decision-making systems have been mainly
focused on the outputs (e.g., the accuracy of a classifier across individuals
or groups). However, one may additionally be concerned with fairness in the
inputs. In this paper, we propose and formulate two properties regarding the
inputs of (features used by) a classifier. In particular, we claim that fair
privacy (whether individuals are all asked to reveal the same information) and
need-to-know (whether users are only asked for the minimal information required
for the task at hand) are desirable properties of a decision system. We explore
the interaction between these properties and fairness in the outputs (fair
prediction accuracy). We show that for an optimal classifier these three
properties are in general incompatible, and we explain what common properties
of data make them incompatible. Finally, we provide an algorithm to verify
whether the trade-off between the three properties exists in a given dataset,
and we use the algorithm to show that this trade-off is common in real data.
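The sketch below illustrates, on synthetic data, the kind of check such a verification algorithm performs: when a feature is informative for one group but not the other, asking everyone the same minimal question (fair privacy plus need-to-know) yields unequal accuracy across groups, while adding features restores accuracy parity at the cost of need-to-know. The data-generating process and model choice are assumptions for this example, not the paper's procedure.

```python
# A hedged sketch of the incompatibility: with a shared minimal input,
# per-group accuracy diverges; with the full input, it equalizes.
# Synthetic data and the logistic model are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 4000
group = rng.integers(0, 2, size=n)
x1 = rng.normal(size=n)                 # informative for group 0 only
x2 = rng.normal(size=n)                 # informative for group 1 only
logit = np.where(group == 0, 2.0 * x1, 2.0 * x2)
y = (logit + rng.normal(scale=0.5, size=n) > 0).astype(int)

def group_accuracies(features):
    """Per-group accuracy of a classifier using the given features."""
    clf = LogisticRegression().fit(features, y)
    pred = clf.predict(features)
    return [(pred[group == a] == y[group == a]).mean() for a in (0, 1)]

X_minimal = x1.reshape(-1, 1)           # same single question for everyone
X_full = np.column_stack([x1, x2])      # more questions than some need

print("minimal shared input, per-group accuracy:", group_accuracies(X_minimal))
print("full input, per-group accuracy:", group_accuracies(X_full))
```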