The Intuitive Appeal of Explainable Machines
Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model's development, not just explanations of the model itself.
Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction
As algorithms are increasingly used to make important decisions that affect
human lives, ranging from social benefit assignment to predicting risk of
criminal recidivism, concerns have been raised about the fairness of
algorithmic decision making. Most prior works on algorithmic fairness
normatively prescribe how fair decisions ought to be made. In contrast, here,
we descriptively survey users about how they perceive and reason about fairness
in algorithmic decision making.
A key contribution of this work is the framework we propose to understand why
people perceive certain features as fair or unfair to be used in algorithms.
Our framework identifies eight properties of features, such as relevance,
volitionality and reliability, as latent considerations that inform people's
moral judgments about the fairness of feature use in decision-making
algorithms. We validate our framework through a series of scenario-based
surveys with 576 people. We find that, based on a person's assessment of the
eight latent properties of a feature in our exemplar scenario, we can
accurately (> 85%) predict if the person will judge the use of the feature as
fair.
Our findings have important implications. At a high-level, we show that
people's unfairness concerns are multi-dimensional and argue that future
studies need to address unfairness concerns beyond discrimination. At a
low-level, we find considerable disagreements in people's fairness judgments.
We identify root causes of the disagreements, and note possible pathways to
resolve them.
Comment: To appear in the Proceedings of the Web Conference (WWW 2018). Code available at https://fate-computing.mpi-sws.org/procedural_fairness
Matching Code and Law: Achieving Algorithmic Fairness with Optimal Transport
Increasingly, discrimination by algorithms is perceived as a societal and
legal problem. As a response, a number of criteria for implementing algorithmic
fairness in machine learning have been developed in the literature. This paper
proposes the Continuous Fairness Algorithm (CFA), which enables a
continuous interpolation between different fairness definitions. More
specifically, we make three main contributions to the existing literature.
First, our approach allows the decision maker to continuously vary between
specific concepts of individual and group fairness. As a consequence, the
algorithm enables the decision maker to adopt intermediate "worldviews" on
the degree of discrimination encoded in algorithmic processes, adding nuance to
the extreme cases of "we're all equal" (WAE) and "what you see is what you
get" (WYSIWYG) proposed so far in the literature. Second, we use optimal
transport theory, and specifically the concept of the barycenter, to maximize
decision maker utility under the chosen fairness constraints. Third, the
algorithm is able to handle cases of intersectionality, i.e., of
multi-dimensional discrimination of certain groups on grounds of several
criteria. We discuss three main examples (credit applications; college
admissions; insurance contracts) and map out the legal and policy implications
of our approach. The explicit formalization of the trade-off between individual
and group fairness allows this post-processing approach to be tailored to
different situational contexts in which one or the other fairness criterion may
take precedence. Finally, we evaluate our model experimentally.
Comment: Vastly extended new version, now including computational experiments
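The interpolation idea at the heart of the CFA abstract can be illustrated in one dimension, where the Wasserstein barycenter of the group score distributions is simply the weighted average of their quantile functions. The following is a minimal sketch under that assumption; the function name `interpolate_scores`, the grid resolution, and the toy data are illustrative, not taken from the paper. Setting theta = 0 leaves scores untouched (the WYSIWYG end), while theta = 1 maps every group onto the barycenter (the WAE end).

```python
import numpy as np

def interpolate_scores(scores, groups, theta):
    """Move each group's scores a fraction theta toward the 1-D
    Wasserstein barycenter of the per-group score distributions."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    out = scores.copy()
    grid = np.linspace(0.0, 1.0, 101)
    labels = np.unique(groups)
    # Empirical quantile function of each group on a common grid.
    quantiles = {g: np.quantile(scores[groups == g], grid) for g in labels}
    # 1-D barycenter = group-size-weighted average of quantile functions.
    bary = sum(np.mean(groups == g) * quantiles[g] for g in labels)
    for g in labels:
        sg = scores[groups == g]
        # Rank of each score within its own group, scaled into (0, 1].
        ranks = np.searchsorted(np.sort(sg), sg, side="right") / len(sg)
        target = np.interp(ranks, grid, bary)  # barycenter counterpart
        out[groups == g] = (1.0 - theta) * sg + theta * target
    return out

# Toy example: two groups whose raw scores differ by two points on average.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0, 1, 1000), rng.normal(2, 1, 1000)])
groups = np.repeat([0, 1], 1000)
gap_raw = scores[groups == 1].mean() - scores[groups == 0].mean()
adjusted = interpolate_scores(scores, groups, theta=1.0)  # WAE endpoint
gap_wae = adjusted[groups == 1].mean() - adjusted[groups == 0].mean()
```

Intermediate values of theta then trade off individual fairness (scores stay close to their original values) against group fairness (group distributions coincide), which is the continuous "worldview" dial the abstract describes.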
Privacy as personal resistance: exploring legal narratology and the need for a legal architecture for personal privacy rights
Different cultures produce different privacies, both architecturally and legally speaking, as well as in their different legal architectures. The "Simms principle" can be harnessed to produce semi-constitutional privacy protection through statute, building on the work already done in "bringing rights home" through the Human Rights Act 1998. This article attempts to set out a notion of semi-entrenched legal rights, which will help to better portray the case for architectural, constitutional privacy, following an examination of the problems with a legal narrative for privacy rights as they currently exist. I will use parallel ideas from the works of W.B. Yeats and Costas Douzinas to explore and critique these assumptions and arguments. The ultimate object of this piece is an argument for the creation of a legal instrument, namely an Act of Parliament, in the United Kingdom, the purpose of which is to protect certain notions of personal privacy from politically motivated erosion and intrusion.
Slave to the Algorithm? Why a "Right to an Explanation" Is Probably Not the Remedy You Are Looking For
Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals' lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a "right to an explanation" has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic "black box" to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core "algorithmic war stories" that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as "meaningful information about the logic of processing" may not be provided by the kind of ML "explanations" computer scientists have developed, partially in response. ML explanations are restricted both by the type of explanation sought, the dimensionality of the domain and the type of user seeking an explanation. However, "subject-centric explanations" (SCEs), focussing on particular regions of a model around a query, show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations) in dodging developers' worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a "right to an explanation" in the GDPR may be at best distracting, and at worst nurture a new kind of "transparency fallacy." But all is not lost.
We argue that other parts of the GDPR related (i) to the right to erasure ("right to be forgotten") and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments, and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.
Eliminating Latent Discrimination: Train Then Mask
How can we control for latent discrimination in predictive models? How can we
provably remove it? Such questions are at the heart of algorithmic fairness and
its impacts on society. In this paper, we define a new operational fairness
criterion, inspired by the well-understood notion of omitted-variable bias in
statistics and econometrics. Our notion of fairness effectively controls for
sensitive features and provides diagnostics for deviations from fair decision
making. We then establish analytical and algorithmic results about the
existence of a fair classifier in the context of supervised learning. Our
results readily imply a simple, but rather counter-intuitive, strategy for
eliminating latent discrimination. In order to prevent other features proxying
for sensitive features, we need to include sensitive features in the training
phase, but exclude them in the test/evaluation phase while controlling for
their effects. We evaluate the performance of our algorithm on several
real-world datasets and show how fairness for these datasets can be improved
with a very small loss in accuracy.
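The counter-intuitive strategy this abstract describes can be demonstrated with a toy linear model. The sketch below is a plain least-squares illustration under assumed synthetic data (the variable names `skill`, `proxy`, and the masking value of 0 are all hypothetical, not from the paper): training without the sensitive feature lets a correlated proxy absorb its effect, while training with it and then fixing it to a common reference value at prediction time removes the group gap.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
s = rng.integers(0, 2, n).astype(float)   # sensitive attribute (binary)
proxy = s + rng.normal(0.0, 0.5, n)       # feature correlated with s
skill = rng.normal(0.0, 1.0, n)           # legitimate feature
# True outcome depends on skill and s, but NOT on the proxy directly.
y = 2.0 * skill + 1.5 * s + rng.normal(0.0, 0.1, n)

# Naive approach: drop s from training; the proxy then absorbs its effect.
X_naive = np.column_stack([np.ones(n), skill, proxy])
b_naive, *_ = np.linalg.lstsq(X_naive, y, rcond=None)
pred_naive = X_naive @ b_naive

# Train then mask, step 1: include s while fitting, so the proxy's
# coefficient reflects only its own (here: zero) contribution.
X_full = np.column_stack([np.ones(n), skill, proxy, s])
b_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)

# Step 2: mask s to a common reference value at prediction time.
X_masked = X_full.copy()
X_masked[:, 3] = 0.0
pred_masked = X_masked @ b_full

# Systematic prediction difference between the two groups, each way.
gap_naive = pred_naive[s == 1].mean() - pred_naive[s == 0].mean()
gap_masked = pred_masked[s == 1].mean() - pred_masked[s == 0].mean()
```

In this setup the naive model's predictions differ between groups (the proxy carries s into the prediction), while the train-then-mask predictions are essentially group-blind, matching the abstract's claim that sensitive features must be seen during training precisely so they can be controlled for at evaluation time.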