Transparent, explainable, and accountable AI for robotics
To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems
Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR
There has been much discussion of the right to explanation in the EU General
Data Protection Regulation, and its existence, merits, and disadvantages.
Implementing a right to explanation that opens the black box of algorithmic
decision-making faces major legal and technical barriers. Explaining the
functionality of complex algorithmic decision-making systems and their
rationale in specific cases is a technically challenging problem. Some
explanations may offer little meaningful information to data subjects, raising
questions around their value. Explanations of automated decisions need not
hinge on the general public understanding how algorithmic systems function.
Even though such interpretability is of great importance and should be pursued,
explanations can, in principle, be offered without opening the black box.
Looking at explanations as a means to help a data subject act rather than
merely understand, one could gauge the scope and content of explanations
according to the specific goal or action they are intended to support. From the
perspective of individuals affected by automated decision-making, we propose
three aims for explanations: (1) to inform and help the individual understand
why a particular decision was reached, (2) to provide grounds to contest the
decision if the outcome is undesired, and (3) to understand what would need to
change in order to receive a desired result in the future, based on the current
decision-making model. We assess how each of these goals finds support in the
GDPR. We suggest data controllers should offer a particular type of
explanation, unconditional counterfactual explanations, to support these three
aims. These counterfactual explanations describe the smallest change to the
world that can be made to obtain a desirable outcome, or to arrive at the
closest possible world, without needing to explain the internal logic of the
system.
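
To make the proposal concrete, the following minimal sketch (Python, illustrative
only) searches for such a counterfactual by nudging an input until a stand-in model
crosses a desired score while penalising distance from the original. The logistic
scorer, the squared-distance penalty, and all parameter values are assumptions for
illustration, not the authors' implementation.

    # A minimal sketch (not the authors' implementation) of searching for a
    # counterfactual x' that flips a model's decision while staying close to
    # the original input x. Model, weights, and the lambda trade-off are
    # illustrative assumptions.
    import numpy as np

    def predict_proba(x, w, b):
        """Toy black-box score: a logistic model standing in for any classifier."""
        return 1.0 / (1.0 + np.exp(-(x @ w + b)))

    def counterfactual(x, w, b, target=0.6, lam=0.1, lr=0.05, steps=500):
        """Gradient search for a small change to x that reaches `target`."""
        x_cf = x.copy()
        for _ in range(steps):
            p = predict_proba(x_cf, w, b)
            if p >= target:
                break
            # gradient of (p - target)^2 + lam * ||x_cf - x||^2
            grad = 2 * (p - target) * p * (1 - p) * w + 2 * lam * (x_cf - x)
            x_cf -= lr * grad
        return x_cf

    x = np.array([0.2, -1.0, 0.5])           # original (rejected) applicant
    w, b = np.array([1.5, 0.8, -0.3]), -0.4  # stand-in model parameters
    print("counterfactual:", counterfactual(x, w, b))

The returned point is the "closest possible world" in this toy setting: the applicant
learns which small feature changes would have produced the desired score, without any
description of the model's internal logic.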
Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI
This article identifies a critical incompatibility between European notions
of discrimination and existing statistical measures of fairness. First, we
review the evidential requirements to bring a claim under EU non-discrimination
law. Due to the disparate nature of algorithmic and human discrimination, the
EU's current requirements are too contextual, reliant on intuition, and open to
judicial interpretation to be automated. Second, we show how the legal
protection offered by non-discrimination law is challenged when AI, not humans,
discriminate. Humans discriminate due to negative attitudes (e.g. stereotypes,
prejudice) and unintentional biases (e.g. organisational practices or
internalised stereotypes) which can act as a signal to victims that
discrimination has occurred. Finally, we examine how existing work on fairness
in machine learning lines up with procedures for assessing cases under EU
non-discrimination law. We propose "conditional demographic disparity" (CDD) as
a standard baseline statistical measurement that aligns with the European Court
of Justice's "gold standard." Establishing a standard set of statistical
evidence for automated discrimination cases can help ensure consistent
procedures for assessment, but not judicial interpretation, of cases involving
AI and automated systems. Through this proposal for procedural regularity in
the identification and assessment of automated discrimination, we clarify how
to build considerations of fairness into automated systems as far as possible
while still respecting and enabling the contextual approach to judicial
interpretation practiced under EU non-discrimination law.
N.B. Abridged abstract.
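
The abstract leaves the formal definition of CDD to the paper. Purely as a rough
illustration of the general idea — comparing adverse-outcome rates between groups
within "legitimate" strata and aggregating by stratum size — the following sketch is
an approximation for intuition, not the paper's exact measure.

    # Illustrative sketch of a conditional-disparity style check: within each
    # "legitimate" stratum (e.g. job grade), compare adverse-outcome rates across
    # demographic groups, then weight by stratum size. Not the paper's exact CDD.
    from collections import defaultdict

    def conditional_disparity(records, group_a="A", group_b="B"):
        """records: iterable of (stratum, group, outcome) with outcome 1 = adverse."""
        by_stratum = defaultdict(list)
        for stratum, group, outcome in records:
            by_stratum[stratum].append((group, outcome))
        total = sum(len(rows) for rows in by_stratum.values())
        disparity = 0.0
        for rows in by_stratum.values():
            rate = lambda g: (
                sum(o for grp, o in rows if grp == g)
                / max(1, sum(1 for grp, _ in rows if grp == g))
            )
            disparity += (len(rows) / total) * (rate(group_a) - rate(group_b))
        return disparity  # > 0: group A bears more adverse outcomes overall

    data = [("junior", "A", 1), ("junior", "B", 0),
            ("senior", "A", 1), ("senior", "B", 1)]
    print(conditional_disparity(data))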
The Unfairness of Fair Machine Learning: Levelling down and strict egalitarianism by default
In recent years fairness in machine learning (ML) has emerged as a highly
active area of research and development. Most define fairness in simple terms,
where fairness means reducing gaps in performance or outcomes between
demographic groups while preserving as much of the accuracy of the original
system as possible. This oversimplification of equality through fairness
measures is troubling. Many current fairness measures suffer from both fairness
and performance degradation, or "levelling down," where fairness is achieved by
making every group worse off, or by bringing better performing groups down to
the level of the worst off. When fairness can only be achieved by making
everyone worse off in material or relational terms through injuries of stigma,
loss of solidarity, unequal concern, and missed opportunities for substantive
equality, something would appear to have gone wrong in translating the vague
concept of 'fairness' into practice. This paper examines the causes and
prevalence of levelling down across fairML, and explores possible justifications
and criticisms based on philosophical and legal theories of equality and
distributive justice, as well as equality law jurisprudence. We find that
fairML does not currently engage in the type of measurement, reporting, or
analysis necessary to justify levelling down in practice. We propose a first
step towards substantive equality in fairML: "levelling up" systems by design
through enforcement of minimum acceptable harm thresholds, or "minimum rate
constraints," as fairness constraints. We likewise propose an alternative
harms-based framework to counter the oversimplified egalitarian framing
currently dominant in the field and push future discussion more towards
substantive equality opportunities and away from strict egalitarianism by
default. N.B. Shortened abstract, see paper for full abstract
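
To illustrate what a minimum rate constraint might look like operationally, the
sketch below checks whether each group's true-positive rate clears a minimum floor,
rather than equalising groups downward. The choice of metric, the 0.8 floor, and the
toy data are assumptions, not the paper's evaluation.

    # A minimal sketch of a "levelling up" check in the spirit of minimum rate
    # constraints: require every group's true-positive rate to clear a floor
    # instead of equalising groups by degrading the better-off ones.
    import numpy as np

    def group_tpr(y_true, y_pred, groups):
        """Per-group true-positive rate."""
        rates = {}
        for g in np.unique(groups):
            mask = (groups == g) & (y_true == 1)
            rates[g] = y_pred[mask].mean() if mask.any() else float("nan")
        return rates

    def violates_minimum_rate(y_true, y_pred, groups, floor=0.8):
        """Return the groups whose TPR falls below the minimum acceptable rate."""
        return {g: r for g, r in group_tpr(y_true, y_pred, groups).items() if r < floor}

    y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1])
    y_pred = np.array([1, 1, 1, 1, 0, 1, 0, 0])
    groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
    print(violates_minimum_rate(y_true, y_pred, groups))  # {'B': 0.0}

Flagged groups would then need their performance raised to the floor (levelling up),
rather than the better-served group being brought down to match them.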
The ethics of algorithms: mapping the debate
In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms
Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation
Since approval of the EU General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that the GDPR will legally mandate a ‘right to explanation’ of all decisions made by automated or artificially intelligent algorithmic systems. This right to explanation is viewed as an ideal mechanism to enhance the accountability and transparency of automated decision-making. However, there are several reasons to doubt both the legal existence and the feasibility of such a right. In contrast to the right to explanation of specific automated decisions claimed elsewhere, the GDPR only mandates that data subjects receive meaningful, but properly limited, information (Articles 13-15) about the logic involved, as well as the significance and the envisaged consequences of automated decision-making systems, what we term a ‘right to be informed’. Further, the ambiguity and limited scope of the ‘right not to be subject to automated decision-making’ contained in Article 22 (from which the alleged ‘right to explanation’ stems) raises questions over the protection actually afforded to data subjects. These problems show that the GDPR lacks precise language as well as explicit and well-defined rights and safeguards against automated decision-making, and therefore runs the risk of being toothless. We propose a number of legislative and policy steps that, if taken, may improve the transparency and accountability of automated decision-making when the GDPR comes into force in 2018
Beyond ideals: why the (medical) AI industry needs to motivate behavioural change in line with fairness and transparency values, and how it can do it
Artificial intelligence (AI) is increasingly relied upon by clinicians for making diagnostic and treatment decisions, playing an important role in imaging, diagnosis, risk analysis, lifestyle monitoring, and health information management. While research has identified biases in healthcare AI systems and proposed technical solutions to address these, we argue that effective solutions require human engagement. Furthermore, there is a lack of research on how to motivate the adoption of these solutions and promote investment in designing AI systems that align with values such as transparency and fairness from the outset. Drawing on insights from psychological theories, we assert the need to understand the values that underlie decisions made by individuals involved in creating and deploying AI systems. We describe how this understanding can be leveraged to increase engagement with de-biasing and fairness-enhancing practices within the AI healthcare industry, ultimately leading to sustained behavioral change via autonomy-supportive communication strategies rooted in motivational and social psychology theories. In developing these pathways to engagement, we consider the norms and needs that govern the AI healthcare domain, and we evaluate incentives for maintaining the status quo against economic, legal, and social incentives for behavior change in line with transparency and fairness values
Recommendations and User Agency: The Reachability of Collaboratively-Filtered Information
Recommender systems often rely on models which are trained to maximize
accuracy in predicting user preferences. When the systems are deployed, these
models determine the availability of content and information to different
users. The gap between these objectives gives rise to a potential for
unintended consequences, contributing to phenomena such as filter bubbles and
polarization. In this work, we consider directly the information availability
problem through the lens of user recourse. Using ideas of reachability, we
propose a computationally efficient audit for top-N linear recommender
models. Furthermore, we describe the relationship between model complexity and
the effort necessary for users to exert control over their recommendations. We
use this insight to provide a novel perspective on the user cold-start problem.
Finally, we demonstrate these concepts with an empirical investigation of a
state-of-the-art model trained on a widely used movie ratings dataset. Comment: appeared at FAccT '2
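
The paper's audit is computationally efficient; purely to illustrate what
"reachability" means here, the brute-force sketch below asks whether a user can bring
a target item into their top-k list by editing one of their ratings under a toy
item-item linear model. The weight matrix, rating grid, and single-edit budget are
assumptions, not the paper's method.

    # Brute-force sketch of a reachability-style question for a top-k linear
    # recommender: can the user make a target item enter their top-k by editing
    # one existing rating? Toy item-item weights; the paper's audit is far more
    # efficient than this exhaustive search.
    import itertools
    import numpy as np

    def top_k(scores, seen, k):
        candidates = [i for i in np.argsort(scores)[::-1] if i not in seen]
        return candidates[:k]

    def reachable(W, ratings, editable, target, k=2, grid=(0, 1, 2, 3, 4, 5)):
        """Try every value of every editable rating; True if `target` can reach top-k."""
        seen = {i for i, r in enumerate(ratings) if r > 0}
        for idx, val in itertools.product(editable, grid):
            trial = ratings.copy()
            trial[idx] = val
            scores = trial @ W          # linear item-item scoring
            if target in top_k(scores, seen, k):
                return True
        return False

    rng = np.random.default_rng(0)
    W = rng.random((6, 6))              # toy item-item weights
    ratings = np.array([5.0, 0, 3.0, 0, 0, 0])
    print(reachable(W, ratings, editable=[0, 2], target=4))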
FACE: Feasible and Actionable Counterfactual Explanations
Work in Counterfactual Explanations tends to focus on the principle of "the
closest possible world" that identifies small changes leading to the desired
outcome. In this paper we argue that while this approach might initially seem
intuitively appealing it exhibits shortcomings not addressed in the current
literature. First, a counterfactual example generated by the state-of-the-art
systems is not necessarily representative of the underlying data distribution,
and may therefore prescribe unachievable goals (e.g., an unsuccessful life
insurance applicant with severe disability may be advised to do more sports).
Secondly, the counterfactuals may not be based on a "feasible path" between the
current state of the subject and the suggested one, making actionable recourse
infeasible (e.g., low-skilled unsuccessful mortgage applicants may be told to
double their salary, which may be hard without first increasing their skill
level). These two shortcomings may render counterfactual explanations
impractical and sometimes outright offensive. To address these two major flaws,
first of all, we propose a new line of Counterfactual Explanations research
aimed at providing actionable and feasible paths to transform a selected
instance into one that meets a certain goal. Secondly, we propose FACE: an
algorithmically sound way of uncovering these "feasible paths" based on the
shortest path distances defined via density-weighted metrics. Our approach
generates counterfactuals that are coherent with the underlying data
distribution and supported by the "feasible paths" of change, which are
achievable and can be tailored to the problem at hand. Comment: Presented at AAAI/ACM Conference on AI, Ethics, and Society 202
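
As a compact illustration of the density-weighted shortest-path idea (not the
authors' implementation), the sketch below connects nearby training points into a
graph, penalises edges that cross low-density regions, and runs Dijkstra from the
query point to the nearest point with the desired outcome. The kernel bandwidth,
connection radius, and edge-cost form are assumptions.

    # A compact sketch in the spirit of FACE: connect nearby training points into
    # a graph, penalise edges through low-density regions, and find the shortest
    # feasible path from the query to any point with the desired outcome.
    import heapq
    import numpy as np

    def density(x, X, bw=0.5):
        """Crude kernel density estimate at x."""
        return np.mean(np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * bw ** 2)))

    def face_path(X, y, start_idx, desired=1, eps=1.5):
        n = len(X)
        dist = np.full(n, np.inf)
        dist[start_idx] = 0.0
        heap = [(0.0, start_idx)]
        while heap:
            d, i = heapq.heappop(heap)
            if d > dist[i]:
                continue
            if y[i] == desired and i != start_idx:
                return i, d                      # nearest feasible counterfactual
            for j in range(n):
                step = np.linalg.norm(X[i] - X[j])
                if j == i or step > eps:
                    continue
                mid_density = density((X[i] + X[j]) / 2, X)
                cost = step / max(mid_density, 1e-6)   # low density => high cost
                if d + cost < dist[j]:
                    dist[j] = d + cost
                    heapq.heappush(heap, (d + cost, j))
        return None, np.inf

    X = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.4], [3.0, 0.5]])
    y = np.array([0, 0, 0, 1])
    print(face_path(X, y, start_idx=0))   # index of the counterfactual and path cost

Because the path is forced through well-populated regions of the data, the suggested
changes stay close to states that real individuals actually occupy, which is the sense
in which the resulting recourse is "feasible".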