Transparency in Complex Computational Systems
Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have s…
Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR
There has been much discussion of the right to explanation in the EU General Data Protection Regulation (GDPR): its existence, merits, and disadvantages. Implementing a right to explanation that opens the black box of algorithmic decision-making faces major legal and technical barriers. Explaining the functionality of complex algorithmic decision-making systems and their rationale in specific cases is a technically challenging problem. Some explanations may offer little meaningful information to data subjects, raising questions about their value. Explanations of automated decisions need not hinge on the general public understanding how algorithmic systems function. Even though such interpretability is of great importance and should be pursued, explanations can, in principle, be offered without opening the black box. Looking at explanations as a means to help a data subject act, rather than merely understand, one could gauge the scope and content of explanations according to the specific goal or action they are intended to support. From the perspective of individuals affected by automated decision-making, we propose three aims for explanations: (1) to inform and help the individual understand why a particular decision was reached; (2) to provide grounds to contest the decision if the outcome is undesired; and (3) to understand what would need to change in order to receive a desired result in the future, based on the current decision-making model. We assess how each of these goals finds support in the GDPR. We suggest data controllers should offer a particular type of explanation, unconditional counterfactual explanations, to support these three aims. These counterfactual explanations describe the smallest change to the world that can be made to obtain a desirable outcome, or to arrive at the closest possible world, without needing to explain the internal logic of the system.
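The core idea lends itself to a compact illustration. Below is a minimal sketch, in Python, of a counterfactual search against a toy classifier: find the smallest perturbation of an input that yields the desired outcome, querying the model only through its predictions. The model, features, objective weighting, and optimiser here are illustrative assumptions, not the paper's implementation.

```python
# A minimal counterfactual-search sketch in the spirit of the paper's
# proposal (not the authors' code): find the smallest change to an input x
# that flips a trained classifier's decision, treating the model as a
# black box queried only via predict_proba.
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LogisticRegression

# Toy "loan approval" model on two assumed features, e.g. income and debt.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # approve when f0 outweighs f1
clf = LogisticRegression().fit(X, y)

def counterfactual(x, target=1, lam=10.0):
    """Trade off distance to x against a penalty for missing the target
    class (a simplified form of the kind of objective the paper discusses)."""
    def loss(xp):
        p = clf.predict_proba(xp.reshape(1, -1))[0, target]
        return np.sum((xp - x) ** 2) + lam * (1.0 - p) ** 2
    return minimize(loss, x, method="Nelder-Mead").x

x = np.array([-0.5, 0.5])                      # a rejected applicant
x_cf = counterfactual(x)
print(clf.predict([x]), clf.predict([x_cf]))   # e.g. [0] -> [1]
print(x_cf - x)                                # the "smallest change" found
```

Printing `x_cf - x` surfaces exactly the kind of actionable statement the paper has in mind: which features would need to change, and by roughly how much, for the decision to come out differently, without disclosing the system's internal logic.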
Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For
Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals' lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a "right to an explanation" has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic "black box" to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core "algorithmic war stories" that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as "meaningful information about the logic of processing" may not be provided by the kind of ML "explanations" computer scientists have developed, partially in response. ML explanations are restricted both by the type of explanation sought, the dimensionality of the domain and the type of user seeking an explanation. However, "subject-centric explanations" (SCEs), focussing on particular regions of a model around a query, show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations) in dodging developers' worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a "right to an explanation" in the GDPR may be at best distracting, and at worst nurture a new kind of "transparency fallacy." But all is not lost. We argue that other parts of the GDPR related (i) to the right to erasure ("right to be forgotten") and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments, and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.
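The pedagogical route mentioned above can be made concrete with a short sketch: train an interpretable surrogate on a black box's input-output behaviour rather than decompose the box itself. Everything below (the models, data, and probe distribution) is an illustrative assumption, not the authors' method.

```python
# A minimal sketch of a "pedagogical" explanation: learn a simple surrogate
# from the black box's predictions, so no internals (or trade secrets) need
# to be disclosed. Models and data here are toy assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = ((X[:, 0] + X[:, 1] ** 2) > 1).astype(int)

black_box = RandomForestClassifier(n_estimators=100).fit(X, y)  # opaque model

# Query the black box on fresh probe inputs and fit an interpretable stand-in.
X_probe = rng.normal(size=(2000, 3))
surrogate = DecisionTreeClassifier(max_depth=3).fit(
    X_probe, black_box.predict(X_probe))

print(export_text(surrogate, feature_names=["f0", "f1", "f2"]))
fidelity = (surrogate.predict(X_probe) == black_box.predict(X_probe)).mean()
print(f"fidelity to black box: {fidelity:.2f}")
```

A subject-centric explanation in the paper's sense would go one step further and restrict the probe inputs to a neighbourhood of the individual's query, so the surrogate describes the model's behaviour in the region that matters to that user rather than globally.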
Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems
This paper introduces reviewability as a framework for improving the accountability of automated and algorithmic decision-making (ADM) involving machine learning. We draw on an understanding of ADM as a socio-technical process involving both human and technical elements, beginning before a decision is made and extending beyond the decision itself. While explanations and other model-centric mechanisms may address some accountability concerns, they often provide insufficient information about these broader ADM processes for regulatory oversight and assessments of legal compliance. Reviewability involves breaking down the ADM process into technical and organisational elements to provide a systematic framework for determining the contextually appropriate record-keeping mechanisms that facilitate meaningful review, both of individual decisions and of the process as a whole. We argue that a reviewability framework, drawing on administrative law's approach to reviewing human decision-making, offers a practical way forward towards a more holistic and legally relevant form of accountability for ADM.
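As a rough illustration of the record-keeping idea, here is a minimal sketch of a per-decision audit record that spans the broader ADM process rather than just the model call. The field names and stages are hypothetical, not the paper's framework.

```python
# A sketch of process-wide record-keeping for review: capture what happened
# before, during, and after the model call, in an append-only log. All
# fields below are illustrative assumptions.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    subject_id: str
    inputs: dict                        # data as received, pre-processing
    preprocessing_notes: list = field(default_factory=list)
    model_version: str = ""
    model_output: dict = field(default_factory=dict)
    human_override: str | None = None   # post-decision human involvement
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = DecisionRecord(subject_id="applicant-42",
                        inputs={"income": 31000, "declared_debts": 2})
record.preprocessing_notes.append("missing 'employment_years' imputed as 0")
record.model_version = "credit-model:2024-03"
record.model_output = {"decision": "reject", "score": 0.38}

with open("adm_audit_log.jsonl", "a") as f:   # append-only review log
    f.write(json.dumps(asdict(record)) + "\n")
```

An append-only log of this kind is what would let a reviewer later reconstruct both an individual decision and the surrounding organisational process.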
Understanding from Machine Learning Models
Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In this paper, using the case of deep neural networks, I argue that it is not the complexity or black-box nature of a model that limits how much understanding the model provides. Instead, it is a lack of scientific and empirical evidence supporting the link that connects a model to the target phenomenon that primarily prohibits understanding.
When users control the algorithms: Values expressed in practices on the Twitter platform
Recent interest in ethical AI has brought a slew of values, including fairness, into conversations about technology design. Research in the area of algorithmic fairness tends to be rooted in questions of distribution that can be subject to precise formalism and technical implementation. We seek to expand this conversation to include the experiences of people subject to algorithmic classification and decision-making. By examining tweets about the "Twitter algorithm" we consider the wide range of concerns and desires Twitter users express. We find that a concern with fairness (narrowly construed) is present, particularly in the ways users complain that the platform enacts a political bias against conservatives. However, we find another important category of concern, evident in attempts to exert control over the algorithm. Twitter users who seek control do so for a variety of reasons, many well justified. We argue for better and clearer definitions of what constitutes legitimate and illegitimate control over algorithmic processes, and for support for users who wish to enact their own collective choices.