Eliminating Latent Discrimination: Train Then Mask
How can we control for latent discrimination in predictive models? How can we provably remove it? Such questions are at the heart of algorithmic fairness and its impacts on society. In this paper, we define a new operational fairness criterion, inspired by the well-understood notion of omitted-variable bias in statistics and econometrics. Our notion of fairness effectively controls for sensitive features and provides diagnostics for deviations from fair decision making. We then establish analytical and algorithmic results about the existence of a fair classifier in the context of supervised learning. Our results readily imply a simple, but rather counter-intuitive, strategy for eliminating latent discrimination: to prevent other features from proxying for sensitive features, we need to include sensitive features in the training phase, but exclude them in the test/evaluation phase while controlling for their effects. We evaluate the performance of our algorithm on several real-world datasets and show how fairness for these datasets can be improved with a very small loss in accuracy.
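The strategy is concrete enough to sketch. Below is a minimal, hypothetical illustration in Python with scikit-learn of one plausible reading of "train then mask": the sensitive feature is present during training so correlated features cannot silently proxy for it, then pinned to a fixed reference value at prediction time. The synthetic data, column layout, and choice of masking value (the training mean) are our assumptions, not details taken from the paper.

```python
# Minimal sketch of the "train then mask" idea, assuming scikit-learn.
# Data, column order, and the masking value are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
sensitive = rng.integers(0, 2, n).astype(float)  # hypothetical sensitive feature
proxy = sensitive + rng.normal(0, 0.5, n)        # feature correlated with it
other = rng.normal(0, 1, n)
y = (0.8 * other + 0.5 * sensitive + rng.normal(0, 1, n) > 0).astype(int)

X = np.column_stack([sensitive, proxy, other])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Train WITH the sensitive feature, so its effect is attributed to the
# right column instead of leaking into proxies.
clf = LogisticRegression().fit(X_tr, y_tr)

# At evaluation time, mask the sensitive column with a fixed reference
# value (here its training mean), so predictions cannot vary with it.
X_te_masked = X_te.copy()
X_te_masked[:, 0] = X_tr[:, 0].mean()
print("accuracy with masked sensitive feature:", clf.score(X_te_masked, y_te))
```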
Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For
Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals' lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a 'right to an explanation' has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic 'black box' to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core 'algorithmic war stories' that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as 'meaningful information about the logic of processing' may not be provided by the kind of ML 'explanations' computer scientists have developed, partially in response. ML explanations are restricted both by the type of explanation sought, the dimensionality of the domain and the type of user seeking an explanation. However, 'subject-centric explanations' (SCEs), focussing on particular regions of a model around a query, show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations) in dodging developers' worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a 'right to an explanation' in the GDPR may be at best distracting, and at worst nurture a new kind of 'transparency fallacy.' But all is not lost. We argue that other parts of the GDPR related (i) to the right to erasure ('right to be forgotten') and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.
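As a rough illustration of the pedagogical, subject-centric style of explanation the abstract alludes to, the sketch below learns a black-box model 'from outside': it samples perturbations around a single query point and fits a simple linear surrogate to the black box's predictions there. The model, data, and neighbourhood scale are illustrative assumptions, not any system described in the paper.

```python
# Minimal sketch of a pedagogical, subject-centric explanation:
# fit a simple surrogate to a black-box model's behaviour in a local
# region around one query point. The black box and data here are
# illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - 2 * X[:, 2] > 0).astype(int)
black_box = RandomForestClassifier(random_state=1).fit(X, y)

query = X[0]
# Sample perturbations in a neighbourhood of the query ...
neighbours = query + rng.normal(scale=0.3, size=(200, 4))
# ... and learn the black box "from outside" via its predictions,
# without taking the model apart.
probs = black_box.predict_proba(neighbours)[:, 1]
surrogate = LinearRegression().fit(neighbours, probs)
print("local feature weights around the query:", surrogate.coef_)
```

Because only the model's outputs are queried, nothing about its internals (the intellectual-property concern the abstract raises) needs to be disclosed.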
Algorithms that Remember: Model Inversion Attacks and Data Protection Law
Many individuals are concerned about the governance of machine learning
systems and the prevention of algorithmic harms. The EU's recent General Data
Protection Regulation (GDPR) has been seen as a core tool for achieving better
governance of this area. While the GDPR does apply to the use of models in some
limited situations, most of its provisions relate to the governance of personal
data, while models have traditionally been seen as intellectual property. We
present recent work from the information security literature around `model
inversion' and `membership inference' attacks, which indicate that the process
of turning training data into machine learned systems is not one-way, and
demonstrate how this could lead some models to be legally classified as
personal data. Taking this as a probing experiment, we explore the different
rights and obligations this would trigger and their utility, and posit future
directions for algorithmic governance and regulation.
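Although the abstract describes the attacks only at a high level, the flavour of membership inference is easy to convey. The sketch below implements the common confidence-thresholding variant: an overfit model tends to be more confident on its own training records, so a simple threshold separates likely members from non-members. The target model, synthetic data, and threshold are illustrative assumptions, not the constructions discussed in the paper.

```python
# Minimal sketch of a confidence-threshold membership inference attack:
# members of the training set tend to receive higher-confidence
# predictions from an overfit model. All parameters are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_in, X_out, y_in, y_out = train_test_split(X, y, random_state=2)

target = RandomForestClassifier(random_state=2).fit(X_in, y_in)

def top_confidence(model, X):
    # Confidence of the predicted class for each example.
    return model.predict_proba(X).max(axis=1)

# Guess "member" whenever confidence exceeds a fixed threshold.
threshold = 0.9
tpr = (top_confidence(target, X_in) > threshold).mean()
fpr = (top_confidence(target, X_out) > threshold).mean()
print(f"member hit rate {tpr:.2f} vs non-member rate {fpr:.2f}")
```

That the attack works at all is the paper's point: information about individual training records demonstrably survives inside the model, which is what motivates asking whether the model itself could be classified as personal data.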
Artificial intelligence and UK national security: Policy considerations
RUSI was commissioned by GCHQ to conduct an independent research study into the use of artificial intelligence (AI) for national security purposes. The aim of this project is to establish an independent evidence base to inform future policy development regarding national security uses of AI. The findings are based on in-depth consultation with stakeholders from across the UK national security community, law enforcement agencies, private sector companies, academic and legal experts, and civil society representatives. This was complemented by a targeted review of existing literature on the topic of AI and national security.
The research has found that AI offers numerous opportunities for the UK national security community to improve the efficiency and effectiveness of existing processes. AI methods can rapidly derive insights from large, disparate datasets and identify connections that would otherwise go unnoticed by human operators. However, in the context of national security and the powers given to UK intelligence agencies, the use of AI could give rise to additional privacy and human rights considerations which would need to be assessed within the existing legal and regulatory framework. For this reason, enhanced policy and guidance are needed to ensure the privacy and human rights implications of national security uses of AI are reviewed on an ongoing basis as new analysis methods are applied to data.
Market driven network neutrality and the fallacies of internet traffic quality regulation
In the U.S., paying for priority arrangements between Internet access service providers and Internet application providers to favor some traffic over other traffic is considered unreasonable discrimination. In Europe the focus is on minimum traffic quality requirements. It can be shown that neither market power nor universal service arguments can justify traffic quality regulation. In particular, heterogeneous demand for traffic quality for delay-sensitive versus delay-insensitive applications requires traffic quality differentiation, priority pricing, and evolutionary development of minimal traffic qualities.
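The claim that delay-sensitive traffic requires quality differentiation can be made tangible with a toy queueing sketch. The simulation below compares FIFO with non-preemptive priority scheduling on a single shared link; the arrival rates and fixed service time are illustrative assumptions, not parameters from the paper.

```python
# Toy single-server queue with two traffic classes, comparing FIFO
# against non-preemptive priority for the delay-sensitive class.
# Rates and service time are illustrative assumptions.
import heapq
import random

random.seed(3)

def simulate(priority, n=5000, rates=(0.3, 0.3), service=1.0):
    # Poisson arrivals: class 0 = delay-sensitive, class 1 = insensitive.
    arrivals = []
    for cls, rate in enumerate(rates):
        t = 0.0
        for _ in range(n):
            t += random.expovariate(rate)
            arrivals.append((t, cls))
    arrivals.sort()

    queue, waits, now, i = [], {0: [], 1: []}, 0.0, 0
    while i < len(arrivals) or queue:
        # Admit every job that has arrived by `now`.
        while i < len(arrivals) and arrivals[i][0] <= now:
            at, cls = arrivals[i]
            key = (cls, at) if priority else (at, cls)
            heapq.heappush(queue, (key, at, cls))
            i += 1
        if not queue:                # server idle: jump to next arrival
            now = arrivals[i][0]
            continue
        _, at, cls = heapq.heappop(queue)
        waits[cls].append(now - at)  # time spent queueing
        now += service               # non-preemptive fixed service time
    return [sum(w) / len(w) for w in waits.values()]

print("FIFO mean waits (sensitive, insensitive):", simulate(False))
print("Priority mean waits                     :", simulate(True))
```

Under FIFO both classes wait equally; under priority the delay-sensitive class waits far less at the insensitive class's expense. That asymmetry is the quality differentiation, and the basis for priority pricing, that the abstract argues markets can deliver without regulation.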