A Moral Framework for Understanding Fair ML through Economic Models of Equality of Opportunity
We map the recently proposed notions of algorithmic fairness to economic
models of Equality of opportunity (EOP)---an extensively studied ideal of
fairness in political philosophy. We formally show that through our conceptual
mapping, many existing definitions of algorithmic fairness, such as predictive
value parity and equality of odds, can be interpreted as special cases of EOP.
In this respect, our work serves as a unifying moral framework for
understanding existing notions of algorithmic fairness. Most importantly, this
framework allows us to explicitly spell out the moral assumptions underlying
each notion of fairness, and interpret recent fairness impossibility results in
a new light. Last but not least, inspired by luck egalitarian models of EOP,
we propose a new family of measures for algorithmic fairness. We illustrate our
proposal empirically and show that employing a measure of algorithmic
(un)fairness when its underlying moral assumptions are not satisfied can have
devastating consequences for the disadvantaged group's welfare.
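The abstract above treats notions such as equality of odds as formal, measurable criteria. As a minimal sketch of what such a measure looks like in practice, the hypothetical helper below computes the equalized-odds gap (the larger of the true-positive-rate and false-positive-rate differences) between two groups; the function name and toy data are illustrative, not from the paper.

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Max gap in TPR and FPR across two groups (hypothetical helper)."""
    gaps = []
    for label in (1, 0):  # TPR computed on label==1, FPR on label==0
        rates = []
        for g in np.unique(group):  # assumes exactly two groups
            mask = (group == g) & (y_true == label)
            rates.append(y_pred[mask].mean())
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# Toy data: group 1 receives more positive predictions on true positives.
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(equalized_odds_gap(y_true, y_pred, group))  # → 0.5
```

A gap of 0 would mean the classifier satisfies equalized odds exactly on this sample; the paper's point is that whether such a statistic is the *right* one to equalize depends on moral assumptions that the EOP mapping makes explicit.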
50 Years of Test (Un)fairness: Lessons for Machine Learning
Quantitative definitions of what is unfair and what is fair have been
introduced in multiple disciplines for well over 50 years, including in
education, hiring, and machine learning. We trace how the notion of fairness
has been defined within the testing communities of education and hiring over
the past half century, exploring the cultural and social context in which
different fairness definitions have emerged. In some cases, earlier definitions
of fairness are similar or identical to definitions of fairness in current
machine learning research, and foreshadow current formal work. In other cases,
insights into what fairness means and how to measure it have largely gone
overlooked. We compare past and current notions of fairness along several
dimensions, including the fairness criteria, the focus of the criteria (e.g., a
test, a model, or its use), the relationship of fairness to individuals,
groups, and subgroups, and the mathematical method for measuring fairness
(e.g., classification, regression). This work points the way towards future
research and measurement of (un)fairness that builds from our modern
understanding of fairness while incorporating insights from the past.Comment: FAT* '19: Conference on Fairness, Accountability, and Transparency
(FAT* '19), January 29--31, 2019, Atlanta, GA, US
Recovering from Biased Data: Can Fairness Constraints Improve Accuracy?
Multiple fairness constraints have been proposed in the literature, motivated by a range of concerns about how demographic groups might be treated unfairly by machine learning classifiers. In this work we consider a different motivation: learning from biased training data. We posit several ways in which training data may be biased, including a noisier or negatively biased labeling process on members of a disadvantaged group, a decreased prevalence of positive or negative examples from the disadvantaged group, or both. Given such biased training data, Empirical Risk Minimization (ERM) may produce a classifier that not only is biased but also has suboptimal accuracy on the true data distribution. We examine the ability of fairness-constrained ERM to correct this problem. In particular, we find that the Equal Opportunity fairness constraint [Hardt et al., 2016] combined with ERM will provably recover the Bayes optimal classifier under a range of bias models. We also consider other recovery methods, including re-weighting the training data, Equalized Odds, Demographic Parity, and Calibration. These theoretical results provide additional motivation for considering fairness interventions even if an actor cares primarily about accuracy.
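One of the recovery methods the abstract lists is re-weighting the training data. As a minimal sketch of one concrete form of this idea (the weights w = P(g)P(y)/P(g,y) that make label and group statistically independent in the weighted sample; the helper name and toy data are illustrative, not the paper's construction):

```python
import numpy as np

def reweigh(y, group):
    """Instance weights w = P(g) * P(y) / P(g, y), so that in the weighted
    sample the label is independent of group membership (hypothetical helper)."""
    n = len(y)
    w = np.empty(n, dtype=float)
    for g in np.unique(group):
        for label in np.unique(y):
            cell = (group == g) & (y == label)
            if cell.any():
                w[cell] = (group == g).sum() * (y == label).sum() / (n * cell.sum())
    return w

# Biased toy sample: positives are over-represented in group 0.
y     = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
w = reweigh(y, group)
```

After re-weighting, the weighted positive rate is the same in both groups, so a weighted ERM no longer sees the spurious group-label correlation introduced by the biased sampling.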
Evaluation schedule for the inspection of boarding and residential provision in schools : guidance and grade descriptors for inspecting boarding and residential provision in schools in England
"The evaluation schedule provides outline guidance and grade descriptors for the judgements that inspectors will report on when inspecting boarding and residential provision in schools" -- front cover
Graph-based Semi-Supervised & Active Learning for Edge Flows
We present a graph-based semi-supervised learning (SSL) method for learning
edge flows defined on a graph. Specifically, given flow measurements on a
subset of edges, we want to predict the flows on the remaining edges. To this
end, we develop a computational framework that imposes certain constraints on
the overall flows, such as (approximate) flow conservation. These constraints
render our approach different from classical graph-based SSL for vertex labels,
which posits that tightly connected nodes share similar labels and leverages
the graph structure accordingly to extrapolate from a few vertex labels to the
unlabeled vertices. We derive bounds for our method's reconstruction error and
demonstrate its strong performance on synthetic and real-world flow networks
from transportation, physical infrastructure, and the Web. Furthermore, we
provide two active learning algorithms for selecting informative edges on which
to measure flow, which has applications for optimal sensor deployment. The
first strategy selects edges to minimize the reconstruction error bound and
works well on flows that are approximately divergence-free. The second approach
clusters the graph and selects bottleneck edges that cross cluster-boundaries,
which works well on flows with global trends.
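The core computational idea described above (imposing approximate flow conservation on unobserved edges) can be sketched as a small least-squares problem. In the toy example below, B is the node-edge incidence matrix of a 3-node directed cycle, one edge flow is measured, and the remaining flows are recovered by minimizing the squared divergence ||Bf||^2; this is a sketch of the conservation principle, not the paper's full framework, and all names are illustrative.

```python
import numpy as np

# Incidence matrix of a 3-node cycle: each edge column has -1 at its
# tail node and +1 at its head node. Edges: (0,1), (1,2), (0,2).
B = np.array([[-1,  0, -1],
              [ 1, -1,  0],
              [ 0,  1,  1]], dtype=float)

observed   = [0]            # edges with measured flow
unobserved = [1, 2]         # edges to predict
f_obs = np.array([1.0])     # measured flow on edge (0, 1)

# Impose (approximate) flow conservation: choose the unknown entries to
# minimize ||B f||^2 while holding the observed entries fixed.
rhs = -B[:, observed] @ f_obs
f_unobs, *_ = np.linalg.lstsq(B[:, unobserved], rhs, rcond=None)
print(f_unobs)  # → [ 1. -1.]: a divergence-free circulation around the cycle
```

On this toy cycle the conservation constraint pins down the unobserved flows exactly; on real networks the system is only approximately consistent, which is where the paper's error bounds and active edge-selection strategies come in.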