Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search
We present a framework for quantifying and mitigating algorithmic bias in
mechanisms designed for ranking individuals, typically used as part of
web-scale search and recommendation systems. We first propose complementary
measures to quantify bias with respect to protected attributes such as gender
and age. We then present algorithms for computing fairness-aware re-ranking of
results. For a given search or recommendation task, our algorithms seek to
achieve a desired distribution of top ranked results with respect to one or
more protected attributes. We show that such a framework can be tailored to
achieve fairness criteria such as equality of opportunity and demographic
parity depending on the choice of the desired distribution. We evaluate the
proposed algorithms via extensive simulations over different parameter choices,
and study the effect of fairness-aware ranking on both bias and utility
measures. Finally, we present online A/B testing results from applying our
framework to representative ranking in LinkedIn Talent Search, and discuss the
lessons learned in practice. Our approach yielded a substantial improvement in
the fairness metrics (a nearly threefold increase in the number of search
queries with representative results) without affecting the business metrics,
which paved the way for deployment to 100% of LinkedIn Recruiter users
worldwide. Ours is the first large-scale deployed framework for ensuring
fairness in the hiring domain, with potential positive impact for more than
630M LinkedIn members.
Comment: This paper has been accepted for publication at ACM KDD 2019
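As a rough illustration of the re-ranking idea described above, here is a minimal Python sketch of a greedy fairness-aware re-ranker: at each prefix of the ranking it first serves any group that has fallen below its minimum target count (the floor of the group's desired share times the prefix length), and otherwise takes the best-scoring remaining candidate. This is a hedged sketch in the spirit of the abstract, not the authors' published algorithm; the function name, the `desired` distribution, and the tie-breaking rule are all illustrative.

```python
import math
from collections import defaultdict

def rerank(candidates, desired, k):
    """Greedy fairness-aware re-ranking sketch (illustrative, not the paper's).

    candidates: list of (score, group) pairs, sorted by score descending.
    desired:    dict mapping group -> target fraction of the top results.
    k:          length of the ranked list to return.
    """
    queues = defaultdict(list)  # per-group queues, best-scored first
    for score, group in candidates:
        queues[group].append((score, group))

    ranked, counts = [], defaultdict(int)
    for pos in range(1, k + 1):
        # Groups currently below their minimum count for a prefix of length pos.
        below = [g for g in desired
                 if counts[g] < math.floor(desired[g] * pos) and queues[g]]
        if below:
            # Serve the most under-represented group first.
            g = min(below, key=lambda g: counts[g] - desired[g] * pos)
        else:
            # No group is below target: take the best remaining candidate.
            g = max((g for g in queues if queues[g]),
                    key=lambda g: queues[g][0][0], default=None)
            if g is None:
                break
        ranked.append(queues[g].pop(0))
        counts[g] += 1
    return ranked
```

For example, `rerank(candidates, {"F": 0.5, "M": 0.5}, k=10)` asks that every prefix of the top 10 contain at least `floor(0.5 * prefix_length)` candidates from each group, which is one way to encode the "desired distribution of top ranked results" the abstract targets.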
Fair Inputs and Fair Outputs: The Incompatibility of Fairness in Privacy and Accuracy
Fairness concerns about algorithmic decision-making systems have been mainly
focused on the outputs (e.g., the accuracy of a classifier across individuals
or groups). However, one may additionally be concerned with fairness in the
inputs. In this paper, we propose and formulate two properties regarding the
inputs of (features used by) a classifier. In particular, we claim that fair
privacy (whether individuals are all asked to reveal the same information) and
need-to-know (whether users are only asked for the minimal information required
for the task at hand) are desirable properties of a decision system. We explore
the interaction between these properties and fairness in the outputs (fair
prediction accuracy). We show that for an optimal classifier these three
properties are in general incompatible, and we explain what common properties
of data make them incompatible. Finally, we provide an algorithm to verify
whether the trade-off between the three properties exists in a given dataset,
and use the algorithm to show that this trade-off is common in real data.
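The incompatibility claimed above can be made concrete with a toy construction. In the synthetic data below (hypothetical, not from the paper), group A's label depends only on feature x1 and group B's only on x2. A classifier that asks everyone for the same single feature satisfies fair privacy and need-to-know but is accurate for only one group; recovering accuracy for both groups requires asking different individuals for different information, which violates fair privacy. All variable names and the data-generating process are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4000
group = rng.integers(0, 2, n)                          # 0 = group A, 1 = group B
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = np.where(group == 0, x1 > 0, x2 > 0).astype(int)   # A's label needs x1, B's needs x2

def per_group_accuracy(features):
    pred = LogisticRegression().fit(features, y).predict(features)
    return tuple(round((pred[group == g] == y[group == g]).mean(), 2)
                 for g in (0, 1))

# Fair privacy + need-to-know: everyone reveals the same single feature, x1.
print(per_group_accuracy(x1.reshape(-1, 1)))           # ~(1.0, 0.5): group B suffers
# Group-dependent inputs (x1 from A, x2 from B, via interaction features):
# accurate for both groups, but individuals no longer reveal the same information.
Xg = np.column_stack([(1 - group) * x1, group * x2])
print(per_group_accuracy(Xg))                          # ~(1.0, 1.0)
```

Here the per-group feature set restores accuracy precisely by treating individuals unequally at the input, which is the kind of trade-off the abstract's verification algorithm is designed to detect.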
AI for the Common Good?! Pitfalls, challenges, and Ethics Pen-Testing
Recently, many AI researchers and practitioners have embarked on research
visions that involve doing AI for "Good". This is part of a general drive
towards infusing AI research and practice with ethical thinking. One frequent
theme in current ethical guidelines is the requirement that AI be good for all,
or: contribute to the Common Good. But what is the Common Good, and is it
enough to want to be good? Via four lead questions, I will illustrate
challenges and pitfalls when determining, from an AI point of view, what the
Common Good is and how it can be enhanced by AI. The questions are: What is the
problem / What is a problem?, Who defines the problem?, What is the role of
knowledge?, and What are important side effects and dynamics? The illustration
will use an example from the domain of "AI for Social Good", more specifically
"Data Science for Social Good". Even if the importance of these questions may
be known at an abstract level, they do not get asked sufficiently in practice,
as shown by an exploratory study of 99 contributions to recent conferences in
the field. Turning these challenges and pitfalls into a positive
recommendation, as a conclusion I will draw on another characteristic of
computer-science thinking and practice to make these impediments visible and
attenuate them: "attacks" as a method for improving design. This results in the
proposal of ethics pen-testing as a method for helping AI designs to better
contribute to the Common Good.Comment: to appear in Paladyn. Journal of Behavioral Robotics; accepted on
27-10-201
Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making
We draw attention to an important, yet largely overlooked, aspect of
evaluating fairness for automated decision-making systems: risk and welfare
considerations. Our proposed family of measures corresponds to the
long-established formulations of cardinal social welfare in economics, and is
justified by the Rawlsian conception of fairness behind a veil of ignorance.
The convex formulation of our welfare-based measures of fairness allows us to
integrate them as a constraint into any convex loss minimization pipeline. Our
empirical analysis reveals interesting trade-offs between our proposal and (a)
prediction accuracy, (b) group discrimination, and (c) Dwork et al.'s notion of
individual fairness. Furthermore, and perhaps most importantly, our work
provides both heuristic justification and empirical evidence suggesting that a
lower bound on our measures often leads to bounded inequality in algorithmic
outcomes, hence presenting the first computationally feasible mechanism for
bounding individual-level inequality.
Comment: Thirty-second Conference on Neural Information Processing Systems (NIPS 2018)
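To make the "constraint in a convex loss minimization pipeline" idea concrete, here is a minimal sketch (not the paper's exact formulation) using cvxpy: a logistic loss is minimized subject to a floor tau on the average of a concave utility of individual benefits, with benefits modeled here as the model's linear scores and utility u(b) = -exp(-b). The data, the choice of utility, and the threshold are all illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d) + 0.1 * rng.normal(size=n))  # labels in {-1, +1}

w = cp.Variable(d)
loss = cp.sum(cp.logistic(-cp.multiply(y, X @ w))) / n  # convex logistic loss

# Individual "benefits": here, simply the scores the model assigns.
benefits = X @ w
# The average of the concave utility u(b) = -exp(-b) is a concave welfare term,
# so requiring welfare >= tau is a convex (veil-of-ignorance style) constraint.
welfare = cp.sum(-cp.exp(-benefits)) / n
tau = -1.5  # illustrative welfare floor; at w = 0 the welfare is exactly -1

prob = cp.Problem(cp.Minimize(loss), [welfare >= tau])
prob.solve()
print("loss:", round(prob.value, 3), "welfare:", round(float(welfare.value), 3))
```

Because both the objective and the constraint satisfy the required convexity structure, the same welfare floor can be attached to any convex empirical-risk-minimization pipeline, which is the integration point the abstract highlights.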