Designing Fair Ranking Schemes
Items from a database are often ranked based on a combination of multiple
criteria. A user may have the flexibility to accept combinations that weigh
these criteria differently, within limits. On the other hand, this choice of
weights can greatly affect the fairness of the produced ranking. In this paper,
we develop a system that helps users choose criterion weights that lead to
greater fairness.
We consider ranking functions that compute the score of each item as a
weighted sum of (numeric) attribute values, and then sort items on their score.
Each ranking function can be expressed as a vector of weights, or as a point in
a multi-dimensional space. For a broad range of fairness criteria, we show how
to efficiently identify regions in this space that satisfy these criteria.
Using this identification method, our system is able to tell users whether
their proposed ranking function satisfies the desired fairness criteria and, if
it does not, to suggest the smallest modification that does. We develop
user-controllable approximation and indexing techniques that are applied
during preprocessing, and support sub-second response times during the online
phase. Our extensive experiments on real datasets demonstrate that our methods
are able to find solutions that satisfy fairness criteria effectively and
efficiently.
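The weighted-sum ranking model described above is easy to make concrete. Below is a minimal sketch (not the paper's system; the items, the protected-group attribute, and the top-k share criterion are all hypothetical) of scoring items with a weight vector and checking one simple fairness criterion on the resulting ranking:

```python
import numpy as np

def rank_items(X, w):
    """Score each item as the weighted sum of its (numeric) attribute
    values, then return item indices sorted by descending score."""
    scores = X @ w
    return np.argsort(-scores, kind="stable")

def topk_protected_share(order, protected, k):
    """One simple fairness criterion: the fraction of the top-k
    ranked items that belong to the protected group."""
    return protected[order[:k]].mean()

# Hypothetical data: 6 items scored on 2 criteria; items 3-5 are protected.
X = np.array([[0.9, 0.1],
              [0.8, 0.3],
              [0.7, 0.2],
              [0.4, 0.9],
              [0.3, 0.8],
              [0.2, 0.7]])
protected = np.array([0, 0, 0, 1, 1, 1], dtype=bool)

w = np.array([0.5, 0.5])  # one ranking function = one point in weight space
order = rank_items(X, w)
print(topk_protected_share(order, protected, k=3))  # prints 0.6666666666666666
```

Sliding the point w through weight space changes which items reach the top-k, and hence whether a criterion like the one above holds; the paper's contribution is to characterize such regions of weight space ahead of time rather than re-ranking for every candidate w.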
Decision making with fair ranking
Ranking is a process that carries responsibility, because it often involves sensitive attributes that can be used to discriminate among the alternatives being ranked. With large amounts of data available for automated processing, ranking is increasingly used in decision making. Concepts of algorithmic fairness originally developed for classification in machine learning are therefore finding their place in fair ranking methods. This paper provides an overview of fair ranking terms, fair ranking challenges, and fair ranking algorithms from the state-of-the-art literature.
Fairness of Exposure in Rankings
Rankings are ubiquitous in the online world today. As we have transitioned
from finding books in libraries to ranking products, jobs, job applicants,
opinions and potential romantic partners, there is a substantial precedent that
ranking systems have a responsibility not only to their users but also to the
items being ranked. To address these often conflicting responsibilities, we
propose a conceptual and computational framework that allows the formulation of
fairness constraints on rankings in terms of exposure allocation. As part of
this framework, we develop efficient algorithms for finding rankings that
maximize the utility for the user while provably satisfying a specifiable
notion of fairness. Since fairness goals can be application specific, we show
how a broad range of fairness constraints can be implemented using our
framework, including forms of demographic parity, disparate treatment, and
disparate impact constraints. We illustrate the effect of these constraints by
providing empirical results on two ranking problems.
Comment: In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, London, UK, 201
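The notion of exposure can be made concrete with a standard position-bias model in which the attention a rank receives decays as 1/log2(1 + rank), as in DCG. The sketch below (hypothetical items and groups, not the paper's algorithms) compares the per-group average exposure of two rankings; equalizing these averages across groups is one form of the demographic-parity-style constraint such a framework can impose:

```python
import numpy as np

def position_exposure(n):
    """Position-bias model: exposure of ranks 1..n, decaying as 1/log2(1 + rank)."""
    return 1.0 / np.log2(np.arange(2, n + 2))

def group_exposure_gap(order, groups):
    """Absolute difference in average exposure between two groups (0 and 1)
    under a given ranking of item indices."""
    exp = position_exposure(len(order))
    g = groups[np.asarray(order)]
    return abs(exp[g == 0].mean() - exp[g == 1].mean())

# Hypothetical: 4 items; items 0-1 are in group 0, items 2-3 in group 1.
groups = np.array([0, 0, 1, 1])
blocked = [0, 1, 2, 3]      # group 0 occupies the top positions
interleaved = [0, 2, 1, 3]  # groups alternate down the ranking

print(group_exposure_gap(blocked, groups))      # larger exposure gap
print(group_exposure_gap(interleaved, groups))  # smaller exposure gap
```

Interleaving the groups narrows the exposure gap at some cost in ranking utility; the paper's framework makes that trade-off explicit by maximizing user utility subject to an exposure constraint.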
Rankers, Rankees, & Rankings: Peeking into the Pandora's Box from a Socio-Technical Perspective
Algorithmic rankers have a profound impact on our increasingly data-driven
society. From leisurely activities like the movies that we watch, the
restaurants that we patronize; to highly consequential decisions, like making
educational and occupational choices or getting hired by companies -- these are
all driven by sophisticated yet mostly inaccessible rankers. A small change to
how these algorithms process the rankees (i.e., the data items that are ranked)
can have profound consequences. For example, a change in rankings can lead to
a deterioration in the prestige of a university or drastic consequences for a
job candidate who narrowly missed the preferred top-k list of an
organization. This paper is a call to action to the human-centered data science
research community to develop principled methods, measures, and metrics for
studying the interactions among the socio-technical context of use,
technological innovations, and the resulting consequences of algorithmic
rankings on multiple stakeholders. Given the spate of new legislations on
algorithmic accountability, it is imperative that researchers from social
science, human-computer interaction, and data science work in unison for
demystifying how rankings are produced, who has agency to change them, and what
metrics of socio-technical impact one must use for informing the context of
use.
Comment: Accepted for the Interrogating Human-Centered Data Science workshop at CHI'2
Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search
We present a framework for quantifying and mitigating algorithmic bias in
mechanisms designed for ranking individuals, typically used as part of
web-scale search and recommendation systems. We first propose complementary
measures to quantify bias with respect to protected attributes such as gender
and age. We then present algorithms for computing fairness-aware re-ranking of
results. For a given search or recommendation task, our algorithms seek to
achieve a desired distribution of top ranked results with respect to one or
more protected attributes. We show that such a framework can be tailored to
achieve fairness criteria such as equality of opportunity and demographic
parity depending on the choice of the desired distribution. We evaluate the
proposed algorithms via extensive simulations over different parameter choices,
and study the effect of fairness-aware ranking on both bias and utility
measures. We finally present the online A/B testing results from applying our
framework towards representative ranking in LinkedIn Talent Search, and discuss
the lessons learned in practice. Our approach resulted in tremendous
improvement in the fairness metrics (nearly a threefold increase in the number
of search queries with representative results) without affecting the business
metrics, which paved the way for deployment to 100% of LinkedIn Recruiter users
worldwide. Ours is the first large-scale deployed framework for ensuring
fairness in the hiring domain, with the potential positive impact for more than
630M LinkedIn members.
Comment: This paper has been accepted for publication at ACM KDD 201
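A much-simplified version of re-ranking toward a desired top-k distribution can be sketched greedily: walk down the positions and, at each one, take the highest-scored remaining candidate whose group has not yet exceeded its target share of the prefix. This is only an illustration in the spirit of the paper's fairness-aware re-ranking, not its exact algorithm; the candidates, groups, and target distribution below are hypothetical.

```python
import math

def greedy_rerank(candidates, target, k):
    """candidates: list of (id, score, group), sorted by descending score.
    target: desired fraction of each group among the top-k, e.g. {"A": 0.5, "B": 0.5}.
    At each position, pick the best-scored candidate whose group is still
    under its target count for the current prefix length."""
    ranked = []
    counts = {g: 0 for g in target}
    remaining = list(candidates)
    for pos in range(1, k + 1):
        pick = None
        for i, (_, _, g) in enumerate(remaining):
            if counts[g] < math.ceil(target[g] * pos):
                pick = i
                break
        if pick is None:  # every group at its cap: fall back to best score
            pick = 0
        cid, _, g = remaining.pop(pick)
        counts[g] += 1
        ranked.append(cid)
    return ranked

# Hypothetical candidates: group A dominates the raw scores.
cands = [("a1", 0.9, "A"), ("a2", 0.8, "A"), ("a3", 0.7, "A"),
         ("b1", 0.6, "B"), ("b2", 0.5, "B")]
print(greedy_rerank(cands, {"A": 0.5, "B": 0.5}, k=4))
# prints ['a1', 'b1', 'a2', 'b2']
```

Score-only ordering would put all three A-candidates first; the greedy pass instead keeps every prefix of the result close to the 50/50 target while preferring higher scores whenever the constraint allows.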
Operationalizing Individual Fairness with Pairwise Fair Representations
We revisit the notion of individual fairness proposed by Dwork et al. A
central challenge in operationalizing their approach is the difficulty in
eliciting a human specification of a similarity metric. In this paper, we
propose an operationalization of individual fairness that does not rely on a
human specification of a distance metric. Instead, we propose novel approaches
to elicit and leverage side-information on equally deserving individuals to
counter subordination between social groups. We model this knowledge as a
fairness graph, and learn a unified Pairwise Fair Representation (PFR) of the
data that captures both data-driven similarity between individuals and the
pairwise side-information in the fairness graph. We elicit fairness judgments from
a variety of sources, including human judgments for two real-world datasets on
recidivism prediction (COMPAS) and violent neighborhood prediction (Crime &
Communities). Our experiments show that the PFR model for operationalizing
individual fairness is practically viable.
Comment: To be published in the proceedings of the VLDB Endowment, Vol. 13, Issue.
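The idea of blending data-driven similarity with elicited fairness side-information can be illustrated with a Laplacian-eigenmap-style embedding: build a kNN graph on the data, add edges between individuals judged equally deserving, and embed using the smallest non-trivial eigenvectors of the combined graph's Laplacian. This is only a spectral sketch of the general idea, not the PFR model itself; the data, pairs, and parameters below are hypothetical.

```python
import numpy as np

def fair_embedding(X, fair_pairs, k_neighbors=3, dim=2, alpha=0.5):
    """Embed the rows of X by combining a kNN similarity graph (data-driven
    similarity) with a fairness graph (edges between equally deserving
    individuals), weighted by alpha, via the combined graph Laplacian."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    W_knn = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(dist[i])[1:k_neighbors + 1]:  # skip self at rank 0
            W_knn[i, j] = 1.0
    W_knn = np.maximum(W_knn, W_knn.T)  # symmetrize the kNN graph
    W_fair = np.zeros((n, n))
    for i, j in fair_pairs:             # elicited pairwise side-information
        W_fair[i, j] = W_fair[j, i] = 1.0
    W = (1 - alpha) * W_knn + alpha * W_fair
    L = np.diag(W.sum(axis=1)) - W      # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]           # drop the trivial constant eigenvector

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 4))
Z = fair_embedding(X, fair_pairs=[(0, 7), (2, 5)])
print(Z.shape)  # prints (10, 2)
```

Raising alpha weights the fairness graph more heavily, pulling the paired individuals together in the embedding even when their raw attributes differ, which is the mechanism by which this family of methods counters subordination between groups.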