Controlling Fairness and Bias in Dynamic Learning-to-Rank
Rankings are the primary interface through which many online platforms match
users to items (e.g. news, products, music, video). In these two-sided markets,
not only do the users draw utility from the rankings, but the rankings also
determine the utility (e.g. exposure, revenue) for the item providers (e.g.
publishers, sellers, artists, studios). It has already been noted that
myopically optimizing utility to the users, as done by virtually all
learning-to-rank algorithms, can be unfair to the item providers. We,
therefore, present a learning-to-rank approach for explicitly enforcing
merit-based fairness guarantees to groups of items (e.g. articles by the same
publisher, tracks by the same artist). In particular, we propose a learning
algorithm that ensures notions of amortized group fairness, while
simultaneously learning the ranking function from implicit feedback data. The
algorithm takes the form of a controller that integrates unbiased estimators
for both fairness and utility, dynamically adapting both as more data becomes
available. In addition to its rigorous theoretical foundation and convergence
guarantees, we find empirically that the algorithm is highly practical and
robust.
Comment: First two authors contributed equally. In Proceedings of the 43rd
International ACM SIGIR Conference on Research and Development in Information
Retrieval, 2020.
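The controller idea is easy to sketch: rank by estimated relevance plus a proportional error term that boosts groups whose accumulated exposure lags behind their accumulated merit. The minimal sketch below is an illustration of that idea, not the authors' implementation; the two-group setup, the gain parameter lam, and the logarithmic position-bias model are assumptions.

```python
import numpy as np

def fairness_controlled_ranking(relevance, groups, cum_exposure, cum_merit,
                                lam=0.01):
    """Rank by estimated relevance plus a proportional fairness-error term."""
    # Exposure earned per unit of merit, for each group so far.
    rates = cum_exposure / np.maximum(cum_merit, 1e-9)
    err = rates.max() - rates               # > 0 for under-exposed groups
    scores = relevance + lam * err[groups]  # boost items from lagging groups
    return np.argsort(-scores)              # best score ranked first

# Toy usage: two groups of three items, exposure 1/log2(rank + 2) per position.
rng = np.random.default_rng(0)
groups = np.array([0, 0, 0, 1, 1, 1])
cum_exposure, cum_merit = np.zeros(2), np.zeros(2)
for t in range(1000):
    relevance = rng.uniform(size=6)         # stand-in for learned estimates
    ranking = fairness_controlled_ranking(relevance, groups,
                                          cum_exposure, cum_merit)
    exposure = 1.0 / np.log2(np.arange(6) + 2)
    np.add.at(cum_exposure, groups[ranking], exposure)
    cum_merit += np.bincount(groups, weights=relevance)
print(cum_exposure / cum_merit)  # per-group exposure/merit rates converge
```

The gain lam trades off short-term user utility against how quickly exposure disparities are corrected.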
Matching Code and Law: Achieving Algorithmic Fairness with Optimal Transport
Increasingly, discrimination by algorithms is perceived as a societal and
legal problem. As a response, a number of criteria for implementing algorithmic
fairness in machine learning have been developed in the literature. This paper
proposes the Continuous Fairness Algorithm (CFA), which enables continuous
interpolation between different fairness definitions. More
specifically, we make three main contributions to the existing literature.
First, our approach allows the decision maker to continuously vary between
specific concepts of individual and group fairness. As a consequence, the
algorithm enables the decision maker to adopt intermediate ``worldviews'' on
the degree of discrimination encoded in algorithmic processes, adding nuance to
the extreme cases of ``we're all equal'' (WAE) and ``what you see is what you
get'' (WYSIWYG) proposed so far in the literature. Second, we use optimal
transport theory, and specifically the concept of the barycenter, to maximize
decision maker utility under the chosen fairness constraints. Third, the
algorithm is able to handle cases of intersectionality, i.e., of
multi-dimensional discrimination of certain groups on grounds of several
criteria. We discuss three main examples (credit applications; college
admissions; insurance contracts) and map out the legal and policy implications
of our approach. The explicit formalization of the trade-off between individual
and group fairness allows this post-processing approach to be tailored to
different situational contexts in which one or the other fairness criterion may
take precedence. Finally, we evaluate our model experimentally.
Comment: Vastly extended new version, now including computational experiments.
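In one dimension, the Wasserstein barycenter of the group score distributions is simply the average of their quantile functions, which makes the interpolation easy to sketch. In the hypothetical sketch below, theta = 0 keeps raw scores (the WYSIWYG extreme), theta = 1 maps every group onto the barycenter (the WAE extreme), and intermediate values realize intermediate worldviews; the quantile grid and toy data are assumptions.

```python
import numpy as np

def cfa_repair(scores, groups, theta, grid=101):
    """Interpolate each group's scores toward the 1-D Wasserstein barycenter."""
    qs = np.linspace(0, 1, grid)
    gids = np.unique(groups)
    # Barycenter quantile function = average of the group quantile functions.
    bary = np.mean([np.quantile(scores[groups == g], qs) for g in gids], axis=0)
    repaired = scores.astype(float)
    for g in gids:
        mask = groups == g
        # Each score's quantile level within its own group.
        lvl = np.argsort(np.argsort(scores[mask])) / max(mask.sum() - 1, 1)
        target = np.interp(lvl, qs, bary)   # barycenter score at that level
        repaired[mask] = (1 - theta) * scores[mask] + theta * target
    return repaired

# Example: group 1's scores sit 0.2 lower; theta = 0.5 half-closes the gap.
rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(0.6, 0.1, 500), rng.normal(0.4, 0.1, 500)])
groups = np.repeat([0, 1], 500)
repaired = cfa_repair(scores, groups, theta=0.5)
print(repaired[groups == 0].mean(), repaired[groups == 1].mean())
```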
Individual Fairness in Pipelines
It is well understood that a system built from individually fair components
may not itself be individually fair. In this work, we investigate individual
fairness under pipeline composition. Pipelines differ from ordinary sequential
or repeated composition in that individuals may drop out at any stage, and
classification in subsequent stages may depend on the remaining "cohort" of
individuals. As an example, a company might hire a team for a new project and
at a later point promote the highest performer on the team. Unlike other
repeated classification settings, where the degree of unfairness degrades
gracefully over multiple fair steps, the degree of unfairness in pipelines can
be arbitrary, even in a pipeline with just two stages.
Guided by a panoply of real-world examples, we provide a rigorous framework
for evaluating different types of fairness guarantees for pipelines. We show
that na\"{i}ve auditing is unable to uncover systematic unfairness and that, in
order to ensure fairness, some form of dependence must exist between the design
of algorithms at different stages in the pipeline. Finally, we provide
constructions that permit flexibility at later stages, meaning that there is no
need to lock in the entire pipeline at the time that the early stage is
constructed.
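The hire-then-promote example can be made concrete with a toy Monte Carlo: two identical individuals are treated identically by each stage in isolation, yet their promotion chances diverge arbitrarily because stage two depends on the cohort stage one produced. The score model and cohort sizes below are hypothetical.

```python
import random

random.seed(0)

def promote(team):
    """Stage 2: promote the single best performer on the team."""
    return max(team, key=lambda m: m["score"])

wins = {"Ada": 0, "Bea": 0}
for _ in range(10_000):
    # Ada and Bea have identical scores, so any individually fair rule should
    # treat them similarly -- but stage 1 placed them in different cohorts.
    ada = {"name": "Ada", "score": 0.5}
    bea = {"name": "Bea", "score": 0.5}
    weak_team = [ada] + [
        {"name": "w", "score": random.gauss(0.0, 0.2)} for _ in range(4)
    ]
    strong_team = [bea] + [
        {"name": "s", "score": random.gauss(1.0, 0.2)} for _ in range(4)
    ]
    if promote(weak_team)["name"] == "Ada":
        wins["Ada"] += 1
    if promote(strong_team)["name"] == "Bea":
        wins["Bea"] += 1

# Each stage is group-blind, yet Ada is promoted in almost every run and Bea
# almost never -- the composition, not either stage, creates the disparity.
print(wins)
```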
Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search
We present a framework for quantifying and mitigating algorithmic bias in
mechanisms designed for ranking individuals, typically used as part of
web-scale search and recommendation systems. We first propose complementary
measures to quantify bias with respect to protected attributes such as gender
and age. We then present algorithms for computing fairness-aware re-ranking of
results. For a given search or recommendation task, our algorithms seek to
achieve a desired distribution of top ranked results with respect to one or
more protected attributes. We show that such a framework can be tailored to
achieve fairness criteria such as equality of opportunity and demographic
parity depending on the choice of the desired distribution. We evaluate the
proposed algorithms via extensive simulations over different parameter choices,
and study the effect of fairness-aware ranking on both bias and utility
measures. We finally present the online A/B testing results from applying our
framework towards representative ranking in LinkedIn Talent Search, and discuss
the lessons learned in practice. Our approach resulted in tremendous
improvement in the fairness metrics (nearly three fold increase in the number
of search queries with representative results) without affecting the business
metrics, which paved the way for deployment to 100% of LinkedIn Recruiter users
worldwide. Ours is the first large-scale deployed framework for ensuring
fairness in the hiring domain, with the potential positive impact for more than
630M LinkedIn members.
Comment: This paper has been accepted for publication at ACM KDD 2019.
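The paper's re-rankers build on a simple greedy idea: walk down the rank positions and, whenever a protected group falls below the minimum representation floor(pos * p_g) implied by the desired distribution p, serve that group first. The sketch below is that generic idea, not LinkedIn's production algorithm; the candidate data and the highest-head-score tie-breaking rule are assumptions.

```python
import math

def fair_rerank(candidates, p, k):
    """Greedy fairness-aware re-ranking toward a desired distribution p.

    candidates : list of (score, group) tuples
    p          : desired share of each group among the top results
    k          : number of positions to fill
    """
    # One score-sorted queue per group.
    queues = {}
    for score, group in sorted(candidates, reverse=True):
        queues.setdefault(group, []).append((score, group))

    ranked, counts = [], {g: 0 for g in p}
    for pos in range(1, k + 1):
        # Groups below their minimum floor(pos * p_g) must be served first.
        below = [g for g in p if queues.get(g) and counts[g] < math.floor(pos * p[g])]
        pool = below or [g for g in p if queues.get(g)]
        best = max(pool, key=lambda g: queues[g][0][0])  # highest head score
        ranked.append(queues[best].pop(0))
        counts[best] += 1
    return ranked

# Toy usage: require roughly 50/50 representation in the top 6.
cands = [(0.9, "m"), (0.8, "m"), (0.7, "m"), (0.6, "f"),
         (0.5, "f"), (0.4, "f"), (0.3, "m")]
print(fair_rerank(cands, p={"m": 0.5, "f": 0.5}, k=6))
```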
Fairness of Exposure in Rankings
Rankings are ubiquitous in the online world today. As we have transitioned
from finding books in libraries to ranking products, jobs, job applicants,
opinions and potential romantic partners, there is a substantial precedent that
ranking systems have a responsibility not only to their users but also to the
items being ranked. To address these often conflicting responsibilities, we
propose a conceptual and computational framework that allows the formulation of
fairness constraints on rankings in terms of exposure allocation. As part of
this framework, we develop efficient algorithms for finding rankings that
maximize the utility for the user while provably satisfying a specifiable
notion of fairness. Since fairness goals can be application specific, we show
how a broad range of fairness constraints can be implemented using our
framework, including forms of demographic parity, disparate treatment, and
disparate impact constraints. We illustrate the effect of these constraints by
providing empirical results on two ranking problems.
Comment: In Proceedings of the 24th ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining, London, UK, 2018.
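At its core the framework solves a linear program over doubly stochastic matrices P, where P[i, j] is the probability of showing item i at position j: maximize expected utility subject to row/column constraints and an exposure-fairness constraint, with a position-bias model supplying the exposure weights. A small demographic-parity-of-exposure instance, with toy utilities and a logarithmic position-bias curve as assumptions, might look like this:

```python
import numpy as np
from scipy.optimize import linprog

u = np.array([0.8, 0.7, 0.6, 0.5])        # toy item utilities
group = np.array([0, 0, 1, 1])            # protected group per item
n = len(u)
v = 1.0 / np.log2(np.arange(n) + 2)       # position bias: exposure at rank j

# Variables: P[i, j] = probability that item i is shown at position j.
c = -np.outer(u, v).ravel()               # maximize sum_ij u_i v_j P_ij

A_eq, b_eq = [], []
for i in range(n):                        # each item placed exactly once
    row = np.zeros((n, n)); row[i, :] = 1
    A_eq.append(row.ravel()); b_eq.append(1.0)
for j in range(n):                        # each position filled exactly once
    col = np.zeros((n, n)); col[:, j] = 1
    A_eq.append(col.ravel()); b_eq.append(1.0)

# Demographic parity of exposure: equal mean exposure for the two groups.
parity = np.where(group[:, None] == 0,
                  1.0 / (group == 0).sum(), -1.0 / (group == 1).sum()) * v
A_eq.append(parity.ravel()); b_eq.append(0.0)

res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, 1), method="highs")
P = res.x.reshape(n, n)                   # doubly stochastic ranking matrix
print(np.round(P, 2))
print("expected utility:", -res.fun)
```

The resulting matrix would then be decomposed, e.g. via Birkhoff-von Neumann, into a lottery over concrete rankings, as the paper describes.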
Calibrated Fairness in Bandits
We study fairness within the stochastic, \emph{multi-armed bandit} (MAB)
decision making framework. We adapt the fairness framework of "treating similar
individuals similarly" to this setting. Here, an `individual' corresponds to an
arm and two arms are `similar' if they have a similar quality distribution.
First, we adopt a {\em smoothness constraint} that if two arms have a similar
quality distribution then the probability of selecting each arm should be
similar. In addition, we define the {\em fairness regret}, which corresponds to
the degree to which an algorithm is not calibrated, where perfect calibration
requires that the probability of selecting an arm is equal to the probability
with which the arm has the best quality realization. We show that a variation
on Thompson sampling satisfies smooth fairness for total variation distance,
and give an $\tilde{O}((kT)^{2/3})$ bound on fairness regret. This complements
prior work, which protects an on-average better arm from being less favored. We
also explain how to extend our algorithm to the dueling bandit setting.
Comment: To be presented at the FAT-ML'17 workshop.
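For intuition, plain Bernoulli Thompson sampling already exhibits the calibration-style property this builds on: the probability of selecting an arm equals the posterior probability that the arm has the highest mean. The sketch below shows only that baseline on toy data; the paper's algorithm modifies this scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
true_means = np.array([0.3, 0.5, 0.7])   # toy Bernoulli arms
k = len(true_means)
alpha, beta = np.ones(k), np.ones(k)     # Beta(1, 1) prior per arm

pulls = np.zeros(k, dtype=int)
for t in range(5000):
    theta = rng.beta(alpha, beta)        # one posterior sample per arm
    arm = int(np.argmax(theta))          # P(select arm) = P(arm best | data)
    reward = rng.random() < true_means[arm]
    alpha[arm] += reward                 # Beta posterior update
    beta[arm] += 1 - reward
    pulls[arm] += 1

# Empirical selection frequencies track the posterior best-arm probabilities,
# the calibration property that smooth fairness builds on.
print(pulls / pulls.sum())
```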