Recommender systems fairness evaluation via generalized cross entropy
Fairness in recommender systems has been considered with respect
to sensitive attributes of users (e.g., gender, race) or items (e.g., revenue
in a multistakeholder setting). Regardless, the concept has been
commonly interpreted as some form of equality – i.e., the degree to
which the system is meeting the information needs of all its users in
an equal sense. In this paper, we argue that fairness in recommender
systems does not necessarily imply equality, but instead it should
consider a distribution of resources based on merits and needs. We
present a probabilistic framework based on generalized cross entropy
to evaluate fairness of recommender systems under this perspective,
where we show that the proposed framework is flexible and explanatory,
allowing domain knowledge to be incorporated (through an ideal fair
distribution) to help understand which item or user aspects a
recommendation algorithm is over- or under-representing.
Results on two real-world datasets show the merits of the proposed
evaluation framework in terms of both user and item fairness.

This work was supported in part by the Center for Intelligent Information
Retrieval and in part by project TIN2016-80630-P (MINECO).
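For readers unfamiliar with the measure, one standard two-distribution form of generalized cross entropy can be sketched as follows; the notation (β, the ideal fair distribution p_f, and the system distribution p over attribute values a) is ours and may differ from the paper's exact definition:

```latex
% Generalized cross entropy between an ideal fair distribution p_f
% and the distribution p induced by the recommender over attribute
% values a (beta is a real parameter, beta != 0, 1):
\mathrm{GCE}(p_f, p) \;=\; \frac{1}{\beta(1-\beta)}
  \left[\, \sum_{a} p_f(a)^{\beta}\, p(a)^{1-\beta} \;-\; 1 \,\right]
```

Under this kind of measure the score is zero exactly when p matches p_f, so fairness is measured as deviation from a chosen ideal distribution rather than from strict equality; equality is recovered as the special case where p_f is uniform.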
Controlling Fairness and Bias in Dynamic Learning-to-Rank
Rankings are the primary interface through which many online platforms match
users to items (e.g. news, products, music, video). In these two-sided markets,
not only the users draw utility from the rankings, but the rankings also
determine the utility (e.g. exposure, revenue) for the item providers (e.g.
publishers, sellers, artists, studios). It has already been noted that
myopically optimizing utility to the users, as done by virtually all
learning-to-rank algorithms, can be unfair to the item providers. We,
therefore, present a learning-to-rank approach for explicitly enforcing
merit-based fairness guarantees to groups of items (e.g. articles by the same
publisher, tracks by the same artist). In particular, we propose a learning
algorithm that ensures notions of amortized group fairness, while
simultaneously learning the ranking function from implicit feedback data. The
algorithm takes the form of a controller that integrates unbiased estimators
for both fairness and utility, dynamically adapting both as more data becomes
available. In addition to its rigorous theoretical foundation and convergence
guarantees, we find empirically that the algorithm is highly practical and
robust.

Comment: First two authors contributed equally. In Proceedings of the 43rd
International ACM SIGIR Conference on Research and Development in Information
Retrieval, 2020.
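The controller idea in this abstract can be illustrated with a minimal sketch: rank by estimated relevance plus a boost proportional to each group's accumulated, merit-normalized under-exposure. All names, the specific error term, and the toy numbers below are our simplifications for illustration, not the paper's exact algorithm.

```python
import numpy as np

def fairness_error(group, exposure, merit):
    """Per-item boost: how under-exposed (per unit of merit) each item's
    group is, relative to the currently best-treated group."""
    groups = np.unique(group)
    # merit-normalized exposure received so far by each group
    disc = {g: exposure[group == g].sum() / merit[group == g].sum()
            for g in groups}
    best_treated = max(disc.values())
    return np.array([best_treated - disc[g] for g in group])

def controller_rank(relevance, group, exposure, merit, lam=1.0):
    """Proportional controller: the larger the accumulated disparity,
    the larger the score boost for items in under-exposed groups."""
    scores = relevance + lam * fairness_error(group, exposure, merit)
    return np.argsort(-scores)  # indices of items, best score first
```

Over repeated queries one would update `exposure` with position-bias weights after each displayed ranking; the correction term shrinks as each group's exposure approaches proportionality to its merit.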
Fairness in Information Access Systems
Recommendation, information retrieval, and other information access systems
pose unique challenges for investigating and applying the fairness and
non-discrimination concepts that have been developed for studying other machine
learning systems. While fair information access shares many commonalities with
fair classification, the multistakeholder nature of information access
applications, the rank-based problem setting, the centrality of personalization
in many cases, and the role of user response complicate the problem of
identifying precisely what types and operationalizations of fairness may be
relevant, let alone measuring or promoting them.
In this monograph, we present a taxonomy of the various dimensions of fair
information access and survey the literature to date on this new and
rapidly-growing topic. We preface this with brief introductions to information
access and algorithmic fairness, to facilitate use of this work by scholars
with experience in one (or neither) of these fields who wish to learn about
their intersection. We conclude with several open problems in fair information
access, along with some suggestions for how to approach research in this space.
Fairness in Image Search: A Study of Occupational Stereotyping in Image Retrieval and its Debiasing
Multi-modal search engines have experienced significant growth and widespread
use in recent years, making them the second most common use of the internet. While
search engine systems offer a range of services, the image search field has
recently become a focal point in the information retrieval community, as the
adage goes, "a picture is worth a thousand words". Although popular search
engines like Google excel at image search accuracy and agility, there is an
ongoing debate over whether their search results can be biased in terms of
gender, language, demographics, socio-cultural aspects, and stereotypes. This
potential for bias can have a significant impact on individuals' perceptions
and influence their perspectives.
In this paper, we present our study on bias and fairness in web search, with
a focus on keyword-based image search. We first discuss several kinds of biases
that exist in search systems and why it is important to mitigate them. We
narrow down our study to assessing and mitigating occupational stereotypes in
image search, which is a prevalent fairness issue in image retrieval. For the
assessment of stereotypes, we take gender as an indicator. We explore various
open-source and proprietary APIs for gender identification from images. With
these, we examine the extent of gender bias in top-ranked image search results
obtained for several occupational keywords. To mitigate the bias, we then
propose a fairness-aware re-ranking algorithm that optimizes (a) relevance of
the search result to the keyword and (b) fairness w.r.t. the genders identified.
We experiment on 100 top-ranked images obtained for 10 occupational keywords
and consider random re-ranking and re-ranking based on relevance as baselines.
Our experimental results show that the fairness-aware re-ranking algorithm
produces rankings with better fairness scores and competitive relevance scores
than the baselines.

Comment: 20 pages. Work uses proprietary search systems from the year 202
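A greedy re-ranker of this kind can be sketched as follows; the scoring rule (a weighted trade-off between relevance and the gender imbalance of the prefix built so far) and all names are illustrative assumptions, not the paper's exact objective:

```python
def fair_rerank(items, k, alpha=0.7):
    """Greedy fairness-aware re-ranking sketch.

    items: list of (relevance, gender_label) pairs.
    At each rank, pick the remaining item that best trades off relevance
    (weight alpha) against the gender imbalance of the result prefix
    (weight 1 - alpha)."""
    genders = {g for _, g in items}
    remaining = list(items)
    chosen, counts = [], {}
    for _ in range(min(k, len(remaining))):
        def score(item):
            rel, g = item
            # gender counts of the prefix if we added this item
            c = {h: counts.get(h, 0) for h in genders}
            c[g] += 1
            imbalance = max(c.values()) - min(c.values())
            return alpha * rel - (1 - alpha) * imbalance
        best = max(remaining, key=score)
        remaining.remove(best)
        chosen.append(best)
        counts[best[1]] = counts.get(best[1], 0) + 1
    return chosen
```

With alpha = 1 this reduces to plain relevance ranking; lowering alpha pulls under-represented genders up the ranking at some cost in relevance, which matches the relevance/fairness trade-off the abstract describes.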
Fairness of Exposure in Rankings
Rankings are ubiquitous in the online world today. As we have transitioned
from finding books in libraries to ranking products, jobs, job applicants,
opinions and potential romantic partners, there is a substantial precedent that
ranking systems have a responsibility not only to their users but also to the
items being ranked. To address these often conflicting responsibilities, we
propose a conceptual and computational framework that allows the formulation of
fairness constraints on rankings in terms of exposure allocation. As part of
this framework, we develop efficient algorithms for finding rankings that
maximize the utility for the user while provably satisfying a specifiable
notion of fairness. Since fairness goals can be application specific, we show
how a broad range of fairness constraints can be implemented using our
framework, including forms of demographic parity, disparate treatment, and
disparate impact constraints. We illustrate the effect of these constraints by
providing empirical results on two ranking problems.

Comment: In Proceedings of the 24th ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining, London, UK, 2018.
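The optimization at the heart of such a framework can be sketched as a linear program over marginal rank probabilities: maximize expected user utility subject to the matrix being doubly stochastic and to an exposure-parity constraint between item groups. The toy instance below (item relevances, position-bias weights, two groups) is our illustration of the general idea, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy instance: 4 items, two groups of items.
n = 4
u = np.array([0.9, 0.8, 0.7, 0.6])            # estimated relevance per item
v = 1.0 / np.log2(np.arange(2, n + 2))        # exposure weight at each rank
group = np.array([0, 0, 1, 1])                # item -> group membership

# Variables: P[i, k] = marginal probability that item i is shown at rank k.
# Objective: maximize expected utility sum_{i,k} u_i * v_k * P[i,k]
# (linprog minimizes, so negate).
c = -np.outer(u, v).ravel()

A_eq, b_eq = [], []
for i in range(n):                            # each item fills one rank
    a = np.zeros((n, n)); a[i, :] = 1.0
    A_eq.append(a.ravel()); b_eq.append(1.0)
for k in range(n):                            # each rank holds one item
    a = np.zeros((n, n)); a[:, k] = 1.0
    A_eq.append(a.ravel()); b_eq.append(1.0)

# Demographic parity of exposure: equal mean exposure across the groups.
a = np.zeros((n, n))
for i in range(n):
    sign = 1.0 if group[i] == 0 else -1.0
    a[i, :] = sign * v / (group == group[i]).sum()
A_eq.append(a.ravel()); b_eq.append(0.0)

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0.0, 1.0))
P = res.x.reshape(n, n)                       # fair ranking distribution
```

The resulting doubly stochastic P can then be decomposed (e.g. via Birkhoff-von Neumann) into a lottery over concrete rankings, so the parity constraint holds in expectation over displayed results.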
How FAIR can you get? Image Retrieval as a Use Case to calculate FAIR Metrics
A large number of services for research data management strive to adhere to
the FAIR guiding principles for scientific data management and stewardship. To
evaluate these services and to indicate possible improvements, use-case-centric
metrics are needed as an addendum to existing metric frameworks. The retrieval
of spatially and temporally annotated images can exemplify such a use case. The
prototypical implementation indicates that currently no research data
repository achieves the full score. Suggestions on how to increase the score
include automatic annotation based on the metadata inside the image file and
support for content negotiation to retrieve the images. These and other
insights can lead to an improvement of data integration workflows, resulting in
a better and more FAIR approach to managing research data.

Comment: This is a preprint of a paper accepted for the 2018 IEEE conference