Fairness and Popularity Bias in Recommender Systems: an Empirical Evaluation
In this paper, we present the results of an empirical evaluation investigating how recommendation
algorithms are affected by popularity bias. Popularity bias causes more popular items to be recommended
more frequently than less popular ones, making it one of the most relevant issues limiting the fairness
of recommender systems. In particular, we define an experimental protocol based on two state-of-the-art datasets containing users’ preferences on movies and books and three different recommendation
paradigms, i.e., collaborative filtering, content-based filtering and graph-based algorithms. In order to
evaluate the overall fairness of the recommendations we use well-known metrics such as Catalogue
Coverage, Gini Index and Group Average Popularity (ΔGAP). The goal of this paper is: (i) to provide a
clear picture of how recommendation techniques are affected by popularity bias; (ii) to trigger further
research in the area aimed at introducing methods to mitigate or reduce biases in order to provide fairer
recommendations.
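Two of the metrics named in this abstract can be sketched on toy recommendation lists. The function names and toy data below are illustrative assumptions, not the paper's actual code: Catalogue Coverage is the fraction of the catalogue recommended to at least one user, and the Gini Index measures how unequally exposure is concentrated across items (ΔGAP, which compares the average popularity of recommended items against users' profiles, is omitted for brevity).

```python
from collections import Counter

def catalogue_coverage(recommendations, catalogue_size):
    """Fraction of the item catalogue appearing in at least one
    recommendation list (higher = broader item exposure)."""
    recommended = {item for rec_list in recommendations for item in rec_list}
    return len(recommended) / catalogue_size

def gini_index(recommendations, catalogue_size):
    """Gini index over per-item recommendation counts:
    0 = perfectly equal exposure, values near 1 = exposure
    concentrated on a few popular items."""
    counts = Counter(item for rec_list in recommendations for item in rec_list)
    # Items never recommended contribute zero-count entries.
    freqs = sorted(list(counts.values()) + [0] * (catalogue_size - len(counts)))
    n = len(freqs)
    total = sum(freqs)
    # Standard formula: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    cum = sum((i + 1) * x for i, x in enumerate(freqs))
    return (2 * cum) / (n * total) - (n + 1) / n

# Toy example: 4 users, a catalogue of 6 items, top-2 lists skewed to item 0.
recs = [[0, 1], [0, 2], [0, 1], [0, 3]]
print(catalogue_coverage(recs, 6))  # 4 of the 6 items are ever recommended
print(gini_index(recs, 6))          # item 0 dominates, so exposure is unequal
```

A popularity-biased recommender drives coverage down and the Gini index up, which is exactly the pattern the evaluation protocol above is designed to surface.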
Tidying Up the Conversational Recommender Systems' Biases
The growing popularity of language models has sparked interest in
conversational recommender systems (CRS) within both industry and research
circles. However, concerns regarding biases in these systems have emerged.
While individual components of CRS have been subject to bias studies, a
literature gap remains in understanding specific biases unique to CRS and how
these biases may be amplified or reduced when integrated into complex CRS
models. In this paper, we provide a concise review of biases in CRS by
surveying recent literature. We examine the presence of biases throughout the
system's pipeline and consider the challenges that arise from combining
multiple models. Our study investigates biases in classic recommender systems
and their relevance to CRS. Moreover, we address specific biases in CRS,
considering variations with and without natural language understanding
capabilities, along with biases related to dialogue systems and language
models. Through our findings, we highlight the necessity of adopting a holistic
perspective when dealing with biases in complex CRS models.