A Survey on Popularity Bias in Recommender Systems
Recommender systems help people find relevant content in a personalized way.
One main promise of such systems is that they are able to increase the
visibility of items in the long tail, i.e., the lesser-known items in a
catalogue. Existing research, however, suggests that in many situations today's
recommendation algorithms instead exhibit a popularity bias, meaning that they
often focus on rather popular items in their recommendations. Such a bias may
not only lead to limited value of the recommendations for consumers and
providers in the short run, but it may also cause undesired reinforcement
effects over time. In this paper, we discuss the potential reasons for
popularity bias and we review existing approaches to detect, quantify and
mitigate popularity bias in recommender systems. Our survey therefore includes
both an overview of the computational metrics used in the literature as well as
a review of the main technical approaches to reduce the bias. We furthermore
critically discuss today's literature, where we observe that the research is
almost entirely based on computational experiments and on certain assumptions
regarding the practical effects of including long-tail items in the
recommendations.
Comment: Under review, submitted to UMUA
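Two of the computational metrics commonly surveyed in this literature are the average popularity of recommended items and the share of long-tail items in the top-k lists. A minimal sketch of both, assuming a simple dict-based representation of recommendations and interaction counts (the function name and the `tail_fraction` split are illustrative, not taken from the survey):

```python
import numpy as np

def popularity_bias_metrics(recommendations, item_interaction_counts, tail_fraction=0.8):
    """Compute two common popularity-bias metrics for a set of top-k lists.

    recommendations: dict mapping user -> list of recommended item ids
    item_interaction_counts: dict mapping item id -> number of interactions
    tail_fraction: items outside the most-popular (1 - tail_fraction) head
                   are treated as long-tail
    """
    # Rank items by popularity; the head holds the top (1 - tail_fraction) share.
    ranked = sorted(item_interaction_counts, key=item_interaction_counts.get, reverse=True)
    head_size = int(len(ranked) * (1 - tail_fraction))
    head = set(ranked[:head_size])

    arps, tail_shares = [], []
    for user, items in recommendations.items():
        counts = [item_interaction_counts.get(i, 0) for i in items]
        arps.append(np.mean(counts))  # average recommendation popularity per user
        tail_shares.append(np.mean([i not in head for i in items]))  # long-tail share
    # Averaged over users: (average recommendation popularity, long-tail coverage)
    return float(np.mean(arps)), float(np.mean(tail_shares))
```

A biased recommender shows a high first value and a low second value relative to the catalogue as a whole.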
Reducing Popularity Bias in Recommender Systems through AUC-Optimal Negative Sampling
Popularity bias is a persistent issue associated with recommendation systems,
posing challenges to both fairness and efficiency. Existing literature widely
acknowledges that reducing popularity bias often requires sacrificing
recommendation accuracy. In this paper, we challenge this commonly held belief.
Our analysis under a general bias-variance decomposition framework shows that
reducing bias can actually lead to improved model performance under certain
conditions. To achieve this win-win situation, we propose to intervene in model
training through negative sampling, thereby modifying model predictions.
Specifically, we provide an optimal negative sampling rule that maximizes
partial AUC to preserve the accuracy of any given model, while correcting
sample information and prior information to reduce popularity bias in a
flexible and principled way. Our experimental results on real-world datasets
demonstrate the superiority of our approach in improving recommendation
performance and reducing popularity bias.Comment: 20 page
A Troubling Analysis of Reproducibility and Progress in Recommender Systems Research
The design of algorithms that generate personalized ranked item lists is a
central topic of research in the field of recommender systems. In the past few
years, in particular, approaches based on deep learning (neural) techniques
have become dominant in the literature. For all of them, substantial progress
over the state-of-the-art is claimed. However, indications exist of certain
problems in today's research practice, e.g., with respect to the choice and
optimization of the baselines used for comparison, raising questions about the
published claims. In order to obtain a better understanding of the actual
progress, we have tried to reproduce recent results in the area of neural
recommendation approaches based on collaborative filtering. The worrying
outcome of the analysis of these recent works (all published at prestigious
scientific conferences between 2015 and 2018) is that 11 of the 12
reproducible neural approaches can be outperformed by conceptually simple
methods, e.g., nearest-neighbor heuristics. None of the
computationally complex neural methods was actually consistently better than
already existing learning-based techniques, e.g., using matrix factorization or
linear models. In our analysis, we discuss common issues in today's research
practice, which, despite the many papers that are published on the topic, have
apparently led the field to a certain level of stagnation.
Comment: Source code and full results available at:
https://github.com/MaurizioFD/RecSys2019_DeepLearning_Evaluatio
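The "conceptually simple" baselines referred to above include item-based nearest-neighbor methods. A minimal sketch of such an ItemKNN scorer on a binary interaction matrix (the function and its `k` pruning are a generic textbook version, not the exact baselines used in the paper):

```python
import numpy as np

def itemknn_scores(R, k=20):
    """Score all items for all users with a plain item-based kNN baseline.

    R: binary user-item interaction matrix, shape (n_users, n_items).
    Returns a score matrix of the same shape; higher = stronger recommendation.
    """
    # Cosine similarity between item columns (epsilon avoids division by zero).
    norms = np.linalg.norm(R, axis=0) + 1e-10
    S = (R.T @ R) / np.outer(norms, norms)
    np.fill_diagonal(S, 0.0)            # an item should not vote for itself
    # Keep only the k most similar neighbours per item.
    if k < S.shape[0]:
        thresh = -np.sort(-S, axis=1)[:, k - 1:k]
        S = np.where(S >= thresh, S, 0.0)
    # score[u, i] = sum of similarities between i and the items u interacted with
    return R @ S.T
```

Despite its simplicity, ranking items by these scores is the kind of heuristic that, per the abstract, outperformed most of the reproduced neural approaches.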
Metric Optimization and Mainstream Bias Mitigation in Recommender Systems
The first part of this thesis focuses on maximizing the overall
recommendation accuracy. This accuracy is usually evaluated with some
user-oriented metric tailored to the recommendation scenario, but because
recommendation is usually treated as a machine learning problem, recommendation
models are trained to maximize some other generic criterion that does not
necessarily align with the one ultimately captured by the user-oriented
evaluation metric. Recent research aims at bridging this gap between training
and evaluation via direct ranking optimization, but still assumes that the
metric used for evaluation should also be the metric used for training. We
challenge this assumption, mainly because some metrics are more informative
than others. Indeed, we show that models trained via the optimization of a loss
inspired by Rank-Biased Precision (RBP) tend to yield higher accuracy, even
when accuracy is measured with metrics other than RBP. However, the superiority
of this RBP-inspired loss stems from further benefiting users who are already
well-served, rather than helping those who are not.
This observation inspires the second part of this thesis, where our focus
turns to helping non-mainstream users. These are users who are difficult to
recommend to either because there is not enough data to model them, or because
they have niche taste and thus few similar users to look at when recommending
in a collaborative way. These differences in mainstreamness introduce a bias
reflected in an accuracy gap between users or user groups, which we try to
narrow.
Comment: PhD Thesis defended on Nov 14, 202
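For reference, the Rank-Biased Precision metric that inspires the loss discussed above models a user who inspects each next ranked position with persistence probability p, giving RBP = (1 - p) Σ_{r≥1} rel(r) · p^(r-1). A direct sketch of the metric itself (not of the thesis's training loss):

```python
def rank_biased_precision(ranked_items, relevant, p=0.9):
    """Rank-Biased Precision (Moffat & Zobel): expected utility of a ranking
    for a user who moves to the next position with persistence p.

    RBP = (1 - p) * sum over ranks r >= 1 of rel(r) * p**(r - 1).
    """
    return (1 - p) * sum(p ** r for r, item in enumerate(ranked_items) if item in relevant)
```

High p weights deep ranks more heavily; low p concentrates the metric on the very top of the list, which is what makes RBP-style weighting informative as a training signal.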
Web information search and sharing :
Degree system: new ; Report number: Kou 2735 ; Degree type: Doctor (Human Sciences) ; Date conferred: 2009/3/15 ; Waseda University diploma number: Shin 493