Incorporating System-Level Objectives into Recommender Systems
One of the most essential aspects of any recommender system is personalization: how acceptable the recommendations are from the user's perspective. However, in many real-world applications there are other stakeholders whose needs and interests should be taken into account. In this work, we define the problem of multistakeholder recommendation and focus on algorithms for a special case in which the recommender system itself is also a stakeholder. In addition, we explore the idea of incrementally incorporating system-level objectives into recommender systems over time, addressing the limitations of optimization techniques that optimize only individual users' recommendation lists.
Comment: arXiv admin note: text overlap with arXiv:1901.0755
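The incremental incorporation described above can be pictured with a small sketch. This is purely illustrative and not the paper's algorithm: it assumes a blended ranking score in which a system-level utility term is ramped in gradually rather than imposed all at once; the function names, the 0.5 weight cap, and the linear schedule are assumptions.

```python
# Hedged sketch (illustrative; not the paper's algorithm): blend a
# system-level objective into each user's ranking score with a weight
# that grows over time. The linear schedule and 0.5 cap are assumptions.

def blended_score(relevance, system_utility, step, total_steps):
    """Linearly ramp the system-objective weight from 0 up to a cap of 0.5."""
    lam = 0.5 * min(step / total_steps, 1.0)
    return (1 - lam) * relevance + lam * system_utility

# Early on, the user-relevance term dominates; later the system objective
# (e.g., promoting under-exposed or sponsored items) gains influence.
early = blended_score(relevance=0.9, system_utility=0.2, step=1, total_steps=10)
late = blended_score(relevance=0.9, system_utility=0.2, step=10, total_steps=10)
print(round(early, 3), round(late, 3))  # 0.865 0.55
```

Ramping the weight, rather than switching the objective on at full strength, keeps early recommendation lists close to the purely personalized ones.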
Mitigation of Popularity Bias in Recommendation Systems
In response to the vast quantity of information available on the Internet, many online service providers attempt to customize their services and simplify content access via recommender systems (RSs), supporting users in discovering the products they are most likely to be interested in. However, these recommender systems are prone to popularity bias: a tendency to promote popular items even when they do not satisfy a user's preferences, thereby providing customers with poor-quality recommendations. Such bias negatively affects both users and item providers. It is therefore essential to mitigate this bias so that less popular but pertinent items appear on the user's
recommendation list. In this work, we conduct an empirical analysis of different mitigation techniques for popularity bias, providing an overview of the current state of the art and raising the fairness issue in RSs.
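One common family of mitigation techniques surveyed in this line of work re-ranks candidate items by discounting a popularity term. The sketch below is a minimal illustration under assumed names and an assumed penalty weight, not any specific technique from the paper.

```python
# Hedged sketch (not from the paper): re-rank candidates by relevance
# minus a popularity penalty. The lambda weight and all names here are
# illustrative assumptions.

def popularity(item, interactions):
    """Fraction of all interactions that involve this item."""
    total = sum(interactions.values())
    return interactions.get(item, 0) / total

def rerank(candidates, scores, interactions, lam=0.3, k=5):
    """Re-rank by relevance score minus a weighted popularity penalty."""
    adjusted = {
        item: scores[item] - lam * popularity(item, interactions)
        for item in candidates
    }
    return sorted(candidates, key=lambda i: adjusted[i], reverse=True)[:k]

# Toy data: item 'a' is very popular; 'd' is long-tail but nearly as relevant.
interactions = {"a": 90, "b": 30, "c": 20, "d": 5}
scores = {"a": 0.80, "b": 0.60, "c": 0.50, "d": 0.78}
print(rerank(["a", "b", "c", "d"], scores, interactions, lam=0.5, k=2))
# → ['d', 'b']: the long-tail item displaces the popular one
```

The trade-off this family of methods manages is visible in the toy output: raising the penalty weight surfaces pertinent long-tail items at some cost to raw relevance ordering.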
Popularity Bias in Recommendation: A Multi-stakeholder Perspective
Traditionally, especially in academic research in recommender systems, the
focus has been solely on the satisfaction of the end-user. While user
satisfaction has, indeed, been associated with the success of the business, it
is not the only factor. In many recommendation domains, there are other
stakeholders whose needs should be taken into account in the recommendation
generation and evaluation. In this dissertation, I describe the notion of
multi-stakeholder recommendation. In particular, I study one of the most
important challenges in recommendation research, popularity bias, from a
multi-stakeholder perspective since, as I show later in this dissertation, it
impacts different stakeholders in a recommender system. Popularity bias is a
well-known phenomenon in recommender systems where popular items are
recommended even more frequently than their popularity would warrant,
amplifying long-tail effects already present in many recommendation domains.
Prior research has examined various approaches for mitigating popularity bias
and enhancing the recommendation of long-tail items overall. The effectiveness
of these approaches, however, has not been assessed in multi-stakeholder
environments. In this dissertation, I study the impact of popularity bias in
recommender systems from a multi-stakeholder perspective. In addition, I
propose several algorithms each approaching the popularity bias mitigation from
a different angle and compare their performances using several metrics with
some other state-of-the-art approaches in the literature. I show that, often,
the standard evaluation measures of popularity bias mitigation in the
literature do not reflect the real picture of an algorithm's performance when
it is evaluated from a multi-stakeholder point of view.
Comment: PhD Dissertation in Information Science (University of Colorado Boulder)
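The dissertation's point that standard measures can miss the multi-stakeholder picture can be illustrated with a toy sketch. The two metrics below are generic stand-ins under assumed names, not the dissertation's own measures: an item-centric long-tail coverage score can look reasonable while a supplier-centric exposure share still shows heavy skew.

```python
# Hedged sketch (illustrative, not the dissertation's metrics): score the
# same recommendation lists from two stakeholders' perspectives.

def long_tail_coverage(rec_lists, long_tail_items):
    """Fraction of distinct recommended items that are long-tail."""
    recommended = {i for recs in rec_lists for i in recs}
    return len(recommended & long_tail_items) / len(recommended)

def supplier_exposure(rec_lists, item_supplier):
    """Share of all recommendation slots each supplier receives."""
    counts, total = {}, 0
    for recs in rec_lists:
        for item in recs:
            supplier = item_supplier[item]
            counts[supplier] = counts.get(supplier, 0) + 1
            total += 1
    return {s: c / total for s, c in counts.items()}

# Toy example: half the distinct items are long-tail (coverage looks fine),
# yet supplier s1 still captures two-thirds of all exposure.
rec_lists = [["a", "b", "d"], ["a", "b", "e"], ["a", "b", "d"]]
item_supplier = {"a": "s1", "b": "s1", "d": "s2", "e": "s2"}
print(long_tail_coverage(rec_lists, {"d", "e"}))   # 0.5
print(supplier_exposure(rec_lists, item_supplier))
```

The discrepancy between the two numbers is the kind of gap a single-stakeholder evaluation can hide.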
DPR: An Algorithm to Mitigate Bias Accumulation in Recommendation Feedback Loops
Recommendation models trained on the user feedback collected from deployed
recommendation systems are commonly biased. User feedback is considerably
affected by the exposure mechanism, as users only provide feedback on the items
exposed to them and passively ignore the unexposed items, thus producing
numerous false negative samples. Inevitably, biases caused by such user
feedback are inherited by new models and amplified via feedback loops.
Moreover, the presence of false negative samples makes negative sampling
difficult and introduces spurious information in the user preference modeling
process of the model. Recent work has investigated the negative impact of
feedback loops and unknown exposure mechanisms on recommendation quality and
user experience, essentially treating them as independent factors and ignoring
their cross-effects. To address these issues, we deeply analyze the data
exposure mechanism from the perspective of data iteration and feedback loops
with the Missing Not At Random (MNAR) assumption, theoretically
demonstrating the existence of an available stabilization factor in the
transformation of the exposure mechanism under the feedback loops. We further
propose Dynamic Personalized Ranking (DPR), an unbiased algorithm that
uses dynamic re-weighting to mitigate the cross-effects of exposure mechanisms
and feedback loops without additional information. Furthermore, we design a
plugin named Universal Anti-False Negative (UFN) to mitigate the
negative impact of the false negative problem. We demonstrate theoretically
that our approach mitigates the negative effects of feedback loops and unknown
exposure mechanisms. Experimental results on real-world datasets demonstrate
that models using DPR handle bias accumulation better and confirm the
universality of UFN across mainstream loss functions.
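The paper's exact DPR re-weighting is not reproduced here, but the general inverse-propensity idea behind such dynamic re-weighting can be sketched: each observed interaction's loss term is scaled by the inverse of the (estimated) probability that the item was exposed. The propensity estimator, clipping floor, and BPR-style loss below are all assumptions for illustration.

```python
# Hedged sketch: generic inverse-propensity re-weighting of a BPR-style
# pairwise loss, NOT the paper's DPR algorithm. Names and the clipping
# floor are illustrative assumptions.
import math

def propensity(item, exposure_counts, total, floor=0.01):
    """Naive exposure-probability estimate with a clipping floor."""
    return max(exposure_counts.get(item, 0) / total, floor)

def weighted_pairwise_loss(pos_score, neg_score, p_exposure):
    """BPR-style pairwise log-loss, inverse-propensity weighted."""
    sigmoid = 1 / (1 + math.exp(-(pos_score - neg_score)))
    return -math.log(sigmoid) / p_exposure

# Toy numbers: identical score gaps, but the rarely exposed item's loss
# (and hence its gradient signal) is amplified relative to the popular one.
exposure = {"popular": 900, "niche": 10}
common = weighted_pairwise_loss(2.0, 1.0, propensity("popular", exposure, 1000))
rare = weighted_pairwise_loss(2.0, 1.0, propensity("niche", exposure, 1000))
print(rare > common)  # True
```

Amplifying the signal from rarely exposed items is what counteracts the false-negative-heavy feedback that the exposure mechanism produces.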
Modeling and Counteracting Exposure Bias in Recommender Systems
What we discover and see online, and consequently our opinions and decisions,
are becoming increasingly affected by automated machine learned predictions.
Similarly, the predictive accuracy of learning machines heavily depends on the
feedback data that we provide them. This mutual influence can lead to
closed-loop interactions that may cause unknown biases which can be exacerbated
after several iterations of machine learning predictions and user feedback.
Machine-caused biases risk leading to undesirable social effects ranging from
polarization to unfairness and filter bubbles.
In this paper, we study the bias inherent in widely used recommendation
strategies such as matrix factorization. Then we model the exposure that
arises from the interaction between the user and the recommender system and
propose new debiasing strategies for these systems.
Finally, we try to mitigate the recommendation system bias by engineering
solutions for several state-of-the-art recommender system models.
Our results show that recommender systems are biased and depend on the prior
exposure of the user. We also show that the studied bias iteratively decreases
diversity in the output recommendations. Our debiasing method demonstrates the
need for alternative recommendation strategies that take into account the
exposure process in order to reduce bias.
Our research findings show the importance of understanding the nature of and
dealing with bias in machine learning models such as recommender systems that
interact directly with humans, and are thus causing an increasing influence on
human discovery and decision making.
Comment: 9 figures and one table. The paper has 5 pages.
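The closed-loop interaction described in this abstract can be made concrete with a minimal simulation. This is an illustrative toy model, not the paper's: it assumes a top-k recommender that retrains on its own exposure data, so only exposed items can receive feedback and the initially popular items lock in their advantage.

```python
# Hedged sketch (illustrative, not the paper's model): a closed feedback
# loop in which feedback comes only from exposed items, so the initial
# top-k items lock in and long-tail items are never surfaced. All
# parameter values are arbitrary toy choices.

def closed_loop(n_items=10, k=3, rounds=5):
    """Simulate a top-k recommender that trains on its own exposure."""
    clicks = list(range(n_items, 0, -1))  # item 0 starts most popular
    exposed_ever = set()
    for _ in range(rounds):
        top_k = sorted(range(n_items), key=lambda i: clicks[i], reverse=True)[:k]
        exposed_ever.update(top_k)
        for item in top_k:
            clicks[item] += 1  # feedback is only possible for exposed items
    return exposed_ever

print(sorted(closed_loop()))  # [0, 1, 2]: 7 of 10 items are never shown
```

Even in this trivial setting, the output set never grows: items outside the initial top-k can never accumulate the feedback needed to enter it, which is the diversity-collapse dynamic the abstract describes.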