Deep Learning based Recommender System: A Survey and New Perspectives
With the ever-growing volume of online information, recommender systems have
been an effective strategy to overcome such information overload. The utility
of recommender systems cannot be overstated, given their widespread adoption in
many web applications, along with their potential to ameliorate many
problems related to over-choice. In recent years, deep learning has garnered
considerable interest in many research fields such as computer vision and
natural language processing, owing not only to its stellar performance but also
to the attractive property of learning feature representations from scratch. The
influence of deep learning is also pervasive, with recent demonstrations of its
effectiveness when applied to information retrieval and recommender systems
research. Evidently, the field of deep learning in recommender systems is
flourishing. This article aims to provide a comprehensive review of recent
research efforts on deep learning based recommender systems. More concretely,
we devise a taxonomy of deep learning based recommendation models,
along with a comprehensive summary of the state of the art. Finally,
we expand on current trends and provide new perspectives pertaining to this
exciting new development of the field.
Comment: The paper has been accepted by ACM Computing Surveys.
https://doi.acm.org/10.1145/328502
Revealing and Addressing Bias in Recommender Systems
In the past decades, recommender systems have achieved outstanding success in delivering personalized and accurate recommendations. Highly customized recommendations bring individuals great convenience while helping content providers connect accurately with interested consumers. However, they may also lead to undesirable outcomes, often through the introduction of bias in the training, deployment, and maintenance of recommender systems. For example, recommenders may impose unfair burdens on certain user groups, reducing their prominence in job recommenders. They may also narrow down a user's interest areas, raising concerns about echo chambers, fairness, and diversity. In this dissertation, we ground our work in gaps in the literature on identifying and addressing bias in recommender systems, then introduce four approaches for mitigating biases from multiple perspectives, as follows:
• First, we identify an inherent bias in many recommendation algorithms, which optimize for the head (or popular portion) of the rating distribution and thus incur large estimation errors for tail ratings. We conduct a data-driven investigation and theoretical analysis of the challenges posed by traditional latent factor models for estimating such tail ratings. With these challenges in mind, we propose a new multi-latent representation method designed specifically to estimate tail ratings better, reducing the bias associated with them in recommender systems.
• Second, we address another unexplored bias – the target customer distribution distortion. Traditional recommender systems typically aim to optimize an engagement metric without considering the overall distribution of target customers, thereby leading to serious distortion problems. Through a data-driven study, we reveal several distortions that arise from conventional recommenders. Toward overcoming these issues, we propose a target customer re-ranking algorithm to adjust the population distribution and composition in the Top-k target customers of an item while maintaining recommendation quality.
• Third, we focus on mitigating the next unexplored bias: user taste distortion. We show that existing approaches assume a static view of a user's tastes, so previously proposed calibrated recommenders model the evolution of a user's tastes poorly. We empirically identify this taste distortion problem through a data-driven study over multiple datasets, and propose a taste-enhanced calibrated recommender system designed with the shifts and trends of users' taste preferences in mind, which results in improved taste distribution estimation and recommendation results.
• Last but not least, we study distribution bias in a dynamic recommendation environment. Previous studies of distribution-aware recommendation have focused exclusively on static scenarios, ignoring important challenges that manifest in real-world dynamics and lead to poor performance in practice. Hence, we present the first study of distribution bias in dynamic recommendations and propose new methods to mitigate this bias even in the presence of feedback loops and other dynamics.
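The calibration idea in the third bullet can be made concrete with a short sketch. The snippet below is illustrative only: the function names, toy data, and simple KL-based greedy objective are our assumptions, not the dissertation's actual taste-enhanced method. It greedily re-ranks candidates so that the category distribution of the top-k list stays close, in KL divergence, to the user's historical taste distribution:

```python
import math

def kl_divergence(p, q, eps=1e-9):
    # KL(p || q) over the union of categories; eps guards against log(0)
    cats = set(p) | set(q)
    return sum(p.get(c, 0) * math.log((p.get(c, 0) + eps) / (q.get(c, 0) + eps))
               for c in cats)

def list_distribution(items, category_of):
    # Empirical category distribution of a recommendation list
    dist = {}
    for i in items:
        c = category_of[i]
        dist[c] = dist.get(c, 0) + 1 / len(items)
    return dist

def calibrated_rerank(candidates, scores, taste, category_of, k, lam=0.5):
    """Greedy re-ranking: trade relevance against calibration.

    At each step, pick the candidate maximizing
    (1 - lam) * score  -  lam * KL(taste || distribution of list-so-far + item).
    """
    selected = []
    pool = list(candidates)
    for _ in range(k):
        best, best_val = None, -float("inf")
        for i in pool:
            dist = list_distribution(selected + [i], category_of)
            val = (1 - lam) * scores[i] - lam * kl_divergence(taste, dist)
            if val > best_val:
                best, best_val = i, val
        selected.append(best)
        pool.remove(best)
    return selected
```

For a user whose taste is 70% drama and 30% comedy, a purely score-ordered list of comedies gets re-ranked so that drama appears early, at a small cost in raw relevance.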
New debiasing strategies in collaborative filtering recommender systems: modeling user conformity, multiple biases, and causality.
Recommender systems are widely used to personalize the user experience in a diverse set of online applications, ranging from e-commerce and education to social media and online entertainment. These state-of-the-art AI systems can suffer from several biases that may occur at different stages of the recommendation life-cycle. For instance, using biased data to train recommendation models may lead to several issues, such as a discrepancy between online and offline evaluation, decreased recommendation performance, and a degraded user experience. Bias can occur during the data collection stage, where the data inherits user-item interaction biases such as selection and exposure bias. Bias can also occur in the training stage, where popular items tend to be recommended much more frequently because they received more interactions to start with. The closed feedback loop of online recommender systems further amplifies these biases. In this dissertation, we study bias in the context of collaborative filtering recommender systems and propose a new Popularity Correction Matrix Factorization (PCMF) that aims to improve recommendation performance while decreasing popularity bias and increasing the diversity of items in the recommendation lists. PCMF mitigates popularity bias by disentangling relevance and conformity and by learning a user-personalized bias vector to capture users' individual conformity levels along a full spectrum of conformity bias. One shortcoming of the proposed PCMF debiasing approach is its assumption that the recommender system is affected only by popularity bias. In the real world, however, different types of bias occur simultaneously and interact with one another. We therefore relax this assumption and propose a multi-pronged approach that can account for two biases simultaneously, namely popularity and exposure bias.
Our experimental results show that accounting for multiple biases does improve performance, providing more accurate and less biased results. Finally, we propose a novel two-stage debiasing approach inspired by the proximal causal inference framework. Unlike the existing causal IPS approach, which corrects for observed confounders only, our proposed approach corrects for both observed and potential unobserved confounders. The approach relies on a pair of negative control variables to adjust for the bias in the potential ratings. Our proposed approach outperforms state-of-the-art causal approaches, showing that accounting for unobserved confounders can improve the recommender system's performance.
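As a rough illustration of the relevance/conformity disentangling idea, the sketch below augments a plain matrix factorization prediction with a per-user conformity weight on an item-popularity signal, so that relevance factors and conformity are learned separately. This is our own minimal stand-in, not the actual PCMF formulation defined in the dissertation:

```python
import numpy as np

def train_pcmf_sketch(ratings, n_users, n_items, k=8, lr=0.01, reg=0.05,
                      epochs=50, seed=0):
    """Tiny SGD trainer for: prediction = relevance + conformity * popularity.

    ratings: list of (user, item, rating) triples.
    """
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))   # user relevance factors
    Q = 0.1 * rng.standard_normal((n_items, k))   # item relevance factors
    pop = np.zeros(n_items)                       # item popularity signal
    for _, i, _ in ratings:
        pop[i] += 1
    pop = pop / pop.max()
    c = np.zeros(n_users)                         # per-user conformity level
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = P[u] @ Q[i] + c[u] * pop[i]
            err = r - pred
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
            c[u] += lr * (err * pop[i] - reg * c[u])
    return P, Q, c, pop
```

Because popularity enters only through the `c[u] * pop[i]` term, the dot product `P[u] @ Q[i]` is freer to capture popularity-independent relevance, which is the intuition behind disentangling the two signals.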
Consumer-side Fairness in Recommender Systems: A Systematic Survey of Methods and Evaluation
In the current landscape of ever-increasing levels of digitalization, we are
facing major challenges pertaining to scalability. Recommender systems have
become irreplaceable both for helping users navigate the increasing amounts of
data and, conversely, aiding providers in marketing products to interested
users. The growing awareness of discrimination in machine learning methods has
recently motivated both academia and industry to research how fairness can be
ensured in recommender systems. For recommender systems, such issues are well
exemplified by occupation recommendation, where biases in historical data may
lead to recommender systems relating one gender to lower wages or to the
propagation of stereotypes. In particular, consumer-side fairness, which
focuses on mitigating discrimination experienced by users of recommender
systems, has seen a vast number of diverse approaches for addressing different
types of discrimination. The nature of said discrimination depends on the
setting and the applied fairness interpretation, of which there are many
variations. This survey serves as a systematic overview and discussion of the
current research on consumer-side fairness in recommender systems. To that end,
a novel taxonomy based on high-level fairness interpretations is proposed and
used to categorize the research and its proposed fairness evaluation metrics.
Finally, we highlight some suggestions for the future direction of the field.
Comment: Draft submitted to Springer (November 2022).
Modeling and counteracting exposure bias in recommender systems.
Recommender systems are becoming widely used in everyday life. They use machine learning algorithms to predict our preferences and thus influence our choices among a staggering array of options online, such as movies, books, products, and even news articles. Thus what we discover and see online, and consequently our opinions and decisions, are increasingly affected by automated predictions made by learning machines. In turn, the predictive accuracy of these learning machines depends heavily on the feedback data, such as ratings and clicks, that we provide them. This mutual influence can lead to closed-loop interactions that may cause unknown biases, which can be exacerbated after several iterations of machine learning predictions and user feedback. Such machine-caused biases risk leading to undesirable social effects such as polarization, unfairness, and filter bubbles. In this research, we aim to study the bias inherent in widely used recommendation strategies such as matrix factorization and its impact on the diversity of the recommendations. We also aim to develop probabilistic models of the bias born from the interaction between the user and the recommender system, and to develop debiasing strategies for these systems. We present a theoretical framework that models the user's behavioral process by considering item exposure before the user interacts with the model. We also track diversity metrics to measure the bias generated in recommender systems, and thus study its effect across iterations. Finally, we mitigate recommender system bias by engineering solutions for several state-of-the-art recommender system models. Our results show that recommender systems are biased and depend on the user's prior exposure. We also show that the studied bias iteratively decreases diversity in the output recommendations. Our debiasing method demonstrates the need for alternative recommendation strategies that take the exposure process into account in order to reduce bias. Our research findings show the importance of understanding and dealing with bias in machine learning models, such as recommender systems, that interact directly with humans and thus exert an increasing influence on human discovery and decision making.
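A standard device for counteracting exposure bias of this kind is inverse propensity scoring: each observed interaction is reweighted by the inverse of its estimated exposure probability, so rarely shown items are not drowned out by frequently shown ones. The sketch below illustrates that general technique, not necessarily the exact estimator developed in this dissertation; the popularity-based propensity model is a common heuristic we assume for illustration:

```python
import numpy as np

def popularity_propensity(exposure_counts, gamma=0.5):
    """Crude propensity estimate from item exposure counts:
    p_i proportional to (count_i / max count) ** gamma."""
    counts = np.asarray(exposure_counts, dtype=float)
    return (counts / counts.max()) ** gamma

def ips_weighted_mse(preds, ratings, propensities, clip=0.05):
    """Inverse-propensity-scored squared error.

    Each observed error is weighted by 1 / P(exposure), so errors on
    under-exposed items count more; clipping propensities away from
    zero bounds the variance of the estimator.
    """
    p = np.maximum(np.asarray(propensities, dtype=float), clip)
    err = np.asarray(preds, dtype=float) - np.asarray(ratings, dtype=float)
    return float(np.mean((err ** 2) / p))
```

Training a matrix factorization model against this loss instead of plain MSE upweights errors on under-exposed items, which is the core exposure-debiasing intuition.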
Personalized Video Recommendation Using Rich Contents from Videos
Video recommendation has become an essential way of helping people explore
the massive videos and discover the ones that may be of interest to them. In
the existing video recommender systems, the models make the recommendations
based on the user-video interactions and single specific content features. When
the specific content features are unavailable, the performance of the existing
models will seriously deteriorate. Inspired by the fact that rich contents
(e.g., text, audio, motion, and so on) exist in videos, in this paper, we
explore how to use these rich contents to overcome the limitations caused by
the unavailability of the specific ones. Specifically, we propose a novel
general framework, named the collaborative embedding regression (CER) model,
that incorporates any single content feature with user-video interactions to
make effective video recommendations in both in-matrix and out-of-matrix
scenarios. Our extensive experiments on two real-world large-scale datasets
show that CER beats the existing recommender models with any single content
feature and is more time-efficient. In addition, we propose a priority-based
late fusion (PRI) method to gain the benefit of integrating multiple content
features. The corresponding experiment shows that PRI brings real performance
improvement to the baseline and outperforms the existing fusion methods.
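The fusion step can be sketched generically. The snippet below shows plain weighted late fusion of per-feature score lists; note this is a stand-in of our own for illustration, since the paper's PRI method fuses by feature priority rather than fixed weights:

```python
def late_fusion(score_lists, weights):
    """Weighted late fusion: combine per-content-feature ranking scores
    per item and return items ordered by fused score (highest first)."""
    fused = {}
    for scores, w in zip(score_lists, weights):
        for item, s in scores.items():
            fused[item] = fused.get(item, 0.0) + w * s
    return sorted(fused, key=fused.get, reverse=True)
```

Each content feature (text, audio, motion) produces its own score list from a single-feature model, and only the final scores are combined, which is what makes the fusion "late".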
Recent Developments in Recommender Systems: A Survey
In this technical survey, we comprehensively summarize the latest
advancements in the field of recommender systems. The objective of this study
is to provide an overview of the current state-of-the-art in the field and
highlight the latest trends in the development of recommender systems. The
study starts with a comprehensive summary of the main taxonomy of recommender
systems, including personalized and group recommender systems, and then delves
into the category of knowledge-based recommender systems. In addition, the
survey analyzes the robustness, data bias, and fairness issues in recommender
systems, summarizing the evaluation metrics used to assess the performance of
these systems. Finally, the study provides insights into the latest trends in
the development of recommender systems and highlights new directions for
future research in the field.
Fighting Fire with Fire: Using Antidote Data to Improve Polarization and Fairness of Recommender Systems
The increasing role of recommender systems in many aspects of society makes
it essential to consider how such systems may impact social good. Various
modifications to recommendation algorithms have been proposed to improve their
performance for specific socially relevant measures. However, previous
proposals are often not easily adapted to different measures, and they
generally require the ability to modify either existing system inputs, the
system's algorithm, or the system's outputs. As an alternative, in this paper
we introduce the idea of improving the social desirability of recommender
system outputs by adding more data to the input, an approach we view as
providing `antidote' data to the system. We formalize the antidote data
problem, and develop optimization-based solutions. We take as our model system
the matrix factorization approach to recommendation, and we propose a set of
measures to capture the polarization or fairness of recommendations. We then
show how to generate antidote data for each measure, pointing out a number of
computational efficiencies, and discuss the impact on overall system accuracy.
Our experiments show that a modest budget for antidote data can lead to
significant improvements in the polarization or fairness of recommendations.
Comment: References to appendices are fixed.