
    Embarrassingly Shallow Autoencoders for Sparse Data

    Combining simple elements from the literature, we define a linear model that is geared toward sparse data, in particular implicit-feedback data for recommender systems. We show that its training objective has a closed-form solution, and discuss the resulting conceptual insights. Surprisingly, this simple model achieves better ranking accuracy than various state-of-the-art collaborative-filtering approaches, including deep non-linear models, on most of the publicly available datasets used in our experiments.
    Comment: In the proceedings of the Web Conference (WWW) 2019 (7 pages)
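    The closed-form solution the abstract refers to is compact enough to sketch. The following is a minimal NumPy rendering of the EASE estimator described in the paper; the regularization strength lam is a hyperparameter, and its value here is only illustrative.

        import numpy as np

        def ease_weights(X, lam=500.0):
            # X: binary user-item interaction matrix (n_users x n_items).
            # Regularized Gram matrix over items.
            G = X.T @ X + lam * np.eye(X.shape[1])
            P = np.linalg.inv(G)
            # B_ij = -P_ij / P_jj for i != j (column-wise division by the diagonal).
            B = P / (-np.diag(P))
            # Zero-diagonal constraint: an item must not recommend itself.
            np.fill_diagonal(B, 0.0)
            return B

        # Ranking scores are the rows of X @ B, with already-seen items masked out.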

    Reducing Popularity Bias in Recommender Systems through AUC-Optimal Negative Sampling

    Popularity bias is a persistent issue in recommender systems, posing challenges to both fairness and efficiency. The existing literature widely acknowledges that reducing popularity bias often requires sacrificing recommendation accuracy. In this paper, we challenge this commonly held belief. Our analysis under a general bias-variance decomposition framework shows that reducing bias can actually lead to improved model performance under certain conditions. To achieve this win-win situation, we propose to intervene in model training through negative sampling, thereby modifying model predictions. Specifically, we provide an optimal negative sampling rule that maximizes partial AUC to preserve the accuracy of any given model, while correcting sample information and prior information to reduce popularity bias in a flexible and principled way. Our experimental results on real-world datasets demonstrate the superiority of our approach in improving recommendation performance and reducing popularity bias.
    Comment: 20 pages
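    The paper's AUC-optimal sampling rule is not reproduced here. As an illustrative stand-in, the sketch below shows the general shape of popularity-aware negative sampling for implicit feedback; the exponent beta is an assumption of this example, not a parameter from the paper.

        import numpy as np

        def sample_negatives(pos_item, item_pop, k=5, beta=0.75, rng=None):
            # Draw k negatives with probability proportional to popularity**beta.
            # Oversampling popular items as negatives pushes their scores down,
            # counteracting popularity bias during training.
            rng = rng or np.random.default_rng()
            probs = item_pop.astype(float) ** beta
            probs[pos_item] = 0.0              # never sample the positive itself
            probs /= probs.sum()
            return rng.choice(len(item_pop), size=k, replace=False, p=probs)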

    Modeling and Debiasing Feedback Loops in Collaborative Filtering Recommender Systems

    Artificial Intelligence (AI)-driven recommender systems have gained increasing ubiquity and influence in our daily lives, especially during time spent online on the World Wide Web or smart devices. The influence of recommender systems on who and what we can find and discover, on our choices, and on our behavior has thus never been more concrete. AI can now predict and anticipate, with varying degrees of accuracy, the news articles we will read, the music we will listen to, the movies we will watch, the transactions we will make, the restaurants we will eat in, the online courses we will be interested in, and the people we will connect with for various ends and purposes. For all these reasons, the automated predictions and recommendations made by AI can influence and change human opinions, behavior, and decision making. When the AI predictions are biased, the influences can have unfair consequences on society, ranging from social polarization to the amplification of misinformation and hate speech. For instance, bias in recommender systems can affect decision making and shift consumer behavior in an unfair way due to a phenomenon known as the feedback loop. The feedback loop is an inherent component of recommender systems because the latter are dynamic systems that involve continuous interaction with users, whereby the data collected to train a recommender system model is usually affected by the outputs of a previously trained model. This feedback loop is expected to affect the performance of the system. For instance, it can amplify initial bias in the data or model and can lead to other phenomena such as filter bubbles, polarization, and popularity bias. Up to now, it has been difficult to understand the dynamics of recommender system feedback loops, and equally challenging to evaluate the bias and filter bubbles emerging from recommender system models within such an iterative closed-loop environment.
    In this dissertation, we study the feedback loop in the context of Collaborative Filtering (CF) recommender systems. CF systems comprise the leading family of recommender systems and rely mainly on mining the patterns of interaction between users and items to train models that aim to predict future user interactions. Our research contributions target three aspects of recommendation, namely modeling, debiasing, and evaluating feedback loops, and advance the state of the art in Fairness in Artificial Intelligence on several fronts:
    (1) We propose and validate a new theoretical model, based on Martingale differences, to model the recommender system feedback loop and allow a better understanding of the dynamics of filter bubbles and user discovery.
    (2) We propose a Transformer-based deep learning architecture and algorithm to learn diverse representations for users and items in order to increase the diversity of the recommendations. Our evaluation experiments on real-world datasets demonstrate that our Transformer model recommends 14% more diverse items and improves the novelty of the recommendations by more than 20%.
    (3) We propose a new simulation and experimentation framework that allows studying and tracking the evolution of bias metrics in a feedback-loop setting, for a variety of recommendation modeling algorithms (a skeleton of such a loop is sketched after this abstract). Our preliminary findings using the new simulation framework show that recommender systems are deeply affected by the feedback loop, and that without an adequate debiasing or exploration strategy, this feedback loop limits user discovery and increases the disparity in exposure between the items that can be recommended. To help the research and practice community in studying recommender system fairness, all the tools developed to model, debias, and evaluate recommender systems are made available to the public as open-source software libraries (https://github.com/samikhenissi/TheoretUserModeling).
    (4) We propose a novel learnable dynamic debiasing strategy that learns an optimal rescaling parameter for the predicted rating and achieves a better trade-off between accuracy and debiasing. We focus on the popularity bias of items, test our method using our proposed simulation framework, and show the effectiveness of using a learnable debiasing degree to produce better results.
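    The simulation framework of contribution (3) suggests a skeleton like the one below. This is a hypothetical sketch, not the library's actual API: train_fn, recommend_fn, and choice_model are assumed callables standing in for the model retraining, recommendation, and simulated user-choice steps, with exposure disparity tracked per round via a Gini coefficient.

        import numpy as np

        def gini(exposure):
            # Gini coefficient of item exposure counts (0 = equal, 1 = maximal disparity).
            x = np.sort(np.asarray(exposure, dtype=float))
            n = len(x)
            cum = np.cumsum(x)
            return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

        def simulate_feedback_loop(train_fn, recommend_fn, choice_model,
                                   interactions, n_items, rounds=10, top_k=10):
            disparity = []
            for _ in range(rounds):
                model = train_fn(interactions)        # retrain on feedback-tainted data
                recs = recommend_fn(model, top_k)     # dict: user -> recommended item ids
                clicks = choice_model(recs)           # simulated user choices
                interactions = interactions + clicks  # the loop closes here
                counts = np.bincount(
                    [i for items in recs.values() for i in items], minlength=n_items)
                disparity.append(gini(counts))        # track exposure disparity per round
            return disparity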

    Toward Responsible Recommender Systems

    Recommender systems have become essential conduits: they can shape the media we consume, the jobs we seek, and even the friendships and professional contacts that form our social circles. With such wide usage and impact, recommender systems can exert strong, often unforeseen, and sometimes even detrimental influence on social processes connected to culture, lifestyles, politics, education, ethics, economic well-being, and even social justice. Hence, in this dissertation research, we aim to identify, analyze, and alleviate potential risks and harms to users, item providers, the platforms, and ultimately society, and to lay the foundation for new responsible recommender systems. In particular, we make three unique contributions toward responsible recommender systems:
    • First, we study how to counteract the exposure bias in user-item interaction data. To overcome the challenge that user-item exposure information is hard to estimate when aiming to produce unbiased recommendations, we develop a novel combinational joint learning framework to learn unbiased user-item relevance and unbiased user-item exposure information simultaneously (a standard baseline estimator for this setting is sketched after this abstract). Then, we push the problem to an extreme where we aim to predict relevance for items with zero exposure in the interaction data. For this, we propose a neural network utilizing a randomized training mechanism and a Mixture-of-Experts Transformation structure. Experiments validate the effectiveness of the proposed methods.
    • Second, we study what biases machine-learning-based recommendation algorithms can introduce and how to alleviate them. We uncover the popularity-opportunity bias on items and the mainstream bias on users. We conduct an extensive data-driven study to show the existence of these biases in fundamental recommendation algorithms. Then, we explore and propose potential solutions to relieve these two types of bias, and empirically demonstrate their outstanding debiasing performance.
    • Finally, we turn our attention to the problem of how to measure and enhance fairness in recommendation results. We study recommendation fairness in three different recommendation scenarios: the multi-dimension recommendation scenario, the personalized ranking recommendation scenario, and the cold-start recommendation scenario. For each scenario, we develop different algorithms to enhance recommendation fairness, and we conduct extensive experiments to empirically show the effectiveness of the proposed solutions.
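    Exposure-bias correction, the subject of the first bullet, is commonly approached with inverse propensity scoring. The sketch below shows that standard estimator only; it is not the dissertation's combinational joint learning framework, and the popularity-based propensity model with exponent eta is an assumption of this example.

        import numpy as np

        def popularity_propensity(item_counts, eta=0.5):
            # Crude propensity model: assume P(item exposed) grows with popularity.
            return (item_counts / item_counts.max()) ** eta

        def ips_loss(preds, labels, propensities, clip=0.1):
            # Inverse-propensity-scored squared loss: re-weights observed
            # interactions by 1/propensity so the estimate is unbiased with
            # respect to exposure; clipping bounds the variance of the weights
            # assigned to rarely exposed items.
            w = 1.0 / np.clip(propensities, clip, 1.0)
            return np.mean(w * (preds - labels) ** 2)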