
    Modeling and debiasing feedback loops in collaborative filtering recommender systems.

    Artificial Intelligence (AI)-driven recommender systems have become increasingly ubiquitous and influential in our daily lives, especially during time spent online on the World Wide Web or on smart devices. Their influence on who and what we can find and discover, on our choices, and on our behavior has thus never been more concrete. AI can now predict and anticipate, with varying degrees of accuracy, the news articles we will read, the music we will listen to, the movies we will watch, the transactions we will make, the restaurants we will eat in, the online courses we will be interested in, and the people we will connect with for various ends and purposes. For all these reasons, the automated predictions and recommendations made by AI can influence and change human opinions, behavior, and decision making. When the AI predictions are biased, these influences can have unfair consequences for society, ranging from social polarization to the amplification of misinformation and hate speech. For instance, bias in recommender systems can affect decision making and shift consumer behavior in unfair ways through a phenomenon known as the feedback loop. The feedback loop is inherent to recommender systems because they are dynamic systems that involve continuous interaction with users, whereby the data collected to train a recommender system model is usually affected by the outputs of a previously trained model. This feedback loop affects the performance of the system: it can amplify initial bias in the data or model and lead to other phenomena such as filter bubbles, polarization, and popularity bias. Up to now, it has been difficult to understand the dynamics of recommender system feedback loops, and equally challenging to evaluate the bias and filter bubbles emerging from recommender system models within such an iterative closed-loop environment. In this dissertation, we study the feedback loop in the context of Collaborative Filtering (CF) recommender systems. CF systems are the leading family of recommender systems and rely mainly on mining the patterns of interaction between users and items to train models that predict future user interactions. Our research contributions target three aspects of recommendation, namely modeling, debiasing, and evaluating feedback loops. Our research advances the state of the art in Fairness in Artificial Intelligence on several fronts: (1) We propose and validate a new theoretical model, based on Martingale differences, to model the recommender system feedback loop and allow a better understanding of the dynamics of filter bubbles and user discovery. (2) We propose a Transformer-based deep learning architecture and algorithm that learns diverse representations for users and items in order to increase the diversity of the recommendations. Our evaluation experiments on real-world datasets demonstrate that our Transformer model recommends 14% more diverse items and improves the novelty of the recommendations by more than 20%. (3) We propose a new simulation and experimentation framework for studying and tracking the evolution of bias metrics in a feedback-loop setting for a variety of recommendation modeling algorithms. Our preliminary findings, obtained with this simulation framework, show that recommender systems are deeply affected by the feedback loop and that, without an adequate debiasing or exploration strategy, the feedback loop limits user discovery and increases the disparity in exposure between the items that can be recommended. To help the research and practice community study recommender system fairness, all the tools developed to model, debias, and evaluate recommender systems are made available to the public as open-source software libraries (https://github.com/samikhenissi/TheoretUserModeling). (4) We propose a novel learnable dynamic debiasing strategy that learns an optimal rescaling parameter for the predicted rating and achieves a better trade-off between accuracy and debiasing. We focus on mitigating item popularity bias, test our method using our proposed simulation framework, and show the effectiveness of using a learnable debiasing degree to produce better results.
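
    As a concrete illustration of the closed loop described above, the following is a minimal NumPy sketch of a feedback-loop simulation: a small matrix-factorization model is retrained each round on interactions generated by its own previous recommendations, a popularity-based rescaling of the predicted scores (a fixed debiasing degree alpha, whereas the dissertation learns it) damps the loop, and the Gini coefficient of item exposure is tracked over rounds. All names and numbers are illustrative assumptions; this is not the API of the TheoretUserModeling library.

```python
# Illustrative closed-loop simulation sketch (assumed toy setup, not the
# TheoretUserModeling API): retrain -> recommend -> collect feedback -> repeat,
# while tracking how exposure concentrates across items.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k, rounds, top_n = 200, 100, 8, 10, 5
alpha = 0.1  # debiasing degree; fixed here, learned in the dissertation

# Hidden "true" preferences used only to simulate user responses.
true_u = rng.normal(size=(n_users, k))
true_v = rng.normal(size=(n_items, k))

def train_mf(interactions, epochs=20, lr=0.05, reg=0.05):
    """Tiny SGD matrix factorization on observed (user, item, rating) triples."""
    P = rng.normal(scale=0.1, size=(n_users, k))
    Q = rng.normal(scale=0.1, size=(n_items, k))
    for _ in range(epochs):
        for u, i, r in interactions:
            err = r - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

def gini(x):
    """Gini coefficient of item exposure: 0 = evenly spread, 1 = concentrated."""
    x = np.sort(x.astype(float))
    if x.sum() == 0:
        return 0.0
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * cum.sum() / cum[-1]) / n

# Bootstrap with a few random interactions, then iterate the closed loop.
interactions = []
for u in range(n_users):
    for i in rng.choice(n_items, size=3, replace=False):
        interactions.append((u, int(i), float(true_u[u] @ true_v[i] > 0)))

exposure = np.zeros(n_items)
item_pop = np.zeros(n_items)
for t in range(rounds):
    P, Q = train_mf(interactions)
    scores = P @ Q.T
    pop = item_pop / max(item_pop.max(), 1.0)
    debiased = scores - alpha * pop          # damp already-popular items
    for u in range(n_users):
        recs = np.argsort(-debiased[u])[:top_n]
        exposure[recs] += 1
        for i in recs:                        # simulated feedback re-enters the data
            r = float(true_u[u] @ true_v[i] > 0)
            interactions.append((u, int(i), r))
            item_pop[i] += r
    print(f"round {t}: exposure Gini = {gini(exposure):.3f}")
```

    With the rescaling disabled (alpha = 0), the exposure Gini typically climbs faster across rounds, which is the kind of exposure-disparity growth such a framework is designed to surface.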

    Beyond the Rating Matrix: Debiasing Implicit Feedback Loops in Collaborative Filtering

    Implicit feedback collaborative filtering recommender systems suffer from exposure bias, which corrupts performance and creates filter bubbles and echo chambers. Our study aims to provide a practical method that does not inherit any exposure bias from the data, given information about the user, the choice, and the choice set associated with each observation. We validated the model's functionality and its capability to reduce bias, and compared it to baseline mitigation strategies by simulation. Our model inherited little to no bias, while the other approaches failed to mitigate all bias. To the best of our knowledge, we are the first to identify a feasible approach to tackling exposure bias in recommender systems that does not require arbitrary parameter choices or large model extensions. With our findings, we encourage the recommender systems community to move away from rating-matrix-based models and towards discrete-choice-based models.
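
    The debiasing idea the abstract points to can be made concrete with a conditional-logit (discrete-choice) likelihood: the model scores only the items in the choice set the user was actually exposed to, so unexposed items never enter the loss. The snippet below is a generic sketch of that idea under assumed latent-factor scores, not the authors' exact model.

```python
# Sketch of a conditional-logit (discrete-choice) objective for one observation.
# Assumed setup: latent user/item vectors; names are illustrative only.
import numpy as np

def choice_log_likelihood(user_vec, choice_set_vecs, chosen_pos):
    """Log-probability of picking item `chosen_pos` out of the exposed choice set.
    The softmax runs only over items the user actually saw, which is how a
    choice-set-based model avoids inheriting exposure bias from the data."""
    utilities = choice_set_vecs @ user_vec          # one utility per shown item
    utilities -= utilities.max()                    # numerical stability
    log_probs = utilities - np.log(np.exp(utilities).sum())
    return log_probs[chosen_pos]

# Toy usage: 4 items were shown and the user picked the second one.
rng = np.random.default_rng(0)
print(choice_log_likelihood(rng.normal(size=8), rng.normal(size=(4, 8)), 1))
```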

    Fairness of Exposure in Dynamic Recommendation

    Exposure bias is a well-known issue in recommender systems whereby exposure is not fairly distributed among items in the recommendation results. This is especially problematic when bias is amplified over time: a few items (e.g., popular ones) are repeatedly over-represented in recommendation lists, and users' interactions with those items further amplify the bias towards them, resulting in a feedback loop. This issue has been extensively studied in the literature in a static recommendation environment, where a single round of recommendation results is processed to improve exposure fairness. However, less work has been done on addressing exposure bias in a dynamic recommendation setting, where the system operates over time and the recommendation model and the input data are dynamically updated with ongoing user feedback on recommended items at each round. In this paper, we study exposure bias in a dynamic recommendation setting. Our goal is to show that existing bias mitigation methods designed to operate in a static recommendation setting are unable to satisfy fairness of exposure for items in the long run. In particular, we empirically study one of these methods and show that repeatedly applying it fails to fairly distribute exposure among items in the long run. To address this limitation, we show how this method can be adapted to operate effectively in a dynamic recommendation setting and achieve exposure fairness for items in the long run. Experiments on a real-world dataset confirm that our solution is superior in achieving long-term exposure fairness for items while maintaining recommendation accuracy.
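
    One simple way to adapt a static exposure-fairness re-ranker to the dynamic setting described here is to carry the items' accumulated exposure across rounds and boost items that have fallen below an equal-exposure target. The sketch below illustrates that general idea with assumed inputs; it is not the specific baseline or adaptation evaluated in the paper.

```python
# Illustrative dynamic re-ranking sketch: blend relevance with each item's
# cumulative exposure deficit so under-exposed items are boosted in later rounds.
import numpy as np

def rerank_with_exposure_deficit(scores, cum_exposure, lam=0.3, top_n=10):
    """`scores`: relevance scores for one user over all items.
    `cum_exposure`: exposure each item has accumulated over previous rounds.
    `lam` trades recommendation accuracy against long-term exposure fairness."""
    target = cum_exposure.mean()                        # equal-exposure target
    deficit = np.clip(target - cum_exposure, 0.0, None) # how far below target
    deficit = deficit / (deficit.max() + 1e-9)
    adjusted = (1 - lam) * scores + lam * deficit
    return np.argsort(-adjusted)[:top_n]

# After each round, the caller adds the delivered exposure back into cum_exposure,
# so the correction responds to the history of the loop rather than a single round.
```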

    Tidying Up the Conversational Recommender Systems' Biases

    The growing popularity of language models has sparked interest in conversational recommender systems (CRS) within both industry and research circles. However, concerns regarding biases in these systems have emerged. While individual components of CRS have been subject to bias studies, a literature gap remains in understanding the specific biases unique to CRS and how these biases may be amplified or reduced when integrated into complex CRS models. In this paper, we provide a concise review of biases in CRS by surveying recent literature. We examine the presence of biases throughout the system's pipeline and consider the challenges that arise from combining multiple models. Our study investigates biases in classic recommender systems and their relevance to CRS. Moreover, we address specific biases in CRS, considering variations with and without natural language understanding capabilities, along with biases related to dialogue systems and language models. Through our findings, we highlight the necessity of adopting a holistic perspective when dealing with biases in complex CRS models.

    Towards Responsible Media Recommendation

    Reading or viewing recommendations are a common feature on modern media sites. What is shown to consumers as recommendations is nowadays often automatically determined by AI algorithms, typically with the goal of helping consumers discover relevant content more easily. However, the highlighting or filtering of information that comes with such recommendations may lead to undesired effects on consumers or even society, for example, when an algorithm leads to the creation of filter bubbles or amplifies the spread of misinformation. These well-documented phenomena create a need for improved mechanisms for responsible media recommendation, which avoid such negative effects of recommender systems. In this research note, we review the threats and challenges that may result from the use of automated media recommendation technology, and we outline possible steps to mitigate such undesired societal effects in the future.

    Popularity Bias as Ethical and Technical Issue in Recommendation: A Survey

    Recommender systems have become omnipresent in our everyday life, helping us make decisions and navigate a digital world full of information. However, only recently have researchers started discovering undesired and harmful effects of automated recommendation and begun questioning how fair and ethical these systems are while influencing our day-to-day decision making and shaping our online behaviour and tastes. In recent research, various biases and phenomena like filter bubbles and echo chambers have been uncovered among the effects of recommender systems, and rigorous work has started on solving these issues. In this narrative survey, we investigate the emergence and progression of research on one of the potential types of bias in recommender systems, i.e. Popularity Bias. Many recommender algorithms have been shown to favor already popular items, giving them even more exposure, which can harm fairness and diversity on the platforms using such systems. The problem becomes even more complicated when the object of recommendation is not just products and content, but people, their work, and their services. This survey describes the progress in this field of study, highlighting the advancements and identifying the gaps in the research where additional effort and attention are necessary to minimize the harmful effects and to make sure that such systems are built in a fair and ethical way.
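
    Two diagnostics that recur throughout this literature are the average popularity of the recommended items and the share of recommendations drawn from the long tail. The helper below is an illustrative implementation under an assumed data layout (a popularity value per item id), not a metric taken from any single surveyed paper.

```python
# Illustrative popularity-bias diagnostics; the data layout is assumed.
import numpy as np

def popularity_bias_metrics(rec_lists, item_popularity, head_quantile=0.8):
    """rec_lists: list of recommendation lists (item ids), one per user.
    item_popularity: array of interaction counts indexed by item id.
    Returns (average recommendation popularity, long-tail share of recommendations)."""
    pops = np.array([item_popularity[i] for recs in rec_lists for i in recs])
    arp = pops.mean()                                  # higher = stronger popularity bias
    cut = np.quantile(item_popularity, head_quantile)  # popularity cut for the "head"
    tail_share = float((pops < cut).mean())            # fraction of recs from the long tail
    return arp, tail_share
```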

    Understanding and Mitigating Multi-sided Exposure Bias in Recommender Systems

    Fairness is a critical system-level objective in recommender systems that has been the subject of extensive recent research. It is especially important in multi-sided recommendation platforms, where it may be crucial to optimize utilities not just for the end user, but also for other actors such as item sellers or producers who desire a fair representation of their items. Existing solutions do not properly address the various aspects of multi-sided fairness in recommendations, as they either take a solely one-sided view (i.e. improving fairness only for one side) or do not appropriately measure the fairness for each actor involved in the system. In this thesis, I first investigate the impact of unfair recommendations on the system and how these unfair recommendations can negatively affect major actors in the system. I then propose solutions to tackle the unfairness of recommendations. I propose a rating transformation technique that works as a pre-processing step before building the recommendation model to alleviate the inherent popularity bias in the input data and, consequently, to mitigate the exposure unfairness for items and suppliers in the recommendation lists. As another solution, I propose a general graph-based method that works as a post-processing approach after recommendation generation to mitigate multi-sided exposure bias in the recommendation results. For evaluation, I introduce several metrics for measuring exposure fairness for items and suppliers, and show that these metrics better capture the fairness properties of the recommendation results. I perform extensive experiments to evaluate the effectiveness of the proposed solutions. The experiments on different publicly available datasets and comparisons with various baselines confirm the superiority of the proposed solutions in improving exposure fairness for items and suppliers.
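
    A multi-sided view of exposure starts from aggregating the (position-discounted) exposure each supplier receives across all recommendation lists and comparing it against some fair target. The helper below sketches that aggregation step under assumed inputs; the thesis defines its own metrics and mitigation methods.

```python
# Illustrative supplier-level exposure aggregation; the inputs are assumed.
from collections import defaultdict
import numpy as np

def supplier_exposure(rec_lists, item_to_supplier):
    """Sum position-discounted exposure per supplier over all users' ranked lists.
    rec_lists: list of ranked item-id lists; item_to_supplier: dict item -> supplier."""
    exposure = defaultdict(float)
    for recs in rec_lists:
        for rank, item in enumerate(recs):
            exposure[item_to_supplier[item]] += 1.0 / np.log2(rank + 2)  # DCG-style discount
    return dict(exposure)

# A fairness check can then compare each supplier's share of this exposure
# against, e.g., its share of the catalog or of overall relevance.
```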

    Can Few Lines of Code Change Society? Beyond fact-checking and moderation: how recommender systems toxify social networking sites

    As the last few years have seen an increase in both online hostility and polarization, we need to move beyond the fact-checking reflex and the praise for better moderation on social networking sites (SNS) and investigate their impact on social structures and social cohesion. In particular, the role of recommender systems deployed at large scale by digital platforms such as Facebook or Twitter has been overlooked. This paper draws on the literature on cognitive science, digital media, and opinion dynamics to propose a faithful replica of the entanglement between recommender systems, opinion dynamics, and users' cognitive biases on SNSs like Twitter, calibrated on a large-scale longitudinal database of tweets from political activists. This model makes it possible to compare the consequences of various recommendation algorithms on the social fabric and to quantify their interaction with some major cognitive biases. In particular, we demonstrate that recommender systems that seek solely to maximize users' engagement necessarily lead to an overexposure of users to negative content (up to 300% for some of them), a phenomenon called algorithmic negativity bias, to a polarization of the opinion landscape, and to a concentration of social power in the hands of the most toxic users. The latter are more than twice as numerous in the top 1% of the most influential users as in the overall population. Overall, our findings highlight the urgency of identifying implementations of recommender systems that are harmful to individuals and society in order to better regulate their deployment on systemic SNSs.
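
    The mechanism described above, an engagement-maximizing ranker interacting with a user-side negativity bias, can be reproduced qualitatively with a toy agent-based loop. The sketch below uses invented numbers and a crude engagement estimator; it is not the paper's calibrated Twitter model, only an illustration of how the feed's average negativity can drift above that of the content pool.

```python
# Toy sketch of algorithmic negativity bias under an engagement-maximizing feed.
# All parameters are invented for illustration; not the paper's calibrated model.
import numpy as np

rng = np.random.default_rng(1)
n_posts, feed_size, rounds = 1000, 20, 50

negativity = rng.uniform(0, 1, n_posts)   # per-post negativity score
engagement_est = np.zeros(n_posts)         # recommender's running engagement estimate
feed_negativity = []

for _ in range(rounds):
    if engagement_est.any():
        feed = np.argsort(-engagement_est)[:feed_size]       # rank purely by engagement
    else:
        feed = rng.choice(n_posts, feed_size, replace=False)  # cold start: random feed
    # Simulated users click more on negative content (the cognitive bias).
    clicks = rng.random(feed_size) < (0.2 + 0.6 * negativity[feed])
    engagement_est[feed] = 0.9 * engagement_est[feed] + 0.1 * clicks
    feed_negativity.append(negativity[feed].mean())

print(f"mean negativity in the pool:  {negativity.mean():.2f}")
print(f"mean negativity in the feeds: {np.mean(feed_negativity):.2f}")
```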