
    Degenerate Feedback Loops in Recommender Systems

    Machine learning is used extensively in recommender systems deployed in products. The decisions made by these systems can influence user beliefs and preferences, which in turn affect the feedback the learning system receives, thus creating a feedback loop. This phenomenon can give rise to the so-called "echo chambers" or "filter bubbles" that have user and societal implications. In this paper, we provide a novel theoretical analysis that examines both the role of user dynamics and the behavior of recommender systems, disentangling the echo chamber from the filter bubble effect. In addition, we offer practical solutions to slow down system degeneracy. Our study contributes toward understanding and developing solutions to commonly cited issues in this complex temporal scenario, an area that is still largely unexplored.
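    The feedback loop described above can be illustrated with a toy simulation. This is a hedged sketch, not the paper's model: a greedy recommender always shows the highest-preference item, and each exposure nudges that preference upward, so the preference distribution degenerates toward one item. The function name and drift parameter are assumptions of this illustration.

```python
import random

def simulate_feedback_loop(n_items=5, steps=200, drift=0.05, seed=0):
    """Toy degenerate feedback loop: greedy recommendation reinforces
    the preference for the recommended item at each step."""
    rng = random.Random(seed)
    prefs = [rng.random() for _ in range(n_items)]
    for _ in range(steps):
        top = max(range(n_items), key=lambda i: prefs[i])  # greedy recommendation
        prefs[top] += drift * (1.0 - prefs[top])           # exposure reinforces preference
    total = sum(prefs)
    return [p / total for p in prefs]  # normalized preference distribution

shares = simulate_feedback_loop()
# The repeatedly recommended item comes to dominate the distribution,
# a toy analogue of the degeneracy the abstract refers to.
```

Slowing degeneracy in such a loop typically means breaking the greedy step, for example by occasionally recommending a random item.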

    Algorithmic abstractions of ‘fashion identity’ and the role of privacy with regard to algorithmic personalisation systems in the fashion domain

    This paper delves into the nuances of ‘fashion’ in recommender systems and social media analytics, which shape and define an individual’s perception and self-relationality. Its aim is twofold: first, it supports a different perspective on privacy that focuses on the individual’s process of identity construction, considering the social and personal aspects of ‘fashion’. Second, it underlines the limitations of computational models in capturing the diverse meanings of ‘fashion’, whereby the algorithmic prediction of user preferences is based on individual conscious and unconscious associations with fashion identity. I test both of these claims in the context of current concerns over the impact of algorithmic personalisation systems on individual autonomy and privacy: creating ‘filter bubbles’, nudging the user beyond their conscious awareness, as well as the inherent bias in algorithmic decision-making. We need an understanding of privacy that addresses the inherent reduction of fashion identity to literal attributes and protects individual autonomy in shaping algorithmic approximations of the self.

    Filter Bubbles in Recommender Systems: Fact or Fallacy -- A Systematic Review

    A filter bubble refers to the phenomenon where Internet customization effectively isolates individuals from diverse opinions or materials, resulting in their exposure to only a select set of content. This can lead to the reinforcement of existing attitudes, beliefs, or conditions. In this study, our primary focus is to investigate the impact of filter bubbles in recommender systems. This pioneering research aims to uncover the reasons behind this problem, explore potential solutions, and propose an integrated tool to help users avoid filter bubbles in recommender systems. To achieve this objective, we conduct a systematic literature review on the topic of filter bubbles in recommender systems. The reviewed articles are carefully analyzed and classified, providing valuable insights that inform the development of an integrated approach. Notably, our review reveals evidence of filter bubbles in recommender systems, highlighting several biases that contribute to their existence. Moreover, we propose mechanisms to mitigate the impact of filter bubbles and demonstrate that incorporating diversity into recommendations can potentially help alleviate this issue. The findings of this timely review will serve as a benchmark for researchers working in interdisciplinary fields such as privacy, artificial intelligence ethics, and recommender systems. Furthermore, it will open new avenues for future research in related domains, prompting further exploration and advancement in this critical area.
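    One common way to incorporate diversity into recommendations, as the abstract suggests, is greedy re-ranking in the style of maximal marginal relevance (MMR). Whether the reviewed papers use exactly this formulation is an assumption of this sketch; the function names and the toy similarity are hypothetical.

```python
def mmr_rerank(scores, similarity, k, lam=0.7):
    """Greedily select k items, trading off relevance (scores) against
    similarity to items already selected; higher lam favours relevance."""
    candidates = set(scores)
    selected = []
    while candidates and len(selected) < k:
        def mmr(item):
            max_sim = max((similarity(item, s) for s in selected), default=0.0)
            return lam * scores[item] - (1 - lam) * max_sim
        best = max(candidates, key=mmr)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy example: items sharing a topic are maximally similar.
scores = {"a1": 0.9, "a2": 0.85, "b1": 0.8}
topic = {"a1": "a", "a2": "a", "b1": "b"}
sim = lambda x, y: 1.0 if topic[x] == topic[y] else 0.0
result = mmr_rerank(scores, sim, k=2)  # the diversity penalty lifts "b1" above "a2"
```

Pure relevance ranking would return the two same-topic items; the diversity term surfaces the item from the second topic instead, which is the burst-the-bubble effect the review points to.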

    Popularity Bias as Ethical and Technical Issue in Recommendation: A Survey

    Recommender systems have become omnipresent in our everyday life, helping us make decisions and navigate a digital world full of information. However, only recently have researchers started discovering undesired and harmful effects of automated recommendation and begun questioning how fair and ethical these systems are while influencing our day-to-day decision making and shaping our online behaviour and tastes. In recent research, various biases and phenomena such as filter bubbles and echo chambers have been uncovered among the effects of recommender systems, and rigorous work has started on solving these issues. In this narrative survey, we investigate the emergence and progression of research on one of the potential types of bias in recommender systems, i.e. popularity bias. Many recommender algorithms have been shown to favor already popular items, giving them even more exposure, which can harm fairness and diversity on the platforms using such systems. The problem becomes even more complicated if the object of recommendation is not just products and content but people, their work, and their services. This survey describes the progress in this field of study, highlighting the advancements and identifying the gaps in the research where additional effort and attention are necessary to minimize the harmful effects and to ensure that such systems are built in a fair and ethical way.

    Towards Responsible Media Recommendation

    Reading or viewing recommendations are a common feature on modern media sites. What is shown to consumers as recommendations is nowadays often automatically determined by AI algorithms, typically with the goal of helping consumers discover relevant content more easily. However, the highlighting or filtering of information that comes with such recommendations may lead to undesired effects on consumers or even society, for example, when an algorithm leads to the creation of filter bubbles or amplifies the spread of misinformation. These well-documented phenomena create a need for improved mechanisms for responsible media recommendation, which avoid such negative effects of recommender systems. In this research note, we review the threats and challenges that may result from the use of automated media recommendation technology, and we outline possible steps to mitigate such undesired societal effects in the future.

    Quantifying Biases in Online Information Exposure

    Our consumption of online information is mediated by filtering, ranking, and recommendation algorithms that introduce unintentional biases as they attempt to deliver relevant and engaging content. It has been suggested that our reliance on online technologies such as search engines and social media may limit exposure to diverse points of view and make us vulnerable to manipulation by disinformation. In this paper, we mine a massive dataset of Web traffic to quantify two kinds of bias: (i) homogeneity bias, which is the tendency to consume content from a narrow set of information sources, and (ii) popularity bias, which is the selective exposure to content from top sites. Our analysis reveals different bias levels across several widely used Web platforms. Search exposes users to a diverse set of sources, while social media traffic tends to exhibit high popularity and homogeneity bias. When we focus our analysis on traffic to news sites, we find higher levels of popularity bias, with smaller differences across applications. Overall, our results quantify the extent to which our choices of online systems confine us inside "social bubbles." (To appear in the Journal of the Association for Information Science and Technology, JASIST.)
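    The two biases defined in the abstract can be operationalized in a simple way. The following is an illustrative sketch, not the paper's exact estimators: homogeneity bias is measured here as one minus the normalized Shannon entropy of a user's visited sources, and popularity bias as the traffic share captured by the top sites.

```python
import math
from collections import Counter

def homogeneity_bias(visits):
    """1 - normalized Shannon entropy of source visits:
    0 = perfectly diverse, 1 = a single source."""
    counts = Counter(visits)
    n = sum(counts.values())
    probs = [c / n for c in counts.values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    max_entropy = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return 1.0 - entropy / max_entropy

def popularity_bias(visits, top_k=1):
    """Share of traffic going to the top_k most visited sites."""
    counts = Counter(visits)
    top = sum(c for _, c in counts.most_common(top_k))
    return top / sum(counts.values())

# Toy traffic logs (hypothetical data for illustration only)
social = ["siteA"] * 8 + ["siteB"] * 2                       # concentrated
search = ["siteA", "siteB", "siteC", "siteD", "siteE"] * 2   # diverse
```

On these toy logs, the concentrated stream scores higher on both measures, mirroring the paper's finding that social media traffic shows higher popularity and homogeneity bias than search.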

    A Systematic Review of Computational Methods in and Research Taxonomy of Homophily in Information Systems

    Homophily is both a principle for social group formation among like-minded people and a mechanism for social interactions. Recent years have seen a growing body of management research on homophily, particularly on large-scale social media and digital platforms. However, the predominant traditional qualitative and quantitative methods employed face validity issues and/or are not well suited for big social data. There are scant guidelines for applying computational methods to specific research domains concerning descriptive patterns, explanatory mechanisms, or predictive indicators of homophily. To fill this research gap, this paper offers a structured review of the emerging literature on computational social science approaches to homophily, with a particular emphasis on their relevance, appropriateness, and importance to information systems research. We derive a research taxonomy for homophily and offer methodological reflections and recommendations to help inform future research.