
    Evaluating collaborative filtering over time

    Recommender systems have become essential tools for users to navigate the plethora of content in the online world. Collaborative filtering, a broad term for the use of a variety or combination of machine learning algorithms operating on user ratings, lies at the heart of recommender systems' success. These algorithms have traditionally been studied from the point of view of how well they can predict users' ratings and how precisely they rank content; state-of-the-art approaches are continuously improved in these respects. However, a rift has grown between how filtering algorithms are investigated and how they will operate when deployed in real systems. Deployed systems will continuously be queried for personalised recommendations; in practice, this implies that system administrators will iteratively retrain their algorithms in order to include the latest ratings. Collaborative filtering research does not take this into account: algorithms are improved and compared to each other from a static viewpoint, while they will ultimately be deployed in a dynamic setting. Given this scenario, two new problems emerge: current filtering algorithms are neither (a) designed nor (b) evaluated as algorithms that must account for time. This thesis addresses the divergence between research and practice by examining how collaborative filtering algorithms behave over time. Our contributions include:
    1. A fine-grained analysis of temporal changes in rating data and user/item similarity graphs that clearly demonstrates how recommender system data is dynamic and constantly changing.
    2. A novel methodology and time-based metrics for evaluating collaborative filtering over time, both in terms of accuracy and the diversity of top-N recommendations.
    3. A set of hybrid algorithms that improve collaborative filtering in a range of different scenarios, including temporal-switching algorithms that aim to promote either accuracy or diversity, parameter update methods to improve temporal accuracy, and re-ranking of a subset of users' recommendations in order to increase diversity.
    4. A set of temporal monitors that secure collaborative filtering against a wide range of temporal attacks by flagging anomalous rating patterns.
    We have implemented and extensively evaluated the above using large-scale sets of user ratings; we further discuss how this novel methodology provides insight into dimensions of recommender systems that were previously unexplored. We conclude that investigating collaborative filtering from a temporal perspective is not only better suited to the context in which recommender systems are deployed, but also opens a number of future research opportunities.
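    To make the retraining setup concrete, the following is a minimal sketch (not the thesis' actual methodology) of time-windowed evaluation with iterative retraining: ratings are ordered by timestamp, a model is retrained at each window boundary, and accuracy is measured on the following window. The (user, item, rating, timestamp) tuple format and the item-mean baseline are illustrative assumptions.

```python
# Minimal sketch of time-windowed evaluation with iterative retraining.
# Assumes ratings are (user, item, rating, timestamp) tuples; the item-mean
# "model" is a placeholder for any collaborative filtering algorithm.
from collections import defaultdict

def item_means(train_ratings):
    """Toy baseline: predict each item's mean rating seen so far."""
    sums, counts = defaultdict(float), defaultdict(int)
    for _, item, rating, _ in train_ratings:
        sums[item] += rating
        counts[item] += 1
    return {i: sums[i] / counts[i] for i in sums}

def temporal_rmse(ratings, n_windows=5):
    """Retrain at every window boundary and test on the next window."""
    ratings = sorted(ratings, key=lambda r: r[3])            # order by timestamp
    size = max(1, len(ratings) // n_windows)
    scores = []
    for w in range(1, n_windows):
        train = ratings[: w * size]
        test = ratings[w * size:(w + 1) * size]
        model = item_means(train)
        errors = [(model.get(i, 3.0) - r) ** 2 for _, i, r, _ in test]
        if errors:
            scores.append((sum(errors) / len(errors)) ** 0.5)  # RMSE for this window
    return scores                                             # one score per time step
```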

    How Algorithmic Confounding in Recommendation Systems Increases Homogeneity and Decreases Utility

    Recommendation systems are ubiquitous and impact many domains; they have the potential to influence product consumption, individuals' perceptions of the world, and life-altering decisions. These systems are often evaluated or trained with data from users already exposed to algorithmic recommendations; this creates a pernicious feedback loop. Using simulations, we demonstrate how training on data confounded in this way homogenizes user behavior without increasing utility.
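    As a toy illustration of this feedback loop (not the paper's simulator): when items are surfaced and accepted in proportion to how often they have already been consumed, consumption concentrates on a few items; when users instead follow their own tastes, it does not. All names and parameters below are hypothetical.

```python
# Toy feedback-loop simulation: popularity-biased recommendation vs. organic choice.
import random
from collections import Counter

def simulate(n_users=200, n_items=50, rounds=20, confounded=True, seed=0):
    rng = random.Random(seed)
    tastes = [rng.sample(range(n_items), 5) for _ in range(n_users)]  # "true" preferences
    counts = Counter({i: 1 for i in range(n_items)})                  # observed interactions
    for _ in range(rounds):
        for u in range(n_users):
            if confounded:
                # items are recommended (and accepted) in proportion to past consumption
                weights = [counts[i] for i in range(n_items)]
                choice = rng.choices(range(n_items), weights=weights)[0]
            else:
                choice = rng.choice(tastes[u])                        # user follows own taste
            counts[choice] += 1
    # concentration of consumption: share held by the ten most-consumed items
    return sum(c for _, c in counts.most_common(10)) / sum(counts.values())

print("confounded top-10 share:", round(simulate(confounded=True), 2))
print("organic    top-10 share:", round(simulate(confounded=False), 2))
```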

    Benchmarking News Recommendations in a Living Lab

    Most user-centric studies of information access systems in the literature suffer from unrealistic settings or a limited number of participating users. To address this issue, the idea of a living lab has been promoted. Living labs allow us to evaluate research hypotheses with a large number of users who satisfy their information needs in a real context. In this paper, we introduce a living lab on news recommendation in real time. The living lab was first organized as the News Recommendation Challenge at ACM RecSys'13 and then as the campaign-style evaluation lab NEWSREEL at CLEF'14. Within this lab, researchers were asked to provide news article recommendations to millions of users in real time. Unlike in user studies performed in a laboratory, these users follow their own agenda; consequently, laboratory bias on their behavior can be neglected. We outline the living lab scenario and the experimental setup of the two benchmarking events, and argue that this living lab can serve as a reference point for the implementation of living labs for the evaluation of information access systems.

    Improving Reachability and Navigability in Recommender Systems

    In this paper, we study recommender systems from a network perspective and investigate recommendation networks, where nodes are items (e.g., movies) and edges are constructed from top-N recommendations (e.g., related movies). In particular, we focus on evaluating the reachability and navigability of recommendation networks and investigate the following questions: (i) How well do recommendation networks support navigation and exploratory search? (ii) What is the influence of parameters, in particular the recommendation algorithm and the number of recommendations shown, on reachability and navigability? (iii) How can reachability and navigability be improved in these networks? We tackle these questions by first evaluating the reachability of recommendation networks through their structural properties, and then evaluating navigability by simulating three different models of information-seeking scenarios. We find that with standard algorithms, recommender systems are not well suited to navigation and exploration, and we propose methods for modifying recommendations to improve this. Our work extends one-click-based evaluations of recommender systems towards multi-click analysis (i.e., sequences of dependent clicks) and presents a general, comprehensive approach to evaluating the navigability of arbitrary recommendation networks.
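    One way to picture the reachability part of this analysis is to treat each top-N recommendation as a directed edge between items and measure the average fraction of the catalogue reachable from each item by breadth-first traversal. The sketch below assumes a hypothetical top_n mapping from items to recommendation lists; it illustrates the idea rather than the paper's exact metrics.

```python
# Sketch: reachability of a recommendation network built from top-N lists.
from collections import deque

def reachability(top_n):
    """Average fraction of the catalogue reachable by following recommendation links."""
    items = list(top_n)
    fractions = []
    for start in items:
        seen, queue = {start}, deque([start])
        while queue:                                   # breadth-first walk over rec links
            for nxt in top_n.get(queue.popleft(), []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        fractions.append(len(seen) / len(items))
    return sum(fractions) / len(items)

# Tiny example catalogue: item "c" recommends others but is never recommended itself.
print(reachability({"a": ["b"], "b": ["a"], "c": ["a"]}))
```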

    The Method of Constructing Recommendations Online on the Temporal Dynamics of User Interests Using Multilayer Graph

    The problem of constructing a rating list of objects online in a recommender system is considered. A method is proposed for constructing recommendations online by representing the input data as a multilayer graph that captures changes in user interests over time. The method targets situations with implicit feedback from the user, where the input data are a sequence of user choice records, each with a time stamp. The method comprises two phases: pre-filtering of the data and building recommendations by collaborative filtering over the selected data. During pre-filtering, the input data are split into a sequence of fixed-length, non-overlapping time intervals; users with similar interests, and the records of objects those users have chosen, are selected over a finite, contiguous subset of these intervals. The second phase operates on the pre-filtered subset, which reduces the computational cost of generating recommendations. By taking into account how the target user's interests change over time, the method improves the effectiveness of the rating list offered to that user.
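    A rough sketch of the pre-filtering phase under the assumptions stated above (implicit feedback stored as (user, item, timestamp) records, fixed-length intervals, and a simple shared-items notion of similar interests); the function names and interval length are illustrative, not the paper's specification.

```python
# Sketch of temporal pre-filtering before collaborative filtering.
from collections import defaultdict

def prefilter(records, target_user, interval=86400, last_k=7):
    """Return the subset of (user, item, timestamp) records to feed into CF."""
    if not records:
        return []
    t0 = min(t for _, _, t in records)
    buckets = defaultdict(list)
    for user, item, t in records:
        buckets[(t - t0) // interval].append((user, item, t))   # fixed-length intervals
    recent = sorted(buckets)[-last_k:]                          # last k contiguous intervals
    window = [rec for b in recent for rec in buckets[b]]
    target_items = {i for u, i, _ in window if u == target_user}
    similar = {u for u, i, _ in window if i in target_items}    # users sharing recent items
    return [rec for rec in window if rec[0] in similar]         # empty if target inactive
```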

    Trust-Networks in Recommender Systems

    Similarity-based recommender systems suffer from significant limitations, such as data sparsity and poor scalability. The goal of this research is to improve recommender systems by incorporating the social concepts of trust and reputation. By introducing a trust model, we can improve the quality and accuracy of the recommended items. Three trust-based recommendation strategies are presented and evaluated against the popular MovieLens [8] dataset.
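    As a simplified example of one common trust-based strategy (not necessarily one of the three evaluated here), a rating can be predicted as the trust-weighted average of the ratings given by the target user's trusted neighbours; the trust and ratings inputs below are hypothetical.

```python
# Sketch: trust-weighted rating prediction.
def trust_weighted_prediction(user, item, trust, ratings, default=3.0):
    """trust[user] -> {neighbour: weight}; ratings[neighbour] -> {item: rating}."""
    num = den = 0.0
    for neighbour, weight in trust.get(user, {}).items():
        r = ratings.get(neighbour, {}).get(item)
        if r is not None:
            num += weight * r          # weight each neighbour's rating by trust
            den += weight
    return num / den if den else default

trust = {"alice": {"bob": 0.9, "carol": 0.3}}
ratings = {"bob": {"matrix": 5}, "carol": {"matrix": 2}}
print(trust_weighted_prediction("alice", "matrix", trust, ratings))  # 4.25
```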

    Reducing offline evaluation bias of collaborative filtering algorithms

    Recommendation systems have been integrated into the majority of large online systems to filter and rank information according to user profiles. They thus influence the way users interact with the system and, as a consequence, bias the evaluation of a recommendation algorithm's performance when it is computed from historical data (offline evaluation). This paper presents a new application of weighted offline evaluation to reduce this bias for collaborative filtering algorithms. Comment: European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), Apr 2015, Bruges, Belgium, pp. 137-142. Proceedings of the 23rd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2015).
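    A minimal sketch in the spirit of a weighted offline evaluation, using inverse-popularity weights as a crude stand-in for exposure propensities; the paper's actual weighting scheme may differ.

```python
# Sketch: offline RMSE where each test rating is down-weighted by its item's exposure.
from collections import Counter

def weighted_rmse(test, predict, all_interactions):
    """test: list of (user, item, rating); predict(user, item) -> estimated rating."""
    popularity = Counter(i for _, i, _ in all_interactions)
    total = sum(popularity.values())
    num = den = 0.0
    for user, item, rating in test:
        w = total / popularity[item] if popularity[item] else 1.0  # inverse-popularity weight
        num += w * (predict(user, item) - rating) ** 2
        den += w
    return (num / den) ** 0.5
```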