3 research outputs found

    Estimating Error and Bias in Offline Evaluation Results

    Offline evaluations of recommender systems attempt to estimate users’ satisfaction with recommendations using static data from prior user interactions. These evaluations provide researchers and developers with first approximations of the likely performance of a new system and help weed out bad ideas before presenting them to users. However, offline evaluation cannot accurately assess novel, relevant recommendations, because the most novel items were previously unknown to the user; they are therefore missing from the historical data and cannot be judged as relevant. We present a simulation study that estimates the error such missing data causes in commonly used evaluation metrics, in order to assess its prevalence and impact. We find that missing data in the rating or observation process causes the evaluation protocol to systematically misestimate metric values, and in some cases to erroneously conclude that a popularity-based recommender outperforms even a perfect personalized recommender. Substantial breakthroughs in recommendation quality will therefore be difficult to assess with existing offline techniques.
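    To make the mechanism concrete, here is a minimal, self-contained simulation sketch (not the paper's actual protocol): it assumes synthetic binary relevance, a Dirichlet-skewed item popularity, and an observation process in which a truly relevant item reaches the test set with probability that grows with its popularity. All parameters below are illustrative assumptions.

```python
# Hypothetical sketch of popularity-biased missing data in offline evaluation.
# Everything here (popularity prior, observation model, metric) is an
# illustrative assumption, not the paper's exact simulation design.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 500, 200, 10

# Skewed item popularity and true (unobservable) binary relevance,
# denser for popular items.
item_pop = rng.dirichlet(np.ones(n_items) * 0.3)
true_rel = rng.random((n_users, n_items)) < (0.05 + 3 * item_pop)

# Observation process (missing not at random): a truly relevant item only
# appears in the test data with probability that grows with popularity.
obs_prob = np.minimum(1.0, 20 * item_pop)
observed = true_rel & (rng.random((n_users, n_items)) < obs_prob)

def precision_at_k(scores, rel, k):
    """Mean fraction of each user's top-k recommendations judged relevant."""
    topk = np.argsort(-scores, axis=1)[:, :k]
    return np.take_along_axis(rel, topk, axis=1).mean()

# A "perfect" personalized recommender ranks by true relevance (tiny noise
# breaks ties); a popularity recommender gives every user the same ranking.
perfect = true_rel + 1e-6 * rng.random((n_users, n_items))
popular = np.tile(item_pop, (n_users, 1))

for name, scores in [("perfect", perfect), ("popularity", popular)]:
    print(f"{name:>10}  true P@10 = {precision_at_k(scores, true_rel, k):.3f}"
          f"  observed P@10 = {precision_at_k(scores, observed, k):.3f}")
```

    Under the true labels the perfect recommender dominates; under the popularity-biased observations the gap shrinks, and with a sufficiently skewed observation process the popularity baseline can appear to win, which is the mis-estimation the abstract describes.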

    Unifying Explicit and Implicit Feedback for Rating Prediction and Ranking Recommendation Tasks

    The two main tasks addressed by collaborative filtering approaches are rating prediction and ranking. Rating prediction models leverage explicit feedback (e.g. ratings) and aim to estimate the rating a user would assign to an unseen item. In contrast, ranking models leverage implicit feedback (e.g. clicks) in order to provide the user with a personalized ranked list of recommended items. Several previous approaches learn from both explicit and implicit feedback to optimize ranking or rating prediction at the level of the recommendation algorithm. Yet we argue that these two tasks are not completely separate, but are part of a unified process: a user first interacts with a set of items and then might decide to provide explicit feedback on a subset of them. We propose to bridge the gap between rating prediction and ranking through a novel weak supervision approach that unifies explicit and implicit feedback datasets. The key aspects of the proposed model are that (1) it is applied at the level of data pre-processing and (2) it increases the representation of less popular items in recommendations while maintaining reasonable recommendation performance. Our experimental results, on six datasets covering different types of heterogeneous user interactions and using a wide range of evaluation metrics, show that our proposed approach can effectively combine explicit and implicit feedback and improve the effectiveness of the baseline explicit model on the ranking task by covering a broader range of long-tail items.
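    Since the unification happens at data pre-processing time, a compact way to picture it is a function that turns implicit interactions into weakly supervised pseudo-ratings and merges them, down-weighted, with the explicit ratings. The sketch below is a hypothetical illustration of that idea; the pseudo-rating rule (a user-mean prior nudged by capped click counts) and the confidence weights are assumptions, not the paper's actual scheme.

```python
# Hypothetical pre-processing step that unifies explicit and implicit feedback.
# The pseudo-label rule and the 0.25 confidence weight are illustrative
# assumptions, not the weighting scheme proposed in the paper.
from collections import defaultdict

def unify_feedback(explicit, implicit, max_rating=5.0, global_mean=3.5):
    """Merge explicit (user, item, rating) triples and implicit
    (user, item, click_count) triples into (user, item, rating, weight) rows."""
    user_sum, user_cnt = defaultdict(float), defaultdict(int)
    for u, i, r in explicit:
        user_sum[u] += r
        user_cnt[u] += 1

    rows = [(u, i, r, 1.0) for u, i, r in explicit]   # full-confidence labels
    labeled = {(u, i) for u, i, _ in explicit}
    for u, i, clicks in implicit:
        if (u, i) in labeled:
            continue                                   # explicit label wins
        prior = user_sum[u] / user_cnt[u] if user_cnt[u] else global_mean
        pseudo = min(max_rating, prior + 0.5 * min(clicks, 3))  # weak label
        rows.append((u, i, pseudo, 0.25))              # down-weighted
    return rows

# Toy usage: one user with ratings plus clicks, one user with clicks only.
explicit = [("u1", "i1", 4.0), ("u1", "i2", 2.0)]
implicit = [("u1", "i3", 5), ("u2", "i1", 1)]
for row in unify_feedback(explicit, implicit):
    print(row)
```

    Because clicked-but-unrated items typically sit further into the long tail than rated ones, folding them in as weak labels is one plausible way such a pre-processing step could broaden item coverage, as the abstract reports.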