
    A Distributed and Accountable Approach to Offline Recommender Systems Evaluation

    Different software tools have been developed with the purpose of performing offline evaluations of recommender systems. However, the results obtained with these tools may not be directly comparable because of subtle differences in the experimental protocols and metrics. Furthermore, it is difficult to analyze several algorithms under the same experimental conditions without disclosing their implementation details. For these reasons, we introduce RecLab, an open source software for evaluating recommender systems in a distributed fashion. By relying on consolidated web protocols, we created RESTful APIs for training and querying recommenders remotely. In this way, it is possible to easily integrate into the same toolkit algorithms realized with different technologies. In detail, the experimenter can perform an evaluation by simply visiting a web interface provided by RecLab. The framework will then interact with all the selected recommenders and it will compute and display a comprehensive set of measures, each representing a different metric. The results of all experiments are permanently stored and publicly available in order to support accountability and comparative analyses. Comment: REVEAL 2018 Workshop on Offline Evaluation for Recommender System
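
The distributed design described above could be exercised with a client along these lines. This is only a hedged sketch: the endpoint paths, payload shapes, and base URL are assumptions for illustration, not RecLab's actual REST API. The requests are built but not sent, so no server is required.

```python
# Hypothetical sketch of training and querying a recommender over REST,
# in the spirit of RecLab's distributed approach. Endpoints and JSON
# payloads are assumptions, not RecLab's documented API.
import json
from urllib.request import Request

BASE_URL = "http://localhost:8080"  # assumed address of a recommender service

def build_train_request(ratings):
    """Build (but do not send) a POST request uploading training ratings."""
    body = json.dumps({"ratings": ratings}).encode("utf-8")
    return Request(f"{BASE_URL}/model/train", data=body,
                   headers={"Content-Type": "application/json"}, method="POST")

def build_recommend_request(user_id, k=10):
    """Build a GET request asking for the top-k recommendations for one user."""
    return Request(f"{BASE_URL}/model/recommendations?user={user_id}&k={k}",
                   method="GET")

train_req = build_train_request([{"user": 1, "item": 42, "rating": 4.0}])
query_req = build_recommend_request(user_id=1, k=5)
print(train_req.get_method(), train_req.full_url)
print(query_req.get_method(), query_req.full_url)
```

Because the recommenders only need to answer HTTP requests, each one can be implemented in any technology, which is the point of the RESTful design described in the abstract.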

    Sequeval: A Framework to Assess and Benchmark Sequence-based Recommender Systems

    In this paper, we present sequeval, a software tool capable of performing the offline evaluation of a recommender system designed to suggest a sequence of items. A sequence-based recommender is trained considering the sequences already available in the system, and its purpose is to generate a personalized sequence starting from an initial seed. This tool automatically evaluates the sequence-based recommender considering a comprehensive set of eight different metrics adapted to the sequential scenario. sequeval has been developed following best practices for software extensibility. For this reason, it is possible to easily integrate and evaluate novel recommendation techniques. sequeval is publicly available as an open source tool and it aims to become a focal point for the community to assess sequence-based recommender systems. Comment: REVEAL 2018 Workshop on Offline Evaluation for Recommender System
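
One of the metrics adapted to the sequential scenario could look like the following sketch: precision of a generated sequence against the items the user actually consumed. The function name and signature are illustrative assumptions, not sequeval's actual API.

```python
# Illustrative sequence-adapted precision, in the spirit of sequeval's
# metrics: the fraction of recommended items that appear in the user's
# true sequence. Not sequeval's actual implementation.
def sequence_precision(generated, ground_truth):
    """Fraction of items in the generated sequence found in the true sequence."""
    if not generated:
        return 0.0
    truth = set(ground_truth)
    hits = sum(1 for item in generated if item in truth)
    return hits / len(generated)

print(sequence_precision(["a", "b", "c", "d"], ["b", "d", "e"]))  # 0.5
```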

    Visualizing ratings in recommender system datasets

    The numerical outcome of an offline experiment involving different recommender systems should be interpreted while also considering the main characteristics of the available rating datasets. However, the metrics usually exploited for comparing such datasets, like sparsity and entropy, are not informative enough to reliably capture all their peculiarities. In this paper, we propose a qualitative approach for visualizing different collections of user ratings in an intuitive and comprehensible way, independently of any specific recommendation algorithm. Thanks to graphical summaries of the training data, it is possible to better understand the behaviour of different recommender systems exploiting a given dataset. Furthermore, we introduce RS-viz, a Web-based tool that implements the described method and that can easily create an interactive 3D scatter plot starting from any collection of user ratings. To validate the effectiveness of the proposed approach, we compared the results obtained during an offline evaluation campaign with the corresponding visualizations generated from the HetRec LastFM dataset.
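
A 3D scatter plot of a rating dataset needs one point per user, computed directly from the raw ratings. The sketch below shows one way such points could be derived; the three chosen axes (count, mean, and standard deviation of each user's ratings) are an assumption for illustration, not necessarily the axes used by RS-viz.

```python
# Per-user summary statistics that could feed a 3D scatter plot of a
# rating dataset, independently of any recommendation algorithm. The
# choice of axes is an illustrative assumption, not RS-viz's method.
from collections import defaultdict
from statistics import mean, pstdev

def scatter_points(ratings):
    """Map (user, item, rating) triples to one (count, mean, std) point per user."""
    by_user = defaultdict(list)
    for user, _item, value in ratings:
        by_user[user].append(value)
    return {u: (len(v), mean(v), pstdev(v)) for u, v in by_user.items()}

data = [(1, "a", 4), (1, "b", 2), (2, "a", 5)]
points = scatter_points(data)
for user, (count, avg, spread) in points.items():
    print(user, count, avg, spread)
```

Each resulting triple can then be plotted as a point, so dense, generous, or erratic raters become visually separable clusters.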

    All you need is ratings: A clustering approach to synthetic rating datasets generation

    The public availability of collections containing user preferences is of vital importance for performing offline evaluations in the field of recommender systems. However, the number of rating datasets is limited because of the costs required for their creation and the fear of violating the privacy of the users by sharing them. For this reason, numerous research efforts have investigated the creation of synthetic collections of ratings using generative approaches. Nevertheless, these datasets are usually not reliable enough for conducting an evaluation campaign. In this paper, we propose a method for creating synthetic datasets with a configurable number of users that mimic the characteristics of already existing ones. We empirically validated the proposed approach by exploiting the synthetic datasets to evaluate different recommenders and by comparing the results with those obtained using real datasets.
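
The general idea of generating synthetic users that mimic an existing dataset can be sketched as follows: group real users into clusters by their rating behaviour, then sample new users from each cluster's empirical distribution. Both the clustering rule (bucketing by mean rating) and the resampling scheme here are simplified assumptions for illustration, not the paper's exact method.

```python
# Toy sketch of clustering-based synthetic rating generation: cluster
# real users by their mean rating, then resample rating values inside a
# randomly chosen cluster. A simplified stand-in, not the paper's method.
import random
from statistics import mean

def cluster_users(ratings_by_user, n_clusters=3):
    """Assign each user's rating profile to a cluster by their average rating (1..5)."""
    clusters = {c: [] for c in range(n_clusters)}
    for _user, values in ratings_by_user.items():
        c = min(int(mean(values)) * n_clusters // 6, n_clusters - 1)
        clusters[c].append(values)
    return clusters

def sample_synthetic_users(clusters, n_users, rng):
    """Draw synthetic users by resampling rating values within a random cluster."""
    non_empty = [profiles for profiles in clusters.values() if profiles]
    synthetic = []
    for _ in range(n_users):
        profile = rng.choice(rng.choice(non_empty))
        synthetic.append([rng.choice(profile) for _ in profile])
    return synthetic

rng = random.Random(42)
real = {1: [5, 4, 5], 2: [1, 2, 1], 3: [3, 3, 4]}
fake = sample_synthetic_users(cluster_users(real), n_users=2, rng=rng)
print(fake)
```

The number of synthetic users is a free parameter, matching the configurable dataset size mentioned in the abstract.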

    Sequeval: an offline evaluation framework for sequence-based recommender systems

    Recommender systems have gained a lot of popularity due to their large adoption in various industries such as entertainment and tourism. Numerous research efforts have focused on formulating and advancing the state of the art of systems that recommend the right set of items to the right person. However, these recommender systems are hard to compare since the published evaluation results are computed on diverse datasets and obtained using different methodologies. In this paper, we researched and prototyped an offline evaluation framework called Sequeval that is designed to evaluate recommender systems capable of suggesting sequences of items. We provide a mathematical definition of such sequence-based recommenders, a methodology for performing their evaluation, and the implementation details of eight metrics. We report the lessons learned using this framework for assessing the performance of four baselines and two recommender systems based on Conditional Random Fields (CRF) and Recurrent Neural Networks (RNN), considering two different datasets. Sequeval is publicly available and it aims to become a focal point for researchers and practitioners when experimenting with sequence-based recommender systems, providing comparable and objective evaluation results.
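
The kind of recommender Sequeval targets, one that extends an initial seed into a personalized sequence, can be sketched with a minimal Markov-style baseline: learn the most frequent successor of each item from the training sequences, then follow those successors from the seed. This baseline is an illustrative assumption, not one of the four baselines or two systems evaluated in the paper.

```python
# Minimal sketch of a sequence-based recommender in the sense evaluated
# by Sequeval: a most-frequent-successor (first-order Markov) baseline
# that generates a sequence from an initial seed. Illustrative only.
from collections import Counter, defaultdict

def fit_successors(sequences):
    """For every item, find the item that most often follows it in training data."""
    followers = defaultdict(Counter)
    for seq in sequences:
        for current, nxt in zip(seq, seq[1:]):
            followers[current][nxt] += 1
    return {item: counts.most_common(1)[0][0] for item, counts in followers.items()}

def generate(seed, successors, length):
    """Extend the seed into a sequence by repeatedly taking the top successor."""
    sequence = [seed]
    while len(sequence) < length and sequence[-1] in successors:
        sequence.append(successors[sequence[-1]])
    return sequence

train = [["a", "b", "c"], ["a", "b", "d"], ["b", "c", "a"]]
model = fit_successors(train)
print(generate("a", model, length=4))  # ['a', 'b', 'c', 'a']
```

An evaluation framework like Sequeval would then score such generated sequences against held-out user sequences with its eight metrics.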

    Mining micro-influencers from social media posts

    Micro-influencers have triggered the interest of commercial brands, public administrations, and other stakeholders because of their demonstrated capability of sensitizing people within their close reach. However, due to their lower visibility on social media platforms, they are challenging to identify. This work proposes an approach to automatically detect micro-influencers and to highlight their personality traits and community values by computationally analyzing their writings. We introduce two learning methods to retrieve Five Factor Model and Basic Human Values scores. These scores are then used as feature vectors by a Support Vector Machines classifier. We define a set of rules to create a micro-influencer gold standard dataset of more than two million tweets and we compare our approach with three baseline classifiers. The experimental results favor recall, meaning that the approach is inclusive in the identification.
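
The feature representation described above, personality and value scores concatenated into one vector per account, could be assembled as in the sketch below. The score keys and the toy linear decision rule are illustrative assumptions; in the paper the classifier consuming these vectors is a Support Vector Machine.

```python
# Sketch of the feature-vector construction described in the abstract:
# Five Factor Model scores plus Basic Human Values scores, concatenated
# in a fixed order. The stand-in linear rule below only illustrates how
# a trained classifier (an SVM in the paper) would consume the vector.
FFM_TRAITS = ["openness", "conscientiousness", "extraversion",
              "agreeableness", "neuroticism"]
BHV_VALUES = ["self-direction", "achievement", "benevolence",
              "universalism", "security"]  # assumed subset of value keys

def feature_vector(ffm_scores, bhv_scores):
    """Concatenate the two score dictionaries into a fixed-order feature vector."""
    return [ffm_scores[t] for t in FFM_TRAITS] + [bhv_scores[v] for v in BHV_VALUES]

def is_micro_influencer(vector, weights, bias=0.0):
    """Stand-in linear classifier: a trained SVM would supply weights and bias."""
    score = sum(w * x for w, x in zip(weights, vector)) + bias
    return score > 0

ffm = {t: 0.5 for t in FFM_TRAITS}
bhv = {v: 0.5 for v in BHV_VALUES}
vec = feature_vector(ffm, bhv)
print(len(vec), is_micro_influencer(vec, weights=[0.1] * 10, bias=-0.4))
```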