
    SNA-Based Recommendation in Professional Learning Environments

    Recommender systems can provide effective means to support self-organization and networking in professional learning environments. In this paper, we leverage social network analysis (SNA) methods to improve interest-based recommendation in professional learning networks. We discuss two approaches for interest-based recommendation using SNA and compare them with conventional collaborative filtering (CF)-based recommendation methods. The user evaluation results based on the ResQue framework confirm that SNA-based CF recommendation outperforms traditional CF methods in terms of coverage and thus can provide an effective solution to the sparsity and cold-start problems in recommender systems.
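    The abstract does not spell out the exact algorithm, but a minimal sketch of the general idea — drawing a user's collaborative-filtering neighbourhood from the social graph of the learning network rather than from rating similarity — might look as follows. The graph representation, scoring rule, and function names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's exact method): use a user's social-network
# neighbourhood instead of rating-similarity neighbours when aggregating item interest.
from collections import defaultdict

def sna_cf_recommend(social_edges, user_items, target_user, top_n=5):
    """Recommend items favoured by the target user's direct network neighbours.

    social_edges : iterable of (user_a, user_b) connections in the learning network
    user_items   : dict user -> set of items the user has interacted with
    """
    # Build an undirected adjacency list from the social graph.
    neighbours = defaultdict(set)
    for a, b in social_edges:
        neighbours[a].add(b)
        neighbours[b].add(a)

    seen = user_items.get(target_user, set())
    scores = defaultdict(int)
    # Count how many network neighbours interacted with each item the target has not seen.
    for peer in neighbours[target_user]:
        for item in user_items.get(peer, set()):
            if item not in seen:
                scores[item] += 1

    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```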

    Comparative recommender system evaluation: Benchmarking recommendation frameworks

    This is the author's version of the work; the definitive Version of Record was published in RecSys '14: Proceedings of the 8th ACM Conference on Recommender Systems, http://dx.doi.org/10.1145/2645710.2645746. Recommender systems research is often based on comparisons of predictive accuracy: the better the evaluation scores, the better the recommender. However, it is difficult to compare results from different recommender systems due to the many options in the design and implementation of an evaluation strategy. Additionally, algorithmic implementations can diverge from the standard formulation due to manual tuning and modifications that work better in some situations. In this work we compare common recommendation algorithms as implemented in three popular recommendation frameworks. To provide a fair comparison, we have complete control of the evaluation dimensions being benchmarked: dataset, data splitting, evaluation strategies, and metrics. We also include results using the internal evaluation mechanisms of these frameworks. Our analysis points to large differences in recommendation accuracy across frameworks and strategies, i.e. the same baselines may perform orders of magnitude better or worse across frameworks. Our results show the necessity of clear guidelines when reporting the evaluation of recommender systems to ensure reproducibility and comparability of results. This work was partly carried out during the tenure of an ERCIM "Alain Bensoussan" Fellowship Programme. The research leading to these results received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreements no. 246016 and no. 610594, and the Spanish Ministry of Science and Innovation (TIN2013-47090-C3-2).
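    As a minimal illustration of the kind of controlled protocol the paper argues for, the sketch below fixes one deterministic data split and one shared metric that could be applied to every framework's output, so that differences in scores reflect the algorithms rather than the evaluation setup. The function names, the seed, and the choice of precision@k are illustrative assumptions, not the paper's benchmarking code.

```python
# Minimal sketch of a controlled comparison: one fixed train/test split and one
# shared metric applied to every framework's recommendations.
import random

def fixed_split(interactions, test_ratio=0.2, seed=42):
    """Deterministically split (user, item) pairs so every framework sees the same data."""
    rng = random.Random(seed)
    shuffled = sorted(interactions)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]

def precision_at_k(recommended, relevant, k=10):
    """Shared metric: fraction of the top-k recommended items that are relevant."""
    hits = sum(1 for item in recommended[:k] if item in relevant)
    return hits / k
```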

    The Use of Clustering Methods in Memory-Based Collaborative Filtering for Ranking-Based Recommendation Systems

    This research explores the application of clustering techniques and frequency normalization in collaborative filtering to enhance the performance of ranking-based recommendation systems. Collaborative filtering is a popular approach in recommendation systems that relies on user-item interaction data. In ranking-based recommendation systems, the goal is to provide users with a personalized list of items, sorted by their predicted relevance. In this study, we propose a novel approach that combines clustering and frequency normalization techniques. Clustering groups together users or items that share similar characteristics or features, which improves recommendation accuracy by uncovering hidden patterns within the data. Frequency normalization is used to mitigate potential biases in user-item interaction data, ensuring fair and unbiased recommendations. The research methodology involves data preprocessing, clustering algorithm selection, frequency normalization techniques, and evaluation metrics. Experimental results demonstrate that the proposed method outperforms traditional collaborative filtering approaches in terms of ranking accuracy and recommendation quality. This approach has the potential to enhance recommendation systems across various domains, including e-commerce, content recommendation, and personalized advertising.
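    A minimal sketch of the pipeline described here — frequency normalisation followed by user clustering and within-cluster ranking — could look like the following. The use of k-means, the normalisation scheme, and all parameter values are illustrative assumptions rather than the authors' exact method.

```python
# Illustrative sketch: normalise interaction counts, cluster users, rank items
# by their popularity inside the target user's cluster.
import numpy as np
from sklearn.cluster import KMeans

def cluster_and_rank(interaction_matrix, target_user, n_clusters=5, top_n=10):
    """interaction_matrix: users x items array of raw interaction counts."""
    # Frequency normalisation: scale each user's counts so heavy users do not dominate.
    row_sums = interaction_matrix.sum(axis=1, keepdims=True)
    normalised = interaction_matrix / np.maximum(row_sums, 1)

    # Group similar users; recommendations are computed within the target's cluster only.
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(normalised)
    peers = normalised[labels == labels[target_user]]

    # Score items by their average normalised frequency among cluster peers,
    # masking out items the target user has already interacted with.
    scores = peers.mean(axis=0)
    scores[interaction_matrix[target_user] > 0] = -np.inf
    return np.argsort(scores)[::-1][:top_n]
```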

    How to Perform Reproducible Experiments in the ELLIOT Recommendation Framework: Data Processing, Model Selection, and Performance Evaluation

    Recommender systems have been shown to be an effective way to alleviate the over-choice problem and provide accurate and tailored recommendations. However, the impressive number of proposed recommendation algorithms, splitting strategies, evaluation protocols, metrics, and tasks has made rigorous experimental evaluation particularly challenging. ELLIOT is a comprehensive recommendation framework that aims to run and reproduce an entire experimental pipeline by processing a simple configuration file. The framework loads, filters, and splits the data considering a vast set of strategies. Then, it optimizes hyperparameters for several recommendation algorithms, selects the best models, compares them with the baselines, computes metrics spanning from accuracy to beyond-accuracy, bias, and fairness, and conducts statistical analysis. The aim is to provide researchers with a tool to ease all the experimental evaluation phases (and make them reproducible), from data reading to results collection. ELLIOT is freely available on GitHub at https://github.com/sisinflab/elliot.
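    As an illustration of the configuration-driven model selection such a framework automates, the sketch below enumerates a small hyperparameter grid from a single config object and keeps the best-scoring candidate. The config keys and the toy scoring function are assumptions made purely for illustration; they are not ELLIOT's actual configuration schema or API.

```python
# Illustrative sketch of config-driven hyperparameter search and model selection.
from itertools import product

config = {
    "models": {
        "ItemKNN": {"neighbors": [10, 50, 100], "similarity": ["cosine", "jaccard"]},
        "MostPopular": {},
    },
    "validation_metric": "nDCG@10",
}

def validation_score(model_name, params):
    """Stand-in for fitting the model and scoring it on a validation split;
    returns a simple placeholder value so the sketch runs end to end."""
    return (len(model_name) + sum(len(str(v)) for v in params.values())) % 10 / 10.0

def select_best(cfg):
    best = None
    for model_name, grid in cfg["models"].items():
        keys = list(grid)
        # Enumerate every combination in the hyperparameter grid (grid search);
        # an empty grid yields a single parameterless candidate.
        for values in product(*(grid[k] for k in keys)):
            params = dict(zip(keys, values))
            score = validation_score(model_name, params)
            if best is None or score > best[2]:
                best = (model_name, params, score)
    return best

print(select_best(config))
```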

    Using citation-context to reduce topic drifting on pure citation-based recommendation

    Recent works in the area of academic recommender systems have demonstrated the effectiveness of co-citation and citation closeness in related-document recommendations. However, documents recommended by such systems may drift away from the main theme of the query document. In this work, we investigate whether incorporating the textual information in close proximity to a citation, as well as the citation position, could reduce such drifting and further increase the performance of the recommender system. To investigate this, we run experiments with several recommendation methods on a newly created and now publicly available dataset containing 53 million unique citation-based records. We then conduct a user-based evaluation with domain-knowledgeable participants. Our results show that a new method based on the combination of Citation Proximity Analysis (CPA), topic modelling, and word embeddings achieves more than 20% improvement in Normalised Discounted Cumulative Gain (nDCG) compared to CPA.
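    A minimal sketch of how citation proximity and citation-context text could be combined into a single ranking score is given below; the linear weighting, the embedding representation, and the parameter alpha are illustrative assumptions rather than the authors' exact model.

```python
# Illustrative sketch: blend a citation-proximity score with the textual similarity
# of the citation contexts, so candidates that are both close in the citing document
# and topically close to the query rank higher, limiting topic drift.
import numpy as np

def combined_score(cpa_score, query_context_vec, candidate_context_vec, alpha=0.5):
    """cpa_score: proximity-based co-citation strength in [0, 1].
    *_context_vec: embedding of the text surrounding the citation (e.g. averaged word vectors)."""
    denom = np.linalg.norm(query_context_vec) * np.linalg.norm(candidate_context_vec)
    text_sim = float(query_context_vec @ candidate_context_vec / denom) if denom else 0.0
    return alpha * cpa_score + (1 - alpha) * text_sim
```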

    #REVAL: a semantic evaluation framework for hashtag recommendation

    Automatic evaluation of hashtag recommendation models is a fundamental task in many online social network systems. In the traditional evaluation method, the recommended hashtags from an algorithm are first compared with the ground-truth hashtags for exact correspondences. The number of exact matches is then used to calculate the hit rate, hit ratio, precision, recall, or F1-score. This way of evaluating hashtag similarity is inadequate as it ignores the semantic correlation between the recommended and ground-truth hashtags. To tackle this problem, we propose a novel semantic evaluation framework for hashtag recommendation, called #REval. This framework includes an internal module referred to as BERTag, which automatically learns the hashtag embeddings. We investigate how the #REval framework performs under different word embedding methods and different numbers of synonyms and hashtags in the recommendation using our proposed #REval-hit-ratio measure. Our experiments with the proposed framework on three large datasets show that #REval gives more meaningful hashtag synonyms for hashtag recommendation evaluation. Our analysis also highlights the sensitivity of the framework to the word embedding technique, with #REval based on BERTag outperforming #REval based on FastText and Word2Vec. Comment: 18 pages, 4 figures.
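    The idea of crediting semantic rather than exact matches can be sketched as follows; the cosine-similarity threshold and the scoring rule are illustrative assumptions and not the paper's exact #REval-hit-ratio definition.

```python
# Illustrative sketch: count a recommended hashtag as a hit if it matches a
# ground-truth hashtag exactly OR its embedding is sufficiently similar to one.
import numpy as np

def semantic_hit_ratio(recommended, ground_truth, embeddings, threshold=0.8):
    """embeddings: dict hashtag -> unit-normalised vector (e.g. from a tag embedding model)."""
    hits = 0
    for rec in recommended:
        if rec in ground_truth:
            hits += 1
            continue
        rec_vec = embeddings.get(rec)
        if rec_vec is None:
            continue
        sims = [float(rec_vec @ embeddings[gt]) for gt in ground_truth if gt in embeddings]
        if sims and max(sims) >= threshold:
            hits += 1
    return hits / len(recommended) if recommended else 0.0
```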

    Freshness-Aware Thompson Sampling

    To follow the dynamics of the user's content, researchers have recently started to model the interactions between users and Context-Aware Recommender Systems (CARS) as a bandit problem, where the system needs to deal with the exploration/exploitation dilemma. In this sense, we propose to study the freshness of the user's content in CARS through the bandit problem. We introduce in this paper an algorithm named Freshness-Aware Thompson Sampling (FA-TS) that manages the recommendation of fresh documents according to the risk level of the user's situation. The intensive evaluation and detailed analysis of the experimental results reveal several important discoveries regarding exploration/exploitation (exr/exp) behaviour. Comment: 21st International Conference on Neural Information Processing. arXiv admin note: text overlap with arXiv:1409.772
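    A minimal sketch of Thompson sampling with a freshness term is shown below; the Beta posterior over clicks is standard Thompson sampling, while the way freshness is blended into the score (exponential decay and a fixed weight) is an illustrative assumption, not the paper's exact FA-TS formulation.

```python
# Illustrative sketch: sample each document's click probability from a Beta posterior
# (Thompson sampling) and add a bonus that decays with the document's age.
import math
import random

def select_document(stats, ages, freshness_weight=0.3, half_life=24.0):
    """stats: dict doc -> (clicks, skips) counts; ages: dict doc -> age in hours."""
    best_doc, best_score = None, -1.0
    for doc, (clicks, skips) in stats.items():
        sampled_ctr = random.betavariate(clicks + 1, skips + 1)   # Beta(1, 1) prior
        freshness = math.exp(-ages.get(doc, 0.0) / half_life)     # decays with age
        score = (1 - freshness_weight) * sampled_ctr + freshness_weight * freshness
        if score > best_score:
            best_doc, best_score = doc, score
    return best_doc
```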