
    Extended Content-boosted Matrix Factorization Algorithm for Recommender Systems

    Recommender technologies have been developed to give helpful predictions for decision making under uncertainty. An extensive amount of research has been done to increase the quality of such predictions; currently, methods based on matrix factorization are recognized as among the most efficient. The focus of this paper is to extend a matrix factorization algorithm with content awareness in order to increase prediction accuracy. A recommender system prototype based on the resulting Extended Content-Boosted Matrix Factorization Algorithm is designed, developed, and evaluated. The algorithm is assessed by empirical evaluation, which starts with the creation of an experimental design and proceeds to off-line empirical tests with accuracy measurement. The results reveal further potential for content awareness in matrix factorization methods, which has not been fully realized in the generalized alignment-biased algorithm by Nguyen and Zhu, and uncover opportunities for future research.
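
    For orientation, a content-boosted MF objective can be sketched as follows (a minimal illustration, not the exact formulation of this paper or of Nguyen and Zhu's alignment bias; the alignment weight \lambda_c and the item content similarity s_{ij} are assumptions): in addition to the usual squared error and norm penalties, latent factors of content-similar items are penalized for drifting apart,

        \min_{P,Q} \sum_{(u,i) \in K} (r_{ui} - p_u^\top q_i)^2 + \beta (\|p_u\|^2 + \|q_i\|^2) + \lambda_c \sum_{i,j} s_{ij} \|q_i - q_j\|^2,

    where K is the set of observed ratings and s_{ij} is computed from item content attributes.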

    Multi-style explainable matrix factorization techniques for recommender systems.

    Black-box recommender system models are machine learning models that generate personalized recommendations without explaining to the user how the recommendations were generated or giving them a way to correct wrong assumptions made about them by the model. However, compared to white-box models, which are transparent and scrutable, black-box models are generally more accurate. Recent research has shown that accuracy alone is not sufficient for user satisfaction. One such black-box model is Matrix Factorization, a state-of-the-art recommendation technique that is widely used due to its ability to deal with sparse data sets and to produce accurate recommendations. Recent work has proposed new Matrix Factorization models that are explainable by incorporating explanations derived from semantic knowledge graphs, user neighborhoods, or item neighborhood graphs into the model learning process. These Explainable Matrix Factorization (EMF) methods have the benefit of providing explanations without sacrificing accuracy. However, their explanations tend to be limited to a single explanation style. In this dissertation, we propose a framework comprising new machine learning methods to build explainable models that can make recommendations with multiple explanation styles, by hybridizing multiple EMF models and by proposing new EMF models that explain recommendations using tags. The various pre-calculated explainability scores leveraged in our proposed methods have all been validated in prior work that conducted user studies to evaluate users' satisfaction with each style individually. Unlike most existing work that generates explanations post hoc, i.e., after the predictions have already been made, our framework is based on calculating explainability scores directly from available data, before the model is learned, and then using them as part of a regularization mechanism to guide the model learning. Unlike post-hoc methods, our framework makes it possible to learn machine learning models that take the explanation scores into account, therefore ensuring higher transparency. Our evaluation experiments show that our proposed methods provide accurate recommendations while also providing users with multiple styles of explanations about how data was used to generate each recommendation. Each explanation style also provides additional decision-making information that empowers the user to either trust or scrutinize the recommendations. Although rooted in the hybrid recommendation framework, our proposed methods take a significant step forward in explainable AI and beyond existing hybrid frameworks, because the proposed hybridization mechanisms make an intentional effort to take into account the individual models' explanations and not only their output predicted ratings.
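
    To make the regularization idea concrete, here is a minimal sketch (assuming a pre-computed explainability matrix E and illustrative weights; this reflects one published EMF variant, not necessarily the dissertation's exact models) in which the usual MF loss gains a term lam * E[u, i] * ||P[u] - Q[i]||^2 that pulls user and item factors together when item i is highly explainable for user u:

        import numpy as np

        def emf_sgd(R, E, k=10, lr=0.01, beta=0.02, lam=0.1, epochs=20, seed=0):
            # R: user-item rating matrix (0 = unrated); E: explainability scores in [0, 1]
            rng = np.random.default_rng(seed)
            n_users, n_items = R.shape
            P = rng.normal(scale=0.1, size=(n_users, k))
            Q = rng.normal(scale=0.1, size=(n_items, k))
            users, items = R.nonzero()  # train on observed ratings only
            for _ in range(epochs):
                for u, i in zip(users, items):
                    err = R[u, i] - P[u] @ Q[i]
                    # gradients of err^2 + beta*(norms) + lam*E[u,i]*||P[u]-Q[i]||^2
                    gp = -2 * err * Q[i] + 2 * beta * P[u] + 2 * lam * E[u, i] * (P[u] - Q[i])
                    gq = -2 * err * P[u] + 2 * beta * Q[i] - 2 * lam * E[u, i] * (P[u] - Q[i])
                    P[u] -= lr * gp
                    Q[i] -= lr * gq
            return P, Q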

    Recommender Systems

    The ongoing rapid expansion of the Internet greatly increases the necessity of effective recommender systems for filtering the abundant information. Extensive research on recommender systems is conducted by a broad range of communities, including social and computer scientists, physicists, and interdisciplinary researchers. Despite substantial theoretical and practical achievements, unification and comparison of different approaches are lacking, which impedes further advances. In this article, we review recent developments in recommender systems and discuss the major challenges. We compare and evaluate available algorithms and examine their roles in future developments. In addition to algorithms, physical aspects are described to illustrate the macroscopic behavior of recommender systems. Potential impacts and future directions are discussed. We emphasize that recommendation has a great scientific depth and combines diverse research fields, which makes it of interest to physicists as well as interdisciplinary researchers. Comment: 97 pages, 20 figures (to appear in Physics Reports).

    Effective Matrix Factorization for Online Rating Prediction

    Recommender systems have been widely utilized by online merchants and advertisers to promote their products and improve profits. By evaluating customer interests based on purchase history and relating them to the commodities for sale, these retailers can identify the products most likely to be chosen by a specific customer. In this setting, online ratings given by customers are of great interest, as they reflect different levels of customer interest in different products. The Collaborative Filtering (CF) approach is chosen by a large number of web-based retailers for their recommender systems because CF operates on the interactions between customers and products. In this paper, a major CF approach, Matrix Factorization, is modified to give more accurate recommendations when predicting online ratings.
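
    For reference, a minimal sketch of the kind of MF rating predictor being modified here (the biases and hyper-parameters below are illustrative assumptions, not the paper's exact model):

        import numpy as np

        def mf_predict(R, k=20, lr=0.005, reg=0.02, epochs=30, seed=0):
            # Biased MF trained by SGD on the observed entries of R (0 = unrated).
            rng = np.random.default_rng(seed)
            n_users, n_items = R.shape
            mu = R[R > 0].mean()           # global mean rating
            bu = np.zeros(n_users)         # user biases
            bi = np.zeros(n_items)         # item biases
            P = rng.normal(scale=0.1, size=(n_users, k))
            Q = rng.normal(scale=0.1, size=(n_items, k))
            users, items = R.nonzero()
            for _ in range(epochs):
                for u, i in zip(users, items):
                    e = R[u, i] - (mu + bu[u] + bi[i] + P[u] @ Q[i])
                    bu[u] += lr * (e - reg * bu[u])
                    bi[i] += lr * (e - reg * bi[i])
                    P[u], Q[i] = (P[u] + lr * (e * Q[i] - reg * P[u]),
                                  Q[i] + lr * (e * P[u] - reg * Q[i]))
            return lambda u, i: mu + bu[u] + bi[i] + P[u] @ Q[i]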

    Statistical Significance of the Netflix Challenge

    Inspired by the legacy of the Netflix contest, we provide an overview of what has been learned, from our own efforts and those of others, concerning the problems of collaborative filtering and recommender systems. The data set consists of about 100 million movie ratings (from 1 to 5 stars) involving some 480 thousand users and some 18 thousand movies; the associated ratings matrix is about 99% sparse. The goal is to predict the ratings that users will give to movies; systems which can do this accurately have significant commercial applications, particularly on the world wide web. We discuss, in some detail, approaches to "baseline" modeling, singular value decomposition (SVD), and kNN (nearest neighbor) and neural network models; temporal effects, cross-validation issues, ensemble methods, and other considerations are discussed as well. We compare existing models in a search for new models, and also discuss the mission-critical issues of penalization and parameter shrinkage which arise when the dimension of the parameter space reaches into the millions. Although much work on such problems has been carried out by the computer science and machine learning communities, our goal here is to address a statistical audience and to provide a primarily statistical treatment of the lessons that have been learned from this remarkable set of data. Comment: Published at http://dx.doi.org/10.1214/11-STS368 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
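
    As a worked example of the "baseline" modeling and shrinkage discussed above (the form is standard in the Netflix-prize literature; the penalty \lambda is a tuning choice), a rating is predicted as

        \hat{r}_{ui} = \mu + b_u + b_i,

    where \mu is the global mean rating and the item offset is shrunk toward zero in proportion to how few ratings support it, e.g.

        b_i = \frac{\sum_{u \in R(i)} (r_{ui} - \mu)}{\lambda + |R(i)|},

    so an item rated only a handful of times contributes a heavily penalized bias estimate; the same device scales to the millions of parameters mentioned above.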

    Predictive Accuracy of Recommender Algorithms

    Recommender systems present a customized list of items based upon user or item characteristics, with the objective of reducing a large number of possible choices to a smaller ranked set most likely to appeal to the user. A variety of algorithms for recommender systems have been developed and refined, including applications of deep learning neural networks. Recent research reports point to a need for carefully controlled experiments to gain insight into the relative accuracy of different recommender algorithms, because studies evaluating different methods have not used a common set of benchmark data sets, baseline models, and evaluation metrics. This dissertation used publicly available ratings data with a suite of three conventional recommender algorithms and two deep learning (DL) algorithms in controlled experiments to assess their comparative accuracy. Results for the non-DL algorithms conformed well to published results and benchmarks. The two DL algorithms did not perform as well and illuminated known challenges in implementing DL recommender algorithms, as reported in the literature. Model overfitting is discussed as a potential explanation for the weaker performance of the DL algorithms, and several regularization strategies are reviewed as possible approaches to reduce predictive error. The findings justify the need for further research into the use of deep learning models for recommender systems.
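
    As an illustration of the regularization strategies mentioned (a hedged sketch; the dissertation's actual DL architectures and hyper-parameters are not reproduced here), dropout and L2 weight decay can be added to a toy neural matrix factorization model:

        import torch
        import torch.nn as nn

        class NeuralMF(nn.Module):
            # Toy neural MF: user/item embeddings -> MLP, with dropout as a regularizer.
            def __init__(self, n_users, n_items, k=32, p_drop=0.5):
                super().__init__()
                self.user = nn.Embedding(n_users, k)
                self.item = nn.Embedding(n_items, k)
                self.mlp = nn.Sequential(
                    nn.Linear(2 * k, 64), nn.ReLU(),
                    nn.Dropout(p_drop),      # randomly zeroes activations during training
                    nn.Linear(64, 1),
                )

            def forward(self, u, i):
                x = torch.cat([self.user(u), self.item(i)], dim=-1)
                return self.mlp(x).squeeze(-1)

        model = NeuralMF(n_users=1000, n_items=500)
        # weight_decay applies L2 regularization, a second defense against overfitting
        opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)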

    Exploiting distributional semantics for content-based and context-aware recommendation

    During the last decade, the use of recommender systems has grown to the point that, nowadays, the success of many well-known services depends on these technologies. Recommender systems help people tackle the choice-overload problem by effectively presenting new content adapted to the user's preferences. However, current recommendation algorithms commonly suffer from data sparsity, which refers to the inability to produce acceptable recommendations until a minimum number of user ratings is available for training the prediction models. This thesis investigates how the distributional semantics of the concepts describing the entities of the recommendation space can be exploited to mitigate the data-sparsity problem and improve prediction accuracy with respect to state-of-the-art recommendation techniques. The fundamental idea behind distributional semantics is that concepts repeatedly co-occurring in the same context or usage tend to be related. In this thesis, we propose and evaluate two novel semantically-enhanced prediction models that address the sparsity-related limitations: (1) a content-based approach, which exploits the distributional semantics of item attributes during item and user-profile matching, and (2) a context-aware recommendation approach that exploits the distributional semantics of contextual conditions during context modeling. We demonstrate in an exhaustive experimental evaluation that the proposed algorithms outperform state-of-the-art ones, especially when data are sparse. Finally, this thesis presents a recommendation framework, which extends the widespread machine learning library Apache Mahout, including all the proposed and evaluated recommendation algorithms as well as a tool for offline evaluation and meta-parameter optimization. The framework has been developed to allow other researchers to reproduce the described evaluation experiments and to make further progress in the Recommender Systems field easier.
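
    The distributional hypothesis used here can be shown in a few lines (the corpus and concepts below are toy assumptions): represent each concept by its vector of co-occurrences over contexts, then compare concepts by cosine similarity.

        import numpy as np

        # Toy context-concept matrix: rows = contexts (e.g. items), columns =
        # concepts (e.g. attribute tags). Concepts that keep co-occurring in the
        # same contexts end up with similar column vectors.
        M = np.array([
            [1, 1, 0, 0],   # item 1 tagged "comedy", "romance"
            [1, 1, 0, 0],   # item 2 tagged "comedy", "romance"
            [0, 0, 1, 1],   # item 3 tagged "thriller", "crime"
        ])

        def cosine(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

        comedy, romance, thriller = M[:, 0], M[:, 1], M[:, 2]
        print(cosine(comedy, romance))   # 1.0 -> distributionally related
        print(cosine(comedy, thriller))  # 0.0 -> unrelated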

    When autoencoders meet recommender systems : COFILS approach

    Collaborative Filtering to Supervised Learning (COFILS) transforms a Collaborative Filtering (CF) problem into a classical Supervised Learning (SL) problem. Applying COFILS reduces data sparsity and makes it possible to test a variety of SL algorithms rather than matrix decomposition methods. Its main steps are extraction, mapping, and prediction. First, a Singular Value Decomposition (SVD) generates a set of latent variables from the ratings matrix. Next, in the mapping phase, a new data set is generated in which each sample contains the latent variables of a user and a rated item, and a target that corresponds to the user's rating for that item. Finally, in the last phase, an SL algorithm is applied. One problem of COFILS is its dependency on SVD, which is unable to extract non-linear features from the data and is not robust to noisy data. To address this problem, we propose replacing SVD with a Stacked Denoising Autoencoder (SDA) in the first phase of COFILS. With an SDA, more useful and complex representations can be learned in a deep network with a local denoising criterion. We test our novel technique, namely Deep Learning COFILS (DL-COFILS), on the MovieLens, R3 Yahoo! Music, and Movie Tweetings data sets and compare it to COFILS, as a baseline, and to state-of-the-art CF techniques. Our results indicate that DL-COFILS outperforms COFILS on all the data sets, with an improvement of up to 5.9%. Moreover, DL-COFILS achieves the best result on the MovieLens 100k data set and ranks among the top three algorithms on the other data sets. Thus, we show that DL-COFILS represents an advance on the COFILS methodology, improving its results, and that it is a suitable method for the CF problem.
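
    A compact sketch of the three COFILS phases described above (the data and the SL regressor are illustrative; DL-COFILS replaces the SVD of step 1 with a stacked denoising autoencoder):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        R = np.random.randint(0, 6, size=(50, 40)).astype(float)  # toy ratings, 0 = unrated

        # 1) Extraction: SVD of the ratings matrix yields latent variables.
        k = 8
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        user_f = U[:, :k] * s[:k]      # user latent factors
        item_f = Vt[:k, :].T           # item latent factors

        # 2) Mapping: one supervised sample per observed rating;
        #    features = [user latents, item latents], target = the rating.
        users, items = R.nonzero()
        X = np.hstack([user_f[users], item_f[items]])
        y = R[users, items]

        # 3) Prediction: any SL regressor can now be trained on (X, y).
        model = RandomForestRegressor(n_estimators=100).fit(X, y)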