
    An explainable recommender system based on semantically-aware matrix factorization.

    Collaborative filtering techniques can handle large, sparse data to predict ratings for unseen items with high accuracy. Matrix factorization is an accurate collaborative filtering method used to predict user preferences. However, it is a black box system that recommends items to users without being able to explain why, owing to the type of information these systems use to build their models. Although rich in information, user ratings do not adequately satisfy the need for explanation in certain domains. White box systems, in contrast, can by nature easily generate explanations, but their predictions are less accurate than those of sophisticated black box models. Recent research has demonstrated that explanations are an essential component in bringing the powerful predictions of big data and machine learning methods to a mass audience without compromising trust. Explanations can take a variety of formats, depending on the recommendation domain and the machine learning model used to make predictions. Semantic Web (SW) technologies have been exploited increasingly in recommender systems in recent years. The SW consists of knowledge graphs (KGs) that provide valuable information which can help improve the performance of recommender systems. Yet KGs have not been used to explain recommendations in black box systems. In this dissertation, we exploit the power of the SW to build new explainable recommender systems. We use the SW's rich, expressive linked data, along with tools for structured information search and understanding, to explain predictions. More specifically, we take advantage of semantic data to learn a semantically aware latent space of users and items during the matrix factorization model-learning process, yielding richer, explainable recommendation models. Our offline and online evaluation experiments show that our approach achieves accurate prediction with the additional ability to explain recommendations, in comparison to baseline approaches. By fostering explainability, we hope that our work contributes to more transparent, ethical machine learning without sacrificing accuracy.
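    To make the idea concrete, here is a minimal sketch (not the dissertation's implementation) of how a semantic signal can be folded into matrix factorization training: a regularizer pulls the latent factors of semantically related items, e.g. items linked in a knowledge graph, toward each other during SGD. The names and the specific regularizer (`sem_sim`, `lambda_sem`) are illustrative assumptions.

```python
import numpy as np

def train_semantic_mf(R, sem_sim, k=16, lr=0.01, reg=0.02,
                      lambda_sem=0.1, epochs=20, seed=0):
    """Matrix factorization with a semantic regularizer (illustrative sketch).

    R       : (n_users, n_items) rating matrix with 0 = unobserved.
    sem_sim : (n_items, n_items) item-item semantic similarity in [0, 1],
              e.g. derived from knowledge-graph links (assumed given).
    """
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.normal(scale=0.1, size=(n_users, k))  # user latent factors
    Q = rng.normal(scale=0.1, size=(n_items, k))  # item latent factors
    users, items = np.nonzero(R)

    for _ in range(epochs):
        for u, i in zip(users, items):
            err = R[u, i] - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            # Gradient of lambda_sem * sum_j sim(i, j) * ||Q[i] - Q[j]||^2
            # w.r.t. Q[i]: pulls item i toward semantically similar items.
            sem_pull = (sem_sim[i][:, None] * (Q[i] - Q)).sum(axis=0)
            Q[i] += lr * (err * P[u] - reg * Q[i] - lambda_sem * sem_pull)
    return P, Q
```

    Because each item's factors are tied to those of its semantic neighbors, nearest neighbors in the learned latent space tend to be semantically related items, which can then be surfaced as an explanation ("recommended because you liked ...").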

    Deep Learning based Recommender System: A Survey and New Perspectives

    With the ever-growing volume of online information, recommender systems have been an effective strategy for overcoming information overload. The utility of recommender systems cannot be overstated, given their widespread adoption in many web applications and their potential to ameliorate many problems related to over-choice. In recent years, deep learning has garnered considerable interest in many research fields, such as computer vision and natural language processing, owing not only to its stellar performance but also to the attractive property of learning feature representations from scratch. The influence of deep learning is also pervasive, with recent demonstrations of its effectiveness when applied to information retrieval and recommender systems research. Evidently, the field of deep learning in recommender systems is flourishing. This article aims to provide a comprehensive review of recent research efforts on deep learning based recommender systems. More concretely, we provide and devise a taxonomy of deep learning based recommendation models, along with a comprehensive summary of the state of the art. Finally, we expand on current trends and provide new perspectives pertaining to this exciting new development in the field. (Comment: the paper has been accepted by ACM Computing Surveys. https://doi.acm.org/10.1145/328502)
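    As a concrete instance of the model family such surveys cover, the following is a minimal, generic sketch of a neural collaborative filtering scorer: user and item embeddings are concatenated and passed through a small MLP. It is an untrained toy illustration, not code from the survey, and all sizes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d, h = 1000, 500, 32, 64

# Embedding tables and one hidden layer (forward pass only, untrained).
user_emb = rng.normal(scale=0.1, size=(n_users, d))
item_emb = rng.normal(scale=0.1, size=(n_items, d))
W1 = rng.normal(scale=0.1, size=(2 * d, h))
w2 = rng.normal(scale=0.1, size=h)

def score(u, i):
    """Predicted preference of user u for item i."""
    x = np.concatenate([user_emb[u], item_emb[i]])  # fuse user/item towers
    hidden = np.maximum(0.0, x @ W1)                # ReLU hidden layer
    return float(hidden @ w2)

print(score(0, 42))  # toy call; the weights are random, so the score is too
```

    In practice such a model is trained end to end on interaction data, so the learned feature representations replace hand-engineered ones; that property is what the survey highlights as deep learning's main draw for recommendation.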

    Beyond accuracy in machine learning.

    Machine Learning (ML) algorithms are widely used in our daily lives. The need to increase the accuracy of ML models has led to increasingly powerful and complex algorithms, known as black-box models, which do not provide any explanation of the reasons behind their output. On the other hand, white-box ML models are inherently interpretable but have lower accuracy than black-box models. For a productive and practical algorithmic decision system, precise predictions alone may not be sufficient. The system may need to be transparent and able to provide explanations, especially in safety-critical contexts such as medicine, aerospace, robotics, and self-driving vehicles, or in socially sensitive domains such as credit scoring and predictive policing. Transparency can help explain why a certain decision was made, which in turn can be useful in discovering possible biases that lead to discrimination against individuals or groups. Fairness and bias are further aspects that need to be considered in evaluating ML models. Therefore, depending on the application domain, accuracy, explainability, and fairness (freedom from bias) may all be necessary to build a practical and effective algorithmic decision system. In practice, however, it is challenging to build a model that optimizes all three aspects simultaneously. In this work, we study ML criteria that go beyond accuracy in two different problems: 1) collaborative filtering recommendation, where we study explainability and bias in addition to accuracy; and 2) robotic grasp failure prediction, where we study explainability in addition to prediction accuracy.
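    To make "beyond accuracy" concrete, here is a minimal sketch of evaluating a recommender on two axes at once: a standard hit-rate (accuracy) together with the share of recommendations drawn from the long tail of unpopular items, a crude proxy for popularity bias. Both metrics are generic illustrations rather than the dissertation's exact measures.

```python
import numpy as np

def hit_rate(recommended, relevant):
    """Fraction of users whose top-k list contains at least one relevant item."""
    hits = [len(set(recs) & set(rel)) > 0
            for recs, rel in zip(recommended, relevant)]
    return float(np.mean(hits))

def long_tail_share(recommended, item_popularity, tail_quantile=0.8):
    """Share of recommended items lying outside the most popular 20%."""
    cutoff = np.quantile(item_popularity, tail_quantile)
    flat = [i for recs in recommended for i in recs]
    return float(np.mean([item_popularity[i] <= cutoff for i in flat]))

# Toy usage: two users, top-3 lists, 6 items with known interaction counts.
recs = [[0, 1, 5], [2, 3, 4]]
rel = [[1], [9]]
pop = np.array([100, 90, 5, 4, 3, 2])
print(hit_rate(recs, rel), long_tail_share(recs, pop))
```

    A model that scores well on the first metric but poorly on the second is accurate yet concentrates exposure on already-popular items; the point of the work is that both kinds of criteria matter.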

    New accurate, explainable, and unbiased machine learning models for recommendation with implicit feedback.

    Recommender systems have become ubiquitous Artificial Intelligence (AI) tools that play an important role in filtering online information in our daily lives. Whether we are shopping, browsing movies, or listening to music online, AI recommender systems are working behind the scenes to provide us with curated, personalized content that is predicted to be relevant to our interests. The increasing prevalence of recommender systems has challenged researchers to develop powerful algorithms that can deliver recommendations with ever-increasing accuracy. Beyond predictive accuracy, recent research has also started paying attention to the fairness of recommender systems, in particular with regard to the bias and transparency of their predictions. This dissertation contributes to advancing the state of the art in fairness in AI by proposing new Machine Learning models and algorithms that aim to improve the user's experience when receiving recommendations, with a focus positioned at the nexus of three objectives: accuracy, transparency, and unbiasedness of the predictions. We focus on state-of-the-art Collaborative Filtering (CF) recommendation approaches trained on implicit feedback data. More specifically, we address the limitations of two established deep learning approaches in two distinct recommendation settings: recommendation with user profiles and sequential recommendation.

    First, we focus on a state-of-the-art pairwise ranking model, Bayesian Personalized Ranking (BPR), which has been found to outperform pointwise models in predictive accuracy in the recommendation-with-user-profiles setting (this pairwise objective is sketched below). Specifically, we address two limitations of BPR: (1) BPR is a black box model that does not explain its outputs, limiting both the user's trust in the recommendations and the analyst's ability to scrutinize the model's outputs; and (2) BPR is vulnerable to exposure bias because the data are Missing Not At Random (MNAR). This exposure bias usually translates into unfairness against the least popular items, because they risk being under-exposed by the recommender system. We propose a novel explainable loss function and a corresponding model, Explainable Bayesian Personalized Ranking (EBPR), that generates recommendations along with item-based explanations. We then theoretically quantify the additional exposure bias resulting from the explainability and use it as the basis for an unbiased estimator of the ideal EBPR loss. Finally, we perform an empirical study on three real-world benchmark datasets, demonstrating the advantages of our proposed models over existing state-of-the-art techniques.

    Next, we shift our attention to sequential recommendation systems and focus on modeling and mitigating exposure bias in BERT4Rec, a state-of-the-art recommendation approach based on bidirectional transformers. The bidirectional representation capacity of BERT4Rec rests on the Cloze task, a.k.a. the Masked Language Model, which consists of predicting randomly masked items within the sequence, assuming that the true interacted item is the most relevant one. This results in an exposure bias, where non-interacted items with low exposure propensities are assumed to be irrelevant.
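    For reference, the pairwise objective underlying BPR, and the kind of per-example weighting an explainable variant such as EBPR can add, can be sketched as follows. The `expl` weights are an illustrative stand-in, not the paper's exact EBPR formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pairwise_ranking_loss(P, Q, triples, expl=None):
    """BPR-style loss over (user, pos_item, neg_item) triples.

    Standard BPR minimizes -log sigmoid(x_ui - x_uj), pushing each
    interacted item i above a sampled non-interacted item j. If `expl`
    is given, each triple is weighted by an explainability score in
    [0, 1] (an illustrative stand-in for EBPR-style weighting).
    """
    total = 0.0
    for t, (u, i, j) in enumerate(triples):
        x_uij = P[u] @ Q[i] - P[u] @ Q[j]     # preference margin
        w = 1.0 if expl is None else expl[t]  # explainability weight
        total += -w * np.log(sigmoid(x_uij) + 1e-10)
    return total / len(triples)
```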
Thus far, the most common approach to mitigating exposure bias in recommendation has been Inverse Propensity Scoring (IPS), which down-weights the interacted predictions in the loss function in proportion to their propensities of exposure, yielding theoretically unbiased learning (see the sketch following this abstract). We first argue and prove that IPS does not extend to sequential recommendation because it fails to account for the sequential nature of the problem. We then propose a novel propensity scoring mechanism, which we name Inverse Temporal Propensity Scoring (ITPS), to theoretically debias the Cloze task in sequential recommendation. We also rely on the ITPS framework to propose a bidirectional transformer-based model called ITPS-BERT4Rec. Finally, we empirically demonstrate the debiasing capability of our proposed approach and its robustness to the severity of exposure bias.

    Our proposed explainable approach in recommendation with user profiles, EBPR, showed an increase in ranking accuracy of about 4% and an increase in explainability of about 7% over the baseline BPR model in experiments on real-world recommendation datasets. Moreover, experiments on a real-world unbiased dataset demonstrated the importance of coupling explainability and exposure debiasing in capturing the true preferences of the user, with a significant improvement of 1% over the baseline unbiased model UBPR. Coupling explainability with exposure debiasing was also empirically shown to mitigate popularity bias, with an improvement in popularity-debiasing metrics of over 10% on three real-world recommendation tasks over the unbiased UBPR model. These results demonstrate the viability of our proposed approaches in recommendation with user profiles and their capacity to improve the user's experience by better capturing and modeling their true preferences, improving the explainability of the recommendations, and presenting more diverse recommendations that span a larger portion of the item catalog.

    In sequential recommendation, our proposed ITPS-BERT4Rec demonstrated a significant increase of 1% in modeling the true preferences of the user in a semi-synthetic setting over the state-of-the-art sequential recommendation model BERT4Rec, while also being unbiased in terms of exposure. Similarly, ITPS-BERT4Rec showed an average increase of 8.7% over BERT4Rec in three real-world recommendation settings. Empirical experiments further demonstrated the robustness of ITPS-BERT4Rec to increasing levels of exposure bias and its stability in terms of variance. Experiments on popularity debiasing showed a significant advantage of ITPS-BERT4Rec for both short- and long-term sequences. Finally, ITPS-BERT4Rec showed respective improvements of around 60%, 470%, and 150% over vanilla BERT4Rec in capturing the temporal dependencies between the items within the sequences of interactions, across three different evaluation metrics. These results demonstrate the potential of our proposed unbiased estimator to improve the user experience in sequential recommendation by presenting more accurate and diverse recommendations that better match users' true preferences and the sequential dependencies between the recommended items.
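    The core IPS idea described above can be sketched as follows for one masked (Cloze) position: the interacted item's negative log-likelihood is divided by its estimated propensity of exposure, so that the loss is unbiased with respect to exposure in expectation. The propensity estimate is an assumed input here, and a temporal (ITPS-style) variant would make it time-dependent; this is a minimal illustration, not the ITPS-BERT4Rec implementation.

```python
import numpy as np

def ips_weighted_nll(logits, target, propensity, eps=1e-6):
    """Inverse-propensity-scored negative log-likelihood for one
    masked position in a Cloze-style sequential recommender.

    logits     : (n_items,) unnormalized scores for the masked slot.
    target     : index of the item the user actually interacted with.
    propensity : estimated exposure probability of `target` in (0, 1];
                 a temporal (ITPS-style) variant would condition this
                 on the item's position in the sequence.
    """
    m = logits.max()
    log_probs = logits - (m + np.log(np.exp(logits - m).sum()))  # log-softmax
    return -log_probs[target] / max(propensity, eps)             # 1/p weighting
```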

    Multi-view Latent Factor Models for Recommender Systems
