    Multi-style explainable matrix factorization techniques for recommender systems.

    Black-box recommender system models are machine learning models that generate personalized recommendations without explaining to the user how the recommendations were generated or giving them a way to correct wrong assumptions the model has made about them. However, compared to white-box models, which are transparent and scrutable, black-box models are generally more accurate, and recent research has shown that accuracy alone is not sufficient for user satisfaction. One such black-box model is Matrix Factorization, a state-of-the-art recommendation technique that is widely used because it handles sparse data sets well and produces accurate recommendations. Recent work has proposed new Matrix Factorization models that are made explainable by incorporating explanations derived from semantic knowledge graphs, user neighborhoods, or item neighborhood graphs into the model learning process. These Explainable Matrix Factorization (EMF) methods provide explanations without sacrificing accuracy; however, their explanations tend to be limited to a single explanation style. In this dissertation, we propose a framework comprising new machine learning methods that build explainable models able to make recommendations with multiple explanation styles, both by hybridizing multiple EMF models and by proposing new EMF models that explain recommendations using tags. The various pre-calculated explainability scores leveraged in our proposed methods have all been validated in prior work that conducted user studies to evaluate users’ satisfaction with each style individually. Unlike most existing work, which generates explanations post hoc, i.e., after the predictions have already been made, our framework calculates explainability scores directly from the available data, before the model is learned, and then uses them as part of a regularization mechanism that guides the model learning. Unlike post-hoc methods, our framework therefore makes it possible to learn machine learning models that take the explanation scores into account, ensuring higher transparency. Our evaluation experiments show that our proposed methods provide accurate recommendations while also giving users multiple styles of explanation of how the data was used to generate each recommendation. Each explanation style also provides additional decision-making information that empowers the user to either trust or scrutinize the recommendations. Although rooted in the hybrid recommendation framework, our proposed methods represent a significant step forward in explainable AI and go beyond existing hybrid frameworks, because the proposed hybridization mechanisms intentionally take into account the individual models’ explanations and not only their predicted ratings.
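
    The abstract describes explainability scores that are computed before training and then used as a regularizer during model learning. As a rough illustration only, the NumPy sketch below implements one published form of this idea: an explainability-weighted term added to the matrix factorization loss. The function name, hyperparameter values, and the precomputed score matrix E are illustrative assumptions, not the dissertation's actual implementation.

```python
import numpy as np

def train_emf(R, E, k=10, lr=0.01, beta=0.02, lam=0.1, epochs=50, seed=0):
    """Sketch of an Explainable Matrix Factorization (EMF) trainer.

    R   : (n_users, n_items) rating matrix, 0 = unobserved
    E   : (n_users, n_items) pre-computed explainability scores in [0, 1]
    lam : weight of the explainability regularizer (assumed value)
    """
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.normal(scale=0.1, size=(n_users, k))  # user latent factors
    Q = rng.normal(scale=0.1, size=(n_items, k))  # item latent factors
    users, items = np.nonzero(R)                  # observed ratings only
    for _ in range(epochs):
        for u, i in zip(users, items):
            err = R[u, i] - P[u] @ Q[i]
            # Per-rating loss: err^2 + beta*(|P_u|^2 + |Q_i|^2)
            #                  + lam * E[u,i] * |P_u - Q_i|^2.
            # The last term pulls a user's factors toward items that are
            # highly explainable for that user, so the learned model
            # itself reflects the explainability scores.
            grad_p = -2 * err * Q[i] + 2 * beta * P[u] + 2 * lam * E[u, i] * (P[u] - Q[i])
            grad_q = -2 * err * P[u] + 2 * beta * Q[i] - 2 * lam * E[u, i] * (P[u] - Q[i])
            P[u] -= lr * grad_p
            Q[i] -= lr * grad_q
    return P, Q
```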

    A Survey on Cross-domain Recommendation: Taxonomies, Methods, and Future Directions

    Traditional recommendation systems face two long-standing obstacles, namely the data sparsity and cold-start problems, which have prompted the emergence and development of Cross-Domain Recommendation (CDR). The core idea of CDR is to leverage information collected from other domains to alleviate these two problems in one domain. Over the last decade, much effort has been devoted to cross-domain recommendation, and with the recent development of deep learning and neural networks, a large number of new methods have emerged. However, there are only a limited number of systematic surveys on CDR, especially ones covering the latest proposed methods and the recommendation scenarios and tasks they address. In this survey paper, we first propose a two-level taxonomy of cross-domain recommendation that classifies the different recommendation scenarios and recommendation tasks. We then introduce and summarize existing cross-domain recommendation approaches under the different recommendation scenarios in a structured manner. We also organize the commonly used datasets. We conclude the survey by providing several potential research directions for this field.
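
    The survey abstract is largely taxonomic, but its core idea, transferring knowledge from a data-rich source domain to a sparse target domain, is often realized with a mapping-based pattern: factor each domain separately, then learn a function that maps the source-domain factors of overlapping users into the target-domain space and apply it to cold-start users. The sketch below is a minimal least-squares version of this pattern; the function name and the linear mapping are assumptions for illustration, not a method proposed by the survey.

```python
import numpy as np

def fit_cdr_mapping(P_src, P_tgt, overlap_idx):
    """Fit a linear map from source-domain to target-domain user
    factors, using only the users that appear in both domains.

    P_src       : (n_src_users, k) source-domain user factors
    P_tgt       : (n_tgt_users, k) target-domain user factors
    overlap_idx : list of (src_row, tgt_row) pairs for shared users
    """
    src = np.stack([P_src[s] for s, _ in overlap_idx])
    tgt = np.stack([P_tgt[t] for _, t in overlap_idx])
    # Solve min_M ||src @ M - tgt||_F^2 for the (k, k) transfer matrix.
    M, *_ = np.linalg.lstsq(src, tgt, rcond=None)
    return M

# Cold-start usage: a user known only in the source domain gets a
# target-domain profile via P_src[u] @ M, which can then be scored
# against the target domain's item factors.
```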

    A Cross-Domain Recommender System with Kernel-Induced Knowledge Transfer for Overlapping Entities

    The aim of recommender systems is to automatically identify user preferences within collected data and then use those preferences to make recommendations that help with decisions. However, recommender systems suffer from the data sparsity problem, which is particularly prevalent in newly launched systems that have not yet had time to amass sufficient data. As a solution, cross-domain recommender systems transfer knowledge from a source domain with relatively rich data to assist recommendations in the target domain. These systems usually assume that the entities of the two domains either fully overlap or do not overlap at all; in practice, it is more common for the entities of the two domains to partially overlap, and overlapping entities may be expressed differently in each domain. Neglecting these two issues reduces the prediction accuracy of cross-domain recommender systems in the target domain. To fully exploit partially overlapping entities and improve prediction accuracy, this paper presents a cross-domain recommender system based on kernel-induced knowledge transfer, called KerKT. Domain adaptation is used to adjust the feature spaces of the overlapping entities, while diffusion kernel completion is used to correlate the non-overlapping entities between the two domains. With this approach, knowledge is effectively transferred through the overlapping entities, which alleviates the data sparsity problem. Experiments conducted on four data sets, each with three sparsity ratios, show that KerKT achieves 1.13%-20% better prediction accuracy than six benchmarks. In addition, the results indicate that transferring knowledge from the source domain to the target domain is both possible and beneficial, even with small overlaps.
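
    The abstract names diffusion kernel completion as the device that correlates non-overlapping entities. For orientation, the snippet below computes the standard graph diffusion kernel K = exp(beta * H) with H = A - D (the negative graph Laplacian); its entries assign similarities even to node pairs with no direct edge, which is the property that lets a kernel of this kind relate entities that never co-occur. How KerKT builds its graph and completes the kernel may differ; the adjacency matrix and beta value here are purely illustrative.

```python
import numpy as np
from scipy.linalg import expm

def diffusion_kernel(A, beta=0.5):
    """Graph diffusion kernel K = exp(beta * H), with H = A - D.

    A    : (n, n) symmetric adjacency matrix
    beta : diffusion rate; larger values spread similarity further
    """
    D = np.diag(A.sum(axis=1))   # degree matrix
    H = A - D                    # negative graph Laplacian (generator)
    return expm(beta * H)        # matrix exponential via SciPy

# Example: a 4-node path graph. K[0, 3] > 0 even though nodes 0 and 3
# share no edge, so indirectly connected entities become correlated.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
K = diffusion_kernel(A)
```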