
    Visualization for Recommendation Explainability: A Survey and New Perspectives

    Providing system-generated explanations for recommendations is an important step towards transparent and trustworthy recommender systems. Explainable recommender systems provide a human-understandable rationale for their outputs. Over the last two decades, explainable recommendation has attracted much attention in the recommender systems research community. This paper aims to provide a comprehensive review of research efforts on visual explanation in recommender systems. More concretely, we systematically review the literature on explanations in recommender systems along four dimensions, namely explanation goal, explanation scope, explanation style, and explanation format. Recognizing the importance of visualization, we approach the recommender system literature from the angle of explanatory visualizations, that is, using visualizations as a display style of explanation. As a result, we derive a set of guidelines that might be constructive for designing explanatory visualizations in recommender systems and identify perspectives for future work in this field. The aim of this review is to help recommendation researchers and practitioners better understand the potential of visually explainable recommendation research and to support them in the systematic design of visual explanations in current and future recommender systems.
    Comment: updated version, Nov. 2023, 36 pages

    An explainable recommender system based on semantically-aware matrix factorization.

    Collaborative Filtering techniques provide the ability to handle big, sparse data to predict the ratings of unseen items with high accuracy. Matrix factorization is an accurate collaborative filtering method used to predict user preferences. However, it is a black box system that recommends items to users without being able to explain why, due to the type of information these systems use to build their models. Although rich in information, user ratings do not adequately satisfy the need for explanation in certain domains. White box systems, in contrast, can by nature easily generate explanations. However, their predictions are less accurate than those of sophisticated black box models. Recent research has demonstrated that explanations are an essential component in bringing the powerful predictions of big data and machine learning methods to a mass audience without a compromise in trust. Explanations can take a variety of formats, depending on the recommendation domain and the machine learning model used to make predictions. Semantic Web (SW) technologies have been exploited increasingly in recommender systems in recent years. The SW consists of knowledge graphs (KGs) providing valuable information that can help improve the performance of recommender systems. Yet KGs have not been used to explain recommendations in black box systems. In this dissertation, we exploit the power of the SW to build new explainable recommender systems. We use the SW's rich expressive power of linked data, along with structured information search and understanding tools, to explain predictions. More specifically, we take advantage of semantic data to learn a semantically aware latent space of users and items in the matrix factorization model-learning process, to build richer, explainable recommendation models.
Our off-line and on-line evaluation experiments show that our approach achieves accurate predictions with the additional ability to explain recommendations, in comparison to baseline approaches. By fostering explainability, we hope that our work contributes to more transparent, ethical machine learning without sacrificing accuracy.
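    The dissertation's semantically-aware model is not reproduced here, but the matrix factorization core it builds on can be sketched in a few lines. The toy ratings, factor count, and hyperparameters below are purely illustrative:

```python
import numpy as np

# Toy user-item rating matrix (0 = unobserved); rows are users, columns items.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

n_users, n_items = R.shape
k = 2                                           # number of latent factors
rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(n_users, k))    # user latent factors
Q = rng.normal(scale=0.1, size=(n_items, k))    # item latent factors

lr, reg = 0.01, 0.02
observed = [(u, i) for u in range(n_users) for i in range(n_items) if R[u, i] > 0]

# Stochastic gradient descent on the squared error of observed ratings.
for _ in range(2000):
    for u, i in observed:
        err = R[u, i] - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

R_hat = P @ Q.T   # predicted ratings, including the unobserved cells
```

    The learned latent dimensions are exactly what a semantically-aware variant would tie to knowledge-graph concepts so that each factor, and hence each prediction, becomes interpretable.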

    Accurate and justifiable: new algorithms for explainable recommendations.

    Websites and online services thrive with large amounts of online information, products, and choices that are available but exceedingly difficult to find and discover. This has prompted two major paradigms to help sift through information: information retrieval and recommender systems. The broad family of information retrieval techniques has given rise to the modern search engines, which return relevant results following a user's explicit query. The broad family of recommender systems, on the other hand, works in a more subtle manner and does not require an explicit query to provide relevant results. Collaborative Filtering (CF) recommender systems are based on algorithms that provide suggestions to users based on what they like and what other similar users like. Their strength lies in their ability to make serendipitous, social recommendations about what books to read, songs to listen to, movies to watch, courses to take, or generally any type of item to consume. Another strength is that they can recommend items of any type or content, because their focus is on modeling the preferences of the users rather than the content of the recommended items. Although recommender systems have made great strides over the last two decades, with significant algorithmic advances that have made them increasingly accurate in their predictions, they suffer from a few notorious weaknesses. These include the cold-start problem when new items or new users enter the system, and lack of interpretability and explainability in the case of powerful black-box predictors, such as the Singular Value Decomposition (SVD) family of recommenders, including, in particular, the popular Matrix Factorization (MF) techniques. The absence of any explanations to justify their predictions can reduce the transparency of recommender systems and thus adversely impact the users' trust in them.
In this work, we propose machine learning approaches for multi-domain Matrix Factorization (MF) recommender systems that can overcome the new-user cold-start problem. We also propose new algorithms to generate explainable recommendations, using two state-of-the-art models: Matrix Factorization (MF) and Restricted Boltzmann Machines (RBM). Our experiments, based on rigorous cross-validation on the MovieLens benchmark data set and on real user tests, confirmed that our proposed methods succeed in generating explainable recommendations without a major sacrifice in accuracy.
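    The abstract does not spell out the MF- and RBM-based explanation algorithms themselves; a common baseline for the kind of justification they aim at is the neighborhood-style explanation, sketched here on hypothetical toy data:

```python
import numpy as np

# Toy rating matrix; rows are users, columns are items (0 = unrated).
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 4, 0],
    [5, 5, 4, 1],
    [1, 0, 2, 5],
], dtype=float)

def cosine(a, b):
    mask = (a > 0) & (b > 0)          # compare only co-rated items
    if not mask.any():
        return 0.0
    return float(a[mask] @ b[mask] /
                 (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask])))

def explain(user, item, k=2):
    """Justify recommending `item` to `user` via similar users who rated it."""
    sims = [(v, cosine(R[user], R[v])) for v in range(len(R))
            if v != user and R[v, item] > 0]
    sims.sort(key=lambda t: t[1], reverse=True)
    neighbors = sims[:k]
    avg = np.mean([R[v, item] for v, _ in neighbors])
    return (f"{len(neighbors)} users similar to you rated this item "
            f"{avg:.1f} on average.")

print(explain(user=0, item=2))
```

    A model-based explainable recommender would produce a comparable justification directly from the learned MF or RBM parameters rather than from raw neighbors.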

    USER CONTROLLABILITY IN A HYBRID RECOMMENDER SYSTEM

    Since the introduction of Tapestry in 1990, research on recommender systems has traditionally focused on the development of algorithms whose goal is to increase the accuracy of predicting users’ taste based on historical data. In the last decade, this research has diversified, with human factors being one area that has received increased attention. Users’ characteristics, such as trusting propensity and interest in a domain, or systems’ characteristics, such as explainability and transparency, have been shown to improve the user experience with a recommender. This dissertation investigates the role of controllability and user characteristics in the engagement and experience of users of a hybrid recommender system. A hybrid recommender is a system that integrates the results of different algorithms to produce a single set of recommendations. This research examines whether allowing the user to control the process of fusing or integrating different algorithms (i.e., different sources of relevance) results in increased engagement and a better user experience. The essential contribution of this dissertation is an extensive study of controllability in a hybrid fusion scenario. In particular, the introduction of an interactive Venn diagram visualization, combined with the sliders explored in previous work, can provide an efficient visual paradigm for information filtering with a hybrid recommender that fuses different prospects of relevance with overlapping recommended items. This dissertation also provides a three-fold evaluation of the user experience: objective metrics, subjective user perception, and behavioral measures.
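    The fusion control described above can be illustrated with a minimal sketch: a single slider weight w blends the relevance scores of two hypothetical algorithms. (The actual system fuses sources through an interactive Venn diagram plus sliders; the item names and scores here are invented.)

```python
# Fuse two algorithms' relevance scores with a user-controlled weight
# (the "slider"): score = w * score_a + (1 - w) * score_b.
scores_a = {"item1": 0.9, "item2": 0.4, "item3": 0.7}   # e.g. collaborative filtering
scores_b = {"item1": 0.2, "item2": 0.8, "item3": 0.6}   # e.g. content-based

def fuse(w):
    """Return items ranked by the weighted blend of both score sources."""
    items = scores_a.keys() | scores_b.keys()
    fused = {i: w * scores_a.get(i, 0) + (1 - w) * scores_b.get(i, 0)
             for i in items}
    return sorted(fused, key=fused.get, reverse=True)

print(fuse(1.0))   # pure algorithm A
print(fuse(0.0))   # pure algorithm B
print(fuse(0.5))   # an even blend of both
```

    Moving the slider continuously between 0 and 1 is precisely the kind of user control whose effect on engagement the dissertation evaluates.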

    NEXT LEVEL: A COURSE RECOMMENDER SYSTEM BASED ON CAREER INTERESTS

    Skills-based hiring is a talent management approach that empowers employers to align recruitment around business results, rather than around credentials and titles. It starts with employers identifying the particular skills required for a role, and then screening and evaluating candidates’ competencies against those requirements. With the recent rise in employers adopting skills-based hiring practices, it has become essential for students to take courses that improve their marketability and support their long-term career success. A 2017 survey of over 32,000 students at 43 randomly selected institutions found that only 34% of students believe they will graduate with the skills and knowledge required to be successful in the job market. Furthermore, the study found that while 96% of chief academic officers believe that their institutions are very or somewhat effective at preparing students for the workforce, only 11% of business leaders strongly agree [11]. An implication of this misalignment is that college graduates lack the skills that companies need and value. Fortunately, the rise of skills-based hiring provides an opportunity for universities and students to establish and follow clearer classroom-to-career pathways. To this end, this paper presents a course recommender system that aims to improve students’ career readiness by suggesting relevant skills and courses based on their unique career interests.
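    The paper's recommendation algorithm is not detailed in the abstract; one simple way such a skills-to-courses match could work is to rank courses by the Jaccard overlap between a student's target skills and each course's skill tags. (The course names and tags below are invented.)

```python
# Each course is tagged with the skills it teaches; a student states career
# interests as a set of target skills. Courses are ranked by Jaccard overlap.
courses = {
    "Intro to Databases":  {"sql", "data modeling"},
    "Machine Learning":    {"python", "statistics", "modeling"},
    "Web Development":     {"javascript", "html", "css"},
    "Data Visualization":  {"python", "visualization", "statistics"},
}

def recommend(target_skills, top_n=2):
    def jaccard(a, b):
        # |intersection| / |union|: 1.0 means identical skill sets.
        return len(a & b) / len(a | b)
    ranked = sorted(courses.items(),
                    key=lambda kv: jaccard(target_skills, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_n]]

print(recommend({"python", "statistics", "visualization"}))
```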

    Understanding the Role of Interactivity and Explanation in Adaptive Experiences

    Adaptive experiences have been an active area of research in the past few decades, accompanied by advances in technology such as machine learning and artificial intelligence. Whether the ongoing research on adaptive experiences has focused on personalization algorithms, explainability, user engagement, or privacy and security, growing interest and resources are being devoted to developing and improving these research focuses. Even though research on adaptive experiences has been dynamic and rapidly evolving, achieving a high level of user engagement in adaptive experiences remains a challenge. This dissertation aims to uncover ways to engage users in adaptive experiences by incorporating interactivity and explanation through four studies. Study I takes the first step of linking explanation and interactivity in machine learning systems, using the Tic-Tac-Toe game as a use case to facilitate users' engagement with the underlying machine learning model. The results show that explainable machine learning (XML) systems (and arguably XAI systems in general) indeed benefit from mechanisms that allow users to interact with the system's internal decision rules. Studies II, III, and IV further focus on adaptive experiences in recommender systems specifically, exploring the role of interactivity and explanation in keeping the user “in the loop”, trying to mitigate the “filter bubble” problem, and helping users self-actualize by supporting them in exploring and understanding their unique tastes. Study II investigates the effect of recommendation source (a human expert vs. an AI algorithm) and justification method (needs-based vs. interest-based justification) on professional development recommendations in a scenario-based study setting.
The results show an interaction effect between these two system aspects: users who are told that the recommendations are based on their interests have a better experience when the recommendations are presented as originating from an AI algorithm, while users who are told that the recommendations are based on their needs have a better experience when the recommendations are presented as originating from a human expert. This implies that, in the novel movie recommender system built in Study IV, the system would provide a better user experience if the movie recommendations were presented as originating from algorithms rather than from a human expert, considering that movie preferences (which will be visualized through the movies' emotion feature) are usually based on users' interests. Study III explores the effects of four novel alternative recommendation lists on participants' perceptions of recommendations and their satisfaction with the system. These four alternative lists (the RSSA features), which have the potential to go beyond the traditional top-N recommendations, provide transparency at a different level: they show how much else the system learns about users beyond the traditional top-N recommendations, and in turn enable users to interact with the alternative lists by rating the initial recommendations so as to correct or confirm the system's estimates of the alternative recommendations. The subjective evaluation and behavioral analysis demonstrate that the proposed RSSA features had a significant effect on the user experience. Surprisingly, two of the four RSSA features (the controversial and hate features) performed worse than the traditional top-N recommendations on the measured subjective dependent variables, while the other two (the hipster and no-clue items) performed equally well and even slightly better than the traditional top-N (though this effect is not statistically significant).
Moreover, the results indicate that individual differences, such as the need for novelty and domain knowledge, play a significant role in users' perception of and interaction with the system. Study IV further combines diversification, visualization, and interactivity, aiming to encourage users to be more engaged with the system. The results show that introducing emotion as an item feature into recommender systems does help with personalization and individual taste exploration; these benefits are greatly amplified by mechanisms that diversify recommendations by emotional signature, visualize recommendations on the emotional signature, and allow users to directly interact with the system by tweaking their tastes, which further contributes to both user experience and self-actualization. This work has practical implications for designing adaptive experiences. Explanation solutions in adaptive experiences might not always lead to a positive user experience; whether they do depends heavily on the application domain and the context (as studied in all four studies), so it is essential to carefully investigate a specific explanation solution in combination with other design elements in different fields. Introducing control by allowing for direct interactivity (vs. indirect interactivity) in adaptive systems, and providing feedback on users' input by integrating that input into the algorithms, would create a more engaging and interactive user experience (as studied in Studies I and IV). Cumulatively, appropriate direct interaction with the system, along with deliberate and thoughtful designs of explanation (including visualization design with the application environment fully considered) that can arouse user reflection or resonance, would potentially promote both user experience and user self-actualization.

    Explanations in Music Recommender Systems in a Mobile Setting

    Revised version: some spelling errors corrected.
    Every day, millions of users utilize their mobile phones to access music streaming services such as Spotify. However, these ‘black boxes’ seldom provide adequate explanations for their music recommendations. A systematic literature review revealed that there is a strong relationship between moods and music, and that explanations and interface design choices can affect how people perceive recommendations just as much as algorithm accuracy. However, little seems to be known about how to apply user-centric design approaches, which exploit affective information to present explanations, to mobile devices. In order to bridge these gaps, the work of Andjelkovic, Parra, & O’Donovan (2019) was extended and applied as non-interactive designs in a mobile setting. Three separate Amazon Mechanical Turk studies asked participants to compare the same three interface designs: baseline, textual, and visual (n=178). Each survey displayed a different playlist with either low, medium, or high music popularity. Results indicate that music familiarity may or may not influence the need for explanations, but explanations are important to users. Both explanatory designs fared equally well, outperforming the baseline, and the use of affective information may help systems become more efficient, transparent, trustworthy, and satisfactory. Overall, there does not seem to be a ‘one design fits all’ solution for explanations in a mobile setting.
    Master's Thesis in Information Science

    Explorative Analysis of Recommendations Through Interactive Visualization

    Even though today's recommender algorithms are highly sophisticated, they can hardly take into account the users' situational needs. An obvious way to address this is to initially inquire about the users' momentary preferences, but the users' inability to accurately state them upfront may lead to the loss of several good alternatives. Hence, this paper suggests generating the recommendations without such additional input data from the users and letting them interactively explore the recommended items on their own. To support this explorative analysis, a novel visualization tool based on treemaps is developed. The analysis of the prototype demonstrates that the interactive treemap visualization facilitates the users' comprehension of the big picture of available alternatives and the reasoning behind the recommendations. This helps the users become clear about their situational needs, inspect the most relevant recommendations in detail, and finally arrive at informed decisions.
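    The paper's treemap tool is not described at code level; its core idea, area proportional to relevance, can be sketched as a one-level "slice" layout (real treemaps recurse into sub-rectangles and alternate the cut direction; the labels and weights here are illustrative):

```python
# Lay out items in a rectangle so that each item's area is proportional to
# its weight: a one-level "slice" treemap layout.
def slice_treemap(items, x, y, w, h):
    """items: list of (label, weight); returns [(label, x, y, w, h), ...]."""
    total = sum(wt for _, wt in items)
    rects, offset = [], 0.0
    for label, wt in items:
        width = w * wt / total          # slice along the x axis
        rects.append((label, x + offset, y, width, h))
        offset += width
    return rects

# Three recommendations with relevance weights 6, 3, and 1 in a 100x50 canvas.
layout = slice_treemap([("A", 6), ("B", 3), ("C", 1)], 0, 0, 100, 50)
```

    In an interactive tool, each rectangle would be clickable for detail on demand, which is what lets users inspect the most relevant recommendations while keeping the big picture of all alternatives in view.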