12 research outputs found

    Exploring the effects of natural language justifications in food recommender systems

    Users of food recommender systems typically prefer popular recipes, which tend to be unhealthy. To encourage users to make more informed food decisions and select healthier recommendations, we introduce a methodology for generating and presenting a natural language justification that emphasizes the nutritional content, or the health risks and benefits, of recommended recipes. We designed a framework that takes a user and two food recommendations as input and produces an automatically generated natural language justification as output, based on the user’s characteristics and the recipes’ features. We implemented and evaluated eight justification strategies across two justification styles (e.g., comparing each recipe’s food features) in an online user study (N = 503). We compared user food choices under two personalized recommendation approaches, a popularity-based algorithm vs. our health-aware algorithm, and evaluated the impact of presenting natural language justifications. We showed that comparative justification styles are effective in supporting choices for our health-aware recommendations, confirming the impact of our methodology on food choices.
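    As a rough illustration of the kind of framework described above, the following Python sketch compares the nutritional features of two recommended recipes and renders a short comparative justification. The recipe fields, thresholds, and wording are hypothetical placeholders, not the authors' actual implementation.

```python
# Illustrative sketch only: a comparative natural-language justification
# for two recommended recipes, based on hypothetical nutrition fields.

def comparative_justification(user, recipe_a, recipe_b):
    """Return a short text comparing two recipes on a nutrient the user should limit."""
    # Hypothetical user attribute: which nutrient the user wants to restrict.
    nutrient = user.get("restricted_nutrient", "saturated fat")
    a_val = recipe_a["nutrition"].get(nutrient, 0.0)
    b_val = recipe_b["nutrition"].get(nutrient, 0.0)

    healthier, other = (recipe_a, recipe_b) if a_val < b_val else (recipe_b, recipe_a)
    low, high = min(a_val, b_val), max(a_val, b_val)
    return (f"{healthier['name']} contains {low:.0f} g of {nutrient} per serving, "
            f"compared with {high:.0f} g in {other['name']}, "
            f"so it may better fit your goal of limiting {nutrient}.")

# Usage with made-up data:
user = {"restricted_nutrient": "saturated fat"}
pasta = {"name": "Creamy carbonara", "nutrition": {"saturated fat": 18.0}}
salad = {"name": "Grilled chicken salad", "nutrition": {"saturated fat": 4.0}}
print(comparative_justification(user, pasta, salad))
```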

    Mining semantic knowledge graphs to add explainability to black box recommender systems

    Recommender systems are being increasingly used to predict the preferences of users on online platforms and recommend relevant options that help them cope with information overload. In particular, modern model-based collaborative filtering algorithms, such as latent factor models, are considered state-of-the-art in recommender systems. Unfortunately, these black box systems lack transparency, as they provide little information about the reasoning behind their predictions. White box systems, in contrast, can, by nature, easily generate explanations. However, their predictions are less accurate than those of sophisticated black box models. Recent research has demonstrated that explanations are an essential component in bringing the powerful predictions of big data and machine learning methods to a mass audience without compromising trust. Explanations can take a variety of formats, depending on the recommendation domain and the machine learning model used to make predictions. The objective of this work is to build a recommender system that can generate both accurate predictions and semantically rich explanations that justify those predictions. We propose a novel approach to build an explanation generation mechanism into a latent factor-based black box recommendation model. The designed model is trained to learn to make predictions that are accompanied by explanations automatically mined from the semantic web. Our evaluation experiments, which carefully study the trade-offs between the quality of predictions and explanations, show that our proposed approach succeeds in producing explainable predictions without a significant sacrifice in prediction accuracy.
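    The approach above pairs latent factor predictions with explanations mined from semantic web data. As a loose, hypothetical sketch of that general idea (not the authors' model), the snippet below looks up knowledge graph attributes that a recommended item shares with items the user already liked and turns them into an explanation sentence. The toy graph and field names are invented for illustration.

```python
# Hypothetical sketch: justify a black-box recommendation by finding
# knowledge-graph attributes shared with the user's liked items.

# Toy knowledge graph: item -> set of (property, value) pairs.
KG = {
    "The Matrix":   {("genre", "sci-fi"), ("director", "Wachowski")},
    "Inception":    {("genre", "sci-fi"), ("director", "Nolan")},
    "Interstellar": {("genre", "sci-fi"), ("director", "Nolan")},
}

def explain(recommended, liked_items):
    """Collect semantic features the recommendation shares with the liked items."""
    shared = KG[recommended] & set().union(*(KG[i] for i in liked_items))
    if not shared:
        return f"Recommended {recommended} based on your overall rating profile."
    reasons = ", ".join(f"{p} = {v}" for p, v in sorted(shared))
    return f"Recommended {recommended} because, like items you rated highly, it has {reasons}."

print(explain("Interstellar", ["The Matrix", "Inception"]))
```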

    Personalising explainable recommendations: Literature and conceptualisation

    Explanations in intelligent systems aim to enhance users’ understanding of the system’s reasoning process and the resulting decisions and recommendations. Explanations typically increase trust, user acceptance and retention. The need for explanations is on the rise due to increasing public concern about AI and the emergence of new laws, such as the General Data Protection Regulation (GDPR) in Europe. However, users differ in their needs for explanations, and such needs can depend on their dynamic context. Explanations also risk being perceived as information overload, which makes personalisation all the more necessary. In this paper, we review the literature on personalising explanations in intelligent systems. We synthesise a conceptualisation that brings together the various aspects considered important for personalisation needs and their implementation. Moreover, we identify several challenges that need more research, including the frequency of explanations and their evolution in tandem with the ongoing user experience.

    A treatment recommender clinical decision support system for personalized medicine: method development and proof-of-concept for drug resistant tuberculosis

    Background: Personalized medicine tailors care based on the patient’s or pathogen’s genotypic and phenotypic characteristics. An automated Clinical Decision Support System (CDSS) could help translate these genotypic and phenotypic characteristics into optimal treatment, and thus facilitate the implementation of individualized treatment by less experienced physicians. Methods: We developed a hybrid knowledge- and data-driven treatment recommender CDSS. Stakeholders and experts first define the knowledge base by identifying and quantifying drug and regimen features for the prototype model input. In an iterative manner, feedback from experts is harvested to generate model training datasets, machine learning methods are applied to identify complex relations and patterns in the data, and model performance is assessed by estimating the precision at one, mean reciprocal rank and mean average precision. Once model performance no longer increases between iterations, a validation dataset is used to assess model overfitting. Results: We applied the novel methodology to develop a treatment recommender CDSS for individualized treatment of drug resistant tuberculosis as a proof of concept. Using input from stakeholders and three rounds of expert feedback on a dataset of 355 patients with 129 unique drug resistance profiles, the model achieved a precision at 1 of 95%, indicating that the highest-ranked treatment regimen was considered appropriate by the experts in 95% of cases. Use of a validation dataset, however, suggested substantial model overfitting, with precision at 1 dropping to 78%. Conclusion: Our novel and flexible hybrid knowledge- and data-driven treatment recommender CDSS is a first step towards the automation of individualized treatment for personalized medicine. Further research should assess its value in fields other than drug resistant tuberculosis, develop solid statistical approaches to assess model performance, and evaluate accuracy in real-life clinical settings.
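    The evaluation above relies on standard ranking metrics. The following Python sketch shows one common way to compute precision at 1, mean reciprocal rank, and mean average precision from ranked regimen lists; the toy data and function names are illustrative and not taken from the paper.

```python
# Illustrative implementations of the ranking metrics named in the abstract:
# precision at 1 (P@1), mean reciprocal rank (MRR), mean average precision (MAP).

def precision_at_1(ranked_lists, relevant_sets):
    hits = sum(1 for ranked, rel in zip(ranked_lists, relevant_sets) if ranked[0] in rel)
    return hits / len(ranked_lists)

def mean_reciprocal_rank(ranked_lists, relevant_sets):
    total = 0.0
    for ranked, rel in zip(ranked_lists, relevant_sets):
        rank = next((i + 1 for i, item in enumerate(ranked) if item in rel), None)
        total += 1.0 / rank if rank else 0.0
    return total / len(ranked_lists)

def mean_average_precision(ranked_lists, relevant_sets):
    total = 0.0
    for ranked, rel in zip(ranked_lists, relevant_sets):
        hits, ap = 0, 0.0
        for i, item in enumerate(ranked, start=1):
            if item in rel:
                hits += 1
                ap += hits / i
        total += ap / max(len(rel), 1)
    return total / len(ranked_lists)

# Toy example: two cases, each with a ranked list of regimens and the set judged appropriate.
ranked = [["regimen_A", "regimen_B", "regimen_C"], ["regimen_B", "regimen_A", "regimen_C"]]
relevant = [{"regimen_A"}, {"regimen_A", "regimen_C"}]
print(precision_at_1(ranked, relevant),
      mean_reciprocal_rank(ranked, relevant),
      mean_average_precision(ranked, relevant))
```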

    The effects of controllability and explainability in a social recommender system

    In recent years, researchers in the field of recommender systems have explored a range of advanced interfaces to improve user interactions with recommender systems. Some of the major research ideas explored in this area are the explainability and controllability of recommendations. Controllability enables end users to participate in the recommendation process by providing various kinds of input. Explainability focuses on making the recommendation process and the reasons behind specific recommendations clearer to users. While each of these approaches contributes to making traditional “black-box” recommendation more attractive and acceptable to end users, little is known about how these approaches work together. In this paper, we investigate the effects of adding user control and visual explanations in the specific context of an interactive hybrid social recommender system. We present Relevance Tuner+, a hybrid recommender system that allows users to control the fusion of multiple recommender sources while also offering explanations of both the fusion process and each of the source recommendations. We also report the results of a controlled study (N = 50) that explores the impact of controllability and explainability in this context.
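    Relevance Tuner+ lets users control how multiple recommendation sources are fused. The sketch below shows one plausible form of that idea, a user-adjustable weighted combination of per-source relevance scores; the source names, weights, and scoring scheme are hypothetical and not the system's actual design.

```python
# Hypothetical sketch of user-controllable fusion of several recommender sources.
# Each source assigns a relevance score to candidate items; sliders set the weights.

def fuse(source_scores, weights):
    """Combine per-source scores into a single ranking using user-set weights."""
    fused = {}
    for source, scores in source_scores.items():
        w = weights.get(source, 0.0)
        for item, score in scores.items():
            fused[item] = fused.get(item, 0.0) + w * score
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Toy example: three sources scoring the same candidate set.
source_scores = {
    "publication_similarity": {"alice": 0.9, "bob": 0.4, "carol": 0.2},
    "social_ties":            {"alice": 0.1, "bob": 0.8, "carol": 0.5},
    "topic_overlap":          {"alice": 0.6, "bob": 0.3, "carol": 0.9},
}
weights = {"publication_similarity": 0.5, "social_ties": 0.2, "topic_overlap": 0.3}  # user "sliders"
for item, score in fuse(source_scores, weights):
    print(item, round(score, 2))
```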

    Explainable Neural Attention Recommender Systems

    Recommender systems, predictive models that provide lists of personalized suggestions, have become increasingly popular in many web-based businesses. By presenting potential items that may interest a user, these systems are able to better monetize and improve users’ satisfaction. In recent years, the most successful approaches rely on capturing what best defines users and items in the form of latent vectors, a numeric representation that assumes all instances can be described by their respective affiliation towards a set of hidden features. However, recommendation methods based on latent features still face some real-world limitations. The data sparsity problem originates from the unprecedented variety of available items, making generated suggestions irrelevant to many users. Furthermore, many systems are now expected to accompany their suggestions with corresponding reasoning. Users who receive unjustified recommendations they do not agree with are likely to stop using the system or ignore its suggestions. In this work, we investigate current trends in the field of recommender systems and focus on two rising areas: deep recommendation and explainable recommender systems. First, we present the Textual and Contextual Embedding-based Neural Recommender (TCENR), a model that mitigates the data sparsity problem in the area of point-of-interest (POI) recommendation. This method employs different types of deep neural networks to learn varied perspectives of the same user-location interaction, using textual reviews, geographical data and social networks.
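    TCENR is described as learning several perspectives of the same user-location interaction from text, geography, and social data. As a very rough PyTorch sketch of that general pattern (not the published architecture), the model below encodes three feature views with separate branches and fuses them to predict an interaction; all layer sizes and input shapes are invented.

```python
# Rough sketch of a multi-view neural recommender: three branches (review text,
# geography, social) fused to predict a user-POI interaction. Not the TCENR architecture.
import torch
import torch.nn as nn

class MultiViewRecommender(nn.Module):
    def __init__(self, text_dim=64, geo_dim=8, social_dim=32, hidden=32):
        super().__init__()
        self.text_branch = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.geo_branch = nn.Sequential(nn.Linear(geo_dim, hidden), nn.ReLU())
        self.social_branch = nn.Sequential(nn.Linear(social_dim, hidden), nn.ReLU())
        self.head = nn.Linear(3 * hidden, 1)  # fused prediction of interaction likelihood

    def forward(self, text_feat, geo_feat, social_feat):
        fused = torch.cat([self.text_branch(text_feat),
                           self.geo_branch(geo_feat),
                           self.social_branch(social_feat)], dim=-1)
        return torch.sigmoid(self.head(fused)).squeeze(-1)

# Toy forward pass with random features for a batch of 4 user-POI pairs.
model = MultiViewRecommender()
scores = model(torch.randn(4, 64), torch.randn(4, 8), torch.randn(4, 32))
print(scores.detach())
```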

    Presentation Bias in movie recommendation algorithms

    Dissertation presented as partial fulfilment of the requirements for a Master’s degree in Statistics and Information Management, specialization in Information Analysis and Management. The emergence of video on demand (VOD) has transformed the way content finds its audience. Several improvements have been made to algorithms to provide better movie recommendations to individuals. Given the huge variety of elements that characterize a film (such as casting, genre and soundtrack, amongst other artistic and technical aspects) and that characterize individuals, most improvements have relied on exploiting those characteristics to better match potential clients to each product. However, little attention has been given to evaluating how the algorithms’ result selection is affected by presentation bias. Understanding bias is key to choosing which algorithms companies will use. The existence of a system with presentation bias and a feedback loop is a problem already acknowledged by Netflix. This research fills that gap by providing a comparative analysis of presentation bias across major movie recommendation algorithms.

    An explainable recommender system based on semantically-aware matrix factorization.

    Collaborative filtering techniques provide the ability to handle big and sparse data to predict ratings for unseen items with high accuracy. Matrix factorization is an accurate collaborative filtering method used to predict user preferences. However, it is a black box system that recommends items to users without being able to explain why. This is due to the type of information these systems use to build models. Although rich in information, user ratings do not adequately satisfy the need for explanation in certain domains. White box systems, in contrast, can, by nature, easily generate explanations. However, their predictions are less accurate than those of sophisticated black box models. Recent research has demonstrated that explanations are an essential component in bringing the powerful predictions of big data and machine learning methods to a mass audience without compromising trust. Explanations can take a variety of formats, depending on the recommendation domain and the machine learning model used to make predictions. Semantic Web (SW) technologies have been exploited increasingly in recommender systems in recent years. The SW consists of knowledge graphs (KGs) providing valuable information that can help improve the performance of recommender systems. Yet KGs have not been used to explain recommendations in black box systems. In this dissertation, we exploit the power of the SW to build new explainable recommender systems. We use the SW’s rich expressive power of linked data, along with structured information search and understanding tools, to explain predictions. More specifically, we take advantage of semantic data to learn a semantically aware latent space of users and items in the matrix factorization model-learning process to build richer, explainable recommendation models. Our off-line and on-line evaluation experiments show that our approach achieves accurate prediction with the additional ability to explain recommendations, in comparison to baseline approaches. By fostering explainability, we hope that our work contributes to more transparent, ethical machine learning without sacrificing accuracy.
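    As a loose numerical sketch of the general idea of a semantically aware latent space (not the dissertation's actual model), the snippet below adds a term to plain matrix factorization that pulls the latent vectors of semantically related items together, so that "similar to an item you liked" explanations become meaningful; the KG-derived item pairs and hyperparameters are invented for illustration.

```python
# Rough sketch: matrix factorization with an extra term encouraging items that are
# semantically related (e.g., linked in a knowledge graph) to have similar latent vectors.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 5, 6, 3
R = rng.integers(0, 6, size=(n_users, n_items)).astype(float)   # toy rating matrix (0 = missing)
related = [(0, 1), (2, 3)]                                       # hypothetical KG-related item pairs

P = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
Q = rng.normal(scale=0.1, size=(n_items, k))   # item latent factors
lr, reg, sem = 0.01, 0.05, 0.1                 # learning rate, L2 weight, semantic weight

for _ in range(200):
    for u in range(n_users):
        for i in range(n_items):
            if R[u, i] == 0:                   # skip missing ratings
                continue
            err = R[u, i] - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    for i, j in related:                       # pull related items' vectors together
        diff = Q[i] - Q[j]
        Q[i] -= lr * sem * diff
        Q[j] += lr * sem * diff

print("predicted ratings:\n", np.round(P @ Q.T, 2))
```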