1,174 research outputs found

    Exploring the effects of natural language justifications in food recommender systems

    Users of food recommender systems typically prefer popular recipes, which tend to be unhealthy. To encourage users to select healthier recommendations by making more informed food decisions, we introduce a methodology to generate and present a natural language justification that emphasizes the nutritional content, or the health risks and benefits, of recommended recipes. We designed a framework that takes a user and two food recommendations as input and produces an automatically generated natural language justification as output, based on the user's characteristics and the recipes' features. We implemented and evaluated eight justification strategies across two justification styles (e.g., comparing each recipe's food features) in an online user study (N = 503). We compared user food choices for two personalized recommendation approaches, popularity-based vs. our health-aware algorithm, and evaluated the impact of presenting natural language justifications. We showed that comparative justification styles are effective in supporting choices for our health-aware recommendations, confirming the impact of our methodology on food choices.
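
The framework described in this abstract, which maps a user and a pair of recipes to a comparative justification, could be sketched roughly as below. This is a minimal illustration only: the class fields, the calorie-based health cue, and the wording template are all assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Recipe:
    name: str
    calories: float  # kcal per serving (illustrative feature)
    fat_g: float     # grams of fat per serving (illustrative feature)

def comparative_justification(user_goal: str, a: Recipe, b: Recipe) -> str:
    """Produce a comparative-style natural language justification,
    here favouring the recipe with fewer calories as one possible
    health cue; a real system would weigh many nutritional features."""
    healthier, other = (a, b) if a.calories <= b.calories else (b, a)
    return (
        f"If your goal is {user_goal}, consider '{healthier.name}': "
        f"it has {other.calories - healthier.calories:.0f} fewer kcal per "
        f"serving than '{other.name}' and {healthier.fat_g:g} g of fat."
    )

print(comparative_justification(
    "eating healthier",
    Recipe("Lentil soup", 220, 4.0),
    Recipe("Mac and cheese", 540, 22.0),
))
```

Note how the output text is grounded in both the user's stated goal and the recipes' feature values, mirroring the input/output contract the abstract describes.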

    Tell Me Why (I Want It That Way) – Effects of Explanations and Online Customer Reviews on Trust in Recommender Systems

    Review-based recommender systems (RS) have shown great potential in helping users manage information overload and find suitable items. However, a lack of trust still impedes the widespread acceptance of RS. To increase users' trust, research proposes various methods to generate justifications or explanations. Furthermore, online customer reviews (OCRs) are found to be a trustworthy and reliable source of information. However, it is still unclear how justifications compare to explanations in their influence on users' trust, and whether basing them on OCRs adds further trust. Hence, we conduct an online experiment with 531 participants and find that explanations exceed justifications in increasing users' trust, while basing them on OCRs directly increases users' intentions to use the system and adopt recommendations without increasing trust in the RS itself. Unifying different research streams from review-based RS and Explainable Artificial Intelligence, we provide an overarching, holistic view of the conception of justifications and explanations.

    Argument-based generation and explanation of recommendations

    In the recommender systems literature, it has been shown that, in addition to improving system effectiveness, explaining recommendations may increase user satisfaction, trust, persuasion and loyalty. In general, explanations focus on the filtering algorithms or on the users and items involved in generating recommendations. However, in certain domains that are rich in user-generated textual content, it would be valuable to justify recommendations through arguments that are explicit in, underlying, or related to the data used by the systems, e.g., the reasons behind customers' opinions in reviews on e-commerce sites, or the requests and claims in citizens' proposals and debates on e-participation platforms. In this context, automatically extracting and exploiting the arguments given for and against evaluated items is a necessary and challenging task. We thus advocate focusing not only on user preferences and item features, but also on associated arguments. In other words, we propose to consider not only what is said about items, but also why it is said. Hence, arguments would not only be part of recommendation explanations, but could also be used by the recommendation algorithms themselves. To this end, in this thesis, we propose to use argument mining techniques and tools that retrieve and relate argumentative information from textual content, and investigate recommendation methods that exploit that information before, during and after their filtering processes. The author thanks his supervisor Iván Cantador for his valuable support and guidance in defining this thesis project. The work is supported by the Spanish Ministry of Science and Innovation (PID2019-108965GB-I00).
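
The idea of feeding mined arguments back into the recommendation algorithm itself, rather than only into explanations, could be sketched as follows. The keyword-based "miner" and the scoring adjustment are purely illustrative stand-ins for a real argument mining model; none of these names or heuristics come from the thesis.

```python
from collections import Counter

def mine_arguments(review: str) -> list[tuple[str, str]]:
    """Toy stand-in for argument mining: yield (stance, claim) pairs
    from a review. Real systems would use a trained argument mining
    model instead of this keyword heuristic."""
    text = review.lower()
    if "because" in text:
        stance = "con" if any(w in text for w in ("not", "never", "bad")) else "pro"
        claim = text.split("because", 1)[1].strip()
        return [(stance, claim)]
    return []

def argument_aware_score(base_score: float, reviews: list[str]) -> tuple[float, list[str]]:
    """Adjust a base recommendation score by the balance of mined
    pro/con arguments, and keep the pro claims as explanation material,
    so arguments serve both filtering and justification."""
    args = [a for r in reviews for a in mine_arguments(r)]
    stances = Counter(s for s, _ in args)
    adjusted = base_score + 0.1 * (stances["pro"] - stances["con"])
    pros = [claim for _s, claim in args if _s == "pro"]
    return adjusted, pros
```

The design point mirrors the abstract: the same mined arguments are used twice, once to re-rank items ("during" filtering) and once as raw material for the textual explanation ("after" filtering).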