1,973 research outputs found

    Layered evaluation of interactive adaptive systems: framework and formative methods

    Peer reviewed. Postprint.

    Explanations in Music Recommender Systems in a Mobile Setting

    Revised version: some spelling errors corrected. Every day, millions of users access music streaming services such as Spotify on their mobile phones. However, these 'black boxes' seldom provide adequate explanations for their music recommendations. A systematic literature review revealed a strong relationship between moods and music, and that explanations and interface design choices can affect how people perceive recommendations just as much as algorithm accuracy does. However, little seems to be known about how to apply user-centric design approaches that exploit affective information to present explanations on mobile devices. To bridge these gaps, the work of Andjelkovic, Parra, & O'Donovan (2019) was extended and applied as non-interactive designs in a mobile setting. Three separate Amazon Mechanical Turk studies asked participants to compare the same three interface designs: baseline, textual, and visual (n=178). Each survey displayed a different playlist with either low, medium, or high music popularity. Results indicate that music familiarity may or may not influence the need for explanations, but that explanations are important to users. Both explanatory designs fared equally well and better than the baseline, and the use of affective information may help systems become more efficient, transparent, trustworthy, and satisfactory. Overall, there does not seem to be a 'one design fits all' solution for explanations in a mobile setting. Master's Thesis in Information Science.
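
    To make the idea of affect-aware textual explanations concrete, here is a minimal sketch in the spirit of the designs the thesis compares; it is not the thesis's implementation, and the track fields and mood labels are invented for illustration:

    ```python
    # Illustrative sketch only: a template-based textual explanation that
    # draws on affective (mood) metadata. Field names and mood labels are
    # hypothetical, not taken from the thesis.

    def textual_explanation(track: dict, user_mood: str) -> str:
        """Render a short 'why this track' explanation from track metadata."""
        reasons = []
        if user_mood in track.get("moods", []):
            reasons.append(f"it matches your current {user_mood} mood")
        if track.get("similar_to"):
            reasons.append(f"you listened to {track['similar_to']}")
        if not reasons:
            reasons.append("it is popular with listeners like you")
        return f"Recommended '{track['title']}' because " + " and ".join(reasons) + "."

    print(textual_explanation(
        {"title": "Weightless", "moods": ["calm"], "similar_to": "Marconi Union"},
        user_mood="calm",
    ))
    ```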

    Visualization for Recommendation Explainability: A Survey and New Perspectives

    Providing system-generated explanations for recommendations represents an important step towards transparent and trustworthy recommender systems. Explainable recommender systems provide a human-understandable rationale for their outputs. Over the last two decades, explainable recommendation has attracted much attention in the recommender systems research community. This paper aims to provide a comprehensive review of research efforts on visual explanation in recommender systems. More concretely, we systematically review the literature on explanations in recommender systems based on four dimensions, namely explanation goal, explanation scope, explanation style, and explanation format. Recognizing the importance of visualization, we approach the recommender system literature from the angle of explanatory visualizations, that is, using visualizations as a display style of explanation. As a result, we derive a set of guidelines that might be constructive for designing explanatory visualizations in recommender systems and identify perspectives for future work in this field. The aim of this review is to help recommendation researchers and practitioners better understand the potential of visually explainable recommendation research and to support them in the systematic design of visual explanations in current and future recommender systems. Comment: Updated version, Nov. 2023, 36 pages.
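
    As an illustration of the survey's four classification dimensions, the sketch below encodes them as a small data structure. The concrete values under each dimension are assumptions drawn from common usage in the explanation literature, not the survey's full taxonomy:

    ```python
    # A minimal sketch of the four review dimensions (goal, scope, style,
    # format) as enums plus a record type. The member values are illustrative
    # assumptions, not the paper's complete classification.
    from dataclasses import dataclass
    from enum import Enum

    class Goal(Enum):
        TRANSPARENCY = "transparency"
        TRUST = "trust"
        PERSUASIVENESS = "persuasiveness"
        EFFECTIVENESS = "effectiveness"

    class Scope(Enum):
        INPUT = "input"      # explain the data the recommender used
        PROCESS = "process"  # explain how the algorithm works
        OUTPUT = "output"    # explain a specific recommendation

    class Style(Enum):
        CONTENT_BASED = "content-based"
        COLLABORATIVE = "collaborative"
        SOCIAL = "social"
        HYBRID = "hybrid"

    class Format(Enum):
        TEXTUAL = "textual"
        VISUAL = "visual"

    @dataclass
    class ExplanationDesign:
        goal: Goal
        scope: Scope
        style: Style
        format: Format

    # Example: a visual, collaborative, output-level explanation aimed at trust.
    design = ExplanationDesign(Goal.TRUST, Scope.OUTPUT, Style.COLLABORATIVE, Format.VISUAL)
    print(design)
    ```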

    Controllability and explainability in a hybrid social recommender system

    The growth of artificial intelligence (AI) technology has advanced many human-facing applications. The recommender system is one of the most promising sub-domains of AI-driven applications; it aims to predict items or ratings based on user preferences. These systems are empowered by large-scale data and automated inference methods that bring useful but sometimes puzzling suggestions to users: the output is usually unpredictable and opaque, which can leave users confused, frustrated, or even endangered in many life-changing scenarios. Adding controllability and explainability are two promising approaches to improving human interaction with AI. However, the varying capability of AI-driven applications makes conventional design principles less useful, which brings tremendous opportunities as well as challenges for user interface and interaction design and has been discussed in the human-computer interaction (HCI) community for over two decades. The goal of this dissertation is to build a framework for AI-driven applications that enables people to interact effectively with the system and to interpret its output. Specifically, this dissertation explores how to bring controllability and explainability to a hybrid social recommender system, including several attempts at designing user-controllable and explainable interfaces that allow users to fuse multi-dimensional relevance and to request explanations of the received recommendations. This work contributes to the HCI field by providing design implications for enhancing human-AI interaction and improving the transparency of AI-driven applications.
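
    A minimal sketch of the core mechanism described here, user-controlled fusion of multi-dimensional relevance: slider weights combine several per-dimension relevance scores into one ranking score. The dimension names and numbers are hypothetical; this is illustrative, not the dissertation's actual system:

    ```python
    # Controllable hybrid recommendation, sketched: the user moves sliders,
    # the sliders become weights, and a weighted sum fuses the relevance
    # dimensions. All dimensions and values below are invented for the example.
    def fuse_relevance(scores: dict[str, float], weights: dict[str, float]) -> float:
        """Weighted sum of per-dimension relevance, normalized by total weight."""
        total = sum(weights.values()) or 1.0
        return sum(weights.get(dim, 0.0) * s for dim, s in scores.items()) / total

    # Hypothetical dimensions for a social recommender: text similarity,
    # social proximity, and topic overlap. Slider positions lie in [0, 1].
    candidate = {"text": 0.82, "social": 0.40, "topic": 0.65}
    sliders   = {"text": 1.0,  "social": 0.2,  "topic": 0.6}
    print(round(fuse_relevance(candidate, sliders), 3))  # 0.717
    ```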

    Making Filter Bubbles Understandable

    Recommender systems tend to create filter bubbles and, as a consequence, reduce exposure diversity, often without the user being aware of it. The biased preselection of content by recommender systems has prompted approaches that deal with exposure diversity, such as giving users control over their filter bubble. We analyze how to make filter bubbles understandable and controllable by using interactive word clouds, following the idea of building trust in the system. On the basis of several prototypes, we performed exploratory research on how to design word clouds for the controllability of filter bubbles. Our findings can inform designers of interactive filter bubbles in the personalized offers of broadcasters, publishers, and media houses.
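
    To illustrate the word-cloud control idea, the sketch below re-scores items by user-adjusted term weights, so resizing a word in the cloud reshapes the ranking. Every name and number is hypothetical rather than taken from the prototypes:

    ```python
    # An explorative sketch of word-cloud control over a filter bubble: each
    # cloud term carries a user-adjustable weight, and recommendations are
    # re-ranked by how well they match the adjusted profile.
    def rescore(items: list[dict], term_weights: dict[str, float]) -> list[dict]:
        """Re-rank items by the summed weight of profile terms they contain."""
        def score(item: dict) -> float:
            return sum(term_weights.get(t, 0.0) for t in item["terms"])
        return sorted(items, key=score, reverse=True)

    # Enlarging 'politics' in the cloud (weight up) would surface items the
    # original bubble filtered out; shrinking it does the opposite.
    cloud = {"football": 0.9, "politics": 0.1, "music": 0.5}
    items = [
        {"title": "Transfer news roundup", "terms": ["football"]},
        {"title": "Election explainer",    "terms": ["politics"]},
        {"title": "Festival lineup",       "terms": ["music", "politics"]},
    ]
    for item in rescore(items, cloud):
        print(item["title"])
    ```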