
    Visualization for Recommendation Explainability: A Survey and New Perspectives

    Providing system-generated explanations for recommendations represents an important step towards transparent and trustworthy recommender systems. Explainable recommender systems provide a human-understandable rationale for their outputs. Over the last two decades, explainable recommendation has attracted much attention in the recommender systems research community. This paper aims to provide a comprehensive review of research efforts on visual explanation in recommender systems. More concretely, we systematically review the literature on explanations in recommender systems along four dimensions: explanation goal, explanation scope, explanation style, and explanation format. Recognizing the importance of visualization, we approach the recommender systems literature from the angle of explanatory visualizations, that is, using visualizations as a display style of explanation. As a result, we derive a set of guidelines that may be constructive for designing explanatory visualizations in recommender systems and identify perspectives for future work in this field. The aim of this review is to help recommendation researchers and practitioners better understand the potential of visually explainable recommendation research and to support them in the systematic design of visual explanations in current and future recommender systems.

    May I Ask a Follow-up Question? Understanding the Benefits of Conversations in Neural Network Explainability

    Research in explainable AI (XAI) aims to provide insights into the decision-making processes of opaque AI models. To date, most XAI methods offer one-off, static explanations, which cannot cater to users' diverse backgrounds and levels of understanding. In this paper, we investigate whether free-form conversations can enhance users' comprehension of static explanations, improve acceptance of and trust in the explanation methods, and facilitate human-AI collaboration. Participants are presented with static explanations, followed by a conversation with a human expert about the explanations. We measure the effect of the conversation on participants' ability to choose, from three machine learning models, the most accurate one based on its explanations, as well as on their self-reported comprehension, acceptance, and trust. Empirical results show that conversations significantly improve comprehension, acceptance, trust, and collaboration. Our findings highlight the importance of customized model explanations in the format of free-form conversations and provide insights for the future design of conversational explanations.

    The Explainable Business Process (XBP) - An Exploratory Research

    Providing explanations for a business process, its decisions, and its activities is a key factor in achieving the business objectives of the process, in minimizing and dealing with the ambiguity that gives rise to multiple interpretations, and in engendering appropriate user trust in the process. As a first step towards adding explanations to business processes, we present an exploratory study that introduces the concept of explainability into business processes. We propose a conceptual framework that applies explainability to business processes in a model we call the Explainable Business Process (XBP). Furthermore, we propose an XBP lifecycle based on the Model-based and Incremental Knowledge Engineering (MIKE) approach, in order to show in detail the phase in which explainability can take place in the business process lifecycle. Note that we focus on explaining the decisions and activities of the process in its as-is model, without transforming it into a to-be model.

    ChatrEx: Designing explainable chatbot interfaces for enhancing usefulness, transparency, and trust

    When breakdowns occur during a human-chatbot conversation, the lack of transparency and the “black-box” nature of task-oriented chatbots can make it difficult for end users to understand what went wrong and why. Inspired by recent HCI research on explainable AI, we explored the design space of explainable chatbot interfaces through ChatrEx. Following an iterative design and prototyping approach, we designed two novel in-application chatbot interfaces (ChatrEx-VINC and ChatrEx-VST) that provide visual, example-based, step-by-step explanations of the underlying working of a chatbot during a breakdown. ChatrEx-VINC provides these explanations in the context of the chat window, whereas ChatrEx-VST provides them as a visual tour overlaid on the application interface. A formative study with 11 participants elicited informal user feedback that helped us iterate on our design ideas throughout the design and ideation phases, and we implemented our final designs as web-based interactive chatbots for complex spreadsheet tasks. We conducted an observational study with 14 participants to compare our designs with current state-of-the-art chatbot interfaces and assessed their strengths and weaknesses. We found that the visual explanations in both ChatrEx-VINC and ChatrEx-VST enhanced users' understanding of the reasons for a conversational breakdown and improved users' perceptions of usefulness, transparency, and trust. We identify several opportunities for future HCI research to exploit explainable chatbot interfaces and better support human-chatbot interaction.

    Adaptive model-driven user interface development systems

    Adaptive user interfaces (UIs) were introduced to address some of the usability problems that plague many software applications. Model-driven engineering has formed the basis for most systems targeting the development of such UIs. We present an overview of these systems and establish a set of criteria to evaluate the strengths and shortcomings of the state of the art, categorized under architectures, techniques, and tools. A summary of the evaluation is presented in tables that visually illustrate each system's fulfillment of each criterion. The evaluation identified several gaps in the existing work and highlighted promising areas for improvement.

    Using technology to encourage a healthier lifestyle in people with Down's syndrome

    This article reports on the development of a mobile app designed to encourage healthier lifestyles, with an emphasis on food intake, in people with Down's syndrome. The design started from generic guidelines on designing technology for people with Down's syndrome investigated by a previous European project. The product was then developed using the User-centred Intelligent Environments Development Process, an iterative method that gathers stakeholders' views to involve them in co-designing the product. The project produced a mobile app that was validated with the intended end users and gathered positive feedback. The experience also provides further insights that can inform developers of future similar technological solutions.