
    Probability Expressions in AI Decision Support: Impacts on Human+AI Team Performance

    AI decision support systems aim to assist people in highly complex and consequential domains to make efficient, effective, and high-quality decisions. AI alone cannot be guaranteed to be correct in these complex decision tasks, and a human is often needed to ensure decision accuracy. The ambition is for these human+AI teams to perform better together than either would individually. To realise this, decision makers must trust their AI partners appropriately, knowing when to rely on their recommendations and when to be sceptical. However, research has shown that decision makers often either mistrust and underutilise these systems, or trust them blindly. Researchers in the fields of HCI and XAI have worked on developing methods that continuously manage an appropriate level of user trust. Despite the probabilistic nature of ML-based AI, little attention has been given to understanding how the research area of uncertainty communication might provide solutions to this challenge. This study draws on that research and asks how different forms of expressing probability in AI decision support systems might affect human+AI team performance. A series of task-based user tests was conducted to evaluate the use of numerical, verbal, and verbal-numerical probability expressions in communicating AI prediction confidence to decision makers. Results indicated that numerical expressions may be most effective when decision makers use AI decision support. However, findings were inconclusive due to the limited number of participants who used AI decision support during testing.
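    As a rough illustration of the three expression formats compared in the study, the Python sketch below renders the same hypothetical model confidence as a numerical, a verbal, and a combined verbal-numerical expression. The function names and the verbal threshold scale are illustrative assumptions, not the wording or cut-offs used in the study.

        # Minimal sketch: three ways of expressing one AI prediction confidence.
        # The verbal scale is an assumed, roughly IPCC-style mapping, not the study's.

        def to_numerical(p: float) -> str:
            """Numerical expression, e.g. '87%'."""
            return f"{round(p * 100)}%"

        def to_verbal(p: float) -> str:
            """Verbal expression from the assumed threshold scale."""
            scale = [(0.95, "almost certain"), (0.75, "likely"),
                     (0.50, "about as likely as not"), (0.25, "unlikely")]
            for threshold, phrase in scale:
                if p >= threshold:
                    return phrase
            return "very unlikely"

        def to_verbal_numerical(p: float) -> str:
            """Combined verbal-numerical expression, e.g. 'likely (87%)'."""
            return f"{to_verbal(p)} ({to_numerical(p)})"

        confidence = 0.87  # hypothetical prediction confidence
        print(to_numerical(confidence))         # 87%
        print(to_verbal(confidence))            # likely
        print(to_verbal_numerical(confidence))  # likely (87%)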

    Directional Multivariate Ranking

    User-provided multi-aspect evaluations manifest users' detailed feedback on the recommended items and enable fine-grained understanding of their preferences. Extensive studies have shown that modeling such data greatly improves the effectiveness and explainability of the recommendations. However, as ranking is essential in recommendation, there is no principled solution yet for collectively generating multiple item rankings over different aspects. In this work, we propose a directional multi-aspect ranking criterion to enable a holistic ranking of items with respect to multiple aspects. Specifically, we view multi-aspect evaluation as an integral effort from a user that forms a vector of his/her preferences over aspects. Our key insight is that the direction of the difference vector between two multi-aspect preference vectors reveals the pairwise order of comparison. Hence, it is necessary for a multi-aspect ranking criterion to preserve the observed directions from such pairwise comparisons. We further derive a complete solution for the multi-aspect ranking problem based on a probabilistic multivariate tensor factorization model. Comprehensive experimental analysis on a large TripAdvisor multi-aspect rating dataset and a Yelp review text dataset confirms the effectiveness of our solution. Comment: Accepted as a full research paper in KDD'2
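    The core criterion can be illustrated with a toy example: the observed pairwise order between two items rated on several aspects is captured by the direction of the difference between their aspect-rating vectors, and a model's predicted scores should yield a difference pointing the same way. The sketch below shows this check; the names, data, and cosine-based agreement measure are illustrative assumptions, not the paper's probabilistic multivariate tensor factorization model.

        # Toy sketch of the directional pairwise criterion (not the paper's model).
        import numpy as np

        def direction(r_i: np.ndarray, r_j: np.ndarray) -> np.ndarray:
            """Unit-length difference vector between two multi-aspect rating vectors."""
            d = r_i - r_j
            norm = np.linalg.norm(d)
            return d / norm if norm > 0 else d

        def directional_agreement(pred_i, pred_j, obs_i, obs_j) -> float:
            """Cosine similarity between predicted and observed difference directions;
            1.0 means the prediction preserves the observed direction exactly."""
            return float(np.dot(direction(obs_i, obs_j), direction(pred_i, pred_j)))

        # Aspect ratings (e.g. value, service, location) for two hotels from one user.
        hotel_a = np.array([5.0, 4.0, 3.0])
        hotel_b = np.array([3.0, 4.0, 4.0])
        pred_a = np.array([4.6, 4.1, 3.2])   # hypothetical model scores
        pred_b = np.array([3.1, 3.9, 4.3])
        print(directional_agreement(pred_a, pred_b, hotel_a, hotel_b))  # close to 1.0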

    Into the Black Box: Designing for Transparency in Artificial Intelligence

    Indiana University-Purdue University Indianapolis (IUPUI)
    The rapid infusion of artificial intelligence into everyday technologies means that consumers are likely to interact with intelligent systems that provide suggestions and recommendations on a daily basis in the very near future. While these technologies promise much, current issues in low transparency create high potential to confuse end-users, limiting the market viability of these technologies. While efforts are underway to make machine learning models more transparent, HCI currently lacks an understanding of how these model-generated explanations should best translate into the practicalities of system design. To address this gap, my research took a pragmatic approach to improving system transparency for end-users. Through a series of three studies, I investigated the need and value of transparency to end-users, and explored methods to improve system designs to accomplish greater transparency in intelligent systems offering recommendations. My research resulted in a summarized taxonomy that outlines a variety of motivations for why users ask questions of intelligent systems, which is useful for considering the type and category of information users might appreciate when interacting with AI-based recommendations. I also developed a categorization of explanation types, known as explanation vectors, organized into groups that correspond to user knowledge goals; explanation vectors give system designers options for delivering explanations of system processes beyond basic explainability. I developed a detailed user typology, a four-factor categorization of the predominant attitudes and opinion schemes of everyday users interacting with AI-based recommendations, which is useful for understanding the range of user sentiment towards AI-based recommender features, and possibly for tailoring interface design by user type. Lastly, I developed and tested an evaluation method known as the System Transparency Evaluation Method (STEv), which allows real-world systems and prototypes to be evaluated and improved through a low-cost query method. Results from this dissertation offer concrete direction to interaction designers on how these findings might manifest in the design of interfaces that are more transparent to end users. These studies provide a framework and methodology complementary to existing HCI evaluation methods, and lay the groundwork upon which other research into improving system transparency might build.