55 research outputs found

    Better supporting workers in ML workplaces

    This workshop aims to bring together a multidisciplinary group to discuss Machine Learning and its application in the workplace as a practical, everyday work matter. We hope this is a step toward designing better technology and user experiences that support the accomplishment of that work while paying attention to workplace context. Despite advancement and investment in Machine Learning (ML) business applications, understanding workers in these work contexts has received little attention. As this category experiences dramatic growth, it is important to better understand the role that workers play, both individually and collaboratively, in a workplace where the output of prediction and machine learning is becoming pervasive. There is a closing window of opportunity to investigate this topic as it proceeds toward ubiquity. CSCW and HCI offer concepts, tools and methodologies to better understand and build for this future.

    Human-centred explanation of rule-based decision-making systems in the legal domain

    We propose a human-centred explanation method for rule-based automated decision-making systems in the legal domain. Firstly, we establish a conceptual framework for developing explanation methods, representing its key internal components (content, communication and adaptation) and external dependencies (decision-making system, human recipient and domain). Secondly, we propose an explanation method that uses a graph database to enable question-driven explanations and multimedia display. This way, we can tailor the explanation to the user. Finally, we show how our conceptual framework is applicable to a real-world scenario at the Dutch Tax and Customs Administration and implement our explanation method for this scenario. This is the full version of a demo at the 36th International Conference on Legal Knowledge and Information Systems (JURIX'23).
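
    As a rough illustration of the question-driven idea in this abstract, the sketch below models an explanation graph in plain Python rather than a real graph database; the node labels, relations, question types and tax-scenario wording are invented for illustration and are not taken from the paper.

```python
# Minimal sketch (not the paper's implementation): a toy "explanation graph"
# linking a rule-based decision to the legal rule and input fact behind it,
# traversed according to the kind of question the user asks.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                                  # e.g. "Decision", "Rule", "Fact"
    text: str                                   # human-readable content
    edges: dict = field(default_factory=dict)   # relation -> list of node ids

class ExplanationGraph:
    def __init__(self):
        self.nodes = {}

    def add(self, node_id, label, text):
        self.nodes[node_id] = Node(label, text)

    def link(self, src, relation, dst):
        self.nodes[src].edges.setdefault(relation, []).append(dst)

    def explain(self, decision_id, question):
        """Answer a question about a decision by following the matching relation."""
        relation = {"why": "JUSTIFIED_BY", "based_on": "USES_FACT"}[question]
        return [self.nodes[n].text for n in self.nodes[decision_id].edges.get(relation, [])]

# Toy scenario loosely inspired by a tax-administration decision (invented content)
g = ExplanationGraph()
g.add("d1", "Decision", "Your benefit application was rejected.")
g.add("r1", "Rule", "Applicants must have a registered address (Art. 12).")
g.add("f1", "Fact", "No registered address was found for the applicant.")
g.link("d1", "JUSTIFIED_BY", "r1")
g.link("d1", "USES_FACT", "f1")

print(g.explain("d1", "why"))       # -> rule text justifying the decision
print(g.explain("d1", "based_on"))  # -> input facts the decision relied on
```

    A real deployment would presumably store these nodes in a graph database and attach multimedia content to them; the point here is only how different user questions map to different traversals of the same decision trace.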

    Explaining recommendations in an interactive hybrid social recommender

    Hybrid social recommender systems use social relevance from multiple sources to recommend relevant items or people to users. To make hybrid recommendations more transparent and controllable, several researchers have explored interactive hybrid recommender interfaces, which allow for a user-driven fusion of recommendation sources. In this field of work, the intelligent user interface has been investigated as an approach to increase transparency and improve the user experience. In this paper, we attempt to further promote the transparency of recommendations by augmenting an interactive hybrid recommender interface with several types of explanations. We evaluate user behavior patterns and subjective feedback through a within-subject study (N=33). Results from the evaluation show the effectiveness of the proposed explanation models. The results of the post-treatment survey indicate a significant improvement in the perception of explainability, but this improvement comes with a lower degree of perceived controllability.
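
    To make the idea of user-driven fusion with explanations concrete, here is a minimal sketch (an assumption, not the system evaluated in the study): per-source relevance scores are combined with user-controlled weights, and each recommendation carries a note about which source contributed most. The source names, scores and weights are invented.

```python
# Minimal sketch: user-driven fusion of several recommendation sources with a
# simple per-item explanation of which source contributed most to its rank.
def fuse(scores_by_source, weights):
    """Combine per-source relevance scores using user-controlled weights and
    return ranked items together with a short textual explanation."""
    fused = {}
    for source, scores in scores_by_source.items():
        w = weights.get(source, 0.0)
        for item, s in scores.items():
            fused.setdefault(item, {})[source] = w * s

    ranked = []
    for item, contribs in fused.items():
        total = sum(contribs.values())
        top = max(contribs, key=contribs.get)
        ranked.append((item, total, f"recommended mainly because of {top}"))
    return sorted(ranked, key=lambda r: r[1], reverse=True)

# Example: three social relevance sources, with the user boosting co-authorship
scores = {
    "co-authorship":    {"alice": 0.9, "bob": 0.2},
    "shared-interests": {"alice": 0.3, "bob": 0.8},
    "same-venue":       {"alice": 0.1, "bob": 0.4},
}
weights = {"co-authorship": 0.6, "shared-interests": 0.3, "same-venue": 0.1}

for item, score, why in fuse(scores, weights):
    print(f"{item}: {score:.2f} ({why})")
```

    In an interactive interface, the weights would be exposed as sliders so the user drives the fusion, and the per-source contributions would back the on-screen explanation.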

    Intelligibility and user control of context-aware application behaviours

    Context-aware applications adapt their behaviours according to changes in user context and user requirements. Research and experience have shown that such applications will not always behave the way users expect. This may lead to a loss of users' trust in, and acceptance of, these systems. Hence, context-aware applications should (1) be intelligible (e.g., able to explain to users why they decided to behave in a certain way), and (2) allow users to exploit the revealed information and apply appropriate feedback to control the application behaviours according to their individual preferences and achieve a more desirable outcome. Without appropriate mechanisms for explanation and control of application adaptations, the usability of the applications is limited. This paper describes our ongoing research and development of a conceptual framework that supports intelligibility of model-based context-aware applications and user control of their adaptive behaviours. The goal is to improve the usability of context-aware applications.
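
    The explanation-plus-control loop described in this abstract could look roughly like the toy example below, where a rule-based adaptation records which rule triggered each behaviour (intelligibility) and lets the user disable a rule they disagree with (control). The rules, context fields and behaviours are illustrative assumptions, not the paper's framework.

```python
# Minimal sketch: a rule-based context-aware adaptation that (1) explains which
# rule triggered each behaviour and (2) accepts user feedback to disable a rule.
class ContextAwareApp:
    def __init__(self):
        # rule name -> (condition over context, behaviour to apply)
        self.rules = {
            "mute-in-meeting": (lambda c: c.get("calendar") == "meeting", "mute phone"),
            "dim-at-night":    (lambda c: c.get("hour", 12) >= 22, "dim screen"),
        }
        self.disabled = set()
        self.last_explanations = []

    def adapt(self, context):
        """Apply enabled rules whose conditions hold, remembering why for later."""
        self.last_explanations = []
        actions = []
        for name, (condition, behaviour) in self.rules.items():
            if name not in self.disabled and condition(context):
                actions.append(behaviour)
                self.last_explanations.append(
                    f"'{behaviour}' because rule '{name}' matched the current context")
        return actions

    def why(self):
        return self.last_explanations or ["no adaptation was triggered"]

    def override(self, rule_name):
        """User feedback: stop applying a rule the user finds undesirable."""
        self.disabled.add(rule_name)

app = ContextAwareApp()
print(app.adapt({"calendar": "meeting", "hour": 23}))  # ['mute phone', 'dim screen']
print(app.why())                                       # explanations for both behaviours
app.override("dim-at-night")                           # user rejects the dimming rule
print(app.adapt({"calendar": "meeting", "hour": 23}))  # ['mute phone']
```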

    Assessing Demand for Transparency in Intelligent Systems Using Machine Learning

    Intelligent systems offering decision support can lessen cognitive load and improve the efficiency of decision making in a variety of contexts. These systems assist users by evaluating multiple courses of action and recommending the right action at the right time. Modern intelligent systems using machine learning introduce new capabilities in decision support, but they can come at a cost. Machine learning models provide little explanation of their outputs or reasoning process, making it difficult to determine when it is appropriate to trust them or, when it is not, what went wrong. In order to improve trust and ensure appropriate reliance on these systems, users must be afforded increased transparency, enabling an understanding of the system's reasoning and an explanation of its predictions or classifications. Here we discuss the salient factors in designing transparent intelligent systems using machine learning, and present the results of a user-centered design study. We propose design guidelines derived from our study, and discuss next steps for designing for intelligent system transparency.