11 research outputs found

    Rationalising decision-making about risk: a normative approach.

    Techniques for determining and applying security decisions typically follow risk-based analytical approaches in which alternative options are put forward and weighed according to risk severity metrics based on goals and context. The reasoning behind, or validity of, such decisions can, however, prove difficult to determine under uncertainty stemming from environments with insufficient or incoherent information. This paper addresses the problem by proposing a conceptual model that provides traceability for security decision making by auditing decision makers' rationalisation of risk. Additionally, the model highlights the role metacognition plays in identifying and understanding the information affordances used for decision making.
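
    The abstract does not prescribe an implementation; as a minimal, hypothetical sketch of the kind of auditable rationale record such traceability could rest on (every name, field, and value below is an illustrative assumption, not the authors' model):

    # Hypothetical sketch of an auditable decision-rationale record, loosely
    # inspired by the traceability idea described above. All names and fields
    # are illustrative assumptions, not the authors' model.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class DecisionRecord:
        decision_id: str
        option_chosen: str       # the security option selected
        alternatives: list[str]  # options that were weighed and rejected
        risk_severity: str       # e.g. "low", "medium", "high"
        evidence: list[str]      # information affordances consulted
        assumptions: list[str]   # gaps filled under uncertainty
        rationale: str           # free-text justification available for audit
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    record = DecisionRecord(
        decision_id="D-042",
        option_chosen="isolate affected subnet",
        alternatives=["monitor only", "full shutdown"],
        risk_severity="high",
        evidence=["IDS alert 7731", "asset criticality register"],
        assumptions=["lateral movement has not yet occurred"],
        rationale="Containment preferred given incomplete telemetry.",
    )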

    SmartEx: A Framework for Generating User-Centric Explanations in Smart Environments

    Explainability is crucial for complex systems like pervasive smart environments: they collect and analyze data from various sensors, follow multiple rules, and control different devices, resulting in behavior that is not trivial and should therefore be explained to users. Current approaches, however, offer flat, static, and algorithm-focused explanations. User-centric explanations, in contrast, consider the recipient and context, providing personalized and context-aware explanations. To address this gap, we propose an approach to incorporate user-centric explanations into smart environments. We introduce a conceptual model and a reference architecture for characterizing and generating such explanations. Our work is the first technical solution for generating context-aware and granular explanations in smart environments, and our architecture implementation demonstrates the feasibility of the approach across various scenarios. Comment: 22nd International Conference on Pervasive Computing and Communications (PerCom 2024).
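
    The abstract does not detail the reference architecture itself; as a minimal sketch of what a context-aware, granularity-adjustable explanation generator could look like, the interface below is an assumption made for illustration (the class, method, and field names are not from the paper):

    # Hypothetical sketch of a context-aware explanation generator for a smart
    # environment. Class, method, and field names are illustrative assumptions,
    # not the SmartEx reference architecture itself.
    from dataclasses import dataclass

    @dataclass
    class ExplanationContext:
        recipient: str    # e.g. "resident", "technician"
        expertise: str    # e.g. "novice", "expert"
        granularity: str  # e.g. "summary", "detailed"

    class ExplanationGenerator:
        def __init__(self, rules: dict[str, str]):
            # Map device actions to the rule that triggered them.
            self.rules = rules

        def explain(self, action: str, context: ExplanationContext) -> str:
            rule = self.rules.get(action, "an unknown rule")
            if context.granularity == "summary":
                return f"The action '{action}' happened because of {rule}."
            return (
                f"For {context.recipient} ({context.expertise}): '{action}' was "
                f"triggered by {rule}, based on current sensor readings."
            )

    generator = ExplanationGenerator({"lights dimmed": "the evening comfort rule"})
    ctx = ExplanationContext(recipient="resident", expertise="novice", granularity="summary")
    print(generator.explain("lights dimmed", ctx))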

    A Normative Decision Making Model for Cyber Security

    Purpose – The purpose of this paper was to investigate security decision making under conditions of risk and uncertainty and to propose a normative model capable of tracing the decision rationale. Design/methodology/approach – The proposed risk rationalisation model is grounded in the literature and in studies of security analysts' activities. The model design was inspired by established awareness models, including Situation Awareness and Observe, Orient, Decide, Act (OODA). Model validation was conducted using cognitive walkthroughs with security analysts. Findings – The results indicate that the model may adequately be used to elicit the rationale behind, or provide traceability for, security decision making. The results also illustrate how the model may be applied to facilitate design for security decision makers. Research limitations/implications – The proof of concept is based on a hypothetical risk scenario. Further studies could investigate the model's application in actual scenarios. Originality/value – The paper proposes a novel approach to tracing the rationale behind security decision making under conditions of risk and uncertainty. The research also illustrates techniques for adapting decision making models to inform system design.
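
    OODA (Observe, Orient, Decide, Act) is a well-known iterative decision loop; as a minimal, hedged sketch of how an OODA-style cycle could be expressed in code with a traceable record of each stage (the stage functions and variable names are placeholders, not the paper's risk rationalisation model):

    # Minimal sketch of an OODA-style decision loop that keeps an auditable
    # trace of each stage. The stage logic is a placeholder illustration only.
    def observe(environment: dict) -> dict:
        # Gather available (possibly incomplete) information.
        return {"alerts": environment.get("alerts", [])}

    def orient(observations: dict, prior_knowledge: dict) -> dict:
        # Interpret observations against goals, context, and prior knowledge.
        severity = "high" if observations["alerts"] else "low"
        return {"assessed_severity": severity, **prior_knowledge}

    def decide(orientation: dict) -> str:
        return "contain" if orientation["assessed_severity"] == "high" else "monitor"

    def act(decision: str) -> str:
        return f"executed action: {decision}"

    def ooda_cycle(environment: dict, prior_knowledge: dict) -> list[tuple[str, object]]:
        trace: list[tuple[str, object]] = []  # auditable record of the cycle
        obs = observe(environment)
        trace.append(("observe", obs))
        ori = orient(obs, prior_knowledge)
        trace.append(("orient", ori))
        dec = decide(ori)
        trace.append(("decide", dec))
        out = act(dec)
        trace.append(("act", out))
        return trace

    for stage, value in ooda_cycle({"alerts": ["suspicious login"]}, {"asset": "critical"}):
        print(stage, "->", value)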

    Influence of context on users’ views about explanations for decision-tree predictions

    This research was supported in part by grant DP190100006 from the Australian Research Council. Ethics approval for the user studies was obtained from Monash University Human Research Ethics Committee (ID-24208). We thank Marko Bohanec, one of the creators of the Nursery dataset, for helping us understand the features and their values. We are also grateful to the anonymous reviewers for their helpful comments. Peer-reviewed postprint.

    Explainable AI: roles and stakeholders, desirements and challenges

    Introduction: The purpose of the Stakeholder Playbook is to enable the developers of explainable AI systems to take into account the different ways in which different stakeholders or role-holders need to “look inside” AI/XAI systems. Method: We conducted structured cognitive interviews with senior and mid-career professionals who had direct experience either developing or using AI and/or autonomous systems. Results: The results show that role-holders need access to others (e.g., trusted engineers and trusted vendors) to be able to develop satisfying mental models of AI systems. They need to know how a system fails and misleads as much as they need to know how it works. Some stakeholders need to develop an understanding that enables them to explain the AI to someone else, not just satisfy their own sense-making requirements. Only about half of our interviewees said they always wanted explanations or even needed better explanations than the ones that were provided. Based on our empirical evidence, we created a “Playbook” that lists explanation desires, explanation challenges, and explanation cautions for a variety of stakeholder groups and roles. Discussion: These and other findings seem surprising, if not paradoxical, but they can be resolved by acknowledging that different role-holders have differing skill sets and different sense-making desires. Individuals often serve in multiple roles and can therefore have different immediate goals. The goal of the Playbook is to help XAI developers by guiding the development process and creating explanations that support the different roles.

    Design and Evaluation of User-Centered Explanations for Machine Learning Model Predictions in Healthcare

    Challenges in interpreting some high-performing models complicate the application of machine learning (ML) techniques to healthcare problems. Recently, there has been rapid growth in research on model interpretability; however, approaches to explaining complex ML models are rarely informed by end-user needs, and user evaluations of model interpretability are lacking, especially in healthcare. This makes it challenging to determine what explanation approaches might enable providers to understand model predictions in a comprehensible and useful way. Therefore, I aimed to utilize clinician perspectives to inform the design of explanations for ML-based prediction tools and improve the adoption of these systems in practice. In this dissertation, I proposed a new theoretical framework for designing user-centered explanations for ML-based systems. I then utilized the framework to propose explanation designs for predictions from a pediatric in-hospital mortality risk model. I conducted focus groups with healthcare providers to obtain feedback on the proposed designs, which was used to inform the design of a user-centered explanation. The user-centered explanation was evaluated in a laboratory study to assess its effect on healthcare provider perceptions of the model and decision-making processes. The results demonstrated that the user-centered explanation design improved provider perceptions of utilizing the predictive model in practice, but exhibited no significant effect on provider accuracy, confidence, or efficiency in making decisions. Limitations of the evaluation study design, including a small sample size, may have affected the ability to detect an impact on decision-making. Nonetheless, the predictive model with the user-centered explanation was positively received by healthcare providers and demonstrated a viable approach to explaining ML model predictions in healthcare. Future work is required to address the limitations of this study and further explore the potential benefits of user-centered explanation designs for predictive models in healthcare. This work contributes a new theoretical framework for user-centered explanation design for ML-based systems that is generalizable outside the domain of healthcare. Moreover, the work provides meaningful insights into the role of model interpretability and explanation in healthcare while advancing the discussion on how to effectively communicate ML model information to healthcare providers.
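
    The abstract does not specify the explanation designs themselves; purely as an illustrative sketch, a user-centered explanation for a risk prediction might pair the score with the top contributing factors in plain language (the feature names, weights, and score below are invented examples, not the dissertation's model):

    # Illustrative sketch only: pairing a hypothetical risk score with a short,
    # plain-language summary of its top contributing factors. The feature names
    # and numbers are invented and are not from the dissertation's model.
    def user_centered_explanation(risk_score: float,
                                  contributions: dict[str, float],
                                  top_k: int = 3) -> str:
        ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
        lines = [f"Predicted in-hospital mortality risk: {risk_score:.0%}",
                 "Main factors influencing this estimate:"]
        for feature, weight in ranked[:top_k]:
            direction = "increases" if weight > 0 else "decreases"
            lines.append(f"  - {feature} ({direction} the estimated risk)")
        return "\n".join(lines)

    print(user_centered_explanation(
        risk_score=0.12,
        contributions={"serum lactate": 0.40, "age": -0.10, "platelet count": 0.25},
    ))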

    A Multidisciplinary Design and Evaluation Framework for Explainable AI Systems

    Nowadays, algorithms analyze user data and affect the decision-making process for millions of people on matters like employment, insurance and loan rates, and even criminal justice. However, these algorithms, which serve critical roles in many industries, have their own biases that can result in discrimination and unfair decision-making. Explainable Artificial Intelligence (XAI) systems can be a solution for predictable and accountable AI by explaining AI decision-making processes to end users, thereby increasing user awareness and preventing bias and discrimination. Research on XAI spans a broad spectrum, including the design of interpretable models, explainable user interfaces, and human-subject studies of XAI systems, and is pursued across disciplines such as machine learning, human-computer interaction (HCI), and visual analytics. The mismatch in how these disciplines define, design, and evaluate XAI may slow the overall advance of end-to-end XAI systems. My research aims to converge knowledge about the design and evaluation of XAI systems across multiple disciplines to further support the key benefits of algorithmic transparency and interpretability. To this end, I propose a comprehensive design and evaluation framework for XAI systems with step-by-step guidelines that pair different design goals with their evaluation methods for iterative system design cycles in multidisciplinary teams. This dissertation presents a comprehensive XAI design and evaluation framework to provide guidance for different design goals and evaluation approaches in XAI systems. After a thorough review of XAI research in the fields of machine learning, visualization, and HCI, I present a categorization of XAI design goals and evaluation methods and show a mapping between design goals for different XAI user groups and their evaluation methods. From my findings, I present a design and evaluation framework for XAI systems (Objective 1) to address the relation between different system design needs. The framework provides recommendations for different goals and ready-to-use tables of evaluation methods for XAI systems. The importance of this framework lies in providing guidance for researchers on different aspects of XAI system design in multidisciplinary team efforts. Then, I demonstrate and validate the proposed framework (Objective 2) through one end-to-end XAI system case study and two examples based on analyses of previous XAI systems in terms of the framework. I also present two contributions to my XAI design and evaluation framework that improve evaluation methods for XAI systems.
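
    The abstract describes a mapping between design goals for different XAI user groups and their evaluation methods; as a minimal, hypothetical sketch of how such a mapping could be encoded (the user groups, goals, and methods listed are examples, not the dissertation's actual tables):

    # Hypothetical encoding of a design-goal -> evaluation-method mapping for
    # XAI systems. The user groups, goals, and methods are illustrative examples
    # only; they do not reproduce the dissertation's framework tables.
    DESIGN_EVALUATION_MAP: dict[str, dict[str, list[str]]] = {
        "AI novices": {
            "build appropriate trust": ["trust questionnaires", "reliance measures"],
            "understand model behavior": ["mental-model elicitation", "task walkthroughs"],
        },
        "data experts": {
            "debug and improve models": ["case studies", "expert interviews"],
            "verify interpretability": ["simulatability tests", "benchmark comparisons"],
        },
    }

    def evaluation_methods(user_group: str, design_goal: str) -> list[str]:
        # Look up suggested evaluation methods for a user group and design goal.
        return DESIGN_EVALUATION_MAP.get(user_group, {}).get(design_goal, [])

    print(evaluation_methods("AI novices", "build appropriate trust"))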