    A Design Thinking Framework for Human-Centric Explainable Artificial Intelligence in Time-Critical Systems

    Artificial Intelligence (AI) has seen a surge in popularity as increased computing power has made it more viable and useful. The increasing complexity of AI, however, can lead to difficulty in understanding or interpreting the results of AI procedures, which can in turn lead to incorrect predictions, classifications, or analyses of outcomes. The result of these problems can be over-reliance on AI, under-reliance on AI, or simply confusion as to what the results mean. Additionally, the complexity of AI models can obscure the algorithmic, data, and design biases to which all models are subject, which may exacerbate negative outcomes, particularly with respect to minority populations. Explainable AI (XAI) aims to mitigate these problems by providing information on the intent, performance, and reasoning process of the AI. Where time or cognitive resources are limited, however, the burden of additional information can negatively impact performance. Ensuring that XAI information is intuitive and relevant allows the user to quickly calibrate their trust in the AI, in turn improving trust in suggested task alternatives, reducing workload, and improving task performance. This study details a structured approach to the development of XAI in time-critical systems based on a design thinking framework that preserves the agile, fast-iterative approach characteristic of design thinking and augments it with practical tools and guides. The framework establishes a focus on shared situational perspective and a deep understanding of both users and the AI in the empathy phase, provides a model with seven XAI levels and corresponding solution themes, and defines objective, physiological metrics for the concurrent assessment of trust and workload.
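
    A minimal Python sketch of how such a level-to-theme model might be represented. The seven-level structure is taken from the abstract, but the level names and themes below are illustrative placeholders, not the authors' actual definitions.

        from dataclasses import dataclass

        # Hypothetical representation of the framework's XAI-level model.
        # The paper defines seven levels with corresponding solution themes;
        # the entries below are placeholders, not the authors' levels.
        @dataclass(frozen=True)
        class XAILevel:
            level: int           # 1 (least explanatory detail) .. 7 (most)
            name: str            # placeholder label
            solution_theme: str  # placeholder theme

        XAI_LEVELS = [
            XAILevel(1, "no explanation", "opaque recommendation only"),
            XAILevel(2, "confidence score", "numeric certainty indicator"),
            # levels 3-6 elided; the paper enumerates all seven
            XAILevel(7, "full reasoning trace", "interactive model inspection"),
        ]

        def theme_for(level: int) -> str:
            """Look up the solution theme for a given XAI level."""
            for entry in XAI_LEVELS:
                if entry.level == level:
                    return entry.solution_theme
            raise ValueError(f"unknown XAI level: {level}")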

    Watch Me Improve—Algorithm Aversion and Demonstrating the Ability to Learn

    Owing to advancements in artificial intelligence (AI) and specifically in machine learning, information technology (IT) systems can support humans in an increasing number of tasks. Yet, previous research indicates that people often prefer human support to support by an IT system, even if the latter provides superior performance, a phenomenon called algorithm aversion. A possible cause of algorithm aversion put forward in the literature is that users lose trust in IT systems they become familiar with and perceive to err, for example by making forecasts that turn out to deviate from the actual value. Therefore, this paper evaluates the effectiveness of demonstrating an AI-based system's ability to learn as a potential countermeasure against algorithm aversion in an incentive-compatible online experiment. The experiment reveals how the nature of an erring advisor (i.e., human vs. algorithmic), its familiarity to the user (i.e., unfamiliar vs. familiar), and its ability to learn (i.e., non-learning vs. learning) influence a decision maker's reliance on the advisor's judgement for an objective and non-personal decision task. The results reveal no difference in the reliance on unfamiliar human and algorithmic advisors, but differences in the reliance on familiar human and algorithmic advisors that err. Demonstrating an advisor's ability to learn, however, offsets the effect of familiarity. Therefore, this study contributes to an enhanced understanding of algorithm aversion and is one of the first to examine how users perceive whether an IT system is able to learn. The findings provide theoretical and practical implications for the employment and design of AI-based systems.
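
    In judge-advisor experiments like this one, reliance is often operationalised as "weight of advice" (WOA): how far the decision maker moves from an initial estimate toward the advisor's estimate. The Python sketch below illustrates that computation across the 2 × 2 × 2 design (advisor nature, familiarity, learning ability); WOA is a standard convention in this literature and an assumption here, not necessarily the paper's exact measure, and the trial values are invented.

        from collections import defaultdict
        from statistics import mean

        def weight_of_advice(initial: float, advice: float, final: float) -> float:
            """WOA = (final - initial) / (advice - initial).

            0 means the advice was ignored; 1 means it was fully adopted.
            Undefined when the advice equals the initial estimate.
            """
            if advice == initial:
                raise ValueError("WOA undefined when advice equals the initial estimate")
            return (final - initial) / (advice - initial)

        # Each trial: (advisor, familiarity, learning, initial, advice, final).
        # The numbers are made-up illustrations, not the experiment's data.
        trials = [
            ("human",     "familiar",   "non-learning", 100.0, 120.0, 110.0),
            ("algorithm", "familiar",   "non-learning", 100.0, 120.0, 104.0),
            ("algorithm", "familiar",   "learning",     100.0, 120.0, 112.0),
            ("algorithm", "unfamiliar", "non-learning", 100.0, 120.0, 111.0),
        ]

        by_condition = defaultdict(list)
        for advisor, familiarity, learning, initial, advice, final in trials:
            key = (advisor, familiarity, learning)
            by_condition[key].append(weight_of_advice(initial, advice, final))

        for condition, woas in sorted(by_condition.items()):
            print(condition, round(mean(woas), 2))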

    The INTERPRET Decision-Support System version 3.0 for evaluation of Magnetic Resonance Spectroscopy data from human brain tumours and other abnormal brain masses

    Background: Proton Magnetic Resonance (MR) Spectroscopy (MRS) is a widely available technique for clinical centres equipped with MR scanners. Unlike other MR-based techniques, MRS yields not images but spectra of metabolites in the tissues. In pathological situations the MRS profile changes, and this has been particularly well described for brain tumours. However, radiologists are frequently not familiar with the interpretation of MRS data, and for this reason the usefulness of decision-support systems (DSS) in MRS data analysis has been explored. Results: This work presents the INTERPRET DSS version 3.0, analysing the improvements made since its first release in 2002. Version 3.0 is intended to be a program that, first, can easily be used with any new case from any MR scanner manufacturer and, second, improves the initial analysis capabilities of the first version. The main improvements are an embedded database, user accounts, more diagnostic discrimination capabilities, and the possibility to analyse data acquired under additional acquisition conditions. Other improvements include a customisable graphical user interface (GUI). Most of the diagnostic problems included have been addressed through a pattern-recognition approach in which classifiers based on linear discriminant analysis (LDA) were trained and tested. Conclusions: The INTERPRET DSS 3.0 allows radiologists, medical physicists, biochemists, or, generally speaking, any person with a minimum knowledge of what an MR spectrum is, to enter their own single-voxel (SV) raw data, acquired at 1.5 T, and to analyse them. The system is expected to help in the categorisation of MR spectra from abnormal brain masses.
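
    The pattern-recognition approach described above pairs a dimensionality-reduction step with an LDA classifier. The Python sketch below shows the general shape of such a pipeline using scikit-learn; the synthetic data, class labels, and PCA step are illustrative assumptions, not the INTERPRET implementation.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        # Illustrative stand-in for preprocessed single-voxel MR spectra:
        # one row per case, one column per spectral intensity point. Real
        # INTERPRET data come from 1.5 T scanners after preprocessing.
        rng = np.random.default_rng(0)
        n_cases, n_points = 120, 195
        X = rng.normal(size=(n_cases, n_points))
        y = rng.integers(0, 3, size=n_cases)  # hypothetical 3-class problem

        # PCA before LDA is one common way to keep LDA well-posed when
        # there are far more spectral points than cases; the component
        # count here is arbitrary.
        clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"cross-validated accuracy: {scores.mean():.2f}")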

    Staying in work: thinking about a new policy agenda


    Leadership among Directors of Social Services at Rehabilitative Healthcare Chains

    Rehabilitation and healthcare centers (RHCs) provide ongoing care to the elderly and chronically ill. To maximize the quality of this care, RHC staff must be properly trained to respond to patient care crises and communicate across departments. Although researchers have studied the leadership styles, strategies, and interactions of facility administrators and nursing directors, there is a substantial gap in the literature on the leadership styles and strategies employed by directors of social services (DSSs). The aim of this phenomenological study was to address this gap in the research by exploring how DSSs influenced leadership policies, prepared subordinates for crisis intervention and management, perceived that social workers influenced decision-making in patient care, and believed that communication amongst RHC staff about patient care could be improved. The conceptual framework for this study was based on 3 leadership model constructs: the multilevel leadership model construct, the situational leadership model construct, and the complex adaptive leadership model construct. Participants included a purposive sample of 10 DSSs working in large, corporate RHCs in Virginia. Data were collected via in-person, semistructured interviews consisting of open-ended questions. Data were analyzed via Hycner's phenomenological approach. Findings from this investigation helped clarify roles and responsibilities of DSSs, thereby improving the leadership they provide to subordinate social workers. Findings may be used to improve communication across professionals within RHCs and emphasize the important role that social workers should play in patient care decisions.