
    AI UNCERTAINTY IN EXPERT DECISION-MAKING: A QUALITATIVE EVIDENCE SYNTHESIS

    As Artificial Intelligence (AI) becomes more prevalent in everyday work settings, the Information Systems (IS) discipline is well positioned to study the sociotechnical repercussions of algorithmic decision-making as part of experts’ knowledge work. Because expert know-how is tacit and socially situated, capturing the nuances of emerging human-machine collaborations is difficult. In this article, we review how epistemic uncertainty manifests in experts’ decision-making practices when using AI tools. Building on rich primary studies, we used a qualitative evidence synthesis (QES) approach to bring together and analyze the relevant literature on this topic. Our findings unravel the sources of expert uncertainty, the strategies experts use to cope with uncertainty, and the attitudes experts hold towards uncertainty when making complex decisions with AI.

    Explainable recommendations and calibrated trust: two systematic users’ errors

    The increased adoption of collaborative Human-AI decision-making tools has triggered a need to explain their recommendations for safe and effective collaboration. However, evidence from the recent literature shows that current implementations of AI explanations fail to achieve adequate trust calibration. This failure leads decision-makers either to over-trust, i.e., follow incorrect recommendations, or to under-trust, i.e., reject correct recommendations. In this paper, we explore how users interact with explanations and why trust calibration errors occur. We take clinical decision-support systems as a case study. Our empirical investigation is based on a think-aloud protocol and observations, supported by scenarios and a decision-making exercise utilizing a set of explainable recommendation interfaces. Our study involved 16 participants from the medical domain who use clinical decision support systems frequently. Our findings show that participants made two systematic errors while interacting with the explanations: either skipping them or misapplying them in their task.

    Factors that Influence the Adoption of Human-AI Collaboration in Clinical Decision-Making

    Recent developments in Artificial Intelligence (AI) have fueled the emergence of human-AI collaboration, a setting where AI is a coequal partner. Especially in clinical decision-making, it has the potential to improve treatment quality by assisting overworked medical professionals. Even though research has started to investigate the utilization of AI for clinical decision-making, its potential benefits do not imply its adoption by medical professionals. While several studies have started to analyze adoption criteria from a technical perspective, research providing a human-centered perspective with a focus on AI’s potential for becoming a coequal team member in the decision-making process remains limited. Therefore, in this work, we identify factors for the adoption of human-AI collaboration by conducting a series of semi-structured interviews with experts in the healthcare domain. We identify six relevant adoption factors and highlight existing tensions between them and effective human-AI collaboration.

    Knowledge Asymmetry: Are We Destined to Become the Ignorant Lords of Algorithms?

    Artificial Intelligence (AI) is inextricably linked to knowledge and the management of knowledge. This paper highlights the ethical concerns related to the use of algorithms in the context of hybrid intelligence teams and the dimensions of knowledge asymmetry that exist. The research question motivating our paper is: how is knowledge asymmetry characterised in Human-AI Collaboration, and what ethical concerns does it elicit? We first present a brief overview of the literature on knowledge asymmetry and knowledge transfer. We then propose four scenarios of knowledge asymmetry in Human-AI Collaboration, based on real-world cases. Finally, we highlight the ethical concerns linked with each of these scenarios.

    Human-AI Collaboration in Healthcare: A Review and Research Agenda

    Advances in Artificial Intelligence (AI) have led to the rise of human-AI collaboration. In healthcare, such collaboration could mitigate the shortage of qualified healthcare workers, assist overworked medical professionals, and improve the quality of healthcare. However, many challenges remain, such as biases in clinical decision-making, lack of trust in AI, and adoption issues. While there is a growing number of studies on the topic, they are spread across disparate fields, and we lack a summary understanding of this research. To address this issue, this study conducts a literature review to examine prior research, identify gaps, and propose future research directions. Our findings indicate that there are limited studies about the evolving and interactive collaboration process in healthcare, the complementarity of humans and AI, the adoption and perception of AI, and the long-term impact on individuals and healthcare organizations. Additionally, more theory-driven research is needed to inform the design, implementation, and use of collaborative AI for healthcare and to realize its benefits.