17 research outputs found

    Exceeding the Ordinary: A Framework for Examining Teams Across the Extremeness Continuum and Its Impact on Future Research

    Full text link
    Work teams increasingly face unprecedented challenges in volatile, uncertain, complex, and often ambiguous environments. In response, team researchers have begun to focus more on teams whose work revolves around mitigating risks in these dynamic environments. Some highly insightful contributions to team research and organizational studies have originated from investigating teams that face unconventional or extreme events. Despite this increased attention to extreme teams, however, a comprehensive theoretical framework is missing. We introduce such a framework that envisions team extremeness as a continuous, multidimensional variable consisting of environmental extremeness (i.e., external team context) and task extremeness (i.e., internal team context). The proposed framework allows every team to be placed on the team extremeness continuum, bridging the gap between the literature on extreme teams and that on more traditional teams. Furthermore, we present six propositions addressing how team extremeness may interact with team processes, emergent states, and outcomes, using core variables for team effectiveness and the well-established input-mediator-output-input (IMOI) model to structure our theorizing. Finally, we outline potential directions for future research by elaborating on temporal considerations (i.e., patterns and trajectories), measurement approaches, and multilevel relationships involving team extremeness. We hope that our theoretical framework and theorizing can create a path forward, stimulating future research within the organizational team literature to further examine the impact of team extremeness on team dynamics and effectiveness.

    Human-AI teaming: leveraging transactive memory and speaking up for enhanced team effectiveness

    Full text link
    In this prospective observational study, we investigate the role of transactive memory and speaking up in human-AI teams comprising 180 intensive care unit (ICU) physicians and nurses working with AI in a simulated clinical environment. Our findings indicate that interactions with AI agents differ significantly from human interactions: accessing information from AI agents is positively linked to a team’s ability to generate novel hypotheses and to speaking-up behavior, but only in higher-performing teams. Conversely, accessing information from human team members is negatively associated with these aspects, regardless of team performance. This study is a valuable contribution to the expanding field of research on human-AI teams and to team science in general, as it emphasizes the necessity of incorporating AI agents as knowledge sources in a team’s transactive memory system and highlights their role as catalysts for speaking up. Practical implications include suggestions for the design of future AI systems and for human-AI team training in healthcare and beyond.

    Solving the Explainable AI Conundrum: How to Bridge the Gap Between Clinicians’ Needs and Developers’ Goals

    Full text link
    Explainable AI (XAI) is considered the number one solution for overcoming implementation hurdles of AI/ML in clinical practice. However, it is still unclear how clinicians and developers interpret XAI (differently) and whether building such systems is achievable or even desirable. This longitudinal multi-method study queries clinicians and developers (n=112) as they co-developed the DCIP, an ML-based prediction system for Delayed Cerebral Ischemia. The resulting framework reveals that ambidexterity between exploration and exploitation can help bridge opposing goals and requirements and thereby improve the design and implementation of AI/ML in healthcare.

    Effects of interacting with a large language model compared with a human coach on the clinical diagnostic process and outcomes among fourth-year medical students: study protocol for a prospective, randomised experiment using patient vignettes.

    Get PDF
    INTRODUCTION Versatile large language models (LLMs) have the potential to augment diagnostic decision-making by assisting diagnosticians, thanks to their ability to engage in open-ended, natural conversations and their comprehensive knowledge access. Yet the novelty of LLMs in diagnostic decision-making introduces uncertainties regarding their impact. Clinicians unfamiliar with the use of LLMs in their professional context may rely on general attitudes towards LLMs more broadly, potentially hindering thoughtful use and critical evaluation of their input, leading either to over-reliance and a lack of critical thinking or to an unwillingness to use LLMs as diagnostic aids. To address these concerns, this study examines the influence on the diagnostic process and outcomes of interacting with an LLM compared with a human coach, and of prior training versus no training for interacting with either of these 'coaches'. Our findings aim to illuminate the potential benefits and risks of employing artificial intelligence (AI) in diagnostic decision-making.

    METHODS AND ANALYSIS We are conducting a prospective, randomised experiment with N=158 fourth-year medical students from Charité Medical School, Berlin, Germany. Participants are asked to diagnose patient vignettes after being assigned to either a human coach or ChatGPT and after either training or no training (both between-subject factors). We are specifically collecting data on the effects of using either of these 'coaches' and of additional training on information search, the number of hypotheses entertained, diagnostic accuracy and confidence. Statistical methods will include linear mixed effects models. Exploratory analyses of the interaction patterns and attitudes towards AI will also generate more generalisable knowledge about the role of AI in medicine.

    ETHICS AND DISSEMINATION The Bern Cantonal Ethics Committee considered the study exempt from full ethical review (BASEC No: Req-2023-01396). All methods will be conducted in accordance with relevant guidelines and regulations. Participation is voluntary and informed consent will be obtained. Results will be published in peer-reviewed scientific medical journals. Authorship will be determined according to the International Committee of Medical Journal Editors guidelines.
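    The protocol names linear mixed effects models as the planned analysis. As a minimal sketch of what such an analysis could look like in Python with statsmodels, assuming a long-format dataset with one row per participant-vignette pair; all column and file names below are hypothetical placeholders, not variables from the study:

        # Minimal sketch of a linear mixed effects analysis. All column and
        # file names are hypothetical; the study's actual variables and
        # model specification may differ.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("vignette_results.csv")  # hypothetical data file

        # Fixed effects: coach type (LLM vs human coach), training (yes/no),
        # and their interaction; random intercepts capture the repeated
        # vignettes nested within participants.
        model = smf.mixedlm(
            "diagnostic_accuracy ~ coach * training",
            data=df,
            groups=df["participant_id"],
        )
        result = model.fit()
        print(result.summary())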

    Solving the explainable AI conundrum by bridging clinicians’ needs and developers’ goals

    Full text link
    Explainable artificial intelligence (XAI) has emerged as a promising solution for addressing the implementation challenges of AI/ML in healthcare. However, little is known about how developers and clinicians interpret XAI and what conflicting goals and requirements they may have. This paper presents the findings of a longitudinal multi-method study involving 112 developers and clinicians co-designing an XAI solution for a clinical decision support system. Our study identifies three key differences between developer and clinician mental models of XAI: opposing goals (model interpretability vs. clinical plausibility), different sources of truth (data vs. patient), and the role of exploring new vs. exploiting old knowledge. Based on our findings, we propose design solutions that can help address the XAI conundrum in healthcare, including the use of causal inference models, personalized explanations, and ambidexterity between exploration and exploitation mindsets. Our study highlights the importance of considering the perspectives of both developers and clinicians in the design of XAI systems and provides practical recommendations for improving the effectiveness and usability of XAI in healthcare.

    Emergency at 35’000 Ft.: How Cockpit and Cabin Crews Lead Each Other to Safety

    Get PDF
    Many aircraft accidents have illustrated the catastrophic consequences of ineffective leadership. However, the optimal form of leadership during emergencies on board is not yet fully explored, particularly with regard to its influence on decision making. Several authors have studied decision-making errors in the cockpit, but to our knowledge none has considered the role of the cabin crew, who in these stressful and challenging circumstances must collaborate closely with pilots despite obvious differences in training and culture. This study investigates the influence of collective leadership on the quality of decision making by observing 84 cockpit and cabin crews (N=504) live during a simulated emergency. Results indicate that collective leadership strongly correlates with decision quality and crew performance. We conclude by discussing the implications of these results for decision making in aviation and recommending changes to the design and content of crew resource management (CRM) training.

    Choosing human over AI doctors? How comparative trust associations and knowledge relate to risk and benefit perceptions of AI in healthcare

    No full text
    The development of artificial intelligence (AI) in healthcare is accelerating rapidly. Beyond the urge for technological optimization, public perceptions and preferences regarding the application of such technologies remain poorly understood. Risk and benefit perceptions of novel technologies are key drivers of successful implementation, so it is crucial to understand the factors that condition these perceptions. In this study, we draw on the risk perception and human-AI interaction literature to examine how explicit (i.e., deliberate) and implicit (i.e., automatic) comparative trust associations with AI versus physicians, and knowledge about AI, relate to likelihood perceptions of risks and benefits of AI in healthcare and to preferences for the integration of AI in healthcare. We use survey data (N = 378) to specify a path model. Results reveal that the path from implicit comparative trust associations to relative preferences for AI over physicians is significant only through risk perceptions, not through benefit perceptions; this finding is reversed for AI knowledge. Explicit comparative trust associations relate to AI preference through both risk and benefit perceptions. These findings indicate that risk perceptions of AI in healthcare might be driven more strongly by affect-laden factors than benefit perceptions, which in turn might depend more on reflective cognition. Implications of our findings and directions for future research are discussed in light of the conceptualization of trust as a heuristic and of dual-process theories of judgment and decision-making. Regarding the design and implementation of AI-based healthcare technologies, our findings suggest that a holistic integration of public viewpoints is warranted.
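    The abstract specifies a path model estimated from the survey data. A rough illustration of this kind of analysis in Python with the semopy package is sketched below; the variable names and paths are simplified assumptions for illustration, not the published model specification:

        # Rough sketch of a path model in semopy (lavaan-style syntax).
        # Variable names and paths are illustrative assumptions, not the
        # published model specification.
        import pandas as pd
        from semopy import Model

        df = pd.read_csv("survey_data.csv")  # hypothetical survey file

        spec = """
        risk_perception ~ implicit_trust + explicit_trust + ai_knowledge
        benefit_perception ~ implicit_trust + explicit_trust + ai_knowledge
        ai_preference ~ risk_perception + benefit_perception
        """

        model = Model(spec)
        model.fit(df)
        print(model.inspect())  # path coefficients and significance tests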