    Diagnostic Doubt and Artificial Intelligence: An Inductive Field Study of Radiology Work

    Emerging AI technologies are assumed to further routinize and improve the efficiency of decision-making tasks, even in professional contexts such as medical diagnosis, human resource management, and criminal justice. Yet we have little research on how AI technologies are actually adopted and used in practice. Prior research on technology in organizations documents a gap between the expectations for new technology and its actual use in practice. We conducted a comparative field study of three sections of a Department of Radiology in a major US hospital, where new and existing AI tools were being used and experimented with. In contrast to expectations about AI tools, our study reveals how such tools can lead routine professional decision-making tasks to become nonroutine: they increased ambiguity, and decision makers had to work to reduce it. This is particularly challenging because the costs of dealing with ambiguity (increased time to diagnose) were often weighed against its benefits (potentially more accurate diagnoses). This study contributes to literatures on technology, work, and organizations, as well as on the role of ambiguity in professionals' knowledge work.

    To Engage or Not to Engage with AI for Critical Judgments: How Professionals Deal with Opacity When Using AI for Medical Diagnosis

    Artificial intelligence (AI) technologies promise to transform how professionals conduct knowledge work by augmenting their capabilities for making professional judgments. We know little, however, about how human-AI augmentation takes place in practice. Gaining this understanding is particularly important when professionals use AI tools to form judgments on critical decisions. We conducted an in-depth field study in a major U.S. hospital where AI tools were used in three departments by diagnostic radiologists making breast cancer, lung cancer, and bone age determinations. The study illustrates the hindering effects of opacity that professionals experienced when using AI tools and explores how they grappled with this opacity in practice. In all three departments, opacity increased uncertainty because AI tool results often diverged from professionals' initial judgments without providing underlying reasoning. In only one of the three departments did professionals consistently incorporate AI results into their final judgments, achieving what we call engaged augmentation. These professionals invested in AI interrogation practices: practices enacted by human experts to relate their own knowledge claims to AI knowledge claims. Professionals in the other two departments did not enact such practices and did not incorporate AI inputs into their final decisions, which we call unengaged “augmentation.” Our study unpacks the challenges involved in augmenting professional judgment with powerful, yet opaque, technologies and contributes to the literature on AI adoption in knowledge work.

    Artificial Intelligence in Practice

    Technological advances in artificial intelligence (AI) promise continuous improvements in problem-solving, perception, and reasoning that are edging closer to human capabilities. AI technologies raise important questions about fundamental issues of organizing, especially the impact of AI on professionals and their work practices. I examine the work practices of professionals facing the growing availability of AI tools in their field: medical diagnosis. I draw on ethnographic field data collected across four radiology units at a major hospital in the United States at the cutting edge of adopting and evaluating diagnostic AI tools. My data collection and analysis span processes of developing, evaluating, adopting, and using numerous AI tools over time and in multiple diagnostic contexts. In one chapter, I compare four theoretical approaches to studying technology in organizational processes, answering the question: how does each perspective account for material agency? In the following chapter, I investigate how professionals use AI tools in their judgment-forming processes. I uncover how physicians experienced ambiguity throughout their process, and even more so after viewing AI results, and I unpack the ways in which they managed this surge in ambiguity and its impact on how AI results were incorporated (or not) into their final diagnoses. These findings contribute to literatures on augmentation, ambiguity, and how professionals experience opaque technologies. In the final chapter, I ask how managers evaluate which AI tools to adopt in their organizations. I shed light on how organizational leaders evaluate the potential opportunities and challenges of adopting AI tools and address the new knowledge issues that emerge. This chapter contributes to literatures on ground truth, technology evaluation practices, and studies of sociomateriality. Overall, the findings of this project illuminate critical challenges to the adoption and evaluation of AI tools that must be understood and addressed if professionals, organizations, and society are to realize the full transformative promise of AI technologies.

    Teachers' Perceptions and Implementation of All-Ed Group Learning in the Classroom

    Despite evidence that teachers value group learning, research shows variability in its implementation. All Learners Learning Every Day (ALL-ED) is a mastery-oriented professional development (PD) program that trains teachers to effectively implement group learning strategies and self-regulated learning (SRL) in the differentiated classroom. Using a constructivist qualitative paradigm, semi-structured interviews were conducted in an urban high school to examine nine teachers' experiences using ALL-ED group learning routines. Results showed that all interviewees used ALL-ED group learning. While few incorporated other types of group learning routines into the classroom, some used independent work in concert with group learning. The teachers perceived that they played a principal role in implementation and made informed choices about student grouping and choice of routines. Two broad themes, internal and external supports, emerged in response to the second research question; together, they informed, influenced, and motivated teachers' decisions to continue using ALL-ED in their classrooms. Future research is needed to explore the quality of PD for teachers and the effect of student preparation and training on group learning implementation.

    Is AI Ground Truth Really True? The Dangers of Training and Evaluating AI Tools Based on Experts’ Know-What

    Organizational decision-makers need to evaluate AI tools in light of increasing claims that such tools outperform human experts. Yet measuring the quality of knowledge work is challenging, raising the question of how to evaluate AI performance in such contexts. We investigate this question through a field study of a major U.S. hospital, observing how managers evaluated five different machine learning (ML) based AI tools. Each tool reported high performance according to standard AI accuracy measures, which were based on ground truth labels provided by qualified experts. Trying these tools out in practice, however, revealed that none of them met expectations. Searching for explanations, managers began confronting the high uncertainty of the experts' know-what knowledge captured in the ground truth labels used to train and validate the ML models. In practice, experts address this uncertainty by drawing on rich know-how practices, which were not incorporated into these ML-based tools. Discovering the disconnect between AI's know-what and experts' know-how enabled managers to better understand the risks and benefits of each tool. This study shows the dangers of treating the ground truth labels used in ML models as objective when the underlying knowledge is uncertain. We outline the implications of our study for developing, training, and evaluating AI for knowledge work.