    THE NEW DREAM TEA(I)M? Rethinking Human-AI Collaboration based on Human Teamwork

    The continuing rise of artificial intelligence (AI) creates a new frontier of information systems that has the potential to change the future of work. Humans and AI are set to complete tasks as a team, using their complementary strengths. Previous research has investigated several aspects of human-AI collaboration, such as the impact of human-AI teams on performance and how AI can be designed to complement the human teammate. However, such experiments suffer from a lack of comparability due to the virtually unlimited space of possible configurations, which ultimately limits their implications. In this study, we develop an overarching framework for experiments on human-AI collaboration, using human teamwork as a theoretical lens. Our framework provides a novel, temporal structure for the research domain, through which emerging topics can be clustered sequentially.
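
    The abstract describes the framework only at a high level. As a minimal, purely illustrative sketch of how experiment configurations might be clustered along a temporal teamwork structure (the phase names, fields, and example studies below are assumptions, not taken from the paper):

```python
from dataclasses import dataclass
from enum import Enum

class TeamPhase(Enum):
    """Hypothetical temporal phases, loosely inspired by team process
    models; the paper's actual structure is not given in the abstract."""
    FORMATION = "team formation"
    COORDINATION = "in-task coordination"
    REFLECTION = "post-task reflection"

@dataclass
class HumanAIExperiment:
    """Minimal record for placing a human-AI collaboration experiment
    within a temporal framework."""
    study: str
    phase: TeamPhase
    task: str
    ai_role: str  # e.g. "recommender", "peer", "critic"

experiments = [
    HumanAIExperiment("study-a", TeamPhase.COORDINATION,
                      "image labeling", "recommender"),
    HumanAIExperiment("study-b", TeamPhase.REFLECTION,
                      "text drafting", "critic"),
]

# Cluster studies by phase, mirroring the idea of sequential clustering.
by_phase: dict[TeamPhase, list[HumanAIExperiment]] = {}
for exp in experiments:
    by_phase.setdefault(exp.phase, []).append(exp)

for phase, studies in by_phase.items():
    print(phase.value, "->", [s.study for s in studies])
```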

    The Problem of the Automation Bias in the Public Sector: A Legal Perspective

    Automation bias describes the phenomenon, demonstrated in behavioural psychology, that people place excessive trust in the decision suggestions of machines. The law currently draws a dichotomy: it covers only fully automated decisions, not those involving human decision makers at any stage of the process. However, the widespread use of such systems, for example to inform decisions in education or benefits administration, creates a leverage effect and increases the number of people affected. Particularly in environments where people routinely have to make a large number of similar decisions, the risk of automation bias increases. As an example, automated decisions providing suggestions for job placements illustrate the particular challenges of decision support systems in the public sector. So far, these risks have not been sufficiently addressed in legislation, as an analysis of the GDPR and the draft Artificial Intelligence Act shows. I argue for the need for regulation and present initial approaches.

    Who's Thinking? A Push for Human-Centered Evaluation of LLMs using the XAI Playbook

    Deployed artificial intelligence (AI) often impacts humans, and there is no one-size-fits-all metric to evaluate these tools. Human-centered evaluation of AI-based systems combines quantitative and qualitative analysis with human input. It has been explored in some depth in the explainable AI (XAI) and human-computer interaction (HCI) communities. Gaps remain, but the basic understanding that humans interact with AI and accompanying explanations, and that humans' needs -- complete with their cognitive biases and quirks -- should be held front and center, is accepted by the community. In this paper, we draw parallels between the relatively mature field of XAI and the rapidly evolving research boom around large language models (LLMs). Accepted evaluative metrics for LLMs are not human-centered. We argue that many of the same paths trodden by the XAI community over the past decade will be retrodden when discussing LLMs. Specifically, we argue that humans' tendencies -- again, complete with their cognitive biases and quirks -- should rest front and center when evaluating deployed LLMs. We outline three developed focus areas of human-centered evaluation of XAI: mental models, use case utility, and cognitive engagement, and we highlight the importance of exploring each of these concepts for LLMs. Our goal is to jumpstart human-centered LLM evaluation.
    Comment: Accepted to the CHI 2023 workshop on Generative AI and HCI.
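
    As a minimal illustration of what folding these three focus areas into an evaluation record could look like (all field names, scales, and values below are assumptions for illustration, not the authors' instrument):

```python
from dataclasses import dataclass

@dataclass
class HumanCenteredEval:
    """Hypothetical per-participant record pairing a conventional
    automatic metric with the three human-centered focus areas."""
    participant_id: str
    task_accuracy: float         # conventional, non-human-centered metric
    mental_model_score: float    # e.g. 0-1 agreement between the user's predictions and actual model behaviour
    use_case_utility: int        # e.g. 1-7 Likert rating of usefulness for the task
    cognitive_engagement: float  # e.g. 0-1 normalized depth-of-interaction measure

def summarize(records: list[HumanCenteredEval]) -> dict[str, float]:
    """Average each dimension; a real study would also report
    distributions and qualitative findings, not just means."""
    n = len(records)
    return {
        "task_accuracy": sum(r.task_accuracy for r in records) / n,
        "mental_model": sum(r.mental_model_score for r in records) / n,
        "utility": sum(r.use_case_utility for r in records) / n,
        "engagement": sum(r.cognitive_engagement for r in records) / n,
    }

print(summarize([
    HumanCenteredEval("p01", 0.82, 0.60, 5, 0.70),
    HumanCenteredEval("p02", 0.74, 0.45, 6, 0.55),
]))
```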

    Human-AI Collaboration in Content Moderation: The Effects of Information Cues and Time Constraints

    With the rapid development of online social media, an extremely large amount of user-generated content is produced by users worldwide every day. Content moderation has emerged to ensure the quality of posts on various social media platforms. This process typically demands collaboration between humans and AI because the two agents complement each other in different facets. To understand how AI can better assist humans in making final judgments in the “machine-in-the-loop” paradigm, we propose a lab experiment to explore the influence of different types of cues provided by AI through a nudging approach, as well as time constraints, on human moderators’ performance. The proposed study contributes to the literature on AI-assisted decision-making and helps social media platforms create an effective human-AI collaboration framework for content moderation.
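
    The design crosses AI-provided cue types with time constraints. A sketch of how such a two-factor condition grid might be enumerated (the specific cue types and time limit are invented for illustration; the abstract does not specify them):

```python
from itertools import product

# Hypothetical factor levels; the abstract does not name them.
cue_types = ["no cue", "confidence score", "highlighted evidence"]
time_limits_s = [None, 30]  # None = no time constraint

conditions = [
    {"cue": cue, "time_limit_s": limit}
    for cue, limit in product(cue_types, time_limits_s)
]

# Each moderator-post pair would be assigned one condition; decision
# accuracy and decision time would be the outcome measures.
for i, cond in enumerate(conditions, start=1):
    print(f"Condition {i}: {cond}")
```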

    Fair Algorithms in Organizations: A Performative-Sensemaking Model

    The past few years have seen an unprecedented explosion of interest in fair machine learning algorithms. Such algorithms are increasingly being deployed to improve fairness in high-stakes decisions in organizations, such as hiring and risk assessments. Yet, despite early optimism, recent empirical studies suggest that the use of fair algorithms is highly unpredictable and may not necessarily enhance fairness. In this paper, we develop a conceptual model that seeks to unpack the dynamic sensemaking and sensegiving processes associated with the use of fair algorithms in organizations. By adopting a performative-sensemaking lens, we aim to systematically shed light on how the use of fair algorithms can produce new normative realities in organizations, i.e., new ways of performing fairness. The paper contributes to the growing literature on algorithmic fairness and to practice-based studies of IS phenomena.