THE NEW DREAM TEA(I)M? Rethinking Human-AI Collaboration based on Human Teamwork
The continuing rise of artificial intelligence (AI) creates a new frontier of information systems that has the potential to change the future of work. Humans and AI are set to complete tasks as a team, using their complementary strengths. Previous research has investigated several aspects of human-AI collaboration, such as the impact of human-AI teams on performance and how AI can be designed to complement the human teammate. However, such experiments suffer from a lack of comparability due to the virtually unlimited number of possible configurations, which ultimately limits their implications. In this study, we develop an overarching framework for experiments on human-AI collaboration, using human teamwork as a theoretical lens. Our framework provides a novel, temporal structure for the research domain, allowing emerging topics to be clustered sequentially.
On the Influence of Explainable AI on Automation Bias
Artificial intelligence (AI) is gaining momentum, and its importance for the future of work in many areas, such as medicine and banking, is continuously rising. However, insights on the effective collaboration of humans and AI are still rare. Typically, AI supports humans in decision-making by addressing human limitations. However, it may also evoke human biases, especially automation bias, an over-reliance on AI advice. We aim to shed light on the potential of explainable AI (XAI) to influence automation bias. In this pre-test, we derive a research model and describe our study design. Subsequently, we conduct an online experiment on hotel review classification and discuss first results. We expect our research to contribute to the design and development of safe hybrid intelligence systems.
Human-AI Collaboration: The Effect of AI Delegation on Human Task Performance and Task Satisfaction
Recent work has proposed artificial intelligence (AI) models that can learn to decide whether to make a prediction for an instance of a task or to delegate it to a human by considering both parties' capabilities. In simulations with synthetically generated or context-independent human predictions, delegation can help improve the performance of human-AI teams -- compared to humans or the AI model completing the task alone. However, so far, it remains unclear how humans perform and how they perceive the task when they are aware that an AI model delegated task instances to them. In an experimental study with 196 participants, we show that task performance and task satisfaction improve through AI delegation, regardless of whether humans are aware of the delegation. Additionally, we identify humans' increased levels of self-efficacy as the underlying mechanism for these improvements in performance and satisfaction. Our findings provide initial evidence that allowing AI models to take over more management responsibilities can be an effective form of human-AI collaboration in workplaces.
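The delegation mechanism described above can be illustrated with a minimal sketch. The abstract does not specify the actual learned delegation model, so this is a hedged simplification: a confidence-threshold policy in which the AI keeps instances it is confident about and delegates the rest to its human teammate. The function name `delegate` and the threshold value are illustrative assumptions, not part of the study.

```python
# Hypothetical sketch of a confidence-based delegation policy (an
# assumption for illustration; the study's actual model learns when
# to delegate by considering both the AI's and the human's capabilities).

def delegate(confidences, threshold=0.8):
    """Split instance indices into AI-handled and human-delegated sets.

    confidences -- the AI model's confidence for each task instance
    threshold   -- instances below this confidence go to the human
    """
    ai_handled = [i for i, c in enumerate(confidences) if c >= threshold]
    delegated = [i for i, c in enumerate(confidences) if c < threshold]
    return ai_handled, delegated

# Example: four instances, two above and two below the threshold.
ai_idx, human_idx = delegate([0.95, 0.55, 0.81, 0.40], threshold=0.8)
# ai_idx == [0, 2], human_idx == [1, 3]
```

A learned delegation model would replace the fixed threshold with a policy trained on both parties' error patterns, but the interface -- partitioning instances between AI and human -- stays the same.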
A Meta-Analysis of the Utility of Explainable Artificial Intelligence in Human-AI Decision-Making
Research in artificial intelligence (AI)-assisted decision-making is experiencing tremendous growth, with a constantly rising number of studies evaluating the effect of AI, with and without techniques from the field of explainable AI (XAI), on human decision-making performance. However, as tasks and experimental setups vary due to different objectives, some studies report improved user decision-making performance through XAI, while others report only negligible effects. Therefore, in this article, we present an initial synthesis of existing research on XAI studies using a statistical meta-analysis to derive implications across existing research. We observe a statistically significant positive impact of XAI on users' performance. Additionally, the first results indicate that human-AI decision-making tends to yield better task performance on text data. However, we find no effect of explanations on users' performance compared to sole AI predictions. Our initial synthesis gives rise to future research investigating the underlying causes and contributes to further developing algorithms that effectively benefit human decision-makers by providing meaningful explanations.
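The abstract does not state which pooling model the meta-analysis uses, so as a hedged illustration, the sketch below implements one standard approach: a random-effects pooled estimate via the DerSimonian-Laird estimator, which combines per-study effect sizes while accounting for between-study variance. The function name and the example numbers are assumptions for illustration only.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study effect sizes with a random-effects model
    (DerSimonian-Laird estimator; one common choice, assumed here)."""
    # Fixed-effect weights and estimate.
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic.
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance, floored at 0
    # Re-weight with between-study variance added in.
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se

# Illustrative data: two studies with effects 0.2 and 0.4, equal variance.
pooled, se = dersimonian_laird([0.2, 0.4], [0.01, 0.01])
# pooled == 0.3 (symmetric effects), se == 0.1
```

A positive pooled estimate whose confidence interval excludes zero would correspond to the kind of "statistically significant positive impact" the abstract reports.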
Towards Effective Human-AI Decision-Making: The Role of Human Learning in Appropriate Reliance on AI Advice
The true potential of human-AI collaboration lies in exploiting the complementary capabilities of humans and AI to achieve a joint performance superior to that of the individual AI or human, i.e., to achieve complementary team performance (CTP). To realize this complementarity potential, humans need to exercise discretion in following AI’s advice, i.e., appropriately relying on the AI’s advice. While previous work has focused on building a mental model of the AI to assess AI recommendations, recent research has shown that the mental model alone cannot explain appropriate reliance. We hypothesize that, in addition to the mental model, human learning is a key mediator of appropriate reliance and, thus, CTP. In this study, we demonstrate the relationship between learning and appropriate reliance in an experiment with 100 participants. This work provides fundamental concepts for analyzing reliance and derives implications for the effective design of human-AI decision-making
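Appropriate reliance as discussed above is commonly operationalized by looking only at cases where the human's initial answer disagrees with the AI's advice: relying appropriately means switching to correct advice and resisting incorrect advice. The sketch below computes two such metrics, often called relative AI reliance (RAIR) and relative self-reliance (RSR); this is a common operationalization from the literature, not necessarily the exact measure used in this study, and all names are illustrative.

```python
def reliance_profile(initial, advice, final, truth):
    """Measure reliance on AI advice, split by advice correctness.

    RAIR: fraction of disagreement cases with CORRECT advice where
          the human switched to the advice (higher is better).
    RSR:  fraction of disagreement cases with WRONG advice where
          the human kept their own answer (higher is better).
    """
    switched_to_correct = switched_to_wrong = 0
    correct_ops = wrong_ops = 0
    for h0, a, h1, y in zip(initial, advice, final, truth):
        if h0 == a:
            continue  # no disagreement, so reliance is not observable
        if a == y:
            correct_ops += 1
            switched_to_correct += (h1 == a)
        else:
            wrong_ops += 1
            switched_to_wrong += (h1 == a)
    rair = switched_to_correct / correct_ops if correct_ops else None
    rsr = 1 - switched_to_wrong / wrong_ops if wrong_ops else None
    return rair, rsr

# Illustrative data: binary labels for four task instances.
rair, rsr = reliance_profile(
    initial=[0, 1, 0, 1],  # human's first answers
    advice=[1, 1, 1, 0],   # AI recommendations
    final=[1, 1, 0, 1],    # human's answers after seeing the advice
    truth=[1, 1, 0, 1],    # ground truth
)
# rair == 1.0, rsr == 1.0: switched to the one correct piece of
# conflicting advice and resisted both wrong ones
```

Under this operationalization, complementary team performance requires both values to be high: blindly following the AI maximizes RAIR but drives RSR to zero, and ignoring the AI does the reverse.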
Overcoming Anchoring Bias: The Potential of AI and XAI-based Decision Support
Information systems (IS) are frequently designed to leverage the negative effect of anchoring bias to influence individuals’ decision-making (e.g., by manipulating purchase decisions). Recent advances in Artificial Intelligence (AI) and the explanations of its decisions through explainable AI (XAI) have opened new opportunities for mitigating biased decisions. So far, the potential of these technological advances to overcome anchoring bias remains widely unclear. To this end, we conducted two online experiments with a total of N=390 participants in the context of purchase decisions to examine the impact of AI and XAI-based decision support on anchoring bias. Our results show that AI alone and its combination with XAI help to mitigate the negative effect of anchoring bias. Ultimately, our findings have implications for the design of AI and XAI-based decision support and IS to overcome cognitive biases
On the Effect of Contextual Information on Human Delegation Behavior in Human-AI Collaboration
The constantly increasing capabilities of artificial intelligence (AI) open new possibilities for human-AI collaboration. One promising approach to leverage existing complementary capabilities is allowing humans to delegate individual instances to the AI. However, enabling humans to delegate instances effectively requires them to assess both their own and the AI's capabilities in the context of the given task. In this work, we explore the effects of providing contextual information on human decisions to delegate instances to an AI. We find that providing participants with contextual information significantly improves the human-AI team performance. Additionally, we show that the delegation behavior changes significantly when participants receive varying types of contextual information. Overall, this research advances the understanding of human-AI interaction in human delegation and provides actionable insights for designing more effective collaborative systems.