COVID-19 vaccination lowers the risk of infection, severe disease, and death
In May 2021, a SARS-CoV-2 outbreak with a high number of vaccine breakthrough infections occurred in a nursing and care home in the Oberpfalz region, after which serological tests and epidemiological data from the residents and the nursing staff were analyzed. The aim of this investigation was to describe the risk of hospitalization and death among vaccinated compared with unvaccinated individuals.
Will Humans-in-the-Loop Become Borgs? Merits and Pitfalls of Working with AI
We analyze how advice from an AI affects complementarities between humans and AI, in particular what humans know that an AI does not know: “unique human knowledge.” In a multi-method study consisting of an analytical model, experimental studies, and a simulation study, our main finding is that human choices converge toward similar responses, improving individual accuracy. However, as the overall individual accuracy of the group of humans improves, individual unique human knowledge decreases. Based on this finding, we claim that humans interacting with AI behave like “Borgs,” that is, cyborg creatures with strong individual performance but no human individuality. We argue that the loss of unique human knowledge may lead to several undesirable outcomes in a host of human–AI decision environments. We demonstrate this harmful impact on the “wisdom of crowds.” Simulation results based on our experimental data suggest that groups of humans interacting with AI are far less effective than human groups without AI assistance. We suggest mitigation techniques to create environments that can provide the best of both worlds (e.g., by personalizing AI advice). We show that such interventions perform well individually as well as in wisdom-of-crowds settings.
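The convergence mechanism described in the abstract can be sketched in a few lines. The script below is not the authors' model; it is a minimal illustration, under assumed values (crowd size, human noise level, a biased AI estimate, and an advice weight), of how pulling independent estimates toward one shared AI answer can make each individual more accurate while degrading the crowd average.

```python
import random
import statistics

random.seed(0)

TRUTH = 100.0       # hypothetical quantity the crowd estimates
AI_ADVICE = 110.0   # the AI's (slightly biased) point estimate
N = 1000            # assumed crowd size
WEIGHT = 0.8        # assumed weight humans place on the AI's advice

# Independent human estimates: unbiased but individually noisy.
humans = [random.gauss(TRUTH, 20.0) for _ in range(N)]

# Advice-taking: each estimate is pulled toward the AI's answer, so
# individual errors shrink but all errors become correlated.
advised = [(1 - WEIGHT) * h + WEIGHT * AI_ADVICE for h in humans]

crowd_error_without_ai = abs(statistics.mean(humans) - TRUTH)
crowd_error_with_ai = abs(statistics.mean(advised) - TRUTH)

print(f"crowd error without AI advice: {crowd_error_without_ai:.2f}")
print(f"crowd error with AI advice:    {crowd_error_with_ai:.2f}")
```

Because independent noise averages out but a shared bias does not, the advised crowd's mean error is dominated by the AI's bias even though each advised individual is, on average, closer to the truth than before.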
Cognitive Challenges in Human-Artificial Intelligence Collaboration: Investigating the Path Toward Productive Delegation
We study how humans make decisions when they collaborate with an artificial intelligence (AI) in a setting where humans and the AI perform classification tasks. Our experimental results suggest that humans and an AI working together can outperform an AI that, on its own, outperforms humans. However, the combined performance improves only when the AI delegates work to humans, not when humans delegate work to the AI. The AI's delegation performance improved even when it delegated to low-performing subjects; by contrast, humans did not delegate well and did not benefit from delegation to the AI. This poor delegation performance cannot be explained by algorithm aversion. On the contrary, subjects acted rationally in an internally consistent manner by trying to follow a proven delegation strategy and appeared to appreciate the AI support. However, human performance suffered as a result of a lack of metaknowledge; that is, humans were not able to assess their own capabilities correctly, which in turn led to poor delegation decisions. Lacking metaknowledge, in contrast to reluctance to use AI, is an unconscious trait. It fundamentally limits how well human decision makers can collaborate with AI and other algorithms. The results have implications for the future of work, the design of human-AI collaborative environments, and education in the digital age.
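A minimal simulation can make the metaknowledge point concrete. The sketch below is hypothetical and not the paper's experiment: a decision maker follows the rule "delegate when the AI is expected to be better," but bases that decision on a noisy self-assessment of their own accuracy. The AI accuracy, the range of human accuracy, and the miscalibration level are all assumed values.

```python
import random

random.seed(1)

AI_ACC = 0.85       # assumed fixed AI accuracy per task
N_TASKS = 10_000

def team_accuracy(miscalibration_sd):
    """Delegate to the AI when the human *believes* the AI is better;
    the belief is the true accuracy plus Gaussian self-assessment noise."""
    correct = 0
    for _ in range(N_TASKS):
        human_acc = random.uniform(0.5, 1.0)   # true per-task human accuracy
        believed_acc = human_acc + random.gauss(0.0, miscalibration_sd)
        if believed_acc < AI_ACC:              # delegate the task to the AI
            correct += random.random() < AI_ACC
        else:                                  # keep the task
            correct += random.random() < human_acc
    return correct / N_TASKS

cal = team_accuracy(0.0)    # perfect metaknowledge
mis = team_accuracy(0.25)   # miscalibrated self-assessment
print("calibrated self-assessment:   ", round(cal, 3))
print("miscalibrated self-assessment:", round(mis, 3))
```

With perfect metaknowledge the rule routes each task to whichever party is actually better; with a noisy self-assessment the same rule, followed diligently, still misroutes tasks, which mirrors the paper's point that the failure lies in calibration rather than in unwillingness to delegate.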
Exploring User Heterogeneity in Human Delegation Behavior towards AI
As artificial intelligence (AI) can increasingly be used to support decision-making in various areas, enhancing the understanding of human-AI collaboration is more important than ever. We study delegation between humans and AI as one form of collaboration. Specifically, we investigate whether there exist distinct patterns of human delegation behavior towards AI. In a laboratory experiment, subjects performed an image classification task with 100 images to be classified. For the last 50 images, the treatment group had the option to delegate images to an AI. By performing a cluster analysis on this treatment group, we find four types of delegation behavior towards AI that differ in their overall performance, delegation rate, and accuracy of self-assessment. Our results motivate further research on delegation from humans to AI and serve as a starting point for research on human collaboration with AI at the individual level.
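The clustering step can be sketched as follows. This is a hypothetical illustration with random stand-in data, not the study's dataset: each subject is described by the three features named in the abstract (overall performance, delegation rate, self-assessment accuracy) and grouped with a plain k-means written out directly rather than taken from a library.

```python
import random

random.seed(2)

# Stand-in per-subject feature vectors, all scaled to [0, 1]:
# (overall accuracy, delegation rate, self-assessment accuracy).
subjects = [(random.random(), random.random(), random.random()) for _ in range(200)]

def kmeans(points, k, iters=50):
    """Minimal k-means with squared Euclidean distance."""
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center.
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centers[i])))
            clusters[nearest].append(p)
        # Recompute each center as the mean of its members
        # (keep the old center if a cluster went empty).
        centers = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

centers, clusters = kmeans(subjects, k=4)
for center, members in zip(centers, clusters):
    print([round(x, 2) for x in center], "->", len(members), "subjects")
```

In practice one would standardize the features and validate the choice of four clusters (e.g., with silhouette scores); the sketch only shows the mechanics of grouping subjects by these three behavioral dimensions.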
Network Properties of Folksonomies
Social resource sharing systems like YouTube and del.icio.us have acquired a large number of users within the last few years. They provide rich resources for data analysis, information retrieval, and knowledge discovery applications. A first step towards this end is to gain better insights into the content and structure of these systems. In this paper, we analyse the main network characteristics of two of these systems. We consider their underlying data structures – so-called folksonomies – as tri-partite hypergraphs, and adapt classical network measures like characteristic path length and clustering coefficient to them. Subsequently, we introduce a network of tag co-occurrence and investigate some of its statistical properties, focusing on correlations in node connectivity and pointing out features that reflect emergent semantics within the folksonomy. We show that simple statistical indicators unambiguously spot non-social behavior such as spam.
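The tag co-occurrence construction lends itself to a small sketch. The toy folksonomy below is invented data: it only illustrates the idea of treating (user, tag, resource) triples as the underlying structure, linking tags that annotate the same resource, and computing a local clustering coefficient on the resulting tag network.

```python
from collections import defaultdict
from itertools import combinations

# Toy folksonomy: (user, tag, resource) tag assignments (hypothetical data).
assignments = [
    ("u1", "python", "r1"), ("u1", "code", "r1"),
    ("u2", "python", "r1"), ("u2", "web", "r2"),
    ("u2", "code", "r2"),   ("u3", "web", "r2"),
    ("u3", "python", "r3"), ("u3", "web", "r3"),
]

# Two tags co-occur when they are attached to the same resource.
tags_per_resource = defaultdict(set)
for user, tag, resource in assignments:
    tags_per_resource[resource].add(tag)

graph = defaultdict(set)
for tags in tags_per_resource.values():
    for a, b in combinations(sorted(tags), 2):
        graph[a].add(b)
        graph[b].add(a)

def clustering(node):
    """Fraction of the node's neighbour pairs that are themselves linked."""
    nbrs = graph[node]
    if len(nbrs) < 2:
        return 0.0
    links = sum(1 for a, b in combinations(sorted(nbrs), 2) if b in graph[a])
    return links / (len(nbrs) * (len(nbrs) - 1) / 2)

for tag in sorted(graph):
    print(tag, sorted(graph[tag]), round(clustering(tag), 2))
```

The paper works with the full tri-partite hypergraph and adapted measures; this one-mode projection onto tags is just the simplest version of the co-occurrence network it describes.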