
    Investigating the relationship between AI and trust in human-AI collaboration

    With the increasing development of information technology, the implementation of artificial intelligence (AI) has become widespread in recent years, empowering virtual team collaboration by increasing collaboration efficiency and achieving superior collaboration results. Trust in the process of human-AI interaction has been identified as a challenge for team collaboration in this context. However, little research has investigated the relationship between human-AI interaction and trust. This study proposes a theoretical model of the relationship between human-AI interaction and team members’ trust during collaboration processes. We conclude that team members’ cognitive and emotional perceptions during the interaction process are associated with their trust towards AI. Moreover, this relationship may be moderated by specific AI implementation traits. Our model provides a holistic view of human-AI interaction and its association with team members’ trust in the context of team collaboration.

    Do We Blame it on the Machine? Task Outcome and Agency Attribution in Human-Technology Collaboration

    With the growing functionality and capability of technology in human-technology interaction, humans are no longer the only autonomous entity. Automated machines increasingly play the role of agentic teammates, and through this process, human agency and machine agency are constructed and negotiated. Previous research on “Computers are Social Actors (CASA)” and self-serving bias suggests that humans might attribute more technology agency and less human agency when the interaction outcome is undesirable, and vice versa. We conducted an experiment to test this proposition by manipulating the task outcome of a game co-played by a user and a smartphone app, and found partially contradictory results. Further, user characteristics, sociability in particular, moderated the effect of task outcome on agency attribution, and affected user experience and behavioral intention. Such findings suggest a complex mechanism of agency attribution in human-technology collaboration, which has important implications for emerging socio-ethical and socio-technical concerns surrounding intelligent technology.

    Consumer Adoption of Artificial Intelligence: A Review of Theories and Antecedents

    People are increasingly adopting technologies powered by artificial intelligence (AI) in their everyday lives. Several researchers have investigated this phenomenon using various theoretical perspectives to explain the motivations behind such behaviour. Our paper reviews this body of knowledge to highlight the technologies, theories, and antecedents of AI adoption investigated thus far in academic research. By analysing publications found in Harzing's Journal Quality List, this paper identifies 52 publications on user adoption of AI, 198 antecedents, and 36 theoretical perspectives used to explain user adoption of AI. The most widely used theoretical perspectives in this area of research are the technology acceptance model (TAM) and the unified theory of acceptance and use of technology (UTAUT). Meanwhile, perceived usefulness, perceived ease of use, and trust are the most studied antecedents. Finally, we discuss the implications of these findings for future research on AI adoption by consumers.

    The unrealized potential of technology in selection assessment

    Technological advances in assessment have radically changed the landscape of employee selection. This paper focuses on three areas where the promise of those technological changes remains undelivered. First, while new ways of measuring constructs are being implemented, new constructs are not being assessed, nor is it always clear what constructs the new methods are measuring. Second, while technology in assessment leads to much greater efficiency, there are also untested assumptions about effectiveness and fairness. There is little consideration of potential negative byproducts of contextual enhancement, removing human judges, and collecting more data. Third, there has been insufficient consideration of the changed nature of work due to technology when assessing candidates. Virtuality, contingent work arrangements, automation, transparency, and globalization should all be having greater impact on selection assessment design. A critique of the current state of affairs is offered, and illustrations of future directions with regard to each aspect are provided.

    Trusting Robots in Teams: Examining the Impacts of Trusting Robots on Team Performance and Satisfaction

    Despite the widespread use of robots in teams, there is still much to learn about what facilitates better performance in teams working with robots. Although trust has been shown to be a strong predictor of performance in all-human teams, we do not fully know whether trust plays the same critical role in teams working with robots. This study examines how to facilitate trust and its importance to the performance of teams working with robots. A 2 (robot identification vs. no robot identification) × 2 (team identification vs. no team identification) between-subjects experiment with 54 teams working with robots was conducted. Results indicate that robot identification increased trust in robots and team identification increased trust in one’s teammates. Trust in robots increased team performance, while trust in teammates increased satisfaction.

    The More I Understand it, the Less I Like it: The Relationship Between Understandability and Godspeed Scores for Robotic Gestures

    This work investigates the relationship between the perception that people develop of a robot and the understandability of the gestures it displays. The experiments involved 30 human observers who rated 45 robotic gestures in terms of the Godspeed dimensions. At the same time, the observers assigned a score to 10 possible interpretations (the same interpretations for all gestures). The results show a statistically significant correlation between the understandability of the gestures - measured through an information theoretic approach - and all Godspeed scores. However, the correlation is positive in some cases (Anthropomorphism, Animacy and Perceived Intelligence), but negative in others (Perceived Safety and Likeability). In other words, higher understandability is not necessarily associated with more positive perceptions.