19 research outputs found

    Examining Trust and Reliance in Collaborations between Humans and Automated Agents

    Human trust and reliance in artificial agents are critical to effective collaboration in mixed human-computer teams. Understanding the conditions under which humans trust and rely upon automated agent recommendations is important, as trust is one of the mechanisms that allow people to interact effectively with a variety of teammates. We conducted exploratory research to investigate how personality characteristics and uncertainty conditions affect human-machine interactions. Participants were asked to determine whether two images depicted the same or different people while simultaneously considering the recommendation of an automated agent. Results of this effort demonstrated a correlation between judgements of agent expertise and user trust. In addition, we found that under conditions of both high and low uncertainty, participants' decisions moved significantly in the direction of the agent's recommendation. Differences in reported trust in the agent were also observed between individuals with low and high levels of extraversion.
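    The reported link between expertise judgements and trust could be examined with a simple correlation. The sketch below is purely illustrative; the column names, the 1-7 scale, and the data values are assumptions, not the authors' materials.

        # Illustrative only: correlating rated agent expertise with reported trust.
        # Column names, the 1-7 scale, and the values are assumed for demonstration.
        import pandas as pd
        from scipy.stats import pearsonr

        ratings = pd.DataFrame({
            "expertise_rating": [4, 6, 5, 7, 3, 6, 5, 7, 2, 6],  # assumed 1-7 Likert
            "trust_score":      [3, 6, 5, 7, 2, 5, 4, 6, 2, 7],  # assumed 1-7 Likert
        })

        r, p = pearsonr(ratings["expertise_rating"], ratings["trust_score"])
        print(f"Pearson r = {r:.2f}, p = {p:.3f}")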

    Towards the Conception of a Virtual Collaborator: Extended Abstract

    Investigating Conformity and the Role of Personality in a Visual Decision Task with Humanoid Robot Peers

    Effective implementation of mixed-initiative teams, where humans work alongside machines, requires a better understanding of the decision-making process and of the social influence exerted by non-human peers. Conformity (the act of adjusting attitudes, beliefs, or behaviors to those of another) is considered to be the strongest of these social pressures. Previous studies have examined conformity when humans interact with a group of robots, but they have failed to identify satisfactory explanations for inconsistent findings. Grounded in trait-activation theory, we propose that personality is a critical factor that needs to be considered. In this effort, we recreated Solomon Asch's famous social psychology experiment and conducted a single-condition study to explore the effects of social influence on decision making. Our results showed that conformity with robot peers did occur. Moreover, scores on the Openness personality trait were a significant predictor of conformity.
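    A minimal sketch of how a personality trait could be tested as a predictor of conformity, assuming trial-level data with an Openness score per participant and a binary conformity outcome; the variable names and values below are illustrative, not the study's data.

        # Illustrative only: logistic regression of conformity on Openness scores.
        # The data values are fabricated for demonstration.
        import pandas as pd
        import statsmodels.api as sm

        data = pd.DataFrame({
            "openness":  [2.1, 3.4, 4.5, 2.8, 4.9, 3.7, 4.2, 2.5, 3.1, 3.8],  # assumed 1-5 scale
            "conformed": [0,   0,   1,   0,   1,   1,   0,   0,   1,   1],    # 1 = conformed on critical trials
        })

        model = sm.Logit(data["conformed"], sm.add_constant(data["openness"])).fit(disp=0)
        print(model.params)  # a positive openness coefficient indicates higher odds of conforming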

    Trusting a Humanoid Robot: Exploring Personality and Trusting Effects in a Human-Robot Partnership

    Research on trust between humans and machines has primarily investigated factors relating to environmental or system characteristics, largely neglecting the individual differences that play an important role in human behavior and cognition. This study examines the role of the Big Five personality traits in trust within a partnership between a human user and a humanoid robot. A Wizard of Oz methodology was used in an experiment to simulate an artificially intelligent robot that could be leveraged as a partner to complete a life-or-death survival simulation. Eye tracking was employed to measure system utilization, and validated psychometric instruments were used to measure trust and personality traits. Results suggest that individuals scoring high on the Openness personality trait may have greater trust in a humanoid robot partner than those scoring low on that dimension.

    Do We Blame it on the Machine? Task Outcome and Agency Attribution in Human-Technology Collaboration

    With the growing functionality and capability of technology in human-technology interaction, humans are no longer the only autonomous entity. Automated machines increasingly play the role of agentic teammates, and through this process human agency and machine agency are constructed and negotiated. Previous research on “Computers are Social Actors” (CASA) and the self-serving bias suggests that humans might attribute more technology agency and less human agency when the interaction outcome is undesirable, and vice versa. We conducted an experiment to test this proposition by manipulating the task outcome of a game co-played by a user and a smartphone app, and found partially contradictory results. Further, user characteristics, sociability in particular, moderated the effect of task outcome on agency attribution and affected user experience and behavioral intention. These findings suggest a complex mechanism of agency attribution in human-technology collaboration, with important implications for emerging socio-ethical and socio-technical concerns surrounding intelligent technology.

    On Conversational Agents in Information Systems Research: Analyzing the Past to Guide Future Work

    Conversational agents (CAs), i.e., software that interacts with its users through natural language, are becoming increasingly prevalent in everyday life as technological advances continue to drive their capabilities. CAs have the potential to support and collaborate with humans in a multitude of tasks and can be used for innovation and automation across a variety of business functions, such as customer service or marketing and sales. In parallel to their increasing popularity in practice, IS researchers have studied a variety of aspects of CAs in recent years, applying different research methods and producing different types of theories. In this paper, we review 36 studies to assess the status quo of CA research in IS, identify gaps regarding both the aspects studied and the methods and theoretical approaches applied, and propose directions for future work in this research area.

    An Empirical Study Exploring Difference in Trust of Perceived Human and Intelligent System Partners

    Intelligent systems are increasingly relied upon as partners in making decisions in business contexts. With advances in artificial intelligence technology and system interfaces, it is increasingly difficult to distinguish these system partners from their human counterparts. Understanding the role of perceived humanness and its impact on trust in these situations is important, as trust is widely recognized as critical to system adoption and effective collaboration. We conducted an exploratory study in which individuals collaborated with an intelligent system partner to make several critical decisions, and analyzed measured trust levels and survey responses. Results suggest that greater trust is experienced when the partner is perceived to be human, and that attributing expert knowledge to a partner drove perceptions of humanness. Partners that adhered to strict syntactic requirements, displayed quick response times, had an unnatural conversational tone, or were unrealistically available were perceived as machine-like.
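    The humanness-trust difference described above could, for example, be examined with a simple two-group comparison; the group labels, the 1-7 scale, and the values below are assumptions for illustration only.

        # Illustrative only: comparing trust scores between participants who perceived
        # the partner as human and those who perceived it as a machine (assumed data).
        from scipy.stats import ttest_ind

        trust_perceived_human   = [5.8, 6.1, 5.4, 6.5, 5.9, 6.2]  # assumed 1-7 trust scores
        trust_perceived_machine = [4.9, 5.2, 4.6, 5.5, 5.0, 4.8]

        t, p = ttest_ind(trust_perceived_human, trust_perceived_machine, equal_var=False)
        print(f"Welch's t = {t:.2f}, p = {p:.3f}")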

    Can we Help the Bots? Towards an Evaluation of their Performance and the Creation of Human Enhanced Artifact for Emotions De-escalation

    We propose a hybrid-intelligence socio-technical artifact that identifies a threshold at which the chatbot requires human intervention in order to continue performing at a level appropriate to the pre-defined objective of the system. We leverage the Yield Shift Theory of Satisfaction, the Intervention Theory, and the Nudge Theory to develop meta-requirements and design principles for this system. We discuss the first iteration of the implementation and evaluation of the artifact's components.
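    A minimal sketch of the kind of threshold-based hand-off the abstract describes: the bot escalates to a human once an estimated satisfaction level drops below a cut-off. The names, the 0-1 scale, and the threshold value are assumptions, not the authors' artifact.

        # Illustrative only: escalate to a human agent when estimated satisfaction
        # falls below an assumed threshold.
        from dataclasses import dataclass

        ESCALATION_THRESHOLD = 0.4  # assumed cut-off on a 0-1 satisfaction estimate

        @dataclass
        class Turn:
            user_text: str
            satisfaction_estimate: float  # e.g., output of an emotion/sentiment classifier

        def needs_human_intervention(history: list[Turn]) -> bool:
            """Return True when the latest satisfaction estimate is below the threshold."""
            return bool(history) and history[-1].satisfaction_estimate < ESCALATION_THRESHOLD

        turns = [Turn("Hi, I need help", 0.8),
                 Turn("That did not answer my question", 0.35)]
        print(needs_human_intervention(turns))  # True: hand the conversation off to a human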

    Working with ELSA – How an Emotional Support Agent Builds Trust in Virtual Teams

    Virtual collaboration is an increasing part of daily life for many employees. Despite its many advantages, however, virtual collaborative work can lead to a lack of trust among virtual team members, e.g., due to spatial separation and limited social interaction. Previous findings indicate that emotional support provided by a conversational agent (CA) can affect human-agent trust and perceived social presence. We developed an emotional support agent called ELSA and conducted a between-subjects online experiment to examine how CAs can provide emotional support in order to increase trust among colleagues in virtual teams. We found that human-agent trust positively influences the level of calculus-based trust among team members and increases team cohesion, whereas perceived anthropomorphism of and social presence towards the CA seem to be less important for trust among team members.