
    Mr. and Mrs. Conversational Agent - Gender Stereotyping in Judge-Advisor Systems and the Role of Egocentric Bias

    Current technological advancements of conversational agents (CAs) promise new potential for human-computer collaboration. Yet both practitioners and researchers face challenges in designing these information systems so that CAs increase not only in intelligence but also in effectiveness. Drawing on social response theory as well as the literature on trust and judge-advisor systems, we examine the roles of gender stereotyping and egocentric bias in cooperative CAs. Specifically, in an online experiment with 87 participants, we investigate the effects of a CA’s gender and a user’s subjective knowledge in two stereotypically male knowledge fields. The results indicate (1) that female (vs. male) CAs and stereotypically female (vs. male) traits increase a user’s perceived competence of CAs and (2) that an increase in a user’s subjective knowledge decreases trusting intentions in CAs. Thus, our contributions provide new and counterintuitive insights that are crucial for the effectiveness of cooperative CAs.

    Automating Crisis Communication in Public Institutions – Towards Ethical Conversational Agents That Support Trust Management

    To improve disaster relief and crisis communication, public institutions (PIs) such as administrations rely on automation and technology. As one example, the use of conversational agents (CAs) has increased. To ensure that information and advisories are taken seriously, it is important for PIs to be perceived as a trusted source and a trustworthy point of contact. In this study, we therefore examine how CAs can be applied by PIs to, on the one hand, automate their crisis communication and, on the other hand, maintain or even increase their perceived trustworthiness. We developed two CAs – one equipped with ethical cues in order to be perceived as more trustworthy and one without such cues – and began an online experiment to evaluate the effects. Our first results indicate that applying ethical principles such as fairness, transparency, security, and accountability has a positive effect on the perceived trustworthiness of the CA.

    Thinking Technology as Human: Affordances, Technology Features, and Egocentric Biases in Technology Anthropomorphism

    Advanced information technologies (ITs) are increasingly assuming tasks that have previously required human capabilities, such as learning and judgment. What drives this technology anthropomorphism (TA), or the attribution of humanlike characteristics to IT? What is it about users, IT, and their interactions that influences the extent to which people think of technology as humanlike? While TA can have positive effects, such as increasing user trust in technology, what are its negative consequences? To provide a framework for addressing these questions, we advance a theory of TA that integrates the general three-factor anthropomorphism theory from social and cognitive psychology with the needs-affordances-features perspective from the information systems (IS) literature. The theory we construct helps to explain and predict which technological features and affordances are likely (1) to satisfy users’ psychological needs and (2) to lead to TA. More importantly, we problematize some negative consequences of TA. Technology features and affordances contributing to TA can intensify users’ anchoring on their elicited agent knowledge and psychological needs and can also weaken the adjustment process in TA under cognitive load. The intensified anchoring and weakened adjustment processes increase egocentric biases that lead to negative consequences. Finally, we propose a research agenda for TA and egocentric biases.

    “May I Help You?”: Exploring the Effect of Individuals’ Self-Efficacy on the Use of Conversational Agents

    Conversational agents (CAs) increasingly permeate our lives and offer us assistance with a myriad of tasks. Despite promising measurable benefits, CA use remains below expectations. To complement prior technology-focused research, this study takes a user-centric perspective and explores an individual’s characteristics and dispositions as factors influencing CA use. In particular, we investigate how individuals’ self-efficacy, i.e., their belief in their own skills and abilities, affects their decision to seek assistance from a CA. We present the research model and study design for a laboratory experiment. In the experiment, participants complete two tasks embedded in realistic scenarios, including websites with integrated CAs that they might use for assistance. Initial results confirm the influence of individuals’ self-efficacy beliefs on their decision to use CAs. By taking a human-centric perspective and observing actual behavior, we expect to contribute to CA research by exploring a factor likely to drive CA use.

    Between Anthropomorphism, Trust, and the Uncanny Valley: a Dual-Processing Perspective on Perceived Trustworthiness and Its Mediating Effects on Use Intentions of Social Robots

    Designing social robots with the aim of increasing their acceptance is crucial for the success of their implementation. However, even though increasing anthropomorphism is often seen as a promising way to achieve this goal, the uncanny valley effect proposes that anthropomorphism can be detrimental to acceptance unless robots are almost indistinguishable from humans. Against this background, we use a dual-processing theory approach to investigate whether an uncanny valley of perceived trustworthiness (PT) can be observed for social robots and how this effect differs between the intuitive and deliberate reasoning systems. The results of an experiment with four conditions and 227 participants provide support for the uncanny valley effect. Furthermore, mediation analyses suggested that use intention decreases through both reduced intuitive and reduced deliberate PT for medium levels of anthropomorphism. However, for high levels of anthropomorphism (indistinguishable from a real human), only intuitive PT determined use intention. Consequently, our results indicate both advantages and pitfalls of anthropomorphic design.

    Adaptive Conversational Agents: Exploring the Effect of Individualized Design on User Experience

    Conversational agents (CAs) offer a range of benefits to firms and users, yet user experiences are often unsatisfying. One explanation might be that individual differences among users are only insufficiently addressed in today’s CA design. Drawing on communication accommodation theory, we develop a research model and study design to investigate how adapting CA design to users’ individual characteristics influences the user experience. In particular, we develop text-based CAs (i.e., chatbots) that are adapted to users’ rational/intuitive cognitive style or need for interaction, and compare the user experience to that of non-adapted CAs. Initial results from our pilot study (n=37) confirm that individualized CA design can enhance the user experience. We expect to contribute to the growing research field of adaptive CA design. Moreover, our results will provide guidance for developers on how to facilitate a pleasing user experience by adapting CA design to users.

    Ambidexterity Through the Lens of Conventions? A Qualitative Study on Personal Virtual Assistants

    Personal virtual assistants (PVAs) are expected to effectively fulfil and support employees’ tasks in organizations. Today, PVAs are mainly trusted to take over simple administrative tasks, thus limiting their potential long-term impact on employees and entire organizations. To overcome this shortcoming, we introduce the pragmatic perspective of the Economics of Conventions (EC) to analyze and understand employees’ plural motives and behaviors, which may explain sustained or fragmented PVA use in organizations, especially taking the organizational challenge of ambidexterity into account. In doing so, we provide a deepened understanding of PVAs’ capabilities and give propositions for their organizational implementation and use. We also offer new avenues for future research by calling for a more holistic theoretical foundation of organizational artificial intelligence solutions that considers and represents organizations and their employees in their complexity, respectively their plural orders of worth.