
    The Drivers of Customer Satisfaction in Interactions with Virtual Agents: Evidence from South Africa

    The advent of the fourth industrial revolution has increased the application of artificial intelligence in various marketing-related activities. Organisations are increasingly using artificial intelligence, in the form of virtual agents, to aid and facilitate interactions with consumers. Clearly, virtual agents need to be capable of fulfilling customer needs. However, this raises the question of whether they should be cold and unfeeling or should imitate the most innate human qualities. This research therefore set out to examine what impact humanness, perceived agency, trust, and emotionality have on customer satisfaction in interactions with virtual agents, or, more specifically, AI chatbots. The study employed a quantitative research design. Data were collected using a survey questionnaire, yielding a total of 207 respondents, and were analysed in IBM SPSS 28, where a linear regression was performed. The results indicate that humanness and perceived agency were significant predictors of customer satisfaction, whereas emotionality and trust were not. The results of this research have theoretical and practical implications for both practitioners and researchers.
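    The analysis described in this abstract can be illustrated with a minimal sketch: an ordinary least squares regression of satisfaction on the four predictors. The variable names and synthetic data below are assumptions for illustration only; the study itself analysed survey responses (n = 207) in IBM SPSS 28.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 207  # matches the reported sample size

# Synthetic 1-5 Likert-style predictors (not the study's data).
humanness = rng.uniform(1, 5, n)
agency = rng.uniform(1, 5, n)
trust = rng.uniform(1, 5, n)
emotionality = rng.uniform(1, 5, n)

# Satisfaction driven mainly by humanness and agency, mirroring the
# reported finding that only these two were significant predictors.
satisfaction = 0.5 * humanness + 0.4 * agency + rng.normal(0, 0.5, n)

# Design matrix with an intercept column, then OLS via least squares.
X = np.column_stack([np.ones(n), humanness, agency, trust, emotionality])
beta, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
print(dict(zip(["intercept", "humanness", "agency", "trust", "emotionality"],
               beta.round(2))))
```

With data simulated this way, the estimated coefficients for humanness and agency recover values near 0.5 and 0.4, while trust and emotionality stay near zero, which is the pattern of results the abstract reports.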

    Judgment of the Humanness of an Interlocutor Is in the Eye of the Beholder

    Despite tremendous advances in artificial language synthesis, no machine has so far succeeded in deceiving a human. Most research has focused on analyzing the behavior of "good" machines. We here choose the opposite strategy, analyzing the behavior of "bad" humans, i.e., humans perceived as machines. The Loebner Prize in Artificial Intelligence features humans and artificial agents trying to convince judges of their humanness via computer-mediated communication. Using this setting as a model, we investigated whether the linguistic behavior of human subjects perceived as non-human would enable us to identify some of the core parameters involved in the judgment of an agent's humanness. We analyzed descriptive and semantic aspects of dialogues in which subjects succeeded or failed to convince judges of their humanness. Using cognitive and emotional dimensions in a global behavioral characterization, we demonstrate important differences in the patterns of behavioral expressiveness of the judges depending on whether they perceived their interlocutor as human or machine. Furthermore, the indicators of interest displayed by the judges were predictive of the final judgment of humanness. Thus, we show that the judgment of an interlocutor's humanness during a social interaction depends not only on the interlocutor's behavior, but also on the judge. Our results thus demonstrate that the judgment of humanness is in the eye of the beholder.

    Who’s Bad? – The Influence of Perceived Humanness on Users’ Intention to Complain about Conversational Agent Errors to Others

    The perception of humanness in a conversational agent (CA) has been shown to strongly impact users’ processing of and reaction to it. However, it is largely unclear how this perception of humanness influences users’ processing of errors and subsequent intention for negative word-of-mouth (WoM). In this context, we propose two pathways between perceived humanness and negative WoM: a cognitive pathway and an affective pathway. In a 2x2 online experiment with chatbots, we manipulated both the occurrence of errors and the degree of humanlike design. Our findings indicate that perceived humanness affects users’ intentions towards negative WoM through the cognitive pathway: users’ confirmation of expectations is increased by perceived humanness, reducing negative WoM intentions. However, it has no effect on users’ anger and frustration and does not interact with the effects of errors. For practice, our results indicate that adding humanlike design elements can be a means to reduce negative WoM.

    No matter how real: Out-group faces convey less humanness

    Past research on real human faces has shown that out-group members are commonly perceived as lacking human qualities, which links them to machines or objects. In this study, we aimed to test whether similar out-group effects generalize to artificial faces. Caucasian participants were presented with images of male Caucasian and Indian faces and had to decide whether human traits (naturally and uniquely human) as well as emotions (primary and secondary) could or could not be attributed to them. In line with previous research, we found that naturally human traits and secondary emotions were attributed less often to the out-group (Indian) than to the in-group (Caucasian), and this applied to both real and artificial faces. The findings extend prior research and show that artificial stimuli readily evoke intergroup processes. This has implications for the design of animated characters, suggesting that out-group faces convey less humanness regardless of how lifelike their representation is.

    “Look Closer”: Anthropomorphic Design and Perception of Anthropomorphism in Conversational Agent Research

    Conversational agents have been attracting increased attention in IS research and increased adoption in practice. They provide an AI-driven, conversation-like interface and tap into the anthropomorphism bias of their users. There has been extensive research on improving this effect for over a decade, since increased anthropomorphism leads to increased service satisfaction, trust, and other effects on the user. This work examines the current state of research on anthropomorphism and anthropomorphic design to guide future research. It utilizes a modified structured literature analysis to extract and classify the examined constructs and their relationships in the hypotheses of the current literature. We provide an overview of current research, highlighting focus areas. Based on our results, we formulate several open research questions and provide the IS community with directions for future research.

    Understanding the Impact that Response Failure has on How Users Perceive Anthropomorphic Conversational Service Agents: Insights from an Online Experiment

    Conversational agents (CAs) have attracted interest from organizations due to their potential to provide automated services and the feeling of humanlike interaction. Emerging studies on CAs have found that humanness has a positive impact on customer perception and have explored approaches for their anthropomorphic design, which comprises both their appearance and behavior. While these studies provide valuable knowledge on how to design humanlike CAs, we still do not sufficiently understand this technology’s limited conversational capabilities and their potentially detrimental impact on user perception. These limitations often lead to frustrated users and discontinued CAs in practice. We address this gap by investigating the impact of response failure, which we understand as a CA’s inability to provide a meaningful reply, in a service context. To do so, we draw on the computers-are-social-actors paradigm and the theory of the uncanny valley. In an experiment with 169 participants, we found that 1) response failure harmed the extent to which people perceived CAs as human and increased their feelings of uncanniness, 2) humanness (uncanniness) positively (negatively) influenced familiarity and service satisfaction, and 3) response failure had a significant negative impact on user perception yet did not lead to the sharp drop that uncanny valley theory posits. Thus, our study contributes to better explaining the impact that text-based CAs’ failure to respond has on customer perception and satisfaction in a service context, in relation to the agents’ design.

    From automata to animate beings: the scope and limits of attributing socialness to artificial agents

    Understanding the mechanisms and consequences of attributing socialness to artificial agents has important implications for how we can use technology to lead more productive and fulfilling lives. Here, we integrate recent findings on the factors that shape behavioral and brain mechanisms that support social interactions between humans and artificial agents. We review how visual features of an agent, as well as knowledge factors within the human observer, shape attributions across dimensions of socialness. We explore how anthropomorphism and dehumanization further influence how we perceive and interact with artificial agents. Based on these findings, we argue that the cognitive reconstruction within the human observer is likely to be far more crucial in shaping our interactions with artificial agents than previously thought, while the artificial agent's visual features are possibly of lesser importance. We combine these findings to provide an integrative theoretical account based on the "like me" hypothesis, and discuss the key role played by the Theory-of-Mind network, especially the temporoparietal junction, in the shift from mechanistic to social attributions. We conclude by highlighting outstanding questions on the impact of long-term interactions with artificial agents on the behavioral and brain mechanisms of attributing socialness to these agents.

    Affect and believability in game characters: a review of the use of affective computing in games

    Virtual agents are important in many digital environments. Designing a character that highly engages users in interaction is an intricate task constrained by many requirements. One aspect that has gained more attention recently is the affective dimension of the agent. Several studies have addressed the possibility of developing an affect-aware system for a better user experience. Particularly in games, including emotional and social features in NPCs adds depth to the characters, enriches interaction possibilities, and, combined with a basic level of competence, creates a more appealing game. Design requirements for emotionally intelligent NPCs differ from those for general autonomous agents, with the main goal being a stronger player-agent relationship as opposed to problem solving and goal assessment. Nevertheless, deploying an affective module in NPCs adds to the complexity of the architecture and its constraints. In addition, using such composite NPCs in games seems beyond current technology, despite some brave attempts. However, a MARPO-type modular architecture would seem a useful starting point for adding emotions.