    Designing a Chatbot Social Cue Configuration System

    Social cues (e.g., gender, age) are important design features of chatbots. However, choosing a social cue design is challenging. Although much research has empirically investigated social cues, chatbot engineers have difficulty accessing this knowledge: descriptive knowledge is usually embedded in research articles and is difficult to apply as prescriptive knowledge. To address this challenge, we propose a chatbot social cue configuration system that supports chatbot engineers in accessing descriptive knowledge so that they can make justified social cue design decisions (i.e., decisions grounded in empirical research). We derive two design principles that describe how to extract descriptive knowledge and transform it into a prescriptive, machine-executable representation. In addition, we evaluate the prototypical instantiations in an exploratory focus group and at two practitioner symposia. Our research addresses a contemporary problem and contributes a generalizable concept that helps researchers as well as practitioners leverage existing descriptive knowledge in the design of artifacts.

    Is Making Mistakes Human? On the Perception of Typing Errors in Chatbot Communication

    The increasing application of Conversational Agents (CAs) changes the way customers and businesses interact during a service encounter. Research has shown that CAs equipped with social cues (e.g., having a name, greeting users) stimulate users to perceive the interaction as human-like, which can positively influence the overall experience. Specifically, social cues have been shown to increase customer satisfaction, perceived service quality, and trustworthiness in service encounters. However, many CAs are discontinued because of their limited conversational ability, which can lead to customer dissatisfaction. Nevertheless, making errors and mistakes (e.g., typing errors) can also be seen as a human characteristic. Existing research on human-computer interfaces has paid little attention to CAs that produce human-like errors and to how such errors are perceived in a service encounter. Therefore, we conducted a 2x2 online experiment with 228 participants on how CAs' typing errors and human-like behavior treatments influence users' perception, including perceived service quality.
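    A typing-error treatment like the one studied above can be operationalized by programmatically injecting a plausible typo into the agent's reply before sending it. The sketch below is an illustrative assumption, not the authors' actual manipulation: it simulates a common human typing slip by swapping two adjacent letters inside a word.

    ```python
    import random

    def inject_typo(text, rng=None):
        """Swap two adjacent letters in `text` to simulate a human typing error.

        Only positions where both characters are alphabetic are considered,
        so the swap stays inside a word. Returns `text` unchanged if no such
        position exists.
        """
        rng = rng or random.Random()
        chars = list(text)
        candidates = [i for i in range(len(chars) - 1)
                      if chars[i].isalpha() and chars[i + 1].isalpha()]
        if not candidates:
            return text
        i = rng.choice(candidates)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
        return "".join(chars)
    ```

    In a 2x2 design, the typo condition would send `inject_typo(reply)` while the control condition sends `reply` verbatim.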

    DESIGN FOR FAST REQUEST FULFILLMENT OR NATURAL INTERACTION? INSIGHTS FROM AN EXPERIMENT WITH A CONVERSATIONAL AGENT

    Conversational agents continue to permeate our lives in different forms, such as virtual assistants on mobile devices or chatbots on websites and social media. The interaction with users through natural language offers various aspects for researchers to study as well as application domains for practitioners to explore. In particular, their design represents an interesting phenomenon to investigate, as humans show social responses to these agents and successful design remains a challenge in practice. Compared to digital human-to-human communication, text-based conversational agents can provide complementary, preset answer options with which users can conveniently and quickly respond in the interaction. However, their use might also decrease the perceived humanness and social presence of the agent, as the user does not respond naturally by thinking of and formulating a reply. In this study, we conducted an experiment with N=80 participants in a customer service context to explore the impact of such elements on agent anthropomorphism and user satisfaction. The results show that their use reduces perceived humanness and social presence yet does not significantly increase service satisfaction. On the contrary, our findings indicate that preset answer options might even be detrimental to service satisfaction, as they diminish the natural feel of human-CA interaction.
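    Preset answer options of the kind examined above are typically delivered as "quick replies" attached to an outgoing bot message. The sketch below shows a generic payload shape for such a message; the field names are illustrative assumptions, not any particular platform's exact schema.

    ```python
    def with_quick_replies(text, options):
        """Build a bot message dict carrying preset answer options.

        Each option becomes a button with a display title and a
        machine-readable payload derived from the title.
        """
        return {
            "text": text,
            "quick_replies": [
                {"title": o, "payload": o.upper().replace(" ", "_")}
                for o in options
            ],
        }
    ```

    Omitting the `quick_replies` list yields the free-text condition, in which the user must formulate a reply themselves.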

    THE EFFECT OF CHATBOTS’ RESPONSE LATENCY ON USERS’ TRUST

    Chatbots are widely used as conversational agents and are designed using anthropomorphic design guidelines. However, response latency (the time it takes a chatbot or person to provide a response after receiving a message) as an anthropomorphic design cue in a conversational user interface has not been the subject of many studies. Even though a system's response latency has an undeniable effect on users' satisfaction and performance, the connection between users' trust and chatbots' response time has not been addressed. A critical reason that executives are reluctant to implement chatbots for their businesses is user adoption hesitancy: customers and users are unwilling to engage with a chatbot because they do not trust it. Therefore, this study used empirical data collected from chatbot users to investigate the effect of chatbots' response latency on users' trust, both cognitive and affective. The results suggest that dynamically delaying a chatbot's response increases users' cognitive trust but has no significant impact on users' affective trust. General sentiment analysis of chatbot users' responses to an open-ended question describing their experiences interacting with chatbots suggests that dynamically delaying the chatbot's response produces higher positive sentiment and trust sentiment than a near-instant response. Other findings are discussed, and some ideas for future research are also presented in this paper.
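    A dynamic response delay of the kind evaluated above is commonly computed from the length of the outgoing message, so that the wait approximates human typing speed. A minimal sketch follows; the typing rate, base latency, and cap are assumed parameter values, as the abstract does not report an exact formula.

    ```python
    def dynamic_delay(response, chars_per_second=20.0, base=0.5, cap=3.5):
        """Seconds to wait before sending `response`, scaled to its length.

        base: fixed "reading/thinking" latency added to every reply.
        cap: upper bound so long replies do not stall the conversation.
        All parameter values are illustrative assumptions.
        """
        return min(cap, base + len(response) / chars_per_second)
    ```

    The near-instant condition corresponds to dispatching immediately; the dynamic condition sleeps for `dynamic_delay(reply)` seconds (often while showing a typing indicator) before dispatching.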

    AI technologies & value co-creation in luxury context

    The aim of the paper is to contribute to the literature on the conceptualization of technology as an operant resource and on the role of Artificial Intelligence (AI) in value co-creation processes. Resource integration and interaction determine such co-creation; however, the issue pivots on whether AI is effectively able to co-create value as an operant resource. Using an integrated framework based on Service Science (SS), the Viable Systems Approach (VSA), and the Variety Information Model (VIM), the authors show how different kinds of AI technology correspond to different levels of co-creation. Our conceptual study highlights how AI (e.g., a chatbot) with its client-profiling capacity achieves consonance in a luxury goods context, thus interpreting customer expectations. At the same time, the man-machine virtuous circuit qualifies the shift from AI (a combination of various technologies with cognitive abilities, such as listening, comprehending, acting, learning, and at times speaking, capable of matching human intelligence) to the more potent IA (Intelligence Augmentation).

    You are an Idiot! – How Conversational Agent Communication Patterns Influence Frustration and Harassment

    Conversational Agents (CAs) in the form of digital assistants on smartphones, chatbots on social media, or physically embodied systems are an increasingly often applied form of user interface for digital systems. The human-like design of CAs (e.g., having names, greeting users, and using self-references) leads users to subconsciously react to them as if they were interacting with a human. Recent research has shown that this social component of interacting with a CA leads to various benefits, such as increased service satisfaction, enjoyment, and trust. However, numerous CAs have been discontinued because of inadequate responses to user requests or errors caused by the limited functionalities and knowledge of a CA, which can lead to frustration. Therefore, investigating the causes of frustration and other related emotions and reactions is highly relevant. Against this background, this study investigates via an online experiment with 169 participants how different communication patterns influence users' perception of, frustration with, and harassment of an error-producing CA.

    How to Make Chatbots Productive – A User-Oriented Implementation Framework

    Many organizations are pursuing the implementation of chatbots to automate service processes. However, previous research has highlighted practical setbacks in the implementation of chatbots in corporate environments. To gain practical insights into the issues surrounding implementation processes from several perspectives and stages of deployment, we conducted semi-structured interviews with developers and experts in chatbot development. Using qualitative content analysis and based on a review of the literature on human-computer interaction (HCI), information systems (IS), and chatbots, we present an implementation framework that supports the successful deployment of chatbots and discuss the implementation of chatbots through a user-oriented lens. The proposed framework contains 101 guiding questions to support chatbot implementation in an eight-step process. The questions are structured according to the people, activity, context, and technology (PACT) framework. The adapted PACT framework is evaluated through expert interviews and a focus group discussion (FGD) and is further applied in a case study. The framework can be seen as a bridge between science and practice, serving as a notional structure for practitioners to introduce a chatbot in a structured and user-oriented manner.

    Match or Mismatch? How Matching Personality and Gender between Voice Assistants and Users Affects Trust in Voice Commerce

    Despite the ubiquity of voice assistants (VAs), they see limited adoption in the form of voice commerce, an online sales channel using natural language. A key barrier to the widespread use of voice commerce is the lack of user trust. To address this problem, we draw on similarity-attraction theory to investigate how trust is affected when VAs match the user’s personality and gender. We conducted a scenario-based experiment (N = 380) with four VAs designed to have different personalities and genders by customizing only the auditory cues in their voices. The results indicate that a personality match increases trust, while the effect of a gender match on trust is non-significant. Our findings contribute to research by demonstrating that some types of matches between VAs and users are more effective than others. Moreover, we reveal that it is important for practitioners to consider auditory cues when designing VAs for voice commerce.

    Scalable Design Evaluation for Everyone! Designing Configuration Systems for Crowd-Feedback Request Generation

    Design evaluation is an important step during software development to ensure users’ requirements are met. Crowd feedback represents an effective approach to tackling the scalability issues of traditional design evaluation methods. However, crowd-feedback systems are usually developed for a fixed use case, and designers lack knowledge of how to build individual crowd-feedback systems themselves; consequently, they are rarely applied in practice. To address this challenge, we propose the design of a configuration system to support designers in creating individual crowd-feedback requests. By conducting expert interviews (N=14) and an exploratory literature review, we derive four design rationales for such configuration systems and propose a prototypical configuration system instantiation. We evaluate this instantiation in exploratory focus groups (N=10). The results show that feedback requesters appreciate guidance. However, there seems to be a trade-off between complexity and flexibility. With our research, we contribute a generalizable concept that supports feedback requesters in creating individualized crowd-feedback requests, enabling scalable design evaluation for everyone.