    The Power of Computer-Mediated Communication Theories in Explaining the Effect of Chatbot Introduction on User Experience

    Chatbots have increasingly penetrated our lives as their behavior comes ever closer to imitating a human interlocutor. This paper examines the effect of a chatbot's method of self-presentation on the end-user experience. An interlocutor in a computer-mediated communication (CMC) environment can introduce itself as a chatbot, introduce itself as a human being, or choose not to identify itself. We conducted an experiment to compare these three methods in terms of end-user experience, which comprises social presence, perceived humanness, and service encounter satisfaction. Our data demonstrate that a chatbot that discloses its virtual identity scores significantly lower on social presence and perceived humanness than under the other two choices of self-presentation. Key findings and the associated implications are discussed.

    Resolving the Chatbot Disclosure Dilemma: Leveraging Selective Self-Presentation to Mitigate the Negative Effect of Chatbot Disclosure

    Chatbots are increasingly able to pose as humans. However, this does not hold true if their identity is explicitly disclosed to users, a practice that will become a legal obligation for many service providers in the near future. Previous studies hint at a chatbot disclosure dilemma: disclosing the non-human identity of chatbots comes at the cost of negative user responses. As these responses are commonly attributed to reduced trust in algorithms, this research examines how the detrimental impact of chatbot disclosure on trust can be buffered. Based on computer-mediated communication theory, the authors demonstrate that the chatbot disclosure dilemma can be resolved if disclosure is paired with selective presentation of the chatbot's capabilities. Study results show that while merely disclosing (vs. not disclosing) chatbot identity does reduce trust, pairing chatbot disclosure with selectively presented information on the chatbot's expertise or weaknesses mitigates this negative effect.

    Is Making Mistakes Human? On the Perception of Typing Errors in Chatbot Communication

    The increasing application of Conversational Agents (CAs) is changing the way customers and businesses interact during a service encounter. Research has shown that a CA equipped with social cues (e.g., having a name, greeting users) stimulates the user to perceive the interaction as human-like, which can positively influence the overall experience. Specifically, social cues have been shown to lead to increased customer satisfaction, perceived service quality, and trustworthiness in service encounters. However, many CAs are discontinued because of their limited conversational ability, which can lead to customer dissatisfaction. Nevertheless, making errors and mistakes (e.g., typing errors) can also be seen as a human characteristic. Existing research on human-computer interfaces has paid little attention to CAs that produce human-like errors and to how such errors are perceived in a service encounter. Therefore, we conducted a 2x2 online experiment with 228 participants on how CAs' typing errors and human-like behavior treatments influence users' perceptions, including perceived service quality.

    Supporting Inclusive Learning Using Chatbots? A Chatbot-Led Interview Study

    Supporting student academic success has been one of the major goals of higher education. However, low teacher-to-student ratios make it difficult for students to receive the sufficient, personalized support that they might want. The advancement of artificial intelligence (AI) and conversational agents, such as chatbots, has provided opportunities for assisting the learning of different types of students. This research investigates the opportunities and requirements of chatbots as intelligent helpers that facilitate equity in learning. We developed a chatbot as an experimental platform to investigate the design opportunities of using chatbots to support inclusive learning. Through a chatbot-led user study with 215 undergraduate students, we found that chatbots provide the opportunity to support students who are disadvantaged, who come from diverse life environments, and who have varied learning styles, and to do so in an accessible, interactive, and confidential way.

    Ethics of Conversational User Interfaces

    Building on the prior workshops on conversational user interfaces (CUIs) [2, 40], we tackle the topic of the ethics of CUIs at CHI 2022. Though commercial CUI development continues to advance rapidly, our scholarly dialogue on the ethics of CUIs remains underwhelming. The CUI community has been implicitly concerned with ethics, yet ethics has not yet been made central to the growing body of work. Since ethics is a far-reaching topic, perspectives from the philosophy, design, and engineering domains are integral to our CUI research community. For instance, philosophical traditions, e.g., deontology or virtue ethics, can guide ethical concepts that are relevant for CUIs, e.g., autonomy or trust. The practice of design, through approaches like value sensitive design, can inform how CUIs should be developed. Ethics also comes into play in technical contributions, e.g., privacy-preserving data sharing between conversational systems. By considering such multidisciplinary angles, we arrive at a special topic of interest that ties together philosophy, design, and engineering: conversational disclosure (e.g., sharing personal information), transparency (e.g., how to transparently convey relevant information in a conversational manner), and the vulnerability of diverse user groups, which should be taken into consideration.

    Designing brand chatbots: The impact of chatbot’s personality on the user’s brand personality perception

    Along with advancements in technologies such as machine learning and artificial intelligence, chatbots are increasingly taking the place of employees who work as customer service agents and personal shoppers. Considering that the characteristics of employees can influence a consumer's perception of brand personality (Aaker, 1997), this perception may also be affected by a chatbot's personality. This paper investigates the impact of a chatbot's personality on a user's perception of brand personality. Two brands, and their chatbots, are used as case studies. The empirical study comprises two stages, in which qualitative and quantitative data are both gathered and analyzed. First, an online survey was conducted to investigate the personalities of two existing brands and their respective chatbots. As a result, a gap in personality between one of the brands and its chatbot was identified. Next, two prototypes were built and then tested in interviews. One was an emulator of the current brand chatbot, and the other was a new chatbot designed to have a personality closer to the brand personality. The findings reveal that a chatbot's personality may affect perceived brand personality, although the impact was smaller than expected because participants perceived the two prototypes' personalities as moderately close to the brand personality. Interestingly, interviewees indicated that a chatbot's personality may have a greater influence if it is totally different from the brand personality. Based on the study findings, design considerations are suggested to help practitioners design brand chatbots.