44 research outputs found

    AI/Human Augmentation: A Study on Chatbot – Human Agent Handovers

    The combination of chatbots with live chats supported by human agents creates a new type of human–machine coordination problem. Prior research on chatbot interactions has focused mostly on the interaction between end users and chatbots, and there is limited research on the interaction between human chat agents and chatbots. This study aims to fill this gap, contributing to the body of research on coordinating humans and artificial conversational agents by addressing the research question: How can the handover between chatbots and chat employees be handled to ensure a good user experience? The study aims to contribute to the emerging discipline of Human-Centered AI, providing insights on how to create AI-enabled systems that amplify and augment human abilities while preserving human control, by identifying key aspects that need to be considered when integrating chatbots into live chat workflows.

    Trust in Digital Humans

    With advances in technology, the interaction between organisations and consumers is evolving gradually from ‘human-to-human’ to ‘human-to-machine’, due, in part, to improvements in Artificial Intelligence (AI). One such technology, the AI-enabled digital human, is unique in its combination of technology and humanness and is being adopted by firms to support customer services and other business processes. However, a number of questions arise with this new way of interacting, among which is whether people will trust a digital human in the same way that they trust people. To address this question, this study draws on technology trust theory and examines the roles of social presence, anthropomorphism, and privacy to understand trust and people’s readiness to engage with digital humans. The results aim to benefit organisations wanting to implement AI-enabled digital humans in the workplace.

    Chatbot Quality Assurance Using RPA

    Chatbots are becoming mainstream consumer engagement tools, and well-developed chatbots are already transforming user experience and personalization. Chatbot Quality Assurance (QA) is an essential part of the development and deployment process, regardless of whether it is conducted by one entity (the business) or two (developers and the business), to ensure ideal results. Robotic Process Automation (RPA) can be explored as a potential facilitator to improve, augment, streamline, or optimize chatbot QA. RPA is ideally suited for tasks that can be clearly defined (rule-based) and are repetitive in nature. This limits its ability to become an all-encompassing technology for chatbot QA testing, but it can still be useful in replacing part of the manual QA testing of chatbots. Chatbot QA is a complex domain in its own right and has its own challenges, including the lack of streamlined/standardized testing protocols and quality measures, though traits like intent recognition, responsiveness, and conversational flow are usually tested, especially at the end-user testing phase. RPA can be useful in certain areas of chatbot QA, including increasing the sample size of training and testing datasets, generating input variations, splitting testing/conversation datasets, and testing for typo resiliency. The general rule is that the easier a testing process is to clearly define and set rules for, the better a candidate it is for RPA-based testing. This naturally tilts RPA towards technical testing and makes it largely unfeasible as an end-user testing alternative. It has the potential to optimize chatbot QA in conjunction with AI and ML testing tools.
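    As a minimal illustration of the rule-based test-data expansion described above (not taken from the paper; the helper name and swap strategy are assumptions for this sketch), a QA script might mechanically generate typo variants of a seed utterance for typo-resiliency checks:

    ```python
    import random

    def typo_variants(utterance, n=3, seed=42):
        """Generate simple typo variants of a test utterance by swapping
        adjacent characters -- one rule-based way a QA script could expand
        a chatbot test dataset for typo-resiliency checks (illustrative only)."""
        rng = random.Random(seed)  # fixed seed keeps generated test data reproducible
        variants = set()
        if len(utterance) < 2:
            return []
        for _ in range(100 * n):  # bounded attempts, in case few distinct swaps exist
            if len(variants) >= n:
                break
            chars = list(utterance)
            i = rng.randrange(len(chars) - 1)
            chars[i], chars[i + 1] = chars[i + 1], chars[i]  # transpose two neighbours
            variant = "".join(chars)
            if variant != utterance:
                variants.add(variant)
        return sorted(variants)

    # Each variant should still be routed to the same intent by the chatbot under test.
    for v in typo_variants("check my order status"):
        print(v)
    ```

    Because the generation rule is fully deterministic and mechanical, it is exactly the kind of repetitive, clearly defined task the abstract identifies as a good fit for RPA, while judging the chatbot's replies would remain a human or ML task.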

    Resolving the Chatbot Disclosure Dilemma: Leveraging Selective Self-Presentation to Mitigate the Negative Effect of Chatbot Disclosure

    Chatbots are increasingly able to pose as humans. However, this does not hold true if their identity is explicitly disclosed to users—a practice that will become a legal obligation for many service providers in the imminent future. Previous studies hint at a chatbot disclosure dilemma in that disclosing the non-human identity of chatbots comes at the cost of negative user responses. As these responses are commonly attributed to reduced trust in algorithms, this research examines how the detrimental impact of chatbot disclosure on trust can be buffered. Based on computer-mediated communication theory, the authors demonstrate that the chatbot disclosure dilemma can be resolved if disclosure is paired with selective presentation of the chatbot’s capabilities. Study results show that while merely disclosing (vs. not disclosing) chatbot identity does reduce trust, pairing chatbot disclosure with selectively presented information on the chatbot’s expertise or weaknesses is able to mitigate this negative effect.

    Artificial Intelligence Powered Chatbot for Business

    Text has become an essential mode of interaction between people. The use of chatbots has grown quickly in business areas including marketing, customer service, and e-commerce. Users value chatbots because they are fast, intuitive, and convenient. This paper discusses the artificial intelligence technology used to develop and implement chatbots that organizations can use to benefit their businesses. A chatbot is a computer program that can interact with a human using natural language. The three business areas that use chatbots the most are marketing, customer service, and e-commerce. The roles of chatbots in these areas are discussed in this paper. AI-powered chatbots transform business by reducing costs, increasing revenue, and enhancing the customer experience. The benefits and limitations of chatbots are also discussed.

    Exploring the Impact of Inclusive PCA Design on Perceived Competence, Trust and Diversity

    Pedagogical Conversational Agents (PCAs) are conquering academia as learning facilitators. Due to user heterogeneity and the need for more inclusion in education, inclusive PCA design becomes relevant but still remains understudied. Our contribution thus investigates the effects of inclusive PCA design on competence, trust, and diversity awareness in a between-subjects experiment with two contrastingly designed prototypes (an inclusive and a non-inclusive PCA) tested among 106 German university students. As expected given social desirability, the results show that 81.5% of the participants consider an inclusive design important. However, at the same time, the inclusive chatbot is rated as significantly less competent. In contrast, we did not measure a significant effect regarding trust, but we did find a highly significant, strongly positive effect on diversity awareness. We interpret these results with the help of the qualitative information provided by the respondents and discuss the arising implications for inclusive HCI design.

    Health-Seeking Behaviour and the use of Artificial Intelligence-based Healthcare Chatbots among Indian Patients

    Artificial Intelligence (AI) based healthcare chatbots can scale up healthcare services in terms of diagnosis and treatment. However, the use of such chatbots may differ among the Indian population. This study investigates the influence of health-seeking behaviour and the availability of traditional, complementary, and alternative medicine systems on healthcare chatbots. A quantitative study using a survey technique collected data from the Indian population. Items measuring awareness of a chatbot’s attributes and services, trust in chatbots, health-seeking behaviour, traditional, complementary, and alternative medicine, and use of chatbots were adapted from previous scales. A convenience sample was used to collect data from the urban population; 397 responses were obtained, and statistical analysis was performed. Awareness of a chatbot’s attributes and services impacted trust in chatbots. Health-seeking behaviour positively impacted the use of chatbots and enhanced the impact of trust in a chatbot on its use. Traditional, complementary, and alternative medicine was not included in the chatbots, which negatively impacted their use; at the same time, it dampened the impact of trust in chatbots on their use. The study was limited to the urban population and a convenience sample because of the need for Internet access and a smart device to use the chatbots. The results of the study should be used cautiously: they support inferences about the existence of relationships rather than the magnitude of effects. The study’s outcome encourages the availability of chatbots, given the health-seeking behaviour of the Indian urban population. The study also highlights the need for creating intelligent agents with knowledge of traditional, complementary, and alternative medicine, and contributes to the knowledge of using chatbots in the Indian context. While earlier intention studies focused mainly on chatbot features or user characteristics, this study examines the healthcare system and the services unique to India.