
    The influence of conversational agent embodiment and conversational relevance on socially desirable responding

    Conversational agents (CAs) are becoming an increasingly common component in a wide range of information systems. A great deal of research to date has focused on enhancing traits that make CAs more humanlike. However, few studies have examined the influence such traits have on information disclosure. This research builds on self-disclosure, social desirability, and social presence theories to explain how CA anthropomorphism affects disclosure of personally sensitive information. Taken together, these theories suggest that as CAs become more humanlike, the social desirability of user responses will increase. In this study, we use a laboratory experiment to examine the influence of two elements of CA design, conversational relevance and embodiment, on the answers people give in response to sensitive and non-sensitive questions. We compare the responses given to various CAs with those given in a face-to-face interview and an online survey. The results show that for sensitive questions, CAs with better conversational abilities elicit more socially desirable responses from participants, with a weaker effect found for embodiment. These results suggest that for applications where eliciting honest answers to sensitive questions is important, CAs that are "better" in terms of humanlike realism may not be better at eliciting truthful responses.

    Conversational Agents, Conversational Relevance, and Disclosure: Comparing the Effectiveness of Chatbots and SVITs in Eliciting Sensitive Information

    Conversational agents (CAs) in various forms are used in a variety of information systems. An abundance of prior research has focused on evaluating the traits that make CAs effective. Most studies assume, however, that increasing the anthropomorphism of an agent will improve its performance. In a sensitive information disclosure task, that may not always be the case. We leverage self-disclosure, social desirability, and social presence theories to predict how differing modes of conversational agents affect information disclosure. In this paper, we propose a laboratory experiment to compare how the mode of a given CA (text-based chatbot or voice-based smart speaker), paired with either high or low levels of conversational relevance, affects the disclosure of personally sensitive information. In addition to understanding influences on disclosure, we aim to break down the mechanisms through which CA design influences disclosure.

    The effect of conversational agent skill on user behavior during deception

    Conversational agents (CAs) are an integral component of many personal and business interactions. Many recent advancements in CA technology have attempted to make these interactions more natural and human-like. However, it is currently unclear how human-like traits in a CA impact the way users respond to questions from the CA. In some applications where CAs may be used, detecting deception is important. Design elements that make CA interactions more human-like may induce undesired strategic behaviors from human deceivers to mask their deception. To better understand this interaction, this research investigates the effect of CA conversational skill, that is, the ability of the CA to mimic human conversation, on behavioral indicators of deception. Our results show that cues of deception vary depending on CA conversational skill, and that increased conversational skill leads to users engaging in strategic behaviors that are detrimental to deception detection. This finding suggests that for applications in which it is desirable to detect when individuals are lying, the pursuit of more human-like interactions may be counter-productive.

    On Conversational Agents in Information Systems Research: Analyzing the Past to Guide Future Work

    Conversational agents (CAs), i.e., software that interacts with its users through natural language, are becoming increasingly prevalent in everyday life as technological advances continue to significantly drive their capabilities. CAs exhibit the potential to support and collaborate with humans in a multitude of tasks and can be used for innovation and automation across a variety of business functions, such as customer service or marketing and sales. Parallel to their increasing popularity in practice, IS researchers have engaged in studying a variety of aspects related to CAs in the last few years, applying different research methods and producing different types of theories. In this paper, we review 36 studies to assess the status quo of CA research in IS, identify gaps regarding both the studied aspects and the applied methods and theoretical approaches, and propose directions for future work in this research area.

    Validity of Chatbot Use for Mental Health Assessment: Experimental Study

    BACKGROUND: Mental disorders in adolescence and young adulthood are major public health concerns. Digital tools such as text-based conversational agents (ie, chatbots) are a promising technology for facilitating mental health assessment. However, the human-like interaction style of chatbots may induce potential biases, such as socially desirable responding (SDR), and may require further effort to complete assessments. OBJECTIVE: This study aimed to investigate the convergent and discriminant validity of chatbots for mental health assessments, the effect of assessment mode on SDR, and the effort required by participants for assessments using chatbots compared with established modes. METHODS: In a counterbalanced within-subject design, we assessed 2 different constructs—psychological distress (Kessler Psychological Distress Scale and Brief Symptom Inventory-18) and problematic alcohol use (Alcohol Use Disorders Identification Test-3)—in 3 modes (chatbot, paper-and-pencil, and web-based), and examined convergent and discriminant validity. In addition, we investigated the effect of mode on SDR, controlling for perceived sensitivity of items and individuals’ tendency to respond in a socially desirable way, and we also assessed the perceived social presence of modes. Including a between-subject condition, we further investigated whether SDR is increased in chatbot assessments when applied in a self-report setting versus when human interaction may be expected. Finally, the effort (ie, complexity, difficulty, burden, and time) required to complete the assessments was investigated. RESULTS: A total of 146 young adults (mean age 24, SD 6.42 years; n=67, 45.9% female) were recruited from a research panel for laboratory experiments. The results revealed high positive correlations (all P<.001) of measures of the same construct across different modes, indicating the convergent validity of chatbot assessments. 
Furthermore, there were no correlations between the distinct constructs, indicating discriminant validity. Moreover, there were no differences in SDR between modes or between conditions in which human interaction was and was not expected, although the perceived social presence of the chatbot mode was higher than that of the established modes (P<.001). Finally, greater effort (all P<.05) and more time (P<.001) were needed to complete chatbot assessments than the established modes. CONCLUSIONS: Our findings suggest that chatbots may yield valid results. Furthermore, an understanding of chatbot design trade-offs in terms of potential strengths (ie, increased social presence) and limitations (ie, increased effort) when assessing mental health was established.

    Exploring the Impact of Inclusive PCA Design on Perceived Competence, Trust and Diversity

    Pedagogical Conversational Agents (PCAs) are gaining ground in academia as learning facilitators. Due to user heterogeneity and the need for more inclusion in education, inclusive PCA design is becoming relevant but remains understudied. Our contribution thus investigates the effects of inclusive PCA design on perceived competence, trust, and diversity awareness in a between-subjects experiment with two contrastingly designed prototypes (an inclusive and a non-inclusive PCA) tested among 106 German university students. As expected given social desirability, the results show that 81.5% of the participants consider an inclusive design important. At the same time, however, the inclusive chatbot is rated as significantly less competent. In contrast, we did not measure a significant effect on trust, but we found a highly significant, strongly positive effect on diversity awareness. We interpret these results with the help of the qualitative information provided by the respondents and discuss the resulting implications for inclusive HCI design.

    Promoting Sustainable Mobility Beliefs with Persuasive and Anthropomorphic Design: Insights from an Experiment with a Conversational Agent

    Sustainable mobility behavior is increasingly relevant due to the vast environmental impact of current transportation systems. With the growing variety of transportation modes, individual decisions for or against specific mobility options become increasingly important, and salient beliefs regarding the environmental impact of different modes influence this decision process. While information systems have been recognized for their potential to shape individual beliefs and behavior, design-oriented studies that explore their impact, in particular on environmental beliefs, remain scarce. In this study, we contribute to closing this research gap by designing and evaluating a new type of artifact, a persuasive and human-like conversational agent, in a 2x2 experiment with 225 participants. Drawing on the Theory of Planned Behavior and Social Response Theory, we find empirical support for the influence of persuasive design elements on individual environmental beliefs and discover that anthropomorphic design can contribute to increasing the persuasiveness of artifacts.

    Comparing How a Chatbot References User Utterances from Previous Chatting Sessions: An Investigation of Users' Privacy Concerns and Perceptions

    Chatbots are capable of remembering and referencing previous conversations, but does this enhance user engagement or infringe on privacy? To explore this trade-off, we investigated the format in which a chatbot references previous conversations with a user and its effects on the user's perceptions and privacy concerns. In a three-week longitudinal between-subjects study, 169 participants talked about their dental flossing habits to a chatbot that either (1, None) did not explicitly reference previous user utterances, (2, Verbatim) referenced previous utterances verbatim, or (3, Paraphrase) used paraphrases to reference previous utterances. Participants perceived the Verbatim and Paraphrase chatbots as more intelligent and engaging. However, the Verbatim chatbot also raised privacy concerns among participants. To gain insight into why people preferred certain conditions or had privacy concerns, we conducted semi-structured interviews with 15 participants. We discuss implications of our findings that can help designers choose an appropriate format for referencing previous user utterances and inform the design of longitudinal dialogue scripting. (10 pages, 3 figures; to be published in Proceedings of the 11th International Conference on Human-Agent Interaction, ACM HAI '23.)

    “Look Closer”: Anthropomorphic Design and Perception of Anthropomorphism in Conversational Agent Research

    Conversational agents have been attracting increased attention in IS research and increased adoption in practice. They provide an AI-driven, conversation-like interface and tap into the anthropomorphism bias of their users. There has been extensive research on enhancing this effect for over a decade, since increased anthropomorphism leads to increased service satisfaction, trust, and other effects on the user. This work examines the current state of research regarding anthropomorphism and anthropomorphic design to guide future research. It utilizes a modified structured literature analysis to extract and classify the examined constructs and their relationships in the hypotheses of the current literature. We provide an overview of current research, highlighting focus areas. Based on our results, we formulate several open research questions and provide the IS community with directions for future research.