25 research outputs found

    Where is the Bot in our Team? Toward a Taxonomy of Design Option Combinations for Conversational Agents in Collaborative Work

    With rapid progress in machine learning, language technologies and artificial intelligence, conversational agents (CAs) are gaining increasing attention in research and practice as potential non-human teammates, facilitators or experts in collaborative work. However, designers of CAs for collaboration still struggle with a lack of comprehensive understanding of the vast variety of design options in this dynamic field. We address this gap with a taxonomy that helps researchers and designers understand the design space and the interrelations of different design options, and recognize useful design option combinations for their CAs. We present the iterative development of a taxonomy for the design of CAs, grounded in state-of-the-art literature and validated with domain experts. From the classified objects, we identify recurring design option combinations and white spots that will inform further research and development efforts.

    Automating Crisis Communication in Public Institutions – Towards Ethical Conversational Agents That Support Trust Management

    To improve disaster relief and crisis communication, public institutions (PIs) such as administrations rely on automation and technology. As one example, the use of conversational agents (CAs) has increased. To ensure that information and advisories are taken seriously, it is important for PIs to be perceived as a trusted source and a trustworthy point of contact. In this study, we therefore examine how CAs can be applied by PIs to, on the one hand, automate their crisis communication and, on the other hand, maintain or even increase their perceived trustworthiness. We developed two CAs – one equipped with ethical cues in order to be perceived as more trustworthy and one without such cues – and began an online experiment to evaluate the effects. Our first results indicate that applying ethical principles such as fairness, transparency, security and accountability has a positive effect on the perceived trustworthiness of the CA.

    Introducing conversational explanations as a novel response strategy to data breach incidents in digital commerce

    To individualize and personalize digital services, an increasing number of e-commerce providers are exploiting abundant amounts of customer information. Alongside the positive effects, an inherent risk of compromising customer information arises, resulting in data breaches. Compelled by regulations, companies are obliged to notify their customers. Previous literature indicates that different data breach response strategies can mitigate the negative effects of these security incidents. Drawing on data breach and conversational agent (CA) research, we theorize that the manner in which a data breach is communicated is equally relevant. We test our hypotheses in an online experiment (n=89). Our results show that explaining a data breach increases customer satisfaction. At the same time, we reveal that CAs lend themselves as a tool to positively influence this degree of explanation. Our work provides novel insights into the centrality of explanation in a data breach response and its positive association with CAs.

    Knowledge Transfer between Humans and Conversational Agents: A Review, Organizing Framework, and Future Directions

    Conversational agents (CAs) that use natural language to interact with humans are becoming ubiquitous in our daily lives. For CAs to perform effectively, knowledge transfer between human users and CAs is vital to complete tasks and to build common understanding with humans. While such knowledge transfer is important, relatively little research attention has been paid to it. Overall, we lack a systematic overview of how knowledge transfer can be facilitated between humans and CAs. Motivated by this gap, this article presents a literature review of empirical IS, HCI and Communications studies on knowledge transfer between humans and CAs. We analyzed papers on this topic and synthesized the studies based on the antecedents, directions, processes, and outcomes of knowledge transfer. We contribute by providing a systematic understanding of research on knowledge transfer in human-CA interactions, proposing an organizing framework, identifying gaps in prior work, and outlining key future research directions.

    Are you for real? A Negotiation Bot for Electronic Negotiations

    Bots are autonomous software agents able to imitate human behaviour, which makes them interesting for interactive processes such as electronic negotiations. In electronic negotiation training, humans often negotiate with negotiation software agents that respond quickly to the offers of the human participants. Currently, these agents are limited in their communication behaviour and thus restrain the effectiveness of electronic negotiation training. For effective training, coherent and transparent communication processes are desirable, in which the agent takes up the human’s arguments and provides its own reasonable arguments. Following the design science research methodology, we derive requirements and a meta-design for a negotiation bot to improve communication quality, and finally present our newly developed negotiation bot. The evaluation comparing the bot with an existing agent shows that, although the bot sometimes provides unsuitable arguments, it imitates human behaviour well and ensures coherent communication processes. The bot can thus improve communication training for electronic negotiations.

    Mechanisms of Common Ground in Human-Agent Interaction: A Systematic Review of Conversational Agent Research

    Human-agent interaction is increasingly influencing our personal and work lives through the proliferation of conversational agents (CAs) in various domains. These agents combine intuitive natural language interaction with personalization delivered through artificial intelligence capabilities. However, research on CAs as well as practical failures indicates that CA interaction often fails. To reduce these failures, this paper introduces the concept of building common ground for more successful human-agent interactions. Based on a systematic review, our analysis reveals five mechanisms for achieving common ground: (1) Embodiment, (2) Social Features, (3) Joint Action, (4) Knowledge Base, and (5) Mental Model of Conversational Agents. On this basis, we offer insights into grounding mechanisms and highlight the potential of considering common ground in different human-agent interaction processes. Consequently, we provide a deeper understanding of possible mechanisms of common ground in human-agent interaction for future research.

    How Can Organizations Design Purposeful Human-AI Interactions: A Practical Perspective From Existing Use Cases and Interviews

    Artificial intelligence (AI) currently makes a tangible impact in many industries and in humans’ daily lives. With humans interacting with AI agents more regularly, there is a need to examine human-AI interactions in order to design them purposefully. Thus, we draw on existing AI use cases and perceptions of human-AI interactions from 25 interviews with practitioners to elaborate on these interactions. From this practical lens on existing human-AI interactions, we introduce nine characteristic dimensions to describe human-AI interactions and distinguish five interaction types according to the AI agents’ characteristics in the human-AI interaction. In addition, we provide initial design guidelines to stimulate both research and practice in creating purposeful designs for human-AI interactions.

    Implementing an Intelligent Collaborative Agent as Teammate in Collaborative Writing: toward a Synergy of Humans and AI

    This paper aims to implement a hybrid form of group work by incorporating an intelligent collaborative agent into a Collaborative Writing process. In doing so, it contributes to closing the research gap concerning the acceptance of AI in complementary hybrid work. To approach this aim, we follow a Design Science Research process. Based on expert interviews, we identify requirements for the agent to be considered a teammate, in light of Social Response Theory and the concept of the Uncanny Valley. Next, we derive design principles for the implementation of an agent as teammate from the collected requirements. To evaluate the design principles and the human teammates’ perception of the agent, we instantiate a Collaborative Writing process via a web application incorporating the agent. The evaluation reveals that the developed design principles were implemented partly successfully. Additionally, the results show the potential of hybrid collaboration teams to accept non-human teammates.

    From Tools to Teammates: Conceptualizing Humans’ Perception of Machines as Teammates with a Systematic Literature Review

    The accelerating capabilities of systems brought about by advances in Artificial Intelligence challenge the traditional notion of systems as tools. Systems’ increasingly agentic and collaborative character offers the potential for a new user-system interaction paradigm: teaming replaces unidirectional system use. Yet, extant literature addresses the prerequisites for this new interaction paradigm inconsistently, often without considering the foundations established in the human teaming literature. To address this, this study uses a systematic literature review to conceptualize the drivers of the perception of systems as teammates instead of tools. In doing so, it integrates insights from the dispersed and interdisciplinary field of human-machine teaming with established human teaming principles. The creation of a team setting and a social entity, as well as specific configurations of the machine teammate’s collaborative behaviors, are identified as the main drivers of the formation of impactful human-machine teams.