
    Do Embodied Conversational Agents Know When to Smile?

    We survey the role of humor in particular domains of human-to-human interaction with the aim of seeing whether it is useful for embodied conversational agents to integrate humor capabilities in their models of intelligence, emotions and interaction (verbal and nonverbal). To that end, we first look at the current state of the art of research in embodied conversational agents, affective computing, and verbal and nonverbal interaction. We adhere to the 'Computers Are Social Actors' paradigm and assume that human conversational partners of embodied conversational agents assign human properties to these agents, including humor appreciation.

    Motivations, Classification and Model Trial of Conversational Agents for Insurance Companies

    Advances in artificial intelligence have renewed interest in conversational agents. So-called chatbots have reached maturity for industrial applications. German insurance companies are interested in improving their customer service and digitizing their business processes. In this work we investigate the potential use of conversational agents in insurance companies by determining which classes of agents are of interest to insurance companies, finding relevant use cases and requirements, and developing a prototype for an exemplary insurance scenario. Based on this approach, we derive key findings for conversational agent implementation in insurance companies. Comment: 12 pages, 6 figures, accepted for presentation at The International Conference on Agents and Artificial Intelligence 2019 (ICAART 2019).

    Conversational Agents, Humorous Act Construction, and Social Intelligence

    Humans use humour to ease communication problems in human-human interaction, and in a similar way humour can be used to solve communication problems that arise with human-computer interaction. We discuss the role of embodied conversational agents in human-computer interaction and offer observations on the generation of humorous acts and on the appropriateness of displaying them by embodied conversational agents in order to smoothen, when necessary, their interactions with a human partner. The humorous acts we consider are generated spontaneously. They are the product of an appraisal of the conversational situation and the possibility to generate a humorous act from the elements that make up this conversational situation, in particular the interaction history of the conversational partners.

    Smart Conversational Agents for Reminiscence

    In this paper we describe the requirements and early system design for a smart conversational agent that can assist older adults in the reminiscence process. The practice of reminiscence has well-documented benefits for the mental, social and emotional well-being of older adults. However, technology support, while valuable in many ways, is still limited by the need for co-located human presence, by data collection capabilities, and by the ability to support sustained engagement. It thus misses key opportunities to improve care practices, facilitate social interactions, and bring the reminiscence practice closer to those with fewer opportunities to engage in co-located sessions with a (trained) companion. We discuss conversational agents and cognitive services as the platform for building the next generation of reminiscence applications, and introduce the concept application of a smart reminiscence agent.

    Contextual Language Model Adaptation for Conversational Agents

    Statistical language models (LMs) play a key role in the Automatic Speech Recognition (ASR) systems used by conversational agents. These ASR systems should provide high accuracy under a variety of speaking styles, domains, vocabularies and argots. In this paper, we present a DNN-based method to adapt the LM to each user-agent interaction based on generalized contextual information, by predicting an optimal, context-dependent set of LM interpolation weights. We show that this framework for contextual adaptation provides accuracy improvements under different possible mixture LM partitions that are relevant for both (1) goal-oriented conversational agents, where it is natural to partition the data by the requested application, and (2) non-goal-oriented conversational agents, where the data can be partitioned using topic labels that come from the predictions of a topic classifier. We obtain a relative WER improvement of 3% with a 1-pass decoding strategy and 6% in a 2-pass decoding framework, over an unadapted model. We also show up to a 15% relative improvement in recognizing named entities, which is of significant value for conversational ASR systems. Comment: Interspeech 2018 (accepted).
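The interpolation idea behind this abstract can be illustrated with a minimal sketch (the component LMs, their probabilities, and the fixed weights below are invented for illustration; in the paper the weights are predicted per interaction by a DNN from contextual features):

```python
import numpy as np

def interpolate_lms(component_probs, weights):
    """Mix next-word probabilities from several component LMs using a
    context-dependent set of interpolation weights (the lambdas)."""
    probs = np.asarray(component_probs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    # The weights must form a probability distribution over components.
    assert np.isclose(weights.sum(), 1.0), "weights must sum to 1"
    return float(weights @ probs)

# Two hypothetical component LMs (say, a 'shopping' LM and a 'music' LM)
# assign different probabilities to the same next word; a context model
# would supply the weights, here stubbed as [0.8, 0.2].
p_next = interpolate_lms([0.02, 0.30], [0.8, 0.2])  # ≈ 0.076
```

A context classifier that favors the second component would shift the weights toward it, raising the mixed probability of words that component prefers.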

    Controlling the Gaze of Conversational Agents

    We report on a pilot experiment that investigated the effects of different eye gaze behaviours of a cartoon-like talking face on the quality of human-agent dialogues. We compared a version of the talking face that roughly implements some patterns of human-like behaviour with two other versions. In one of the other versions the shifts in gaze were kept minimal, and in the other version the shifts occurred randomly. The talking face has a number of restrictions. There is no speech recognition, so questions and replies have to be typed in by the users of the systems. Despite this restriction, we found that participants who conversed with the agent that behaved according to the human-like patterns appreciated the agent more than participants who conversed with the other agents. Conversations with the optimal version also proceeded more efficiently: participants needed less time to complete their task.

    Bootstrapping Conversational Agents With Weak Supervision

    Many conversational agents on the market today follow a standard bot development framework which requires training intent classifiers to recognize user input. The need to create a proper set of training examples is often the bottleneck in the development process. On many occasions agent developers have access to historical chat logs that can provide a good quantity as well as coverage of training examples. However, the cost of labeling them with tens to hundreds of intents often prohibits taking full advantage of these chat logs. In this paper, we present a framework called 'search, label, and propagate' (SLP) for bootstrapping intents from existing chat logs using weak supervision. The framework reduces hours to days of labeling effort down to minutes of work by using a search engine to find examples, then relies on a data programming approach to automatically expand the labels. We report on a user study that shows positive user feedback for this new approach to building conversational agents, and demonstrates the effectiveness of using data programming for auto-labeling. While the system is developed for training conversational agents, the framework has broader application in significantly reducing labeling effort for training text classifiers. Comment: 6 pages, 3 figures, 1 table, accepted for publication in IAAI 201
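The search-then-propagate workflow can be sketched in a toy form (the utterances, seed queries, and word-overlap heuristic below are invented for the example; the actual system uses a search engine for the seed step and data programming, not simple overlap, for propagation):

```python
def search_label_propagate(logs, seed_queries):
    """Toy sketch of the SLP idea: seed intent labels via substring
    search over chat logs, then propagate each label to unlabeled
    utterances by vocabulary overlap with already-labeled ones."""
    labeled = {}
    # Search step: utterances matching a seed query get that intent.
    for intent, query in seed_queries.items():
        for utt in logs:
            if query in utt.lower():
                labeled[utt] = intent
    # Propagate step: label the rest by word overlap with labeled text.
    for utt in logs:
        if utt in labeled:
            continue
        words = set(utt.lower().split())
        best, best_overlap = None, 0
        for seed, intent in list(labeled.items()):
            overlap = len(words & set(seed.lower().split()))
            if overlap > best_overlap:
                best, best_overlap = intent, overlap
        if best is not None:
            labeled[utt] = best
    return labeled

logs = ["i want to check my balance", "check balance please",
        "cancel my card now", "please cancel the card"]
seeds = {"balance": "check my balance", "cancel_card": "cancel my card"}
auto_labels = search_label_propagate(logs, seeds)
```

Here two seed queries label two utterances directly, and the remaining two inherit the label of their closest labeled neighbour, mirroring the "minutes instead of hours" effect the abstract describes at a much smaller scale.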