
    A virtual diary companion

    Chatbots and embodied conversational agents show turn-based conversation behaviour. In current research we almost always assume that each utterance of a human conversational partner should be followed by an intelligent and/or empathetic reaction from the chatbot or embodied agent. They are assumed to be alert, trying to please the user. There are other applications which have not yet received much attention and which require a more patient or relaxed attitude, waiting for the right moment to provide feedback to the human partner. Being able and willing to listen is one of the conditions for success. In this paper we offer some observations on listening-behaviour research and introduce one of our applications, the virtual diary companion.

    Emotional Chatting Machine: Emotional Conversation Generation with Internal and External Memory

    Perception and expression of emotion are key factors in the success of dialogue systems or conversational agents. However, this problem has not been studied in large-scale conversation generation so far. In this paper, we propose the Emotional Chatting Machine (ECM), which can generate responses appropriate not only in content (relevant and grammatical) but also in emotion (emotionally consistent). To the best of our knowledge, this is the first work that addresses the emotion factor in large-scale conversation generation. ECM addresses the factor using three new mechanisms that respectively (1) model the high-level abstraction of emotion expressions by embedding emotion categories, (2) capture the change of implicit internal emotion states, and (3) use explicit emotion expressions with an external emotion vocabulary. Experiments show that the proposed model can generate responses appropriate not only in content but also in emotion. Comment: Accepted in AAAI 201
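The first of the three mechanisms can be illustrated with a minimal sketch: each emotion category gets its own learned embedding vector, which is concatenated onto every decoder input so that generation is conditioned on the target emotion. All names, dimensions, and the random initialisation below are illustrative assumptions, not ECM's actual implementation.

```python
import numpy as np

# Hypothetical emotion categories and embedding size (illustrative only).
EMOTIONS = ["happy", "sad", "angry", "disgust", "like", "other"]
EMB_DIM = 8

rng = np.random.default_rng(0)
# One embedding vector per emotion category; in a real model these are learned.
emotion_embeddings = rng.normal(size=(len(EMOTIONS), EMB_DIM))

def decoder_input(word_vec: np.ndarray, emotion: str) -> np.ndarray:
    """Concatenate the current word vector with the emotion-category
    embedding, so every decoding step is conditioned on the target emotion."""
    e = emotion_embeddings[EMOTIONS.index(emotion)]
    return np.concatenate([word_vec, e])

x = decoder_input(np.ones(16), "happy")
print(x.shape)  # (24,)
```

The internal-memory and external-vocabulary mechanisms then act on top of such conditioned decoder states; they are not sketched here.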

    Emotional and Domain Concept Enhancements to Alicebot

    Extensive research and development have been done in the area of human simulation and artificial intelligence and their related fields, such as common-sense knowledge bases, chatterbots, natural language parsing, semantic analysis, synthetic actors, and the cognitive sciences. This paper contributes to that research by focusing on improving human simulation in chatterbots, specifically in Alicebot, a prominent non-emotional pattern-matching chatbot. An emotion and personality model is added to Alicebot so that it can make decisions based on its emotions and personality. Alicebot is also augmented with the ability to determine what it likes or does not like based on its domain-concept preferences. Finally, Alicebot will be able to generate its own text without using patterns. These improvements will allow Alicebot to better simulate human-like responses.

    CHATBOT FOR KNOWLEDGE-BASED MUSEUM RECOMMENDER SYSTEM (CASE STUDY: MUSEUM IN JAKARTA)

    The recommender systems commonly used to recommend museums are content-based filtering and collaborative filtering. However, these recommender systems suffer from problems such as cold start and data sparsity, because some museums still have few ratings and little feedback. To address this, a knowledge-based recommender system can be used to recommend museums based on user preferences, so the system does not need to rely on ratings and feedback. User preferences can be obtained through a conversational recommender system that exploits a two-way conversation between the user and the system. A chatbot is one commonly used form of conversational recommender system. This research develops a chatbot that recommends museums in Jakarta using a knowledge-based recommender system. The developed system uses the Rasa framework to build a chatbot capable of conversing with users. A knowledge graph and k-nearest neighbour are used to recommend museums based on user preferences. The evaluation shows that the developed system can understand user messages and recommend museums that match user preferences. However, the system's performance can still be improved before it can be relied upon in real-world scenarios.
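The k-nearest-neighbour step of such a knowledge-based recommender can be sketched in a few lines: museums and the user's elicited preferences are encoded as feature vectors, and the k closest museums are returned. The feature names and data below are illustrative assumptions, not the paper's actual knowledge graph.

```python
import numpy as np

# Hypothetical museum feature vectors over (art, history, science, free_entry).
# Real systems would derive these features from a knowledge graph.
museums = {
    "Museum Nasional": np.array([0, 1, 0, 0]),
    "Museum MACAN":    np.array([1, 0, 0, 0]),
    "Planetarium":     np.array([0, 0, 1, 1]),
}

def recommend(preferences: np.ndarray, k: int = 2) -> list[str]:
    """Rank museums by Euclidean distance to the user's preference vector
    and return the k nearest (a plain k-nearest-neighbour query)."""
    ranked = sorted(museums, key=lambda m: np.linalg.norm(museums[m] - preferences))
    return ranked[:k]

print(recommend(np.array([0, 1, 0, 0]), k=1))  # ['Museum Nasional']
```

Because ranking needs only the preference vector elicited in conversation, no ratings or feedback are required, which is what sidesteps the cold-start problem.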

    Exploring the interaction between humans and an AI-driven chatbot

    Abstract. Chatbots have become omnipresent software that many services now use to provide easy and continuous support for users. Regardless of the domain in question, people use chatbots to get quick access to information in a human-like manner. Still, chatbots are limited in terms of interactivity, providing facts, or solving elemental problems. Moreover, the lack of empathy that chatbots have is a drawback that limits them from providing the best possible outcome for the user. With that in mind, this thesis aims to find out how an emotionally aware chatbot would influence the interaction and engagement level of participants, starting from the hypothesis that “The awareness that the chatbot shows during the conversation impacts the engagement of participants”. The research method used was an experimental study approach, because it helps establish how the chatbot’s awareness affects the engagement level of participants. For that, a web application was developed that consisted of a chatbot driven by OpenAI. Before the participants started to interact with the chatbot, they were provided with information and instructions on how to adjust their cameras so their facial expressions could be analyzed properly in order to get the intended experience. A total of 180 participants were recruited using the Prolific crowd-sourcing platform, of which 178 responses were used in analyzing the results. The participants were split into three study conditions, namely BASELINE, EMOJI-ONLY, and EMOJI-AND-CHAT, which differed in the emotional awareness levels of the chatbot. The BASELINE study group interacted with a simple chatbot that was not aware of participants’ emotions at all. The EMOJI-ONLY study group conversed with a chatbot that showed participants their emotions in real time during the interaction, using emoji pictograms.
In the last study group, EMOJI-AND-CHAT, besides showing the participants’ expressions through emojis, the chatbot also replied to the mood changes of the participants with messages that clearly stated that the chatbot had noticed their facial-expression changes. Each participant, regardless of the study group, had a conversation with the chatbot that lasted a few minutes and started with the topic of their own chronic-pain experiences. The chronic-pain topic was used to trigger differences in facial expressions naturally: during a conversation of only a few minutes, the topic discussed needs to be of interest to the participant for differences in facial expressions to occur. With that in mind, participants were recruited using Prolific’s option of selecting participants who deal with chronic pain. During the conversations, participants’ facial expressions were analyzed and collected. Moreover, at the end of the interaction, the participants answered a questionnaire composed of a mix of 23 quantitative and 3 qualitative questions. The data collected showed that the emotional awareness a chatbot shows during a discussion impacts the level of engagement of participants. However, the results could not establish whether participants’ level of engagement is affected positively, feeling more engaged, or negatively, feeling less engaged than when interacting with a non-emotionally-aware chatbot. Participants showed both significant interest in the emotionally aware chatbot and concerns, and identified possible issues and limitations. The chatbot used throughout this research was effective and succeeded in showing the potential of such applications. Nevertheless, the way the chatbot reacts to changes in facial expressions needs further testing and development, as does its privacy and security side, so that people would trust it more.

    Usefulness, localizability, humanness, and language-benefit: additional evaluation criteria for natural language dialogue systems

    Human–computer dialogue systems interact with human users using natural language. We used the ALICE/AIML chatbot architecture as a platform to develop a range of chatbots covering different languages, genres, text-types, and user-groups, to illustrate qualitative aspects of natural language dialogue system evaluation. We present some of the different evaluation techniques used in natural language dialogue systems, including black-box and glass-box, comparative, quantitative, and qualitative evaluation. Four aspects of NLP dialogue system evaluation are often overlooked: “usefulness” in terms of a user’s qualitative needs, “localizability” to new genres and languages, “humanness” or “naturalness” compared to human–human dialogues, and “language benefit” compared to alternative interfaces. We illustrate these aspects with respect to our work on machine-learnt chatbot dialogue systems; we believe attention to these aspects is worthwhile for winning over potential new users and customers.

    Facilitating Natural Conversational Agent Interactions: Lessons from a Deception Experiment

    This study reports the results of a laboratory experiment exploring interactions between humans and a conversational agent. Using the ChatScript language, we created a chat bot that asked participants to describe a series of images. The two objectives of this study were (1) to analyze the impact of dynamic responses on participants’ perceptions of the conversational agent, and (2) to explore behavioral changes in interactions with the chat bot (i.e. response latency and pauses) when participants engaged in deception. We discovered that a chat bot that provides adaptive responses based on the participant’s input dramatically increases the perceived humanness and engagement of the conversational agent. Deceivers interacting with a dynamic chat bot exhibited consistent response latencies and pause lengths, while deceivers with a static chat bot exhibited longer response latencies and pause lengths. These results offer new insights into social interactions with computer agents during truthful and deceptive interactions.
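The two behavioural measures in studies like this, response latency and pause length, can be derived from a timestamped chat log. The sketch below assumes a hypothetical log format (seconds since session start, one entry per bot prompt); the field names and the one-second pause threshold are illustrative, not the study's actual instrumentation.

```python
# Hypothetical chat log: when each bot prompt appeared, when the reply
# started, and the timestamps of the participant's keystrokes.
log = [
    {"prompt_at": 0.0,  "reply_at": 2.5,  "keystrokes": [2.5, 3.0, 3.25, 5.0]},
    {"prompt_at": 10.0, "reply_at": 11.5, "keystrokes": [11.5, 12.0, 13.5]},
]

def latencies(entries):
    """Response latency: time from prompt shown to first keystroke."""
    return [e["reply_at"] - e["prompt_at"] for e in entries]

def pauses(entries, threshold=1.0):
    """Pauses: inter-keystroke gaps longer than `threshold` seconds."""
    out = []
    for e in entries:
        ks = e["keystrokes"]
        out += [b - a for a, b in zip(ks, ks[1:]) if b - a > threshold]
    return out

print(latencies(log))  # [2.5, 1.5]
print(pauses(log))     # [1.75, 1.5]
```

Comparing the distributions of these two measures across the dynamic and static chat-bot conditions is what reveals the latency and pause differences the abstract reports.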