18,921 research outputs found

    Affective Expressions in Conversational Agents for Learning Environments: Effects of curiosity, humour, and expressive auditory gestures

    Conversational agents -- systems that imitate natural language discourse -- are becoming an increasingly prevalent human-computer interface, employed in domains including healthcare, customer service, and education. In education, conversational agents, also known as pedagogical agents, can be used to encourage interaction, which is considered crucial to the learning process. Though pedagogical agents have been designed for learners of diverse age groups and subject matter, they share the overarching goal of eliciting learning outcomes, which can be broken down into cognitive, skill-based, and affective outcomes. Motivation is a particularly important affective outcome, as it can influence what, when, and how we learn. Understanding, supporting, and designing for motivation is therefore of great importance for the advancement of learning technologies. This thesis investigates how pedagogical agents can promote motivation in learners. Prior research has explored various features of pedagogical agent design and their effects on learning outcomes, and suggests that agents using social cues can adapt the learning environment to enhance both affective and cognitive outcomes. One social cue suggested to be important for enhancing learner motivation is the expression or simulation of affect in the agent. Informed by research and theory across multiple domains, three affective expressions are investigated: curiosity, humour, and expressive auditory gestures -- each aimed at enhancing motivation by adapting the learning environment in a different way, i.e., eliciting contagion effects, creating a positive learning experience, and strengthening the learner-agent relationship, respectively. Three studies are presented in which each expression was implemented in a separate type of agent: physically embodied, text-based, and voice-based, with all agents taking on the role of a companion or less knowledgeable peer to the learner. The overall focus is on how each expression can be displayed, how it affects perception of the agent, and how it influences behaviour and learning outcomes. The studies yield theoretical contributions that add to our understanding of conversational agent design for learning environments. The findings support the simulation of curiosity, the use of certain humour styles, and the addition of expressive auditory gestures as ways of enhancing motivation in learners interacting with conversational agents, and indicate a need for further exploration of these strategies in future work

    A virtual diary companion

    Chatbots and embodied conversational agents exhibit turn-based conversation behaviour. Current research almost always assumes that each utterance of a human conversational partner should be followed by an intelligent and/or empathetic reaction from the chatbot or embodied agent; the agents are assumed to be alert and trying to please the user. Other applications, which have not yet received much attention, require a more patient or relaxed attitude: waiting for the right moment to provide feedback to the human partner. Being able and willing to listen is one of the conditions for being successful. In this paper we offer some observations on listening-behaviour research and introduce one of our applications, the virtual diary companion

    Video prototyping of dog-inspired non-verbal affective communication for an appearance constrained robot

    This paper presents results from a video human-robot interaction (VHRI) study in which participants viewed a video of an appearance-constrained Pioneer robot using dog-inspired affective cues to communicate affinity and relationship with its owner and a guest through proxemics, body movement and orientation, and camera orientation. The findings suggest that even with the limited modalities for non-verbal expression offered by a Pioneer robot, which does not have a dog-like appearance, these cues were effective for non-verbal affective communication

    I Probe, Therefore I Am: Designing a Virtual Journalist with Human Emotions

    By utilizing different communication channels, such as verbal language, gestures, or facial expressions, virtually embodied interactive humans hold a unique potential to bridge the gap between human-computer interaction and actual interhuman communication. The use of virtual humans is consequently becoming increasingly popular in a wide range of areas where such natural communication might be beneficial, including entertainment, education, mental health research, and beyond. Behind this development lies a series of technological advances in a multitude of disciplines, most notably natural language processing, computer vision, and speech synthesis. In this paper we discuss the Virtual Human Journalist, a project employing a number of novel solutions from these disciplines with the goal of demonstrating their viability by producing a humanoid conversational agent capable of naturally eliciting and reacting to information from a human user. A set of qualitative and quantitative evaluation sessions demonstrated the technical feasibility of the system whilst uncovering a number of deficits in its capacity to engage users in a way that would be perceived as natural and emotionally engaging. We argue that naturalness should not always be seen as a desirable goal and suggest that deliberately suppressing the naturalness of virtual human interactions, for example by altering the agent's personality cues, might in some cases yield more desirable results

    On combining the facial movements of a talking head

    We present work on Obie, an embodied conversational agent framework. An embodied conversational agent, or talking head, consists of three main components. The graphical part consists of a face model and a facial muscle model. In addition to the graphical part, we have implemented an emotion model and a mapping from emotions to facial expressions. The animation part of the framework focuses on combining different facial movements over time. In this paper we propose a scheme for combining facial movements on a 3D talking head
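
    To make the component structure above concrete, the following is a minimal illustrative sketch, not the Obie implementation and not the scheme proposed in the paper; all names (EMOTION_TO_MUSCLES, FacialMovement, combine) are hypothetical. It shows one simple way an emotion label could map to facial-muscle activations and how temporally overlapping movements could be combined into a single animation frame:

        # Illustrative sketch only, not the Obie scheme; all names are hypothetical.
        from dataclasses import dataclass

        # Hypothetical mapping from an emotion label to target muscle activations in [0, 1].
        EMOTION_TO_MUSCLES = {
            "joy":      {"zygomaticus_major": 0.8, "orbicularis_oculi": 0.5},
            "surprise": {"frontalis": 0.9, "jaw_open": 0.6},
        }

        @dataclass
        class FacialMovement:
            """A timed activation of a single facial muscle."""
            muscle: str
            target: float  # peak activation in [0, 1]
            start: float   # seconds
            end: float     # seconds

            def value_at(self, t: float) -> float:
                """Ramp up linearly during the first half of the movement, then hold."""
                if t < self.start or t > self.end:
                    return 0.0
                half = max(1e-6, (self.end - self.start) / 2)
                return self.target * min(1.0, (t - self.start) / half)

        def combine(movements: list, t: float) -> dict:
            """Combine temporally overlapping movements per muscle (here: take the maximum)."""
            frame = {}
            for m in movements:
                frame[m.muscle] = max(frame.get(m.muscle, 0.0), m.value_at(t))
            return frame

        # Example: a "joy" expression (0-1 s) overlapping with a brow raise (0.5-1.5 s).
        movements = [FacialMovement(m, v, 0.0, 1.0) for m, v in EMOTION_TO_MUSCLES["joy"].items()]
        movements.append(FacialMovement("frontalis", 0.9, 0.5, 1.5))
        print(combine(movements, 0.75))  # one blended frame of muscle activations

    Taking the per-muscle maximum is only one possible combination rule; the abstract's point is precisely that the temporal combination of facial movements calls for a more principled scheme.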

    No Grice: Computers that Lie, Deceive and Conceal

    In the future, our daily-life interactions with other people, with computers, robots and smart environments will be recorded and interpreted by computers or by intelligence embedded in environments, furniture, robots, displays, and wearables. These sensors record our activities, our behavior, and our interactions. Fusing such information and reasoning about it makes it possible, using computational models of human behavior and activities, to provide context- and person-aware interpretations of human behavior and activities, including the determination of attitudes, moods, and emotions. Sensors include cameras, microphones, eye trackers, position and proximity sensors, tactile or smell sensors, et cetera. Sensors can be embedded in an environment, but they can also move around, for example if they are part of a mobile social robot or of devices we carry around, or are embedded in our clothes or body.

    Our daily-life behavior and daily-life interactions are recorded and interpreted. How can we use such environments, and how can such environments use us? Do we always want to cooperate with these environments, and do these environments always want to cooperate with us? In this paper we argue that there are many reasons why users, or rather the human partners of these environments, may want to keep information about their intentions and their emotions hidden from these smart environments. On the other hand, their artificial interaction partners may have similar reasons not to give away all the information they have, or to treat their human partner as an opponent rather than as someone who has to be supported by smart technology.

    This is elaborated in the paper. We survey examples of human-computer interaction where there is not necessarily a goal to be explicit about intentions and feelings. In subsequent sections we look at (1) the computer as a conversational partner, (2) the computer as a butler or diary companion, (3) the computer as a teacher or trainer acting in a virtual training environment (a serious game), (4) sports applications (which are not necessarily different from serious-game or education environments), and games and entertainment applications