19,379 research outputs found

    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent through channels such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Beyond accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    Dynamic Facial Expression of Emotion Made Easy

    Facial emotion expression for virtual characters is used in a wide variety of areas. Often, the primary reason to use emotion expression is not to study emotion expression generation per se, but to use emotion expression in an application or research project. What is then needed is an easy-to-use, flexible, and validated mechanism for doing so. In this report we present such a mechanism. It enables developers to build virtual characters with dynamic affective facial expressions. The mechanism is based on the Facial Action Coding System (FACS). It is easy to implement, and code is available for download. To show the validity of the expressions generated with the mechanism, we tested recognition accuracy for 6 basic emotions (joy, anger, sadness, surprise, disgust, fear) and 4 blend emotions (enthusiastic, furious, frustrated, and evil). Additionally, we investigated the effect of the virtual character's (VC) distance (z-coordinate), the effect of the VC's face morphology (male vs. female), the effect of a lateral versus a frontal presentation of the expression, and the effect of the intensity of the expression. Participants (n=19, Western and Asian subjects) rated the intensity of each expression for each condition (within-subject setup) in a non-forced-choice manner. All of the basic emotions were uniquely perceived as such. Furthermore, the recognition of the blends and the confusion patterns among basic emotions are compatible with findings in psychology.
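    The abstract above describes a FACS-based generation mechanism. The sketch below illustrates the general idea in Python, using Action Unit (AU) combinations drawn from commonly cited EMFACS-style mappings; all names, intensities and the blending rule are hypothetical illustrations, not the authors' downloadable code.

```python
# Illustrative sketch of a FACS-based expression generator, loosely
# following the mechanism described above. The AU combinations are
# commonly cited EMFACS-style mappings; everything else is hypothetical.

# Action Unit (AU) activations for the six basic emotions.
BASIC_EMOTIONS = {
    "joy":      {6: 1.0, 12: 1.0},                 # cheek raiser, lip corner puller
    "sadness":  {1: 1.0, 4: 1.0, 15: 1.0},         # inner brow raiser, brow lowerer, lip corner depressor
    "surprise": {1: 1.0, 2: 1.0, 5: 1.0, 26: 1.0},
    "fear":     {1: 1.0, 2: 1.0, 4: 1.0, 5: 1.0, 20: 1.0, 26: 1.0},
    "anger":    {4: 1.0, 5: 1.0, 7: 1.0, 23: 1.0},
    "disgust":  {9: 1.0, 15: 1.0},
}

def blend(emotions: dict[str, float]) -> dict[int, float]:
    """Blend weighted basic emotions into a single AU activation map.

    `emotions` maps emotion names to weights in [0, 1]; e.g. an
    'enthusiastic' blend might combine joy and surprise.
    """
    aus: dict[int, float] = {}
    for name, weight in emotions.items():
        for au, intensity in BASIC_EMOTIONS[name].items():
            # When AUs overlap, keep the strongest activation, clamped to 1.0.
            aus[au] = min(1.0, max(aus.get(au, 0.0), weight * intensity))
    return aus

# Example: a hypothetical 'enthusiastic' blend of joy and surprise.
if __name__ == "__main__":
    print(blend({"joy": 0.8, "surprise": 0.4}))
```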

    Maps, agents and dialogue for exploring a virtual world

    In previous years we have been involved in several projects in which users (or visitors) had to find their way in information-rich virtual environments. 'Information-rich' means that users do not know beforehand what is available in the environment or where to go in the environment to find it; moreover, they do not necessarily know exactly what they are looking for. It also means that the information may change over time: a second visit to the same environment will require different behavior from the visitor in order to obtain information similar to what was available during a previous visit. In this paper we report on two projects and discuss our attempts to generalize from their different approaches and application domains to obtain a library of methods and tools for designing and implementing intelligent agents that inhabit virtual environments and support the navigation of the user/visitor.

    Using affective avatars and rich multimedia content for education of children with autism

    Autism is a communication disorder that mandates early and continuous educational interventions at various levels, targeting everyday social, communication and reasoning skills. Computer-aided education has recently been considered a promising intervention method for such cases, and different systems have accordingly been proposed and developed worldwide. More recently, affective computing applications have also been proposed for these interventions. In this paper, we examine the technological and educational needs of affective interventions for autistic persons. Enabling affective technologies are reviewed and a number of possible exploitation scenarios are illustrated. Emphasis is placed on covering the continuous and long-term needs of autistic persons through unobtrusive and ubiquitous technologies with the engagement of an affective speaking avatar. A personalised prototype system facilitating these scenarios is described. In addition, feedback on the system from educators of autistic persons, collected by means of an anonymous questionnaire, is reported in terms of its usefulness, its efficiency and the envisaged reactions of autistic persons. The results illustrate the clear potential of this effort to facilitate a very promising autism intervention.

    FACETEQ interface demo for emotion expression in VR

    Faceteq prototype v.05 is a wearable technology for measuring facial expressions and biometric responses in experimental studies in virtual reality. Developed by the Emteq Ltd laboratory, Faceteq can open new avenues for virtual reality research through its combination of high-performance patented dry-sensor technology, proprietary algorithms, and real-time data acquisition and streaming. Emteq founded the Faceteq project with the aim of providing an additional human-centered tool for emotion expression, affective human-computer interaction and social virtual environments. The proposed demonstration will exhibit the hardware and its functionality by allowing attendees to experience three of the showcase applications developed this year.

    FACETEQ; A novel platform for measuring emotion in VR

    FaceTeq prototype v.05 is a wearable technology for measuring facial expressions and biometric responses in experimental studies in virtual reality. Developed by the Emteq Ltd laboratory, FaceTeq can open new avenues for virtual reality research through its combination of high-performance patented dry-sensor technology, proprietary algorithms, and real-time data acquisition and streaming. The FaceTeq project was founded with the aim of providing an additional human-centred tool for emotion expression, affective human-computer interaction and social virtual environments. The proposed poster will exhibit the hardware and its functionality.

    A randomised controlled test in virtual reality of the effects on paranoid thoughts of virtual humans’ facial animation and expression

    Virtual reality (VR) is increasingly used in the study and treatment of paranoia. This is based on the finding that people who mistakenly perceive hostile intent from other people also perceive similar threat from virtual characters. However, there has been no study of the programming characteristics of virtual characters that may influence their interpretation. We set out to investigate how the animation and expressions of virtual humans may affect paranoia. In a two-by-two factorial, between-groups, randomised design, 122 individuals with elevated paranoia rated their perceptions of virtual humans, set in an eye-tracking-enabled VR lift scenario, that varied in facial animation (static or animated) and expression (neutral or positive). Both facial animation (group difference = 102.328 [51.783, 152.872], p < 0.001, ηp² = 0.125) and positive expressions (group difference = 53.016 [0.054, 105.979], p = 0.049, ηp² = 0.033) led to less triggering of paranoid thoughts about the virtual humans. Facial animation (group difference = −2.442 [−4.161, −0.724], p = 0.006, ηp² = 0.063) but not positive expressions (group difference = 0.344 [−1.429, 2.110], p = 0.681, ηp² = 0.001) significantly increased the likelihood of neutral thoughts about the characters. Our study shows that the detailed programming of virtual humans can affect the occurrence of paranoid thoughts in VR. The programming of virtual humans needs careful consideration depending on the purpose of their use.
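    The effect sizes above are reported as partial eta squared (ηp²), i.e. SS_effect / (SS_effect + SS_error). A minimal sketch of how such values are obtained from a two-by-two between-groups ANOVA follows; the dataset and variable names are hypothetical, not the study's data.

```python
# Minimal sketch: partial eta squared from a 2x2 between-groups ANOVA.
# The data and column names are hypothetical illustrations only.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "paranoia":   [31, 28, 25, 22, 35, 30, 24, 20],  # hypothetical outcome scores
    "animation":  ["static", "static", "animated", "animated"] * 2,
    "expression": ["neutral", "positive"] * 4,
})

# Fit the factorial model with both main effects and their interaction.
model = smf.ols("paranoia ~ C(animation) * C(expression)", data=df).fit()
table = anova_lm(model, typ=2)

# Partial eta squared per effect: SS_effect / (SS_effect + SS_residual).
ss_residual = table.loc["Residual", "sum_sq"]
table["eta_p_sq"] = table["sum_sq"] / (table["sum_sq"] + ss_residual)
print(table[["sum_sq", "F", "PR(>F)", "eta_p_sq"]])
```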

    A model for providing emotion awareness and feedback using fuzzy logic in online learning

    Monitoring users' emotive states and using that information to provide feedback and scaffolding is crucial. In the learning context, emotions can be used to increase students' attention as well as to improve memory and reasoning. In this context, tutors should be prepared to create affective learning situations and encourage collaborative knowledge construction, as well as to identify those student feelings which hinder the learning process. In this paper, we propose a novel approach to labelling affective behaviour in educational discourse based on fuzzy logic, which enables a human or virtual tutor to capture students' emotions, make students aware of their own emotions, assess these emotions and provide appropriate affective feedback. To that end, we propose a fuzzy classifier that provides an a priori qualitative assessment, with fuzzy qualifiers such as few, regular and many bound to the quantities assigned by an affective dictionary to every word. An advantage of this statistical approach is that it reduces the classic pollution problem of training and analysing a scenario using the same dataset. Our approach has been tested in a real online learning environment and proved to have a very positive influence on students' learning performance.
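    As a rough illustration of the kind of fuzzy labelling described above, the sketch below maps counts of affective words to the qualifiers few, regular and many through simple membership functions. The tiny dictionary, the thresholds and all names are hypothetical, not the authors' actual classifier.

```python
# Hedged sketch of fuzzy labelling of affective behaviour: counts of
# affective words in a student's messages are mapped to the qualifiers
# "few", "regular" and "many" via simple membership functions.
# Dictionary, thresholds and names are hypothetical illustrations.

def triangular(x: float, a: float, b: float, c: float) -> float:
    """Membership degree of x in a triangular fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Fuzzy sets over the count of affective words in a message batch.
QUALIFIERS = {
    "few":     lambda x: max(0.0, min(1.0, (5 - x) / 5)),  # full at 0, gone by 5
    "regular": lambda x: triangular(x, 2, 6, 10),
    "many":    lambda x: max(0.0, min(1.0, (x - 6) / 6)),  # kicks in above 6
}

# A toy affective dictionary; a real one would carry per-word weights.
AFFECTIVE_DICTIONARY = {"frustrated", "happy", "bored", "excited", "confused"}

def label_affective_behaviour(words: list[str]) -> dict[str, float]:
    """Return the membership degree of each fuzzy qualifier for a text."""
    count = sum(1 for w in words if w.lower() in AFFECTIVE_DICTIONARY)
    return {q: round(mu(count), 2) for q, mu in QUALIFIERS.items()}

# Example: a short message containing two affective words.
print(label_affective_behaviour(
    "I am happy with the task but a bit confused about step two".split()))
```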