
    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence, and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we draw on recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent through channels such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Beyond accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    Emotion capture based on body postures and movements

    In this paper we present a preliminary study toward designing interactive systems that are sensitive to human emotions conveyed by body movements. To do so, we first review the literature on the various approaches for defining and characterizing human emotions. After justifying the adopted characterization space for emotions, we then focus on the movement characteristics that the system must capture in order to recognize human emotions.
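    To make the idea concrete, here is a minimal, hypothetical sketch of the kind of pipeline such a system might use: extract coarse kinematic cues (movement speed, jerkiness, postural expansion) from body-pose keypoints and match them against per-emotion prototypes. The feature set, prototype values, and emotion labels below are illustrative assumptions, not the paper's method.

    ```python
    # Illustrative sketch (not the paper's method): coarse kinematic features
    # from body-pose keypoints, matched to hypothetical emotion prototypes.
    import numpy as np

    def movement_features(poses, fps=30.0):
        """poses: array of shape (frames, joints, 2) with 2D joint positions.
        Returns cues commonly associated with bodily emotion expression:
        mean speed, mean acceleration magnitude, and postural expansion."""
        poses = np.asarray(poses, dtype=float)
        vel = np.diff(poses, axis=0) * fps                # per-joint velocity
        acc = np.diff(vel, axis=0) * fps                  # per-joint acceleration
        speed = np.linalg.norm(vel, axis=-1).mean()       # overall movement speed
        jerkiness = np.linalg.norm(acc, axis=-1).mean()   # abruptness of motion
        spans = poses.max(axis=1) - poses.min(axis=1)     # body bounding box per frame
        expansion = (spans[:, 0] * spans[:, 1]).mean()    # mean body "openness"
        return np.array([speed, jerkiness, expansion])

    # Hypothetical prototypes (speed, jerkiness, expansion): fast, abrupt motion
    # suggesting high arousal; open posture suggesting positive valence.
    # A real system would learn such a mapping from annotated data.
    PROTOTYPES = {
        "sadness": np.array([0.5, 2.0, 0.4]),
        "anger":   np.array([3.0, 12.0, 0.8]),
        "joy":     np.array([2.5, 6.0, 1.2]),
    }

    def classify(poses):
        """Nearest-prototype classification in the feature space."""
        f = movement_features(poses)
        return min(PROTOTYPES, key=lambda k: np.linalg.norm(f - PROTOTYPES[k]))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Toy clip: 60 frames, 15 joints, 2D coordinates (e.g., from a pose tracker).
        clip = rng.normal(size=(60, 15, 2)).cumsum(axis=0) * 0.01
        print(movement_features(clip), classify(clip))
    ```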

    Preface


    Robotic Faces: Exploring Dynamical Patterns of Social Interaction between Humans and Robots

    Thesis (Ph.D.) - Indiana University, Informatics, 2015

    The purpose of this dissertation is two-fold: 1) to develop an empirically-based design for an interactive robotic face, and 2) to understand how dynamical aspects of social interaction may be leveraged to design better interactive technologies and/or further our understanding of social cognition. Understanding the role that dynamics plays in social cognition is a challenging problem. This is particularly true in studying cognition via human-robot interaction, which entails both the natural social cognition of the human and the “artificial intelligence” of the robot. Clearly, humans who are interacting with other humans (or even other mammals such as dogs) are cognizant of the social nature of the interaction – their behavior in those cases differs from that when interacting with inanimate objects such as tools. Humans (and many other animals) have some awareness of “social”, some sense of other agents. However, it is not clear how or why. Social interaction patterns vary across culture, context, and individual characteristics of the human interactor. These factors are subsumed into the larger interaction system, influencing the unfolding of the system over time (i.e., the dynamics). The overarching question is whether we can figure out how to utilize factors that influence the dynamics of the social interaction in order to imbue our interactive technologies (robots, clinical AI, decision support systems, etc.) with some “awareness of social”, and potentially create more natural interaction paradigms for those technologies. In this work, we explore the above questions across a range of studies, including lab-based experiments, field observations, and placing autonomous, interactive robotic faces in public spaces. We also discuss future work, how this research relates to making sense of what a robot “sees”, creating data-driven models of robot social behavior, and the development of robotic face personalities.

    Beyond shared signals: The role of downward gaze in the stereotypical representation of sad facial expressions

    According to the influential shared signal hypothesis, perceived gaze direction influences the recognition of emotion from the face; for example, gaze averted sideways facilitates the recognition of sad expressions because both gaze and expression signal avoidance. Importantly, this approach assumes that gaze direction is an independent cue that influences emotion recognition. But could gaze direction also impact emotion recognition because it is part of the stereotypical representation of the expression itself? In Experiment 1, we measured gaze aversion in participants engaged in a facial expression posing task. In Experiment 2, we examined the use of gaze aversion when constructing facial expressions on a computerized avatar. Results from both experiments demonstrated that downward gaze plays a central role in the representation of sad expressions. In Experiment 3, we manipulated gaze direction in perceived facial expressions and found that sadness was the only expression yielding a recognition advantage for downward, but not sideways, gaze. Finally, in Experiment 4 we independently manipulated gaze aversion and eyelid closure, thereby demonstrating that downward gaze enhances sadness recognition irrespective of eyelid position. Together, these findings indicate that (1) gaze and expression are not independent cues and (2) the specific type of averted gaze is critical. Consequently, several premises of the shared signal hypothesis may need revision.

    CGAMES'2009


    Digital Human Representations for Health Behavior Change: A Structured Literature Review

    Organizations have increasingly begun using digital human representations (DHRs), such as avatars and embodied agents, to deliver health behavior change interventions (BCIs) that target modifiable risk factors in the smoking, nutrition, alcohol overconsumption, and physical inactivity (SNAP) domain. We conducted a structured literature review of 60 papers from the computing, health, and psychology literatures to investigate how DHRs’ social design affects whether BCIs succeed. Specifically, we analyzed how differences in the social cues that DHRs use affect user psychology and how this can support or hinder different intervention functions. Building on established frameworks from the human-computer interaction and BCI literatures, we structure extant knowledge that can guide efforts to design future DHR-delivered BCIs. We conclude that more field studies are needed to better understand the temporal dynamics and the mid-term and long-term effects of DHR social design on user perception and intervention outcomes.