1,389 research outputs found

    Evaluating the Sensitivity to Virtual Characters Facial Asymmetry in Emotion Synthesis

    The use of expressive Virtual Characters is an effective complementary means of communication for social networks offering a multi-user 3D-chatting environment. In such a context, the facial expression channel offers a rich medium for conveying the ongoing emotions carried by text-based exchanges. However, until recently, only purely symmetric facial expressions have been considered for that purpose. In this article we examine human sensitivity to facial asymmetry in the expression of both basic and complex emotions. The rationale for introducing asymmetry into the display of facial expressions stems from two well-established observations in cognitive neuroscience: first, that the expression of basic emotions generally displays a small asymmetry; second, that more complex emotions, such as ambivalent feelings, may be reflected in the partial display of different, potentially opposite, emotions on each side of the face. A frequent instance of this second case arises from the conflict between the truly felt emotion and the one that should be displayed according to social conventions. Our main hypothesis is that a much larger expressive and emotional space can be synthesized automatically only by means of facial asymmetry when modelling emotions with a general Valence-Arousal-Dominance dimensional approach. In addition, we also explore general human sensitivity to the introduction of a small degree of asymmetry into the expression of basic emotions. We conducted an experiment presenting 64 pairs of static facial expressions, one symmetric and one asymmetric, illustrating eight emotions (three basic and five complex), alternately on a male and a female character. Each emotion was presented four times by swapping the symmetric and asymmetric positions and by mirroring the asymmetric expression. Participants were asked to grade, on a continuous scale, the correctness of each facial expression with respect to a short definition. The results confirm the potential of introducing facial asymmetry for a subset of the complex emotions. Guidelines are proposed for designers of embodied conversational agents and emotionally reflective avatars.
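
    The article does not include an implementation, but the core idea of driving each half of the face from its own point in Valence-Arousal-Dominance (VAD) space can be sketched roughly as below. The mapping from VAD coordinates to expression controls, the control names, and the numeric weights are illustrative assumptions, not the authors' method.

    # Illustrative sketch only: drive a hypothetical per-side expression rig
    # from two VAD points, one per hemiface. Not the authors' implementation.
    from dataclasses import dataclass

    @dataclass
    class VAD:
        valence: float    # -1 (negative) .. +1 (positive)
        arousal: float    # -1 (calm) .. +1 (excited)
        dominance: float  # -1 (submissive) .. +1 (dominant)

    def vad_to_weights(e: VAD) -> dict:
        # Hypothetical mapping from VAD to a few expression controls (0..1 weights).
        return {
            "smile":      max(0.0,  e.valence) * (0.5 + 0.5 * e.arousal),
            "frown":      max(0.0, -e.valence) * (0.5 + 0.5 * e.arousal),
            "brow_raise": max(0.0,  e.arousal) * (0.5 - 0.5 * e.dominance),
            "brow_lower": max(0.0,  e.dominance) * (0.5 + 0.5 * e.arousal),
        }

    def asymmetric_expression(left: VAD, right: VAD) -> dict:
        # Return per-side weights, e.g. {'smile_L': 0.7, 'smile_R': 0.2, ...}.
        weights = {}
        for side, emotion in (("L", left), ("R", right)):
            for control, w in vad_to_weights(emotion).items():
                weights[f"{control}_{side}"] = round(w, 3)
        return weights

    # Example: a felt positive emotion on one hemiface, a masked negative one on the other.
    print(asymmetric_expression(VAD(0.8, 0.4, 0.2), VAD(-0.5, 0.3, -0.2)))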

    Cultural dialects of real and synthetic emotional facial expressions

    In this article we discuss aspects of designing facial expressions for virtual humans (VHs) with a specific culture. First, we explore the notion of culture and its relevance for applications involving a VH. Then we give a general scheme for designing emotional facial expressions and identify the stages where a human is involved, either as a real person in some specific role or as a VH displaying facial expressions. We discuss how the display and the emotional meaning of facial expressions may be measured in objective ways, and how the culture of the displayers and the judges may influence the process of analyzing human facial expressions and evaluating synthesized ones. We review psychological experiments on cross-cultural perception of emotional facial expressions. By identifying the culturally critical issues of data collection and interpretation with both real humans and VHs, we aim to provide a methodological reference and inspiration for further research.

    Affective Brain-Computer Interfaces

    An architecture for emotional facial expressions as social signals

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
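
    As a quick check on the reported statistic, the p-value for F(1,4) = 2.565 can be recomputed from the upper tail of the F distribution; the snippet below is only an illustrative verification, not part of the original analysis.

    # Recompute the p-value for the reported F(1,4) = 2.565 (illustrative check only).
    from scipy.stats import f

    f_value, df_effect, df_error = 2.565, 1, 4
    p_value = f.sf(f_value, df_effect, df_error)  # upper-tail probability of the F distribution
    print(f"p = {p_value:.3f}")  # ~0.185, matching the reported value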

    A meta-analysis of the uncanny valley's independent and dependent variables

    The uncanny valley (UV) effect is a negative affective reaction to human-looking artificial entities. It hinders comfortable, trust-based interactions with android robots and virtual characters. Despite extensive research, a consensus has not formed on its theoretical basis or methodologies. We conducted a meta-analysis to assess operationalizations of human likeness (independent variable) and the UV effect (dependent variable). Of 468 studies, 72 met the inclusion criteria. These studies employed 10 different stimulus creation techniques, 39 affect measures, and 14 indirect measures. Based on 247 effect sizes, a three-level meta-analysis model revealed the UV effect had a large effect size, Hedges' g = 1.01 [0.80, 1.22]. A mixed-effects meta-regression model with creation technique as the moderator variable revealed face distortion produced the largest effect size, g = 1.46 [0.69, 2.24], followed by distinct entities, g = 1.20 [1.02, 1.38], realism render, g = 0.99 [0.62, 1.36], and morphing, g = 0.94 [0.64, 1.24]. Affective indices producing the largest effects were threatening, likable, aesthetics, familiarity, and eeriness, and indirect measures were dislike frequency, categorization reaction time, like frequency, avoidance, and viewing duration. This meta-analysis, the first on the UV effect, provides a methodological foundation and design principles for future research.
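
    For reference, the Hedges' g values reported above are Cohen's d corrected for small-sample bias. The abstract does not give the computation, so the snippet below illustrates the standard formula for two independent groups with made-up numbers; it is not the authors' analysis code.

    # Hedges' g for two independent groups (generic formula, illustrative only).
    import math

    def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
        # Pooled standard deviation across both groups.
        pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        d = (mean1 - mean2) / pooled_sd          # Cohen's d
        j = 1 - 3 / (4 * (n1 + n2) - 9)          # small-sample bias correction
        return d * j

    # Hypothetical example: eeriness ratings for distorted vs. undistorted faces.
    print(round(hedges_g(5.2, 1.1, 30, 3.9, 1.2, 30), 2))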

    Towards Computer-Assisted Regulation of Emotions

    Emotions are a central and inseparable part of human action, thinking, and interaction between individuals. Emotions lay the foundation for meaningful, functional, and effective activity. Sometimes, however, the tone or intensity of an emotion can be disadvantageous to a person's goals and well-being. In such cases, skilful emotion regulation can help in achieving a healthy and successful life. The aim of this dissertation was to lay a foundation for future computers that help regulate emotions. The emotional intelligence of computers has so far been developed in two areas: measuring human emotional reactions and generating computer-produced emotional expressions. The latest technologies already allow computers to recognize and imitate human emotional expressions very accurately. In this dissertation, pressure sensors installed in an office chair were used to unobtrusively detect changes in body movement: participants leaned toward the computer characters presented to them. The facial expressions and bodily proximity displayed by the computer characters significantly affected participants' experiences of emotion and attention, as well as the activity of the heart, the skin's sweat glands, and the facial muscles. The results show that artificial emotional expressions can be effective in regulating a person's experiences and bodily activity. Finally, the dissertation developed an interactive setup in which automatic monitoring of emotional expressions was coupled to the control of the computer's social expressions. Participants were able to regulate their immediate physiological reactions and emotional experiences by making voluntary facial expressions (for example, deliberately smiling) at an approaching computer character. The results of the dissertation can be applied broadly, for example in the design of new kinds of computers that better support natural human ways of interacting.

    Emotions are intimately connected with our lives. They are essential in motivating behaviour, for reasoning effectively, and in facilitating interactions with other people. Consequently, the ability to regulate the tone and intensity of emotions is important for leading a life of success and well-being. Intelligent computer perception of human emotions and effective expression of virtual emotions provide a basis for assisting emotion regulation with technology. State-of-the-art technologies already allow computers to recognize and imitate human social and emotional cues accurately and in great detail. For example, in the present work a regular-looking office chair was used to covertly measure human body movement responses to artificial expressions of proximity and facial cues. In general, such artificial cues from visual agents were found to significantly affect heart, sweat gland, and facial muscle activities, as well as subjective experiences of emotion and attention. The perceptual and expressive capabilities were combined in a setup where a person regulated her or his more spontaneous reactions by either smiling or frowning voluntarily at a virtual humanlike character. These results highlight the potential of future emotion-sensitive technologies for creating supportive and even healthy interactions between humans and computers.
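
    The interactive setup described above amounts to a closed loop in which the user's voluntary facial expressions steer the virtual character's behaviour. A minimal sketch of such a loop follows; the sensor stub and the character-control methods are hypothetical placeholders, not an interface from the dissertation.

    # Minimal illustrative sketch of an expression-driven feedback loop (placeholders only).
    import random
    import time

    def read_facial_activity():
        # Placeholder: return (smile, frown) activations in [0, 1].
        # Replace with a real EMG- or camera-based expression detector.
        return random.random(), random.random()

    class VirtualCharacter:
        def __init__(self, distance=2.0):
            self.distance = distance  # metres from the viewer

        def step_towards(self, amount):
            self.distance = max(0.5, self.distance - amount)

        def step_away(self, amount):
            self.distance = min(4.0, self.distance + amount)

    def regulation_loop(character, duration_s=10.0, dt=0.1):
        # If the user smiles, the character approaches; if they frown, it retreats.
        start = time.time()
        while time.time() - start < duration_s:
            smile, frown = read_facial_activity()
            if smile > 0.5:
                character.step_towards(0.05)
            elif frown > 0.5:
                character.step_away(0.05)
            time.sleep(dt)

    regulation_loop(VirtualCharacter())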