
    A Review of Dynamic Datasets for Facial Expression Research

    Temporal dynamics have been increasingly recognized as an important component of facial expressions. With the need for appropriate stimuli in research and application, a range of databases of dynamic facial stimuli has been developed. The present article reviews the existing corpora and describes the key dimensions and properties of the available sets. This includes a discussion of conceptual features concerning thematic issues in dataset construction, as well as practical features of applied interest for stimulus usage. To identify the most influential sets, we further examine their citation rates and usage frequencies in existing studies. General limitations and implications for emotion research are noted, and future directions for stimulus generation are outlined.

    Interpreting human and avatar facial expressions

    This paper investigates the impact of contradictory emotional content on people's ability to identify the emotion expressed on avatar faces as compared to human faces. Participants saw emotional faces (human or avatar) coupled with emotional texts. The face and text could either display the same or different emotions. Participants were asked to identify the emotion on the face and in the text. While they correctly identified the emotion on human faces more often than on avatar faces, this difference was mostly due to the neutral avatar face. People were no better at identifying a facial expression when the emotional information from the two sources was the same than when it differed, regardless of whether the expression was displayed on a human face or on an avatar face. Finally, people were more sensitive to context when trying to identify the emotion in the accompanying text.

    Human recognition of basic emotions from posed and animated dynamic facial expressions

    Facial expressions are crucial for social communication, especially because they make it possible to express and perceive unspoken emotional and mental states. For example, neurodevelopmental disorders with social communication deficits, such as Asperger Syndrome (AS), often involve difficulties in interpreting emotional states from the facial expressions of others. Rather little is known of the role of dynamics in recognizing emotions from faces. Better recognition of dynamic rather than static facial expressions of six basic emotions has been reported with animated faces; however, this result has not been confirmed reliably with real human faces. This thesis evaluates the role of dynamics in recognizing basic expressions from animated and human faces. With human faces, the further interaction between dynamics and the effect of removing fine details by low-pass filtering (blurring) is studied in adult individuals with and without AS. The results confirmed that dynamics facilitates the recognition of emotional facial expressions. This effect, however, was apparent only with the facial animation stimuli lacking detailed static facial features and other emotional cues, and with blurred human faces. Some dynamic emotional animations were recognized drastically better than static ones. With basic expressions posed by human actors, the advantage of dynamic over static displays increased as a function of the blur level. Participants with and without AS performed similarly in recognizing basic emotions from original non-filtered and from dynamic vs. static facial expressions, suggesting that AS involves intact recognition of simple emotional states and movement from faces. Participants with AS were affected more by the removal of fine details than participants without AS. This result supports a "weak central coherence" account suggesting that AS and other autistic spectrum disorders are characterized by general perceptual difficulties in processing global vs. local level features.
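
    As a rough illustration of the blurring manipulation described above, the following Python sketch applies a Gaussian low-pass filter to a face image at several blur levels. The abstract does not specify the filter type or cutoff values used in the thesis; the Gaussian blur, the file name, and the radii below are assumptions chosen for illustration only.

        # Illustrative low-pass filtering (blurring) of a face stimulus at several levels.
        # Hypothetical input file and radii; the thesis's actual filter parameters are not
        # given in the abstract, so a Gaussian blur is used here as a generic stand-in.
        from PIL import Image, ImageFilter

        face = Image.open("face_stimulus.png").convert("L")  # grayscale face image

        for radius in (0, 2, 4, 8):  # 0 = original; larger radius = stronger blur
            blurred = face.filter(ImageFilter.GaussianBlur(radius))
            blurred.save(f"face_blur_r{radius}.png")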

    Factors affecting the utility of emotional stimuli in research

    Emotional stimuli such as images, words, or video clips are often used in studies researching emotion. These stimuli are provided to the research community in sets accompanied by normative rating data indicating the emotional value of each stimulus. With emotional stimulus sets continuously being published, the immense number of available sets complicates the task for researchers looking for suitable stimuli. Therefore, a systematic review was conducted to find all existing emotional stimulus sets that are freely available or available upon request. The result was the creation of the KAPODI-database, containing 364 sets and presenting a comprehensive list of set characteristics. A searchable online version allows researchers to find and compare individual sets, as well as to add newly published sets. Previous research has shown that factors such as assessors’ age, gender, or ethnicity influence stimulus perception. Nevertheless, researchers often rely on the provided normative rating data without verifying its validity for their own participant sample. Additionally, findings regarding the effect of emotions on memory are inconsistent, with sometimes enhancing and sometimes detrimental effects. A possible reason for these contradictory results could be factors influencing stimulus validity that have not yet been investigated. Therefore, two additional studies were conducted. The first study analysed these possible factors by investigating the validity of stimuli in relation to the assessed dimension, namely valence (negative to positive), arousal (calming to exciting), and dominance (no dominance to high dominance), as well as different dimension categories (e.g., low/medium/high valence, arousal, and dominance, respectively) and standard deviation (SD) categories (low/medium/high), both for images and words. The second study investigated sensory processing sensitivity (SPS), a factor known to correlate positively with the depth of processing of emotional content. In this latter experiment, perception as well as episodic recognition of emotional image stimuli were assessed and analysed in relation to level of SPS. The two experimental studies suggest that only valence ratings are reliable for both image and word stimuli, whereas arousal, dominance, dimension category, and SD category are not. Moreover, perception of emotional stimuli differs between individuals with low vs. high SPS for low-valence stimuli only, with high-SPS individuals perceiving these stimuli as more negative. Finally, recognition of stimuli increased with increasing arousal and decreased with increasing valence. Together, these results urge researchers to validate arousal and dominance ratings of selected stimuli for their participant sample before conducting a study, and to consider participants’ sensitivity if the study uses negative (low-valence) stimuli.
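
    As an illustration of how normative rating data of the kind discussed above might be used, the following Python sketch splits valence, arousal, and dominance ratings into low/medium/high categories and selects stimuli with consistent (low-SD) ratings. The file name and column names are hypothetical and do not reflect the actual KAPODI-database schema.

        # Sketch of selecting stimuli by normative ratings along the valence, arousal,
        # and dominance dimensions. Column names and cut-offs are hypothetical.
        import pandas as pd

        ratings = pd.read_csv("stimulus_ratings.csv")  # one row per stimulus

        # Split each dimension into low/medium/high thirds; do the same for rating spread.
        for dim in ("valence", "arousal", "dominance"):
            ratings[f"{dim}_category"] = pd.qcut(ratings[dim], 3, labels=["low", "medium", "high"])
        ratings["sd_category"] = pd.qcut(ratings["valence_sd"], 3, labels=["low", "medium", "high"])

        # Example selection: clearly negative, calming stimuli with consistent ratings.
        selection = ratings[
            (ratings["valence_category"] == "low")
            & (ratings["arousal_category"] == "low")
            & (ratings["sd_category"] == "low")
        ]
        print(selection[["stimulus_id", "valence", "arousal", "dominance"]].head())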

    The Properties of DaFEx, a Database of Kinetic Facial Expressions

    In this paper we present an evaluation study for DaFEx (Database of Facial Expressions), a database created with the purpose of providing a benchmark for the evaluation of the facial expressivity of Embodied Conversational Agents (ECAs). DaFEx consists of 1008 short videos containing emotional facial expressions of Ekman’s six basic emotions plus the neutral expression. The facial expressions were recorded by 8 professional actors (male and female) in two acting conditions (“utterance” and “non-utterance”) and at 3 intensity levels (high, medium, low). The properties of DaFEx were studied by having 80 subjects classify the emotion expressed in the videos. We tested the effects of the intensity level, of the articulatory movements due to speech, and of the actors’ and subjects’ gender on classification accuracy. We also studied the way errors distribute across confusion classes. The results are summarized in this work.
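
    A minimal Python sketch of the kind of analysis described above: per-emotion recognition accuracy broken down by intensity level, plus a confusion matrix over the response categories. The CSV layout (one row per subject-video trial) and column names are assumptions, not the published DaFEx data format.

        # Hypothetical response file with columns: subject, video, intended, chosen, intensity.
        import pandas as pd

        responses = pd.read_csv("dafex_responses.csv")

        # Recognition accuracy per intended emotion and intensity level.
        responses["correct"] = responses["intended"] == responses["chosen"]
        print(responses.groupby(["intended", "intensity"])["correct"].mean())

        # Confusion matrix: how often each intended emotion was labelled as each response.
        confusion = pd.crosstab(responses["intended"], responses["chosen"], normalize="index")
        print(confusion.round(2))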