
    Multispace behavioral model for face-based affective social agents

    This paper describes a behavioral model for affective social agents based on three independent but interacting parameter spaces: knowledge, personality, and mood. These spaces control a lower-level geometry space that provides parameters at the facial feature level. Personality and mood use findings in behavioral psychology to relate the perception of personality types and emotional states to facial actions and expressions through two-dimensional models of personality and emotion. Knowledge encapsulates the tasks to be performed and the decision-making process using a specially designed XML-based language. While the geometry space provides an MPEG-4-compatible set of parameters for low-level control, the behavioral extensions available through the three spaces provide flexible means of designing complicated personality types, facial expressions, and dynamic interactive scenarios.
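    The three-space architecture described above could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the parameter names and weights are invented, and only the personality and mood spaces are modeled (the knowledge space, with its XML-based language, is omitted).

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class PersonalitySpace:
        # Two-dimensional personality model (axes assumed, per the abstract)
        dominance: float = 0.0
        affiliation: float = 0.0

    @dataclass
    class MoodSpace:
        # Two-dimensional emotion model (valence/arousal assumed)
        valence: float = 0.0
        arousal: float = 0.0

    @dataclass
    class AgentState:
        personality: PersonalitySpace = field(default_factory=PersonalitySpace)
        mood: MoodSpace = field(default_factory=MoodSpace)

        def geometry_params(self) -> dict:
            """Map the higher-level spaces down to low-level facial parameters
            (names loosely styled after MPEG-4 FAPs; weights are illustrative)."""
            return {
                "head_tilt": 0.5 * self.personality.dominance,
                "eyebrow_raise": 0.3 * self.mood.arousal,
                "smile": max(0.0, self.mood.valence + 0.2 * self.personality.affiliation),
            }

    agent = AgentState(PersonalitySpace(dominance=0.8, affiliation=0.4),
                       MoodSpace(valence=0.6, arousal=0.2))
    params = agent.geometry_params()
    ```

    The point of the layering is that designers set a handful of high-level values (a personality point, a mood point) and the geometry space derives the many facial-feature parameters from them.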

    Facial actions as visual cues for personality

    What visual cues do human viewers use to assign personality characteristics to animated characters? While most facial animation systems associate facial actions to limited emotional states or speech content, the present paper explores the above question by relating the perception of personality to a wide variety of facial actions (e.g., head tilting/turning, and eyebrow raising) and emotional expressions (e.g., smiles and frowns). Animated characters exhibiting these actions and expressions were presented to human viewers in brief videos. Human viewers rated the personalities of these characters using a well-standardized adjective rating system borrowed from the psychological literature. These personality descriptors are organized in a multidimensional space that is based on the orthogonal dimensions of Desire for Affiliation and Displays of Social Dominance. The main result of the personality rating data was that human viewers associated individual facial actions and emotional expressions with specific personality characteristics very reliably. In particular, dynamic facial actions such as head tilting and gaze aversion tended to spread ratings along the Dominance dimension, whereas facial expressions of contempt and smiling tended to spread ratings along the Affiliation dimension. Furthermore, increasing the frequency and intensity of the head actions increased the perceived Social Dominance of the characters. We interpret these results as pointing to a reliable link between animated facial actions/expressions and the personality attributions they evoke in human viewers. The paper shows how these findings are used in our facial animation system to create perceptually valid personality profiles based on Dominance and Affiliation as two parameters that control the facial actions of autonomous animated characters.
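    The Dominance/Affiliation mapping described above could be sketched as a simple linear model. The coefficients and signs here are invented for illustration; the abstract only establishes which actions load on which dimension (head actions on Dominance, smiling and contempt on Affiliation), not the actual weights.

    ```python
    def personality_profile(head_action_freq: float, smile_intensity: float,
                            gaze_aversion: float, contempt: float):
        """Illustrative linear mapping from facial-action parameters to a point
        in the Dominance/Affiliation space. Coefficients are hypothetical."""
        # Head actions spread ratings along Dominance; more frequent/intense
        # head actions increase perceived Social Dominance (per the abstract).
        dominance = 0.6 * head_action_freq - 0.4 * gaze_aversion
        # Smiling vs. contempt spread ratings along Affiliation.
        affiliation = 0.7 * smile_intensity - 0.5 * contempt
        return dominance, affiliation

    d, a = personality_profile(head_action_freq=1.0, smile_intensity=0.5,
                               gaze_aversion=0.25, contempt=0.0)
    ```

    In the reverse direction, as the abstract describes, the animation system would take a target (Dominance, Affiliation) pair and select facial actions whose perceptual loadings produce that profile.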

    Critical Analysis on Multimodal Emotion Recognition in Meeting the Requirements for Next Generation Human Computer Interactions

    Emotion recognition is a gap in today’s Human Computer Interaction (HCI). The inability of these systems to effectively recognize, express, and respond to emotion limits their human interaction; they still lack adequate sensitivity to human emotions. Multimodal emotion recognition attempts to address this gap by measuring emotional state from gestures, facial expressions, acoustic characteristics, and textual expressions. Multimodal data acquired from video, audio, sensors, etc. are combined using various techniques to classify basic human emotions like happiness, joy, neutrality, surprise, sadness, disgust, fear, and anger. This work presents a critical analysis of multimodal emotion recognition approaches in meeting the requirements of next generation human computer interactions. The study first explores and defines the requirements of next generation human computer interactions and then critically analyzes the existing multimodal emotion recognition approaches in addressing those requirements.
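    One common way to combine modalities, mentioned generically above, is decision-level (late) fusion: each modality model emits a class-probability vector, and the vectors are averaged before taking the argmax. The sketch below assumes the per-modality classifiers already exist upstream; the probability values are invented.

    ```python
    import numpy as np

    EMOTIONS = ["happiness", "surprise", "neutrality", "sadness",
                "disgust", "fear", "anger"]

    def late_fusion(modality_probs, weights=None):
        """Decision-level fusion sketch: combine per-modality class-probability
        vectors by a (possibly weighted) average and pick the argmax emotion."""
        probs = np.stack(modality_probs)
        if weights is None:
            weights = [1.0 / len(modality_probs)] * len(modality_probs)
        fused = np.average(probs, axis=0, weights=weights)
        return EMOTIONS[int(np.argmax(fused))]

    # Hypothetical outputs from a face model and an audio model:
    face_probs = np.array([0.60, 0.10, 0.10, 0.05, 0.05, 0.05, 0.05])
    audio_probs = np.array([0.20, 0.10, 0.10, 0.10, 0.10, 0.10, 0.30])
    label = late_fusion([face_probs, audio_probs])
    ```

    Feature-level (early) fusion, which concatenates modality features before a single classifier, is the usual alternative; surveys such as this one typically compare both.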

    The influence of diatopic variation and first language on the perception of emotions in the Basque language

    The aim of this study is to determine whether diatopic variation and first language influence the perception of emotions in the Basque language. To this end, an oral corpus was recorded in which the three basic emotions were simulated by young women from the Basque Country. The corpus consists of 24 oral recordings of the same semantically neutral sentence. From this corpus, a test was designed for the study, in which 349 young participants from the seven territories of the Basque Country, divided by first language into group A (Basque) and group B (Spanish or French), had to identify the basic emotion conveyed by each recording. According to the test results, sadness was the best-perceived emotion and anger the worst, owing to the greater phonetic proximity between the two. With regard to the participants’ first language, statistically significant differences were observed, as group A achieved a higher percentage of correct answers than group B. By origin, participants from the continental Basque Country scored highest and those from Navarre lowest; however, the differences for this variable were not statistically significant.
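    The core comparison in such a perception test is the correct-answer rate per listener group. A minimal sketch of that scoring, with invented response data (the study's actual responses are not reproduced here):

    ```python
    from collections import Counter

    def accuracy_by_group(responses):
        """responses: iterable of (group, intended_emotion, chosen_emotion).
        Returns the proportion of correct identifications per group, as in the
        group A (Basque L1) vs. group B (Spanish/French L1) comparison."""
        correct = Counter()
        total = Counter()
        for group, intended, chosen in responses:
            total[group] += 1
            correct[group] += intended == chosen  # True counts as 1
        return {g: correct[g] / total[g] for g in total}

    # Hypothetical responses, not the study's data:
    data = [("A", "sadness", "sadness"), ("A", "anger", "sadness"),
            ("A", "joy", "joy"), ("A", "sadness", "sadness"),
            ("B", "sadness", "sadness"), ("B", "anger", "joy"),
            ("B", "joy", "sadness"), ("B", "sadness", "sadness")]
    rates = accuracy_by_group(data)
    ```

    A significance test (e.g. chi-squared on the correct/incorrect counts per group) would then decide whether the observed gap, like the first-language effect the study reports, is statistically reliable.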

    A survey on perceived speaker traits: personality, likability, pathology, and the first challenge

    The INTERSPEECH 2012 Speaker Trait Challenge aimed at a unified test-bed for perceived speaker traits – the first challenge of this kind: personality in the five OCEAN personality dimensions, likability of speakers, and intelligibility of pathologic speakers. In the present article, we give a brief overview of the state-of-the-art in these three fields of research and describe the three sub-challenges in terms of the challenge conditions, the baseline results provided by the organisers, and a new openSMILE feature set, which has been used for computing the baselines and which has been provided to the participants. Furthermore, we summarise the approaches and the results presented by the participants to show the various techniques that are currently applied to solve these classification tasks.
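    Feature sets like the openSMILE set mentioned above are typically built by applying statistical functionals to frame-level low-level descriptors (pitch, energy, spectral measures). The toy sketch below shows that pattern on a single descriptor; the real challenge set contains thousands of such functional-over-descriptor features, and these five are only illustrative.

    ```python
    import numpy as np

    def functionals(lld: np.ndarray) -> np.ndarray:
        """Summarise one frame-level low-level descriptor (e.g. per-frame pitch
        or energy) with a few statistical functionals, openSMILE-style."""
        return np.array([
            lld.mean(),              # arithmetic mean
            lld.std(),               # (population) standard deviation
            lld.min(),               # minimum over frames
            lld.max(),               # maximum over frames
            np.percentile(lld, 50),  # median
        ])

    # Hypothetical per-frame energy values for one utterance:
    feats = functionals(np.array([1.0, 2.0, 3.0, 4.0]))
    ```

    The resulting fixed-length vector per utterance is what the challenge baselines feed to a standard classifier or regressor, regardless of utterance duration.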

    Pragmatics and Prosody

    Most of the papers collected in this book resulted from presentations and discussions undertaken during the V Lablita Workshop that took place at the Federal University of Minas Gerais, Brazil, on August 23-25, 2011. The workshop was held in conjunction with the II Brazilian Seminar on Pragmatics and Prosody. The guiding themes for the joint event were illocution, modality, attitude, information patterning and speech annotation. Thus, all papers presented here are concerned with theoretical and methodological issues related to the study of speech. Among the papers in this volume, there are different theoretical orientations, which are mirrored through the methodological designs of the studies pursued. However, all papers are based on the analysis of actual speech, be it from corpora or from experimental contexts trying to emulate natural speech. Prosody is the keyword that comes out of all the papers in this publication, which indicates the high standing of this category in relation to studies that are geared towards the understanding of the major elements that are constitutive of the structuring of speech.

    Perceptual aspects of voice-source parameters

    xii + 114 pages; 24 cm