
    Facial Emotion Recognition Using Context Based Multimodal Approach

    Emotions play a crucial role in person-to-person interaction. In recent years, there has been growing interest in improving all aspects of interaction between humans and computers, and the ability to understand human emotions, especially by observing facial expressions, is desirable for the computer in several applications. This paper explores ways of human-computer interaction that enable the computer to be more aware of the user’s emotional expressions. We present an approach for emotion recognition from facial expression and from hand and body posture. Our multimodal emotion recognition system uses two separate models, one for facial expression recognition and one for hand and body posture recognition, and then combines the results of both classifiers using a third classifier that outputs the resulting emotion. The multimodal system gives more accurate results than a single-modality or bimodal system.
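    A minimal late-fusion sketch of the scheme this abstract describes, assuming two unimodal classifiers (facial expression and hand/body posture) whose per-class probabilities are combined by a third classifier. The feature dimensions, model choices, and synthetic data below are illustrative assumptions, not the paper's implementation.

```python
# Sketch: two unimodal classifiers fused by a third classifier (late fusion).
# Features, models, and data are placeholders, not the paper's actual setup.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_emotions = 300, 6                       # six emotion classes assumed
face_feats = rng.normal(size=(n, 64))        # placeholder facial-expression features
body_feats = rng.normal(size=(n, 32))        # placeholder hand/body-posture features
labels = rng.integers(0, n_emotions, size=n)

# One classifier per modality.
face_clf = SVC(probability=True).fit(face_feats, labels)
body_clf = SVC(probability=True).fit(body_feats, labels)

# Third classifier fuses the per-class probabilities of both modalities.
# (A real system would train the fusion stage on held-out predictions.)
fusion_inputs = np.hstack([face_clf.predict_proba(face_feats),
                           body_clf.predict_proba(body_feats)])
fusion_clf = LogisticRegression(max_iter=1000).fit(fusion_inputs, labels)

def predict_emotion(face_x, body_x):
    """Combine both modalities into a single predicted emotion label."""
    z = np.hstack([face_clf.predict_proba(face_x),
                   body_clf.predict_proba(body_x)])
    return fusion_clf.predict(z)

print(predict_emotion(face_feats[:1], body_feats[:1]))
```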

    Bimodal face and body gesture database for automatic analysis of human nonverbal affective behavior

    To be able to develop and test robust affective multimodal systems, researchers need access to novel databases containing representative samples of human multimodal expressive behavior. The creation of such databases requires a major effort in the definition of representative behaviors, the choice of expressive modalities, and the collection and labeling of large amounts of data. At present, public databases exist only for single expressive modalities such as facial expression analysis. A number of gesture databases of static and dynamic hand postures and dynamic hand gestures also exist. However, there is no readily available database combining affective face and body information in a genuine bimodal manner. Accordingly, in this paper we present a bimodal database, recorded simultaneously by two high-resolution cameras, for use in the automatic analysis of human nonverbal affective behavior. © 2006 IEEE

    A Database of Full Body Virtual Interactions Annotated with Expressivity Scores

    Recent technologies enable the exploitation of full-body expressions in applications such as interactive arts, but they remain limited in terms of subtle dyadic interaction patterns. Our project aims at full-body expressive interactions between a user and an autonomous virtual agent. Currently available databases do not contain full-body expressivity and interaction patterns via avatars. In this paper, we describe a protocol defined to collect a database for studying expressive full-body dyadic interactions. We detail the coding scheme for manually annotating the collected videos, and we provide reliability measures for the global annotations of expressivity and interaction.
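    As a rough illustration of the kind of reliability check such manual annotations call for, the sketch below computes inter-annotator agreement with Cohen's kappa, assuming two annotators assigning categorical expressivity codes to the same video segments; the database's actual coding scheme and agreement statistic may differ.

```python
# Sketch: chance-corrected agreement between two hypothetical annotators.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["high", "low", "medium", "high", "low", "medium", "high", "low"]
annotator_b = ["high", "low", "medium", "medium", "low", "medium", "high", "high"]

# Cohen's kappa corrects raw percent agreement for agreement expected by chance.
print(cohen_kappa_score(annotator_a, annotator_b))
```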

    Towards affective computing that works for everyone

    Missing diversity, equity, and inclusion elements in affective computing datasets directly affect the accuracy and fairness of emotion recognition algorithms across different groups. A literature review reveals how affective computing systems may work differently for different groups due to, for instance, mental health conditions impacting facial expressions and speech, or age-related changes in facial appearance and health. Our work analyzes existing affective computing datasets and highlights a disconcerting lack of diversity in current affective computing datasets regarding race, sex/gender, age, and (mental) health representation. By emphasizing the need for more inclusive sampling strategies and standardized documentation of demographic factors in datasets, this paper provides recommendations and calls for greater attention to inclusivity and consideration of societal consequences in affective computing research to promote ethical and accurate outcomes in this emerging field. Comment: 8 pages, 2023 11th International Conference on Affective Computing and Intelligent Interaction (ACII)

    Creation of Emotional Movement Stimuli Using Motion Capture

    Movements that express emotions such as joy and sadness support communication, but as far as could be determined, no video stimuli for studying them exist in Japan. Because emotion recognition from body movement may be subject to cultural differences, this study aimed to create emotional movement stimuli performed by Japanese actors. Emotion recognition from body movement may also rely on a different cognitive mechanism from facial emotion recognition, so it must be analyzed separately from the face and other non-movement information. To this end, motion capture was used to create stimuli from which facial and personal information was removed, rendering each performer as a frame-and-point model. Healthy men and women in their twenties rated the emotional intensity of the stimuli via an internet survey to examine their validity. For every movement, the emotion intended during creation received the highest rating. Some issues for future work were also identified: the fear gestures were also rated high on surprise, and the anger gestures high on disgust. The availability of the stimuli and related future issues are discussed.

    What does touch tell us about emotions in touchscreen-based gameplay?

    Nowadays, more and more people play games on touch-screen mobile phones. This raises a very interesting question: does touch behaviour reflect the player’s emotional state? If so, touch would be a valuable evaluation indicator not only for game designers, but also for real-time personalization of the game experience. Psychology studies on acted touch behaviour show the existence of discriminative affective profiles. In this paper, finger-stroke features during gameplay on an iPod were extracted and their discriminative power analysed. Based on touch behaviour, machine learning algorithms were used to build systems for automatically discriminating between four emotional states (Excited, Relaxed, Frustrated, Bored), two levels of arousal, and two levels of valence. The results were very promising, reaching between 69% and 77% correct discrimination between the four emotional states. Higher results (~89%) were obtained for discriminating between two levels of arousal and between two levels of valence.
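    A hedged sketch of the kind of pipeline this abstract describes: extracting simple finger-stroke features from touch logs and cross-validating a classifier over the four affective states. The feature set, data format, and classifier choice below are assumptions for illustration, not the paper's exact method.

```python
# Sketch: finger-stroke feature extraction plus a cross-validated classifier.
# Features, data layout, and model are illustrative assumptions only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

STATES = ["Excited", "Relaxed", "Frustrated", "Bored"]

def stroke_features(stroke):
    """stroke: array of (t, x, y, pressure) samples for one finger stroke."""
    t, x, y, p = stroke[:, 0], stroke[:, 1], stroke[:, 2], stroke[:, 3]
    duration = t[-1] - t[0]
    path_len = np.sum(np.hypot(np.diff(x), np.diff(y)))
    return np.array([duration,
                     path_len,
                     path_len / max(duration, 1e-6),  # mean stroke speed
                     p.mean(), p.max()])              # pressure statistics

# Synthetic stand-in data: 200 strokes of 50 touch samples each.
rng = np.random.default_rng(1)
strokes = [np.column_stack([np.sort(rng.uniform(0, 1, 50)),   # timestamps
                            rng.uniform(0, 320, 50),          # x coordinates
                            rng.uniform(0, 480, 50),          # y coordinates
                            rng.uniform(0, 1, 50)])           # pressure
           for _ in range(200)]
X = np.array([stroke_features(s) for s in strokes])
y = rng.integers(0, len(STATES), size=len(strokes))

# Cross-validated accuracy of a simple SVM on the stroke features.
print(cross_val_score(SVC(), X, y, cv=5).mean())
```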

    Affective Human-Humanoid Interaction Through Cognitive Architecture
