8 research outputs found

    Robust continuous prediction of human emotions using multiscale dynamic cues

    Designing systems able to interact with humans in a natural manner is a complex and far from solved problem. A key aspect of natural interaction is the ability to understand and appropriately respond to human emotions. This paper details our response to the Audio/Visual Emotion Challenge (AVEC’12), whose goal is to continuously predict four affective signals describing human emotions (namely valence, arousal, expectancy, and power). The proposed method uses log-magnitude Fourier spectra to extract multiscale dynamic descriptions of signals characterizing global and local face appearance as well as head movements and voice. We perform kernel regression with very few representative samples, selected via supervised weighted-distance-based clustering, which leads to high generalization power. For feature selection, we introduce a new correlation-based measure that takes into account a possible delay between the labels and the data and significantly increases robustness. We also propose a particularly fast regressor-level fusion framework to merge systems based on different modalities. Experiments have proven the efficiency of each key point of the proposed method, and we obtain very promising results.
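
    The delay-aware feature selection described above can be pictured with a short sketch (not the authors' implementation): each feature track is scored by its largest absolute Pearson correlation with the label track over a window of candidate delays, and the top-scoring features are kept. The function names, delay window, and number of retained features below are illustrative assumptions.

```python
import numpy as np

def delayed_correlation(feature, labels, max_delay):
    """Max absolute Pearson correlation between a feature track and the label
    track over a range of frame delays. Both inputs are 1-D arrays of equal length."""
    best = 0.0
    n = len(feature)
    for d in range(-max_delay, max_delay + 1):
        if d >= 0:
            x, y = feature[d:], labels[:n - d]
        else:
            x, y = feature[:n + d], labels[-d:]
        if len(x) > 1 and x.std() > 0 and y.std() > 0:
            best = max(best, abs(np.corrcoef(x, y)[0, 1]))
    return best

def select_features(X, y, max_delay=50, k=100):
    """Rank feature columns of X by delay-tolerant correlation with y
    and keep the indices of the k strongest ones."""
    scores = np.array([delayed_correlation(X[:, j], y, max_delay)
                       for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]
```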

    ICMI'12: Proceedings of the ACM SIGCHI 14th International Conference on Multimodal Interaction


    Local Zernike Moment Representation for Facial Affect Recognition

    In this paper, we propose to use local Zernike Moments (ZMs) for facial affect recognition and introduce a representation scheme based on performing non-linear encoding on ZMs via quantization. Local ZMs provide a useful and compact description of image discontinuities and texture. We demonstrate the use of this ZM-based representation for posed and discrete as well as naturalistic and continuous affect recognition on standard datasets, and show that ZM-based representations outperform well-established alternative approaches for both tasks. To the best of our knowledge, the performance we achieved on the CK+ dataset is superior to all results reported to date.
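
    A minimal sketch of the idea, not the paper's exact pipeline: Zernike moment magnitudes are computed on small non-overlapping patches and then coarsely quantized into a local descriptor. The moment orders, patch size, per-patch normalization, and the simple uniform quantizer used here are assumptions for illustration only.

```python
import numpy as np
from math import factorial

def zernike_basis(n, m, size):
    """Complex Zernike polynomial V_nm sampled on a size x size grid
    inscribed in the unit disc (values outside the disc are zeroed)."""
    ys, xs = np.mgrid[:size, :size]
    x = (2 * xs - size + 1) / (size - 1)
    y = (2 * ys - size + 1) / (size - 1)
    rho = np.sqrt(x**2 + y**2)
    theta = np.arctan2(y, x)
    R = np.zeros_like(rho)
    for s in range((n - abs(m)) // 2 + 1):
        c = ((-1)**s * factorial(n - s) /
             (factorial(s) * factorial((n + abs(m)) // 2 - s)
              * factorial((n - abs(m)) // 2 - s)))
        R += c * rho**(n - 2 * s)
    V = R * np.exp(1j * m * theta)
    V[rho > 1.0] = 0.0
    return V

def local_zernike_features(image, patch=8,
                           orders=((2, 0), (2, 2), (3, 1), (3, 3)), bins=8):
    """Split a 2-D grayscale image into non-overlapping patches, compute ZM
    magnitudes per patch, quantize them (crude uniform quantizer, illustrative
    only), and concatenate the codes into one descriptor."""
    h, w = image.shape
    bases = {nm: zernike_basis(*nm, patch) for nm in orders}
    feats = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            block = image[i:i + patch, j:j + patch].astype(float)
            mags = [abs((block * np.conj(V)).sum()) for V in bases.values()]
            feats.append(np.digitize(mags, np.linspace(0, max(mags) + 1e-9, bins)))
    return np.concatenate(feats)

# Example: descriptor for a 64x64 grayscale face crop (random data here).
desc = local_zernike_features(np.random.rand(64, 64))
```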

    Deep Siamese Neural Networks for Facial Expression Recognition in the Wild

    The variation of facial images in the wild due to head pose, illumination, and occlusion can significantly affect Facial Expression Recognition (FER) performance. Moreover, between-subject variation introduced by age, gender, ethnic background, and identity can also influence FER performance. This Ph.D. dissertation presents a novel algorithm for end-to-end facial expression recognition, valence and arousal estimation, and visual object matching based on deep Siamese Neural Networks, designed to handle the extreme variation that exists in facial datasets. In our main Siamese Neural Network for facial expression recognition, the first network represents the classification framework, where we aim to achieve multi-class classification. The second network represents the verification framework, where we use pairwise similarity labels to map images to a feature space in which similar inputs are close to each other and dissimilar inputs are far from each other. The Siamese architecture enables us to obtain powerful discriminative features by taking full advantage of the training batches via our pairing strategy, and by dynamically transferring the learning from a local-adaptive verification space into a classification embedding space. These steps enable the algorithm to learn state-of-the-art features by optimizing the joint identification-verification embedding space. The verification model reduces intra-class variation by minimizing the distance between features extracted from the same identity using different strategies. In contrast, the identification model increases inter-class variation by maximizing the distance between features extracted from different classes. When the network is tuned carefully, we can rely on the powerful discriminative features to generalize to unseen images. Further, we applied the proposed deep Siamese networks to two different challenging tasks in computer vision: valence and arousal estimation, and visual object matching. The empirical results of the valence and arousal Siamese model demonstrate that transferring the learning from the classification space to the regression space enhances the regression task, since each expression occupies a representation within a specified range of valence and arousal. On the other hand, the Siamese model for visual object matching gives better performance, since the classification framework helps to increase the inter-class variation in the verification framework. We evaluated the algorithm using state-of-the-art and challenging datasets such as AffectNet (Mollahosseini et al., 2017), FERA2013 (Goodfellow et al., 2013), categorical EmotioNet (Du et al., 2014), and CIFAR-100 (Krizhevsky et al., 2009). To the best of our knowledge, this technique is the first to create a powerful recognition system by taking advantage of features learned from different objective frameworks. We achieved results comparable to other deep learning models.
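
    The joint identification-verification objective can be sketched as follows in PyTorch; the toy backbone, contrastive margin, and loss weighting are illustrative assumptions, not the dissertation's exact architecture. A shared embedding feeds both a softmax classifier and a contrastive verification term computed on image pairs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseFER(nn.Module):
    """Shared embedding network with a classification head; the verification
    branch reuses the same embedding on image pairs."""
    def __init__(self, num_classes=7, dim=128):
        super().__init__()
        self.backbone = nn.Sequential(  # toy CNN for 1-channel face crops (illustrative)
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(64 * 16, dim))
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, x):
        z = self.backbone(x)
        return z, self.classifier(z)

def joint_loss(model, x1, x2, y1, y2, same, margin=1.0, alpha=0.5):
    """Identification (cross-entropy) plus verification (contrastive) objective.
    `same` is a float tensor that is 1 when a pair shares a label, 0 otherwise."""
    z1, logits1 = model(x1)
    z2, logits2 = model(x2)
    ce = F.cross_entropy(logits1, y1) + F.cross_entropy(logits2, y2)
    d = F.pairwise_distance(z1, z2)
    contrastive = (same * d.pow(2) +
                   (1 - same) * F.relu(margin - d).pow(2)).mean()
    return ce + alpha * contrastive
```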

    Automatic Emotion Recognition: Quantifying Dynamics and Structure in Human Behavior.

    Emotion is a central part of human interaction, one that has a huge influence on its overall tone and outcome. Today's human-centered interactive technology can greatly benefit from automatic emotion recognition, as the extracted affective information can be used to measure, transmit, and respond to user needs. However, developing such systems is challenging due to the complexity of emotional expressions and their dynamics, in terms of the inherent multimodality between audio and visual expressions as well as the mixed factors of modulation that arise when a person speaks. To overcome these challenges, this thesis presents data-driven approaches that can quantify the underlying dynamics in audio-visual affective behavior. The first set of studies lays the foundation and central motivation of this thesis. We discover that it is crucial to model complex non-linear interactions between audio and visual emotion expressions, and that dynamic emotion patterns can be used in emotion recognition. Next, the understanding of the complex characteristics of emotion from the first set of studies leads us to examine multiple sources of modulation in audio-visual affective behavior. Specifically, we focus on how speech modulates facial displays of emotion. We develop a framework that uses speech signals, which alter the temporal dynamics of individual facial regions, to temporally segment and classify facial displays of emotion. Finally, we present methods to discover regions of emotionally salient events in given audio-visual data. We demonstrate that different modalities, such as the upper face, lower face, and speech, express emotion with different timings and time scales, varying for each emotion type. We further extend this idea to another aspect of human behavior: human action events in videos. We show how transition patterns between events can be used for automatically segmenting and classifying action events. Our experimental results on audio-visual datasets show that the proposed systems not only improve performance, but also provide descriptions of how affective behaviors change over time. We conclude this dissertation with future directions that will innovate three main research topics: machine adaptation for personalized technology, human-human interaction assistant systems, and human-centered multimedia content analysis.
    PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133459/1/yelinkim_1.pd
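
    As a rough illustration of using transition patterns between events for segmentation and classification (an HMM/Viterbi-style sketch under assumed inputs, not the thesis's actual model): estimate an event-to-event transition matrix from labeled training sequences, then decode the most likely event sequence from per-frame class scores.

```python
import numpy as np

def learn_transitions(label_seqs, num_states):
    """Estimate an event-to-event transition matrix from labeled sequences
    (lists of integer event ids), with add-one smoothing."""
    T = np.ones((num_states, num_states))
    for seq in label_seqs:
        for prev, cur in zip(seq[:-1], seq[1:]):
            T[prev, cur] += 1
    return T / T.sum(axis=1, keepdims=True)

def viterbi_segment(frame_scores, T):
    """Decode the most likely event sequence from per-frame class probabilities
    (rows = frames, cols = events) and the transition matrix T."""
    logT = np.log(T)
    logE = np.log(frame_scores + 1e-12)
    n, k = logE.shape
    dp = np.full((n, k), -np.inf)
    back = np.zeros((n, k), dtype=int)
    dp[0] = logE[0] - np.log(k)          # uniform prior over events
    for t in range(1, n):
        cand = dp[t - 1][:, None] + logT  # cand[i, j]: come from event i into j
        back[t] = cand.argmax(axis=0)
        dp[t] = cand.max(axis=0) + logE[t]
    path = [int(dp[-1].argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```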

    Affective Speech Recognition

    Speech, as a medium of interaction, carries two different streams of information. Whereas one stream carries explicit messages, the other contains implicit information about the speakers themselves. Affective speech recognition is a set of theories and tools that intend to automate unfolding the part of the implicit stream that has to do with human emotion. Affective speech recognition is applied to human-computer interaction; a machine that is able to recognize human emotion could engage the user in a more effective interaction. This thesis proposes a set of analyses and methodologies that advance automatic recognition of affect from speech. The proposed solution spans two dimensions of the problem: speech signal processing and statistical learning. At the speech signal processing dimension, extraction of speech low-level descriptors is discussed, and a set of descriptors that exploit the spectrum of the signal is proposed, which has been shown to be particularly practical for capturing affective qualities of speech. Moreover, considering the non-stationary property of the speech signal, a measure of dynamicity is further proposed that captures this property by quantifying changes of the signal over time. Furthermore, based on the proposed set of low-level descriptors, it is shown that individual human beings differ in conveying emotions, and that the parts of the spectrum that hold the affective information differ from one person to another. Therefore, the concept of an emotion profile is proposed, which formalizes those differences by taking into account factors such as cultural and gender-specific differences, as well as distinctions that have to do with individual human beings. At the statistical learning dimension, variable selection is performed to identify the speech features that are most imperative to extracting affective information. In doing so, low-level descriptors are distinguished from statistical functionals, and the effectiveness of each of the two is studied both dependently and independently. The major importance of variable selection as a standalone component of a solution lies in real-time application of affective speech recognition. Although thousands of speech features are commonly used to tackle this problem in theory, extracting that many features in real time is unrealistic, especially for mobile applications. Results of the conducted investigations show that the required number of speech features is far less than the number commonly used in the literature. At the core of an affective speech recognition solution is a statistical model that uses speech features to recognize emotions. Such a model comes with a set of parameters that are estimated through a learning process. Proposed in this thesis is a learning algorithm, developed based on the notion of the Hilbert-Schmidt independence criterion and named max-dependence regression, that maximizes the dependence between predicted and actual values of affective qualities. Pearson’s correlation coefficient is commonly used as the measure of goodness of fit in the affective computing literature; therefore, max-dependence regression is proposed to make the learning and hypothesis-testing criteria consistent with one another. Results of this research show that doing so yields higher prediction accuracy. Lastly, sparse representation for affective speech datasets is considered in this thesis. For this purpose, the application of a dictionary learning algorithm based on the Hilbert-Schmidt independence criterion is proposed. Dictionary learning is used to identify the most important bases of the data in order to improve the generalization capability of the proposed solution to affective speech recognition. Based on the dictionary learning approach of choice, fusion of feature vectors is proposed. It is shown that sparse representation leads to higher generalization capability for affective speech recognition.
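
    The Hilbert-Schmidt independence criterion at the core of max-dependence regression can be written down compactly. The sketch below computes the standard biased empirical HSIC estimate with Gaussian kernels and evaluates it between a linear predictor's outputs and synthetic labels; the kernel choice, bandwidth, and toy data are assumptions, and the optimization step of max-dependence regression itself is not shown.

```python
import numpy as np

def gaussian_kernel(x, sigma=1.0):
    """Gram matrix of a Gaussian (RBF) kernel; x is a 1-D or 2-D array of samples."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    sq = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC estimate: tr(K H L H) / (n - 1)^2,
    where H is the centering matrix."""
    n = len(x)
    K = gaussian_kernel(x, sigma)
    L = gaussian_kernel(y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Toy use: score a linear predictor's outputs by their dependence on the affect
# labels, i.e. the quantity max-dependence regression would drive upward
# (evaluated here, not optimized).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))                          # speech features (synthetic)
y = X[:, 0] * 0.8 + rng.normal(scale=0.2, size=100)     # synthetic arousal labels
w = rng.normal(size=12)
print("HSIC(pred, labels):", hsic(X @ w, y))
```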