
    Multimodal emotion recognition system for spontaneous vocal and facial signals: SMERFS

    Human-computer interaction is moving towards giving computers the ability to adapt and give feedback in accordance with a user's emotion. Initial research on multimodal emotion recognition shows that combining vocal and facial signals performs better than using physiological signals. In addition, the majority of emotion corpora used in both unimodal and multimodal systems were modeled on acted data recorded with actors who tend to exaggerate emotions. This study improves on the accuracy of single-modality systems by developing a multimodal emotion recognition system based on vocal and facial expressions using a spontaneous emotion corpus. The corpus used in this study is FilMED2, which contains spontaneous clips from reality television shows. The clips carry discrete emotion labels limited to happiness, sadness, anger, fear, and neutral. The system uses facial feature points and prosodic features, namely pitch and energy, as inputs to machine learning for classification. SVM is the classification technique used and was first tested on each modality with both the acted and the spontaneous corpus; the acted corpus yielded higher results than the spontaneous corpus for both modalities. Both modalities were then combined using decision-level fusion. Using the face alone gave 60% accuracy and using the voice alone gave 32% accuracy; combining both results with a weight distribution of 75% face and 25% voice gave an accuracy of 80%.
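
    The abstract does not include code; the following is a minimal sketch of the decision-level fusion step it describes, assuming two scikit-learn SVMs trained separately on facial feature points and prosodic features with probability outputs enabled. The function name, feature shapes, and label handling are illustrative, not the authors' implementation.

    ```python
    # Sketch of weighted decision-level fusion (75% face, 25% voice), assuming
    # face_clf and voice_clf are SVC models trained with probability=True on
    # the same set of emotion labels, and that per-clip feature vectors have
    # already been extracted (facial feature points; pitch and energy).
    import numpy as np
    from sklearn.svm import SVC

    def fuse_predictions(face_clf: SVC, voice_clf: SVC,
                         face_features: np.ndarray, voice_features: np.ndarray,
                         w_face: float = 0.75, w_voice: float = 0.25) -> str:
        """Combine per-modality SVM scores with a fixed weight distribution."""
        p_face = face_clf.predict_proba(face_features.reshape(1, -1))[0]
        p_voice = voice_clf.predict_proba(voice_features.reshape(1, -1))[0]
        fused = w_face * p_face + w_voice * p_voice  # weighted sum of class probabilities
        # Both classifiers are assumed to share the same label ordering (classes_).
        return face_clf.classes_[int(np.argmax(fused))]
    ```

    The weighting reflects the reported result that the face modality is the stronger of the two on this corpus, so its scores dominate the fused decision.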

    Multimodal emotion recognition using a spontaneous Filipino emotion database

    Human-computer interaction is moving towards giving computers the ability to adapt and give feedback in accordance with a user's emotion. Studies on emotion recognition show that combining face and voice signals produces higher recognition rates than using either one individually. In addition, the majority of the emotion corpora used in these systems were modeled on acted data with actors who tend to exaggerate emotions. This study focuses on the development of a multimodal emotion recognition system trained on a spontaneous Filipino emotion database. The system extracts voice features and facial features that are then classified into the correct emotion label using support vector machines. Based on test results, recognizing emotions using voice only yielded 40% accuracy; using face only, 86%; and using a combination of voice and face, 80%. © 2010 IEEE
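
    As a companion to the fusion sketch above, the per-modality training and evaluation described in this abstract could look roughly like the following. This is an assumption-laden illustration: the feature matrices (voice prosody, facial feature points), label array, and train/test split are placeholders, not the study's actual setup.

    ```python
    # Illustrative sketch: fit one SVM per modality on a labeled spontaneous
    # corpus and report held-out accuracy, mirroring the voice-only / face-only
    # evaluations in the abstract. Feature extraction is assumed to be done.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    def train_modality(features: np.ndarray, labels: np.ndarray) -> tuple[SVC, float]:
        """Train an SVM on one modality and return it with its test accuracy."""
        X_tr, X_te, y_tr, y_te = train_test_split(
            features, labels, test_size=0.2, stratify=labels, random_state=0)
        clf = SVC(kernel="rbf", probability=True)  # probability=True allows later fusion
        clf.fit(X_tr, y_tr)
        return clf, clf.score(X_te, y_te)

    # Hypothetical usage, assuming precomputed feature matrices and labels:
    # voice_clf, voice_acc = train_modality(voice_features, labels)
    # face_clf, face_acc = train_modality(face_features, labels)
    ```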