2,484 research outputs found
Automatic affective dimension recognition from naturalistic facial expressions based on wavelet filtering and PLS regression
Continuous recognition of affective dimensions from facial expressions in naturalistic contexts is a very challenging but important research topic in human-computer interaction. In this paper, an automatic recognition system is proposed to continuously predict affective dimensions such as Arousal, Valence and Dominance in naturalistic facial expression videos. Firstly, visual and vocal features are extracted from the image frames and audio segments of the videos. Secondly, a wavelet-transform-based digital filtering method is applied to remove irrelevant noise from the feature space. Thirdly, Partial Least Squares regression is used to predict the affective dimensions from the video and audio modalities. Finally, the two modalities are combined in a decision fusion step to boost overall performance. The proposed method is evaluated on the fourth international Audio/Visual Emotion Recognition Challenge (AVEC2014) dataset and achieves good performance compared with other state-of-the-art methods in the affect recognition sub-challenge.
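As a rough illustration of the pipeline described above, the sketch below applies wavelet-based smoothing to per-frame feature trajectories, fits one PLS regressor per modality, and fuses the two predictions with a weighted average. The feature matrices, the db4 wavelet, the number of PLS components and the fusion weight are illustrative assumptions, not the authors' settings.

```python
import numpy as np
import pywt
from sklearn.cross_decomposition import PLSRegression

def wavelet_filter(features, wavelet="db4", level=3):
    """Smooth each feature trajectory by zeroing the finest-scale detail coefficients."""
    features = np.asarray(features, dtype=float)
    filtered = np.empty_like(features)
    for j in range(features.shape[1]):
        coeffs = pywt.wavedec(features[:, j], wavelet, level=level)
        coeffs[-1] = np.zeros_like(coeffs[-1])  # discard the highest-frequency detail band
        filtered[:, j] = pywt.waverec(coeffs, wavelet)[: features.shape[0]]
    return filtered

def fit_and_fuse(Xv_train, Xa_train, Y_train, Xv_test, Xa_test, w_video=0.6):
    """Fit one PLS regressor per modality and fuse the predictions at decision level."""
    pls_v = PLSRegression(n_components=20).fit(wavelet_filter(Xv_train), Y_train)
    pls_a = PLSRegression(n_components=20).fit(wavelet_filter(Xa_train), Y_train)
    pred = (w_video * pls_v.predict(wavelet_filter(Xv_test))
            + (1.0 - w_video) * pls_a.predict(wavelet_filter(Xa_test)))
    return pred  # fused continuous Arousal/Valence/Dominance predictions
```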
Facial emotion recognition using min-max similarity classifier
Recognition of human emotions from imaging templates is useful in a wide variety of human-computer interaction and intelligent systems applications. However, automatic recognition of facial expressions using image template matching techniques suffers from natural variability in facial features and recording conditions. Despite the progress achieved in facial emotion recognition in recent years, an effective and computationally simple feature selection and classification technique for emotion recognition remains an open problem. In this paper, we propose an efficient and straightforward facial emotion recognition algorithm that reduces the problem of inter-class pixel mismatch during classification. The proposed method applies pixel normalization to remove intensity offsets, followed by a Min-Max metric in a nearest neighbor classifier that is capable of suppressing feature outliers. The results indicate an improvement of recognition performance from 92.85% to 98.57% for the proposed Min-Max classification method when tested on the JAFFE database. The proposed emotion recognition technique outperforms existing template matching methods.
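A minimal sketch of how such a classifier might look is given below: templates are normalized to remove intensity offsets, and a test image is assigned the label of the most similar training template under a min-max ratio similarity. The normalization and the exact form of the metric are assumptions based on the abstract, not the authors' implementation.

```python
import numpy as np

def normalize(img):
    """Remove the intensity offset and rescale; img is a flattened grayscale template."""
    img = np.asarray(img, dtype=float)
    img = img - img.min()
    return img / (img.max() + 1e-12)

def min_max_similarity(x, y):
    """Ratio of element-wise minima to element-wise maxima; bounded and robust to outliers."""
    return np.minimum(x, y).sum() / (np.maximum(x, y).sum() + 1e-12)

def classify(test_img, train_imgs, train_labels):
    """Nearest-neighbor decision under the Min-Max similarity."""
    t = normalize(test_img)
    sims = [min_max_similarity(t, normalize(r)) for r in train_imgs]
    return train_labels[int(np.argmax(sims))]
```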
Deep fusion of multi-channel neurophysiological signal for emotion recognition and monitoring
How to fuse multi-channel neurophysiological signals for emotion recognition is emerging as a hot research topic in the community of Computational Psychophysiology. Nevertheless, prior feature-engineering-based approaches require extracting various domain-knowledge-related features at a high time cost, and traditional fusion methods cannot fully utilise the correlation information between different channels and frequency components. In this paper, we design a hybrid deep learning model in which a Convolutional Neural Network (CNN) extracts task-related features and mines inter-channel and inter-frequency correlations, while a concatenated Recurrent Neural Network (RNN) integrates contextual information from the frame cube sequence. Experiments are carried out on a trial-level emotion recognition task using the DEAP benchmarking dataset. The results demonstrate that the proposed framework outperforms classical methods on both the Valence and Arousal emotional dimensions.
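The sketch below illustrates one possible shape of such a CNN+RNN hybrid in PyTorch: a small CNN encodes each frame cube, and an LSTM integrates the resulting sequence into a trial-level prediction. The input dimensions, layer sizes and binary output are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class CnnRnnEmotion(nn.Module):
    def __init__(self, in_ch=4, hidden=64, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(                          # per-frame spatial/spectral encoder
            nn.Conv2d(in_ch, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.rnn = nn.LSTM(32, hidden, batch_first=True)   # temporal integration over frames
        self.fc = nn.Linear(hidden, n_classes)             # e.g. high/low valence (or arousal)

    def forward(self, x):                                  # x: (batch, time, in_ch, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).flatten(1)       # (batch*time, 32)
        out, _ = self.rnn(feats.view(b, t, -1))            # (batch, time, hidden)
        return self.fc(out[:, -1])                         # last hidden state -> trial prediction
```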
Towards Indonesian Speech-Emotion Automatic Recognition (I-SpEAR)
Even though speech-emotion recognition (SER) has been receiving much attention as a research topic, there are still disputes about which vocal features can identify particular emotions. Emotion expression is also known to differ across cultural backgrounds, which makes it important to study SER specifically for the culture in which a language is spoken. Furthermore, only a few studies address SER in Indonesian, which is what this study attempts to explore. In this study, we extract simple features from 3420 voice recordings gathered from 38 participants. The features are compared by means of a linear mixed-effects model, which shows that people in emotional and non-emotional states can be differentiated by their speech duration. Using an SVM with speech duration as the input feature, we achieve 76.84% average accuracy in classifying emotional and non-emotional speech.
Comment: 4 pages, 3 tables; published in the 4th International Conference on New Media (Conmedia), 8-10 Nov. 2017 (http://conmedia.umn.ac.id/) [in print as of Sept. 17, 2017]
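The classification step described above can be illustrated with a small scikit-learn sketch: an SVM trained on speech duration alone, evaluated with cross-validation. The duration values and labels below are synthetic stand-ins, not the study's data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# durations in seconds (one value per utterance) and binary labels
durations = np.array([[1.2], [0.8], [2.5], [3.1], [0.9], [2.8]])
labels = np.array([0, 0, 1, 1, 0, 1])        # 0 = non-emotional, 1 = emotional

clf = SVC(kernel="rbf")                      # SVM on the single duration feature
scores = cross_val_score(clf, durations, labels, cv=3)
print("mean accuracy:", scores.mean())
```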
Optimal set of EEG features for emotional state classification and trajectory visualization in Parkinson's disease
In addition to classic motor signs and symptoms, individuals with Parkinson's disease (PD) are characterized by emotional deficits. Ongoing brain activity can be recorded by electroencephalography (EEG) to discover the links between emotional states and brain activity. This study utilized machine-learning algorithms to categorize emotional states in PD patients compared with healthy controls (HC) using EEG. Twenty non-demented PD patients and 20 healthy age-, gender-, and education-level-matched controls viewed happiness, sadness, fear, anger, surprise, and disgust emotional stimuli while fourteen-channel EEG was recorded. Multimodal stimuli (a combination of audio and visual material) were used to evoke the emotions. To classify the EEG-based emotional states and visualize their changes over time, this paper compares four kinds of EEG features for emotional state classification and proposes an approach to track the trajectory of emotion changes with manifold learning. From the experimental results on our EEG data set, we found that (a) the bispectrum feature is superior to the other three kinds of features, namely power spectrum, wavelet packet and nonlinear dynamical analysis; (b) higher frequency bands (alpha, beta and gamma) play a more important role in emotion activities than lower frequency bands (delta and theta) in both groups; and (c) the trajectory of emotion changes can be visualized by reducing subject-independent features with manifold learning. This provides a promising way of visualizing a patient's emotional state in real time and leads to a practical system for noninvasive assessment of the emotional impairments associated with neurological disorders.
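The trajectory-visualization idea can be sketched as follows: a sequence of per-window EEG feature vectors is reduced to two dimensions with a manifold learner and plotted in temporal order. The precomputed feature matrix and the choice of Isomap are assumptions for illustration; the paper's own manifold method and features may differ.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import Isomap

def plot_emotion_trajectory(features):
    """features: (n_windows, n_features) EEG feature sequence for one recording."""
    embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(features)
    plt.plot(embedding[:, 0], embedding[:, 1], "-o", markersize=3)  # points in temporal order
    plt.title("Emotional-state trajectory (manifold embedding)")
    plt.show()

# example with random stand-in features
plot_emotion_trajectory(np.random.rand(200, 64))
```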