Efficient smile detection by Extreme Learning Machine
Smile detection is a specialized task in facial expression analysis with applications such as photo selection, user-experience analysis, and patient monitoring. As one of the most important and informative expressions, the smile conveys underlying emotional states such as joy, happiness, and satisfaction. In this paper, an efficient smile detection approach based on the Extreme Learning Machine (ELM) is proposed. Faces are first detected, and a holistic flow-based face registration is applied that requires no manual labeling or key-point detection. An ELM is then trained as the classifier. The proposed smile detector is tested with different feature descriptors on publicly available databases, including real-world face images. Comparisons against benchmark classifiers, including the Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA), suggest that the proposed ELM-based smile detector generally performs better and is very efficient. Compared to state-of-the-art smile detectors, the proposed method achieves competitive results without preprocessing or manual registration.
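The ELM classifier mentioned in the abstract trains in closed form: the hidden layer is random and fixed, and only the output weights are solved, typically via the Moore-Penrose pseudoinverse. The sketch below is a generic minimal ELM, not the paper's configuration; the hidden-layer size, tanh activation, and feature representation are all assumptions for illustration.

```python
import numpy as np

class ELM:
    """Minimal Extreme Learning Machine: random hidden layer,
    output weights solved by least squares (pseudoinverse)."""

    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        # Input weights and biases are random and never trained.
        self.W = self.rng.standard_normal((n_features, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)  # hidden-layer activations
        # One-hot encode labels, then solve H @ beta = T in closed form.
        self.classes = np.unique(y)
        T = (y[:, None] == self.classes[None, :]).astype(float)
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return self.classes[np.argmax(H @ self.beta, axis=1)]
```

The absence of iterative training is what makes ELM attractive for an efficiency-focused detector: fitting reduces to one matrix pseudoinverse.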
Automatic emotional state detection using facial expression dynamic in videos
In this paper, an automatic emotion detection system is built that allows a computer or machine to infer the emotional state from facial expressions during human-computer communication. First, dynamic motion features are extracted from facial expression videos; advanced machine learning methods for classification and regression are then used to predict the emotional states.
The system is evaluated on two publicly available datasets, GEMEP_FERA and AVEC2013, and satisfactory performance is achieved in comparison with the provided baseline results. With this emotional state detection capability, a machine can read its user's facial expression automatically. The technique can be integrated into applications such as smart robots, interactive games, and smart surveillance systems.
Emotion Recognition from Acted and Spontaneous Speech
This doctoral thesis deals with emotion recognition from speech signals. The thesis is divided into two main parts. The first part describes the proposed approaches to emotion recognition using two multilingual databases of acted emotional speech. The main contributions of this part are a detailed analysis of a large set of acoustic features, new classification schemes for vocal emotion recognition such as "emotion coupling", and a new method for mapping discrete emotions into a two-dimensional space.
The second part of the thesis is devoted to emotion recognition using a database of spontaneous emotional speech obtained from recordings of calls in real call centers. The knowledge gained from the experiments on acted speech was exploited to design a new system for recognizing seven spontaneous emotional states. The core of the proposed approach is a complex classification architecture based on the fusion of different systems. The thesis also examines the influence of the speaker's emotional state on gender recognition performance and proposes a system for automatically identifying successful phone calls in call centers by analyzing dialogue features between the call participants.
Machine Analysis of Facial Expressions
No abstract
Facial emotion recognition using min-max similarity classifier
Recognition of human emotions from imaging templates is useful in a wide variety of human-computer interaction and intelligent-system applications. However, automatic recognition of facial expressions using image-template-matching techniques suffers from the natural variability of facial features and recording conditions. Despite the progress achieved in facial emotion recognition in recent years, an effective and computationally simple feature selection and classification technique for emotion recognition remains an open problem. In this paper, we propose an efficient and straightforward facial emotion recognition algorithm that reduces inter-class pixel mismatch during classification. The proposed method applies pixel normalization to remove intensity offsets, followed by a Min-Max metric in a nearest-neighbor classifier that is capable of suppressing feature outliers. The results indicate an improvement in recognition performance from 92.85% to 98.57% for the proposed Min-Max classification method when tested on the JAFFE database. The proposed emotion recognition technique outperforms existing template-matching methods.
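The pipeline described in this abstract, normalize pixels to remove intensity offsets, then pick the nearest template under a Min-Max similarity, can be sketched as below. The exact metric in the paper is not specified here, so the sketch assumes one common formulation: the sum of element-wise minima over the sum of element-wise maxima; the normalization (zero mean, unit variance) is likewise an assumption.

```python
import numpy as np

def normalize(img):
    """Flatten and standardize an image to remove intensity offsets."""
    img = np.asarray(img, dtype=float).ravel()
    return (img - img.mean()) / (img.std() + 1e-8)

def min_max_similarity(a, b):
    """Ratio of element-wise minima to maxima; bounded pixel-wise
    contributions help suppress the effect of feature outliers."""
    lo = min(a.min(), b.min())          # shift to a non-negative range
    a, b = a - lo, b - lo
    return np.minimum(a, b).sum() / (np.maximum(a, b).sum() + 1e-8)

def classify(templates, labels, query):
    """Nearest-neighbor classification under the Min-Max similarity."""
    q = normalize(query)
    sims = [min_max_similarity(normalize(t), q) for t in templates]
    return labels[int(np.argmax(sims))]
```

Because the normalization subtracts the mean, a query that is merely a brighter or darker copy of a stored template maps to exactly the same normalized vector and scores a similarity near 1.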
- âŠ