Employing Emotion Cues to Verify Speakers in Emotional Talking Environments
People usually talk neutrally in environments free of abnormal
talking conditions such as stress and emotion. Emotional conditions such as
happiness, anger, and sadness can, however, affect a person's talking tone, and
such emotions are directly influenced by the patient's health status. In
neutral talking environments, speakers can be verified easily; in emotional
talking environments, they cannot be verified as easily. Consequently, speaker
verification systems do not perform as well in emotional talking environments
as they do in neutral ones. In this work,
a two-stage approach has been employed and evaluated to improve speaker
verification performance in emotional talking environments. The approach
exploits speaker emotion cues (a text-independent, emotion-dependent speaker
verification problem) using both Hidden Markov Models (HMMs) and
Suprasegmental Hidden Markov Models (SPHMMs) as classifiers. It comprises two
cascaded stages that integrate an emotion recognizer and a speaker recognizer
into a single recognizer. The architecture has
been tested on two separate emotional speech databases: our collected database
and the Emotional Prosody Speech and Transcripts database. The results show
that the proposed approach yields a significant improvement over previous
studies and over other approaches, such as emotion-independent speaker
verification and emotion-dependent speaker verification based entirely on HMMs.
Comment: Journal of Intelligent Systems, Special Issue on Intelligent
Healthcare Systems, De Gruyter, 201
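The two-stage cascade described above can be sketched as follows; this is a minimal illustration in which diagonal-Gaussian log-likelihoods stand in for the HMM/SPHMM scores, and all model names and the threshold are hypothetical:

```python
import numpy as np

# Hypothetical two-stage cascade: stage 1 identifies the emotion, stage 2
# verifies the claimed speaker with an emotion-specific model. A real system
# would score HMM/SPHMM likelihoods; diagonal Gaussians stand in here.

def log_likelihood(features, mean, var):
    # Sum of per-frame diagonal-Gaussian log-densities over all frames.
    return float(np.sum(-0.5 * (np.log(2 * np.pi * var)
                                + (features - mean) ** 2 / var)))

def verify(features, emotion_models, speaker_models, claimed_id, threshold):
    # Stage 1: pick the most likely emotion from the emotion recognizer.
    emotion = max(emotion_models,
                  key=lambda e: log_likelihood(features, *emotion_models[e]))
    # Stage 2: score the claimed speaker's emotion-dependent model.
    score = log_likelihood(features, *speaker_models[(claimed_id, emotion)])
    return emotion, score >= threshold
```

The cascade makes the speaker model conditional on the detected emotion, which is the essence of the emotion-dependent verification the abstract describes.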
Determination of Formant Features in Czech and Slovak for GMM Emotional Speech Classifier
The paper is aimed at the determination of formant features (FF), which describe vocal tract characteristics. It comprises analysis of the first three formant positions together with their bandwidths and the formant tilts. Subsequently, a statistical evaluation and comparison of the FF was performed. The experiment used speech material in the form of sentences by male and female speakers expressing four emotional states (joy, sadness, anger, and a neutral state) in the Czech and Slovak languages. The statistical distribution of the analyzed formant frequencies and formant tilts shows good differentiation between neutral and emotional styles for both voices. In contrast, the values of the formant 3-dB bandwidths show no correlation with the type of speaking style or the type of voice. These spectral parameters, together with the values of other speech characteristics, were used in the feature vector for the Gaussian mixture model (GMM) emotional speech style classifier that is currently under development. The overall mean classification error rate is about 18%, and the best obtained error rate is 5%, for the sadness style of the female voice. These values are acceptable at this first stage of development of the GMM classifier, which is intended for evaluating synthetic speech quality after voice conversion and emotional speech style transformation.
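As a rough illustration of the classification step, the sketch below assembles a hypothetical FF vector (formant positions, 3-dB bandwidths, and tilts) and scores it against per-style models; a single Gaussian per style stands in for the GMM classifier, and all names and values are illustrative:

```python
import numpy as np

# Hypothetical sketch: build the formant feature (FF) vector described in
# the abstract and pick the speaking style whose model scores it highest.
# One diagonal Gaussian per style stands in for the GMM under development.

def formant_features(formants, bandwidths, tilts):
    # F1-F3 positions [Hz], their 3-dB bandwidths [Hz], and formant tilts.
    return np.concatenate([formants, bandwidths, tilts])

def classify_style(ff, style_models):
    # Return the style (e.g. "joy", "sadness") with the highest log-density.
    def score(mean, var):
        return float(np.sum(-0.5 * (np.log(2 * np.pi * var)
                                    + (ff - mean) ** 2 / var)))
    return max(style_models, key=lambda s: score(*style_models[s]))
```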
Deep fusion of multi-channel neurophysiological signal for emotion recognition and monitoring
How to fuse multi-channel neurophysiological signals for emotion recognition is emerging as a hot research topic in the community of computational psychophysiology. Nevertheless, prior feature-engineering-based approaches require extracting various domain-knowledge-related features at a high time cost. Moreover, traditional fusion methods cannot fully utilise the correlation information between different channels and frequency components. In this paper, we design a hybrid deep learning model in which a Convolutional Neural Network (CNN) is utilised for extracting task-related features and mining inter-channel and inter-frequency correlations, while a Recurrent Neural Network (RNN) is appended to integrate contextual information from the frame-cube sequence. Experiments are carried out on a trial-level emotion recognition task on the DEAP benchmark dataset. Experimental results demonstrate that the proposed framework outperforms classical methods on both the Valence and Arousal emotional dimensions.
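The data flow of such a hybrid model can be sketched as follows, assuming a frame-cube sequence of shape (frames, channels, frequency bands); the toy convolution and Elman-style recurrence below are stand-ins for the actual CNN and RNN, and all dimensions are hypothetical:

```python
import numpy as np

# Hypothetical dimensions: T frames, each a channel-by-frequency slice.
T, C, F, H = 8, 32, 5, 16   # frames, EEG channels, frequency bands, hidden size

def conv_features(frame, kernels):
    # Toy stand-in for the CNN: correlate each 3x3 kernel over the
    # channel-frequency plane and global-average-pool to one value per kernel,
    # capturing inter-channel and inter-frequency structure.
    K = kernels.shape[0]
    out = np.empty(K)
    for k in range(K):
        acc = 0.0
        for i in range(C - 2):
            for j in range(F - 2):
                acc += np.sum(frame[i:i + 3, j:j + 3] * kernels[k])
        out[k] = acc / ((C - 2) * (F - 2))
    return np.tanh(out)

def rnn_predict(cube_seq, kernels, W, U, V):
    # Simple Elman recurrence integrating contextual information over frames.
    h = np.zeros(H)
    for frame in cube_seq:
        x = conv_features(frame, kernels)
        h = np.tanh(W @ x + U @ h)
    logits = V @ h                      # one logit each for valence, arousal
    return 1 / (1 + np.exp(-logits))    # per-dimension probabilities
```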
How to improve TTS systems for emotional expressivity
Several experiments have been carried out that revealed weaknesses of current Text-To-Speech (TTS) systems in their emotional expressivity. Although some TTS systems allow XML-based representations of prosodic and/or phonetic variables, few publications have considered, as a pre-processing stage, the use of intelligent text processing to detect affective information that can be used to tailor the parameters needed for emotional expressivity. This paper describes a technique for automatic prosodic parameterization based on affective clues. The technique recognizes the affective information conveyed in a text and, according to its emotional connotation, assigns appropriate pitch accents and other prosodic parameters by XML tagging. This pre-processing helps the TTS system generate synthesized speech that contains emotional clues. The experimental results are encouraging and suggest the possibility of suitable emotional expressivity in speech synthesis.
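A minimal sketch of such a pre-processing stage is shown below, assuming SSML-style <prosody> tags; the keyword lexicon and the prosody parameter values are invented for illustration and are not the paper's:

```python
import xml.etree.ElementTree as ET

# Hypothetical pre-processing: detect affective clues with a toy keyword
# lexicon and wrap the text in SSML-style <prosody> tags carrying the pitch
# and rate settings for that emotion. Lexicon and values are illustrative.

LEXICON = {"wonderful": "joy", "terrible": "sadness", "furious": "anger"}
PROSODY = {
    "joy":     {"pitch": "+15%", "rate": "fast"},
    "sadness": {"pitch": "-10%", "rate": "slow"},
    "anger":   {"pitch": "+10%", "rate": "fast"},
}

def tag_affect(text):
    # Pick the first emotion whose clue word appears; default to neutral.
    emotion = next((e for w, e in LEXICON.items() if w in text.lower()), None)
    speak = ET.Element("speak")
    if emotion is None:
        speak.text = text
    else:
        prosody = ET.SubElement(speak, "prosody", PROSODY[emotion])
        prosody.text = text
    return ET.tostring(speak, encoding="unicode")
```

The tagged output is then handed to any XML-aware TTS system, which renders the prosodic parameters the pre-processor assigned.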