Feature extraction based on bio-inspired model for robust emotion recognition
Emotional state identification is an important issue in achieving more natural interactive speech systems. Ideally, these systems should also be able to work in real environments, where some kind of noise is generally present. Several bio-inspired representations have been applied to artificial systems for speech processing under noise conditions. In this work, an auditory signal representation is used to obtain a novel bio-inspired set of features for emotional speech signals. These features, together with other spectral and prosodic features, are used for emotion recognition under noise conditions. Neural models were trained as classifiers and the results were compared to the well-known mel-frequency cepstral coefficients. The results show that, using the proposed representations, it is possible to significantly improve the robustness of an emotion recognition system. The results were also validated in a speaker-independent scheme and with two emotional speech corpora.
Fil: Albornoz, Enrique Marcelo. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Santa Fe. Instituto de Investigación en Señales, Sistemas e Inteligencia Computacional. Universidad Nacional del Litoral. Facultad de Ingeniería y Ciencias Hídricas. Instituto de Investigación en Señales, Sistemas e Inteligencia Computacional; Argentina
Fil: Milone, Diego Humberto. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Santa Fe. Instituto de Investigación en Señales, Sistemas e Inteligencia Computacional. Universidad Nacional del Litoral. Facultad de Ingeniería y Ciencias Hídricas. Instituto de Investigación en Señales, Sistemas e Inteligencia Computacional; Argentina
Fil: Rufiner, Hugo Leonardo. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Santa Fe. Instituto de Investigación en Señales, Sistemas e Inteligencia Computacional. Universidad Nacional del Litoral. Facultad de Ingeniería y Ciencias Hídricas. Instituto de Investigación en Señales, Sistemas e Inteligencia Computacional; Argentina
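Since the abstract compares the proposed features against an MFCC baseline under noise, a minimal NumPy sketch of that baseline may be useful: mixing white noise into a signal at a target SNR and computing simplified MFCCs. The frame sizes, filter counts and the stripped-down filterbank here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for j in range(l, c):
            fb[i - 1, j] = (j - l) / max(c - l, 1)
        for j in range(c, r):
            fb[i - 1, j] = (r - j) / max(r - c, 1)
    return fb

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_filters=26, n_ceps=13):
    # Frame, window, power spectrum.
    frames = np.lib.stride_tricks.sliding_window_view(signal, n_fft)[::hop]
    frames = frames * np.hamming(n_fft)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Log mel energies, then a DCT-II to decorrelate them.
    mel_energy = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_filters))
    return mel_energy @ dct.T        # (n_frames, n_ceps)

def add_noise(signal, snr_db, rng):
    # Additive white noise scaled to hit the target SNR exactly
    # (by sample power estimates).
    noise = rng.standard_normal(len(signal))
    scale = np.sqrt(np.mean(signal ** 2)
                    / (np.mean(noise ** 2) * 10.0 ** (snr_db / 10.0)))
    return signal + scale * noise
```

Robustness comparisons of the kind described then amount to computing features on `add_noise(signal, snr, rng)` at several SNRs and tracking how classifier accuracy degrades.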
Learning Representations of Emotional Speech with Deep Convolutional Generative Adversarial Networks
Automatically assessing emotional valence in human speech has historically
been a difficult task for machine learning algorithms. The subtle changes in
the voice of the speaker that are indicative of positive or negative emotional
states are often "overshadowed" by voice characteristics relating to emotional
intensity or emotional activation. In this work we explore a representation
learning approach that automatically derives discriminative representations of
emotional speech. In particular, we investigate two machine learning strategies
to improve classifier performance: (1) utilization of unlabeled data using a
deep convolutional generative adversarial network (DCGAN), and (2) multitask
learning. Within our extensive experiments we leverage a multitask annotated
emotional corpus as well as a large unlabeled meeting corpus (around 100
hours). Our speaker-independent classification experiments show that, in
particular, the use of unlabeled data improves classifier performance,
considerably outperforming both fully supervised baseline approaches. We
improve the classification of emotional valence on a discrete 5-point scale to
43.88% and on a 3-point scale to 49.80%, which is competitive with
state-of-the-art performance.
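The multitask-learning strategy described above (a shared representation feeding separate task heads, with the per-task losses summed) can be sketched in plain NumPy. The data, layer sizes and single training step below are purely illustrative and are not the paper's DCGAN setup; valence uses a 5-point scale and activation a 3-point scale as in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, y):
    # Mean negative log-likelihood of the true labels.
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

def one_hot(y, k):
    return np.eye(k)[y]

# Toy data: 64 samples of 20-dim features with synthetic labels.
X = rng.standard_normal((64, 20))
y_val = rng.integers(0, 5, 64)   # valence, 5-point scale
y_act = rng.integers(0, 3, 64)   # activation, 3-point scale

# A shared projection feeding one linear head per task.
W_shared = 0.1 * rng.standard_normal((20, 16))
W_val = 0.1 * rng.standard_normal((16, 5))
W_act = 0.1 * rng.standard_normal((16, 3))

def forward():
    H = np.tanh(X @ W_shared)            # shared representation
    return H, softmax(H @ W_val), softmax(H @ W_act)

# Multitask loss: the two per-task cross-entropies are simply summed.
H, p_val, p_act = forward()
loss_before = cross_entropy(p_val, y_val) + cross_entropy(p_act, y_act)

# One gradient-descent step on the task heads (the shared weights are kept
# fixed here for brevity; in practice they receive gradients from both tasks).
lr = 0.1
W_val -= lr * H.T @ (p_val - one_hot(y_val, 5)) / len(X)
W_act -= lr * H.T @ (p_act - one_hot(y_act, 3)) / len(X)

H, p_val, p_act = forward()
loss_after = cross_entropy(p_val, y_val) + cross_entropy(p_act, y_act)
```

The point of the shared weights is regularization: both tasks pull the representation toward features that are informative for emotion in general, which is what lets the auxiliary task help valence classification.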
Learning Speech Emotion Representations in the Quaternion Domain
The modeling of human emotion expression in speech signals is an important,
yet challenging task. The high resource demand of speech emotion recognition
models, combined with the general scarcity of emotion-labelled data, are
obstacles to the development and application of effective solutions in this
field. In this paper, we present an approach to jointly circumvent these
difficulties. Our method, named RH-emo, is a novel semi-supervised architecture
aimed at extracting quaternion embeddings from real-valued monaural
spectrograms, enabling the use of quaternion-valued networks for speech emotion
recognition tasks. RH-emo is a hybrid real/quaternion autoencoder network that
consists of a real-valued encoder in parallel to a real-valued emotion
classifier and a quaternion-valued decoder. On the one hand, the classifier
makes it possible to optimize each latent axis of the embeddings for the classification
of a specific emotion-related characteristic: valence, arousal, dominance and
overall emotion. On the other hand, the quaternion reconstruction enables the
latent dimension to develop intra-channel correlations that are required for an
effective representation as a quaternion entity. We test our approach on speech
emotion recognition tasks using four popular datasets: Iemocap, Ravdess, EmoDb
and Tess, comparing the performance of three well-established real-valued CNN
architectures (AlexNet, ResNet-50, VGG) and their quaternion-valued equivalents
fed with the embeddings created with RH-emo. We obtain a consistent improvement
in the test accuracy for all datasets, while drastically reducing the models'
resource demands. Moreover, we performed additional experiments and
ablation studies that confirm the effectiveness of our approach. The RH-emo
repository is available at: https://github.com/ispamm/rhemo.
Comment: Paper submitted to IEEE/ACM Transactions on Audio, Speech and Language Processing.
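The Hamilton product that underlies quaternion-valued layers can be sketched directly. A quaternion linear map reuses four real weight tensors across the four components, which is where the roughly four-fold parameter saving of quaternion networks comes from; this is a generic sketch, not RH-emo's implementation.

```python
import numpy as np

def hamilton_product(q, p):
    # q, p: arrays of shape [..., 4] with components (r, i, j, k).
    r1, i1, j1, k1 = np.moveaxis(q, -1, 0)
    r2, i2, j2, k2 = np.moveaxis(p, -1, 0)
    return np.stack([
        r1 * r2 - i1 * i2 - j1 * j2 - k1 * k2,   # real part
        r1 * i2 + i1 * r2 + j1 * k2 - k1 * j2,   # i
        r1 * j2 - i1 * k2 + j1 * r2 + k1 * i2,   # j
        r1 * k2 + i1 * j2 - j1 * i2 + k1 * r2,   # k
    ], axis=-1)
```

Because a quaternion "weight" couples all four input components through this one product, a quaternion layer with the same input/output width as a real layer needs only a quarter of the free parameters, consistent with the resource reduction the abstract reports.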
Sentiment analysis in non-fixed length audios using a Fully Convolutional Neural Network
In this work, a sentiment analysis method that accepts audio of any length, without the length being fixed a priori, is proposed. Mel spectrograms and Mel-Frequency Cepstral Coefficients are used as audio descriptors, and a Fully Convolutional Neural Network architecture is proposed as the classifier. The results have been validated on three well-known datasets: EMODB, RAVDESS and TESS. The results obtained were promising, outperforming state-of-the-art methods. Moreover, because the proposed method accepts audio of any length, it enables sentiment analysis in near real time, which is of interest for a wide range of fields such as call centers, medical consultations and financial brokerage.
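The property that makes a fully convolutional classifier length-agnostic is that convolutions simply slide along the time axis and a global pooling step collapses whatever temporal extent remains. A toy NumPy sketch (layer sizes and weights are illustrative, not the paper's architecture):

```python
import numpy as np

def conv1d(x, w):
    # x: (channels_in, time); w: (channels_out, channels_in, kernel).
    # 'valid' convolution along the time axis.
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - k + 1
    out = np.zeros((c_out, t_out))
    for t in range(t_out):
        out[:, t] = np.tensordot(w, x[:, t:t + k], axes=([1, 2], [0, 1]))
    return out

def fcn_logits(mel, w1, w2):
    # Two convolutional layers with ReLU, then global average pooling over
    # time: the output size no longer depends on the input length.
    h = np.maximum(conv1d(mel, w1), 0.0)
    h = np.maximum(conv1d(h, w2), 0.0)
    return h.mean(axis=1)            # one score per class
```

Because the only length-dependent axis is averaged away at the end, mel spectrograms of short and long clips yield logits of identical shape, which is what allows near-real-time use without padding or truncating the audio.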