Convolutional RNN: an Enhanced Model for Extracting Features from Sequential Data
Traditional convolutional layers extract features from patches of data by
applying a non-linearity on an affine function of the input. We propose a model
that enhances this feature extraction process for the case of sequential data,
by feeding patches of the data into a recurrent neural network and using the
outputs or hidden states of the recurrent units to compute the extracted
features. By doing so, we exploit the fact that a window containing a few
frames of the sequential data is a sequence itself and this additional
structure might encapsulate valuable information. In addition, we allow for
more steps of computation in the feature extraction process, which is
potentially beneficial as an affine function followed by a non-linearity can
result in features that are too simple. Using our convolutional recurrent
layers, we obtain improved performance on two audio classification tasks
compared to traditional convolutional layers. TensorFlow code for the
convolutional recurrent layers is publicly available at
https://github.com/cruvadom/Convolutional-RNN
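To make the idea concrete, the following is a minimal, independent sketch of such a layer in TensorFlow; it is not the authors' implementation (which lives in the repository above), and the class name, the choice of a GRU cell, and all sizes are illustrative assumptions. Each sliding window of the input sequence is treated as a short sequence of its own and encoded by a shared RNN, whose final hidden state replaces the affine-plus-non-linearity feature of a standard convolution:

```python
import tensorflow as tf

# Hypothetical sketch of a convolutional recurrent layer: sliding windows
# over a sequence are each processed by a shared GRU, and the final hidden
# state of each window serves as the extracted feature.
class ConvRNNLayer(tf.keras.layers.Layer):
    def __init__(self, units, window_size, stride=1, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.window_size = window_size
        self.stride = stride
        self.gru = tf.keras.layers.GRU(units)  # shared across all windows

    def call(self, inputs):
        # inputs: (batch, time, channels)
        # Extract overlapping patches: (batch, n_windows, window_size, channels)
        patches = tf.signal.frame(inputs, frame_length=self.window_size,
                                  frame_step=self.stride, axis=1)
        batch = tf.shape(patches)[0]
        n_windows = tf.shape(patches)[1]
        channels = inputs.shape[-1]
        # Merge batch and window axes so the GRU reads each patch as a sequence.
        flat = tf.reshape(patches,
                          (batch * n_windows, self.window_size, channels))
        feats = self.gru(flat)  # (batch * n_windows, units)
        # Restore the window axis: one feature vector per patch, as in a conv layer.
        return tf.reshape(feats, (batch, n_windows, self.units))

# Example: 16 sequences of 100 frames with 40 features each.
x = tf.random.normal((16, 100, 40))
y = ConvRNNLayer(units=64, window_size=9, stride=1)(x)
print(y.shape)  # (16, 92, 64)
```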
Multimodal Speech Emotion Recognition Using Audio and Text
Speech emotion recognition is a challenging task, and well-performing
classifiers have relied extensively on audio features. In this paper, we
propose a novel deep dual recurrent encoder
model that utilizes text data and audio signals simultaneously to obtain a
better understanding of speech data. As emotional dialogue is composed of sound
and spoken content, our model encodes the information from audio and text
sequences using dual recurrent neural networks (RNNs) and then combines the
information from these sources to predict the emotion class. This architecture
analyzes speech data from the signal level to the language level, and it thus
utilizes the information within the data more comprehensively than models that
focus on audio features. Extensive experiments are conducted to investigate the
efficacy and properties of the proposed model. Our proposed model outperforms
previous state-of-the-art methods in assigning data to one of four emotion
categories (i.e., angry, happy, sad, and neutral) when the model is applied to
the IEMOCAP dataset, as reflected by accuracies ranging from 68.8% to 71.8%.
Comment: 7 pages, accepted as a conference paper at IEEE SLT 2018
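A minimal sketch of the dual-encoder idea, assuming Keras and illustrative shapes: one RNN encodes the audio feature sequence, another encodes the token sequence, and their final states are concatenated before a four-way emotion classifier. The vocabulary size, feature dimensions, and layer widths below are assumptions, not the paper's configuration:

```python
import tensorflow as tf

VOCAB_SIZE, NUM_CLASSES = 10000, 4  # angry, happy, sad, neutral (sizes assumed)

audio_in = tf.keras.Input(shape=(None, 40), name="audio")  # e.g. MFCC frames
text_in = tf.keras.Input(shape=(None,), dtype="int32", name="text")  # token ids

# Dual recurrent encoders: one per modality.
audio_state = tf.keras.layers.GRU(128)(audio_in)
text_emb = tf.keras.layers.Embedding(VOCAB_SIZE, 100, mask_zero=True)(text_in)
text_state = tf.keras.layers.GRU(128)(text_emb)

# Fuse the two encodings and classify the emotion.
fused = tf.keras.layers.Concatenate()([audio_state, text_state])
probs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(fused)

model = tf.keras.Model([audio_in, text_in], probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```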
Stacked Convolutional and Recurrent Neural Networks for Music Emotion Recognition
This paper studies emotion recognition from musical tracks in the
2-dimensional valence-arousal (V-A) emotional space. We propose a method based
on convolutional neural networks (CNNs) and recurrent neural networks (RNNs)
with significantly fewer parameters than the state-of-the-art method for
the same task. We utilize one CNN layer followed by two branches of RNNs
trained separately for arousal and valence. The method was evaluated using the
'MediaEval 2015 Emotion in Music' dataset. We achieved an RMSE of 0.202 for
arousal and 0.268 for valence, which is the best result reported on this
dataset.
Comment: accepted for Sound and Music Computing (SMC 2017)
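A rough sketch of the topology in Keras, under stated assumptions: a single 1-D convolutional layer over the audio features feeds two separate recurrent branches that regress arousal and valence. The paper trains the two branches separately; this sketch simply wires the shared-CNN, two-branch structure into one model for brevity, and all filter counts and unit sizes are illustrative:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(None, 40))  # (time, feature) audio frames
# One CNN layer shared by both branches.
conv = tf.keras.layers.Conv1D(64, kernel_size=5, padding="same",
                              activation="relu")(inputs)

# Two independent RNN branches, one per emotional dimension.
arousal = tf.keras.layers.GRU(32)(conv)
arousal = tf.keras.layers.Dense(1, name="arousal")(arousal)

valence = tf.keras.layers.GRU(32)(conv)
valence = tf.keras.layers.Dense(1, name="valence")(valence)

model = tf.keras.Model(inputs, [arousal, valence])
# RMSE, as reported in the paper, is the square root of this MSE objective.
model.compile(optimizer="adam", loss="mse")
```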