
    Representation Analysis Methods to Model Context for Speech Technology

    Deep neural networks have brought speech technology to levels approaching human parity. However, it is unclear how the dependencies learned within these networks relate to metrics such as recognition performance. This research focuses on strategies to interpret and exploit these learned context dependencies to improve speech recognition models. Context dependency analysis had not previously been explored for speech recognition networks, so a novel analysis framework is proposed to highlight and observe dependent representations within speech recognition models. The framework uses statistical correlation indices to compute the correlation between neural representations; by comparing these correlations across models built with different approaches, specific context dependencies within network layers can be observed. These insights into context dependencies can then be used to adapt modelling approaches so that they become more computationally efficient and achieve better recognition performance. Here, the performance of end-to-end speech recognition models is analysed, providing insights into acoustic and language modelling context dependencies. The modelling approach for a speaker recognition task is adapted to exploit acoustic context dependencies and reaches performance comparable with state-of-the-art methods, achieving a 2.89% equal error rate on the VoxCeleb1 training and test sets with 50% of the parameters. Furthermore, an empirical analysis of the role of acoustic context in speech emotion recognition modelling revealed that emotion cues present as a distributed event. These analyses and results aim to provide objective direction for the future development of automatic speech recognition systems.
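    The abstract does not specify which correlation index the framework uses, so the following is a minimal sketch of one widely used stand-in for comparing neural representations, linear CKA (Kornblith et al., 2019); the activation matrices here are synthetic placeholders, not the thesis's data.

```python
# Linear CKA between two activation matrices, a common correlation index
# for comparing neural representations. Illustrative only; the thesis's
# exact measure is not specified in the abstract.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between activation matrices of shape (samples, features)."""
    X = X - X.mean(axis=0)  # centre each feature dimension
    Y = Y - Y.mean(axis=0)
    # HSIC-based similarity: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return float(cross / (norm_x * norm_y))

# Compare, e.g., same-layer activations of two ASR models on shared inputs.
rng = np.random.default_rng(0)
acts_model_a = rng.normal(size=(512, 256))  # hypothetical activations
acts_model_b = rng.normal(size=(512, 320))  # feature dims may differ
print(f"CKA: {linear_cka(acts_model_a, acts_model_b):.3f}")
```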

    End-to-End Multimodal Emotion Recognition using Deep Neural Networks

    Automatic affect recognition is a challenging task because emotions can be expressed through a variety of modalities. Applications can be found in many domains, including multimedia retrieval and human-computer interaction. In recent years, deep neural networks have been used with great success in determining emotional states. Inspired by this success, we propose an emotion recognition system using the auditory and visual modalities. To capture the emotional content across various styles of speaking, robust features need to be extracted. To this end, we utilize a Convolutional Neural Network (CNN) to extract features from the speech, while for the visual modality we use a deep residual network (ResNet) of 50 layers. In addition to the importance of feature extraction, the machine learning algorithm also needs to be insensitive to outliers while being able to model the context. To tackle this problem, Long Short-Term Memory (LSTM) networks are utilized. The system is then trained in an end-to-end fashion where, by also taking advantage of the correlations between the streams, we manage to significantly outperform traditional approaches based on auditory and visual handcrafted features for the prediction of spontaneous and natural emotions on the RECOLA database of the AVEC 2016 research challenge on emotion recognition.
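    A minimal sketch of the fusion architecture the abstract describes follows: a 1-D CNN over the speech signal, a 50-layer ResNet over video frames, and an LSTM over the concatenated per-step features. All layer sizes and the output dimensionality are illustrative assumptions, not the paper's exact configuration.

```python
# Audio/visual end-to-end emotion model: CNN speech branch, ResNet-50
# visual branch, LSTM temporal fusion. Sizes are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class AudioVisualEmotion(nn.Module):
    def __init__(self, hidden: int = 256, n_outputs: int = 2):
        super().__init__()
        # Speech branch: 1-D convolutions over each raw waveform chunk.
        self.audio_cnn = nn.Sequential(
            nn.Conv1d(1, 40, kernel_size=80, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Visual branch: ResNet-50 backbone with the classifier head removed.
        backbone = resnet50(weights=None)
        backbone.fc = nn.Identity()  # yields 2048-d features per frame
        self.visual_cnn = backbone
        # Temporal model over fused features (40 audio + 2048 visual dims).
        self.lstm = nn.LSTM(40 + 2048, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)  # e.g. arousal/valence

    def forward(self, audio, frames):
        # audio: (batch, time, samples); frames: (batch, time, 3, H, W)
        b, t = audio.shape[:2]
        a = self.audio_cnn(audio.reshape(b * t, 1, -1)).squeeze(-1)
        v = self.visual_cnn(frames.reshape(b * t, *frames.shape[2:]))
        fused = torch.cat([a, v], dim=-1).reshape(b, t, -1)
        out, _ = self.lstm(fused)
        return self.head(out)  # per-step emotion prediction
```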

    Voice Activity Detection Based on Deep Neural Networks

    Various ambient noises corrupt audio recorded in real-world environments, partially obscuring the valuable information in human speech. Many speech processing systems, such as automatic speech recognition, speaker recognition and speech emotion recognition, are widely used to transcribe and interpret this information into other formats, but ambient noise and non-speech sounds in the audio can degrade their performance. Voice Activity Detection (VAD) acts as the front-end operation of these systems, filtering out undesired sounds. The general goal of VAD is to determine the presence and absence of human speech in audio signals. An effective VAD method accurately detects human speech segments under low-SNR conditions with any noise, while an efficient VAD method requires few parameters and little computation. Recently, deep learning-based approaches have achieved impressive detection performance by training neural networks on massive data. However, commonly used neural networks generally contain millions of parameters and require large amounts of computation, which is not feasible for computationally constrained devices. Moreover, most deep learning-based approaches adopt manual acoustic features to highlight the characteristics of human speech, and such features may not suit VAD in some scenarios; for example, some acoustic features struggle to discriminate babble noise from target speech when the audio is recorded in a crowd.

    In this thesis, we first propose a computation-efficient VAD neural network using multi-channel features. Multi-channel features allow convolutional kernels to capture contextual and dynamic information simultaneously, and a positional mask provides the features with positional information via the positional encoding technique, which requires no trainable parameters and costs negligible computation. The network consists of convolutional layers, bottleneck layers and a fully connected layer. In the bottleneck layers, channel-attention inverted blocks effectively learn hidden patterns of the multi-channel features at acceptable computational cost by adopting depthwise separable convolutions and the channel-attention mechanism. Experiments indicate that the proposed network achieves superior performance while requiring less computation than baseline methods.

    We then propose an end-to-end VAD model that learns acoustic features directly from raw audio data. It consists of three main parts: a feature extractor, a dual-attention transformer encoder and a classifier. The feature extractor employs a condense block to learn acoustic features from raw data. The dual-attention transformer encoder uses dual-path attention to encode local and global information in the learned features while maintaining low complexity through a linear multi-head attention mechanism. The classifier requires few trainable parameters and little computation owing to its non-MLP design. The proposed end-to-end model outperforms the computation-efficient neural network and other baseline methods by a considerable margin.
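    As a rough illustration of the "channel-attention inverted block" idea, the sketch below combines a pointwise expansion, a depthwise convolution, squeeze-and-excitation style channel attention, and a pointwise projection with a residual connection. The expansion ratio and reduction factor are assumptions, not the thesis's exact values.

```python
# Inverted bottleneck with depthwise separable convolution and
# channel attention; sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelAttentionInvertedBlock(nn.Module):
    def __init__(self, channels: int, expansion: int = 4, reduction: int = 8):
        super().__init__()
        hidden = channels * expansion
        # Pointwise expansion to a wider hidden width.
        self.expand = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
        )
        # Depthwise convolution: one filter per channel, cheap in FLOPs.
        self.depthwise = nn.Sequential(
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
        )
        # Channel attention: global pooling -> bottleneck -> per-channel gates.
        self.attend = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(hidden, hidden // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden // reduction, hidden, 1), nn.Sigmoid(),
        )
        # Pointwise projection back to the input width.
        self.project = nn.Sequential(
            nn.Conv2d(hidden, channels, 1, bias=False), nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        h = self.depthwise(self.expand(x))
        h = h * self.attend(h)      # reweight channels by learned gates
        return x + self.project(h)  # residual connection
```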

    Adversarial Training in Affective Computing and Sentiment Analysis: Recent Advances and Perspectives

    Over the past few years, adversarial training has become an extremely active research topic and has been successfully applied to various Artificial Intelligence (AI) domains. Because it is a potentially crucial technique for the development of the next generation of emotional AI systems, we herein provide a comprehensive overview of the application of adversarial training to affective computing and sentiment analysis. Various representative adversarial training algorithms are explained and discussed, each aimed at tackling a different challenge associated with emotional AI systems. Further, we highlight a range of potential future research directions. We expect that this overview will help facilitate the development of adversarial training for affective computing and sentiment analysis in both the academic and industrial communities.
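    For concreteness, here is a minimal sketch of one family of adversarial training such surveys cover: augmenting each batch with FGSM perturbations (Goodfellow et al., 2015). The model, optimizer and epsilon are placeholders, not anything prescribed by the paper.

```python
# One training step on clean plus FGSM-perturbed inputs.
import torch
import torch.nn.functional as F

def fgsm_training_step(model, x, y, optimizer, epsilon: float = 0.01):
    """Adversarial training step: train on x and its FGSM perturbation."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    # Adversarial example: small step in the sign of the input gradient.
    x_adv = (x + epsilon * grad.sign()).detach()
    optimizer.zero_grad()
    total = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    total.backward()
    optimizer.step()
    return total.item()
```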

    Multimodal Speech Emotion Recognition Using Audio and Text

    Speech emotion recognition is a challenging task, and extensive reliance has been placed on models that use audio features to build well-performing classifiers. In this paper, we propose a novel deep dual recurrent encoder model that utilizes text data and audio signals simultaneously to obtain a better understanding of speech data. Since emotional dialogue is composed of sound and spoken content, our model encodes the information from the audio and text sequences using dual recurrent neural networks (RNNs) and then combines the information from these sources to predict the emotion class. This architecture analyzes speech data from the signal level to the language level, and it thus utilizes the information within the data more comprehensively than models that focus on audio features alone. Extensive experiments are conducted to investigate the efficacy and properties of the proposed model. Our proposed model outperforms previous state-of-the-art methods in assigning data to one of four emotion categories (i.e., angry, happy, sad and neutral) when applied to the IEMOCAP dataset, with accuracies ranging from 68.8% to 71.8%.
    Comment: 7 pages, accepted as a conference paper at IEEE SLT 201
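    A minimal sketch of a dual recurrent encoder follows: one RNN over acoustic frames, one over word embeddings, with the final hidden states concatenated for four-way emotion classification. The GRU choice, dimensions and late-fusion strategy are assumptions, not the paper's exact configuration.

```python
# Dual recurrent encoder: separate audio and text RNNs, fused states
# fed to a linear classifier. Dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class DualRecurrentEncoder(nn.Module):
    def __init__(self, n_mfcc=39, vocab=10000, emb=100, hidden=128, classes=4):
        super().__init__()
        self.audio_rnn = nn.GRU(n_mfcc, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab, emb)
        self.text_rnn = nn.GRU(emb, hidden, batch_first=True)
        # Four classes: angry, happy, sad, neutral.
        self.classifier = nn.Linear(2 * hidden, classes)

    def forward(self, audio_feats, token_ids):
        # audio_feats: (batch, frames, n_mfcc); token_ids: (batch, words)
        _, h_audio = self.audio_rnn(audio_feats)
        _, h_text = self.text_rnn(self.embed(token_ids))
        # Concatenate the final hidden state of each encoder.
        fused = torch.cat([h_audio[-1], h_text[-1]], dim=-1)
        return self.classifier(fused)  # logits over emotion classes
```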