
    Indian EmoSpeech Command Dataset: A dataset for emotion based speech recognition in the wild

    Speech emotion analysis is an important task that enables several downstream applications. Non-verbal sounds within speech utterances also play a pivotal role in emotion analysis. With the widespread use of smartphones, it has become viable to analyze speech commands captured by on-board microphones for emotion understanding using on-device machine learning models. The non-verbal information includes background environmental sounds that describe the type of surroundings, the current situation, and the activities being performed. In this work, we consider both verbal sounds (speech commands) and non-verbal sounds (background noises) within an utterance for emotion analysis in real-life scenarios. We create an indigenous dataset for this task, the "Indian EmoSpeech Command Dataset". It contains keywords spoken with diverse emotions over varied background sounds, presented to explore new challenges in audio analysis. We exhaustively compare various baseline models for emotion analysis on speech commands across several performance metrics, and we demonstrate a significant average gain of 3.3% in top-one score over a subset of the speech command dataset for keyword spotting.
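    The top-one score used above is standard top-1 accuracy: the fraction of utterances whose highest-scoring class matches the ground-truth keyword. A minimal sketch of the metric (the scores and labels here are hypothetical, not taken from the dataset):

    ```python
    import numpy as np

    def top_one_score(logits: np.ndarray, labels: np.ndarray) -> float:
        """Fraction of utterances whose highest-scoring class matches the label."""
        predictions = np.argmax(logits, axis=1)
        return float(np.mean(predictions == labels))

    # Hypothetical model scores for 4 utterances over 3 keyword classes.
    logits = np.array([
        [2.1, 0.3, -1.0],   # predicted class 0
        [0.2, 1.7,  0.4],   # predicted class 1
        [0.9, 0.1,  1.5],   # predicted class 2
        [1.2, 1.9,  0.0],   # predicted class 1
    ])
    labels = np.array([0, 1, 2, 0])  # last utterance is misclassified
    print(top_one_score(logits, labels))  # → 0.75
    ```

    A reported gain of 3.3% in this score means 3.3 more utterances per hundred are assigned the correct keyword class.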

    Performance of the Vocal Source Related Features from the Linear Prediction Residual Signal in Speech Emotion Recognition

    Researchers working on Speech Emotion Recognition have proposed various useful features and analyzed their performance for emotion classification. However, a majority of the studies rely on acoustic features characterized by the vocal tract response. The usefulness of vocal source related features has not been extensively explored, even though they are expected to convey useful emotion-related information. In this research, we study the significance of vocal source related features in Speech Emotion Recognition and compare their performance against vocal tract related features in emotion identification. The vocal source related features are extracted from the Linear Prediction residual signal. The study shows that vocal source related features contain emotion-discriminant information, and that integrating them with vocal tract related features improves the emotion recognition rate.
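    The Linear Prediction residual is obtained by fitting an all-pole vocal tract model to a speech frame and inverse-filtering the frame with it, leaving the excitation (vocal source) component. A minimal sketch of the general technique, using the autocorrelation method with the Levinson-Durbin recursion; this illustrates the standard computation, not the authors' exact feature pipeline, and the default model order is an assumption:

    ```python
    import numpy as np

    def lpc_residual(frame: np.ndarray, order: int = 12) -> np.ndarray:
        """Inverse-filter a speech frame with an order-p LP model to get the residual.

        Fits predictor coefficients a[1..p] by the autocorrelation method
        (Levinson-Durbin recursion), then computes the residual
        e[n] = s[n] + sum_{k=1..p} a[k] * s[n-k].
        """
        n = len(frame)
        # Autocorrelation lags r[0..order]
        r = np.correlate(frame, frame, mode="full")[n - 1 : n + order]
        a = np.zeros(order + 1)
        a[0] = 1.0
        err = r[0]  # prediction error energy
        for i in range(1, order + 1):
            acc = a[:i] @ r[i:0:-1]        # sum_{j<i} a[j] * r[i-j]
            k = -acc / err                 # reflection coefficient
            a[1:i] += k * a[i - 1:0:-1]    # update a[1..i-1]
            a[i] = k
            err *= 1.0 - k * k
        # Residual = frame filtered by A(z) = 1 + a[1] z^-1 + ... + a[p] z^-p
        return np.convolve(frame, a)[:n]
    ```

    For voiced speech the residual energy is much lower than the frame energy (the predictor removes the vocal tract resonances), and source-related descriptors such as residual energy or periodicity can then be computed from it.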