
    Adversarial Training in Affective Computing and Sentiment Analysis: Recent Advances and Perspectives

    Over the past few years, adversarial training has become an extremely active research topic and has been successfully applied to various Artificial Intelligence (AI) domains. As a potentially crucial technique for the development of the next generation of emotional AI systems, we herein provide a comprehensive overview of the application of adversarial training to affective computing and sentiment analysis. Various representative adversarial training algorithms are explained and discussed, each aimed at tackling a different challenge associated with emotional AI systems. Further, we highlight a range of potential future research directions. We expect that this overview will help facilitate the development of adversarial training for affective computing and sentiment analysis in both the academic and industrial communities.
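    The core idea the survey covers can be illustrated with a minimal sketch: train a classifier on both clean inputs and gradient-based adversarial perturbations of them (FGSM-style). This is a generic illustration in NumPy on a toy logistic-regression sentiment classifier, not any specific algorithm from the survey; all names here are illustrative.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fgsm_perturb(x, y, w, b, eps=0.1):
        # FGSM: move each input a small step in the sign of the loss gradient.
        # For binary cross-entropy with logistic output, dL/dx = (p - y) * w.
        p = sigmoid(x @ w + b)
        grad_x = (p - y)[:, None] * w[None, :]
        return x + eps * np.sign(grad_x)

    def adversarial_train(X, y, epochs=200, lr=0.1, eps=0.1):
        rng = np.random.default_rng(0)
        w = rng.normal(scale=0.01, size=X.shape[1])
        b = 0.0
        for _ in range(epochs):
            X_adv = fgsm_perturb(X, y, w, b, eps)   # worst-case neighbours
            X_all = np.vstack([X, X_adv])           # train on clean + adversarial
            y_all = np.concatenate([y, y])
            p = sigmoid(X_all @ w + b)
            grad_w = X_all.T @ (p - y_all) / len(y_all)
            grad_b = np.mean(p - y_all)
            w -= lr * grad_w
            b -= lr * grad_b
        return w, b
    ```

    In practice the perturbation is applied to word or acoustic embeddings rather than raw features, but the clean-plus-adversarial training loop is the same.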

    DeepTMH: Multimodal Semi-supervised framework leveraging Affective and Cognitive engagement for Telemental Health

    To aid existing telemental health services, we propose DeepTMH, a novel framework that models telemental health session videos by extracting latent vectors corresponding to Affective and Cognitive features frequently used in the psychology literature. Our approach leverages advances in semi-supervised learning to tackle data scarcity in the telemental health session video domain and consists of a multimodal semi-supervised GAN that detects important mental health indicators during telemental health sessions. We demonstrate the usefulness of our framework and contrast it against existing works on two tasks: Engagement regression and Valence-Arousal regression, both of which are important to psychologists during a telemental health session. Our framework reports a 40% improvement in RMSE over the SOTA method in Engagement regression and a 50% improvement in RMSE over the SOTA method in Valence-Arousal regression. To tackle the scarcity of publicly available datasets in the telemental health space, we release a new dataset, MEDICA, for mental health patient engagement detection. MEDICA consists of 1299 videos, each 3 seconds long. To the best of our knowledge, our approach is the first method to model telemental health session data based on psychology-driven Affective and Cognitive features, and it also accounts for data sparsity by leveraging a semi-supervised setup.
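    The semi-supervised GAN family this abstract refers to typically gives the discriminator K real classes plus one "fake" class, so unlabelled real clips still provide training signal (they should land in *some* real class). The sketch below shows that combined objective in NumPy; the function name, shapes, and class layout are illustrative assumptions, not DeepTMH's actual architecture.

    ```python
    import numpy as np

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def ssgan_losses(logits_lab, y_lab, logits_unl, logits_fake):
        K = logits_lab.shape[1] - 1  # last column is the "fake" class
        p_lab = softmax(logits_lab)
        # supervised term: cross-entropy on the K real classes
        sup = -np.mean(np.log(p_lab[np.arange(len(y_lab)), y_lab] + 1e-12))
        # unlabelled real samples: total mass on real classes should be high
        p_unl_real = softmax(logits_unl)[:, :K].sum(axis=1)
        unsup_real = -np.mean(np.log(p_unl_real + 1e-12))
        # generated samples: probability of the fake class should be high
        p_fake = softmax(logits_fake)[:, K]
        unsup_fake = -np.mean(np.log(p_fake + 1e-12))
        return sup + unsup_real + unsup_fake
    ```

    The unlabelled term is what lets a small labelled set (such as short 3-second clips) be stretched with abundant unlabelled video.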

    Multitask Learning from Augmented Auxiliary Data for Improving Speech Emotion Recognition

    Despite recent progress in speech emotion recognition (SER), state-of-the-art systems lack generalisation across different conditions. A key underlying reason for poor generalisation is the scarcity of emotion datasets, which is a significant roadblock to designing robust machine learning (ML) models. Recent works in SER focus on utilising multitask learning (MTL) methods to improve generalisation by learning shared representations. However, most of these studies propose MTL solutions that require meta labels for auxiliary tasks, which limits the training of SER systems. This paper proposes an MTL framework (MTL-AUG) that learns generalised representations from augmented data. We utilise augmentation-type classification and unsupervised reconstruction as auxiliary tasks, which allow training SER systems on augmented data without requiring any meta labels for the auxiliary tasks. The semi-supervised nature of MTL-AUG allows for the exploitation of abundant unlabelled data to further boost the performance of SER. We comprehensively evaluate the proposed framework in the following settings: (1) within corpus, (2) cross-corpus and cross-language, (3) noisy speech, and (4) adversarial attacks. Our evaluations using the widely used IEMOCAP, MSP-IMPROV, and EMODB datasets show improved results compared to existing state-of-the-art methods.
    Comment: Under review at IEEE Transactions on Affective Computing
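    The key trick in augmentation-type classification is that the auxiliary label comes for free: it is simply the index of the augmentation that was applied, so no human-provided meta labels are needed. A minimal sketch of that labelling step, with placeholder augmentations that stand in for the paper's actual augmentation set:

    ```python
    import numpy as np

    # Illustrative augmentations; indices double as free auxiliary labels.
    AUGMENTATIONS = {
        0: lambda x, rng: x,                                        # identity
        1: lambda x, rng: x + 0.05 * rng.standard_normal(x.shape),  # additive noise
        2: lambda x, rng: x[::2],                                   # naive 2x speed-up
    }

    def make_auxiliary_batch(signals, rng):
        """Return (augmented_signal, aug_type) pairs.

        aug_type is the auxiliary-task target: the shared encoder is trained
        to predict which augmentation produced each input.
        """
        batch = []
        for x in signals:
            aug_type = int(rng.integers(len(AUGMENTATIONS)))
            batch.append((AUGMENTATIONS[aug_type](x, rng), aug_type))
        return batch
    ```

    A shared encoder then feeds both the emotion head (on labelled clips) and the augmentation-type head (on any clip, labelled or not), which is what makes the setup semi-supervised.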