
    Fusion for Audio-Visual Laughter Detection

    Laughter is a highly variable signal that can express a spectrum of emotions, which makes its automatic detection a challenging but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is performed by combining (fusing) the results of separate audio and video classifiers at the decision level. The video classifier uses features based on the principal components of 20 tracked facial points; for audio we use the commonly used PLP and RASTA-PLP features. Our results indicate that RASTA-PLP features outperform PLP features for laughter detection in audio. We compared classifiers based on hidden Markov models (HMMs), Gaussian mixture models (GMMs) and support vector machines (SVMs), and found that RASTA-PLP combined with a GMM gave the best performance for the audio modality. The video features classified with an SVM gave the best single-modality performance. Fusion at the decision level resulted in laughter detection with significantly better performance than single-modality classification.
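    The decision-level fusion step described above can be sketched as a weighted combination of the two classifiers' per-segment scores. The weights, threshold, and function name below are illustrative assumptions, not details taken from the paper.

```python
def fuse_decisions(audio_score, video_score, w_audio=0.5, w_video=0.5, threshold=0.5):
    """Decision-level fusion: combine per-modality scores for the
    'laughter' class with a weighted sum, then threshold the result.
    Weights and threshold are illustrative, not taken from the paper."""
    fused = w_audio * audio_score + w_video * video_score
    return fused >= threshold

# Example: audio GMM score 0.8, video SVM score 0.3
print(fuse_decisions(0.8, 0.3))  # weighted mean 0.55 -> True
```

    In practice the weights would be tuned on held-out data so that the more reliable modality dominates the fused decision.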

    Detection of nonverbal vocalizations using Gaussian Mixture Models: looking for fillers and laughter in conversational speech

    In this paper, we analyze acoustic profiles of fillers (i.e. filled pauses, FPs) and laughter with the aim of automatically localizing these nonverbal vocalizations in a stream of audio. Among other features, we use voice quality features to capture the distinctive production modes of laughter, and spectral similarity measures to capture the stability of the oral tract that is characteristic of FPs. Classification experiments with Gaussian Mixture Models and various feature sets are performed. We find that Mel-Frequency Cepstrum Coefficients perform relatively well in comparison to other features for both FPs and laughter. To address the large variation in the frame-wise decision scores (e.g., log-likelihood ratios) observed in sequences of frames, we apply a median filter to these scores, which yields large performance improvements. Our analyses and results are presented within the framework of this year's Interspeech Computational Paralinguistics sub-Challenge on Social Signals.
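    The median smoothing of frame-wise scores described above can be sketched as a sliding-window filter. The window width here is an illustrative choice, not the value used in the paper.

```python
from statistics import median

def median_filter(scores, width=5):
    """Smooth frame-wise decision scores (e.g., log-likelihood ratios)
    with a sliding median; isolated outlier frames are suppressed.
    The window width is an illustrative choice."""
    half = width // 2
    out = []
    for i in range(len(scores)):
        lo, hi = max(0, i - half), min(len(scores), i + half + 1)
        out.append(median(scores[lo:hi]))
    return out

# A single spurious spike in the score sequence is removed:
print(median_filter([0.0, 0.0, 5.0, 0.0, 0.0], width=3))  # [0.0, 0.0, 0.0, 0.0, 0.0]
```

    Because the median ignores extreme values inside the window, a lone misclassified frame no longer flips the segment-level decision, which is consistent with the performance improvements the abstract reports.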

    Laughter Classification Using Deep Rectifier Neural Networks with a Minimal Feature Subset

    Laughter is one of the most important paralinguistic events, and it has specific roles in human conversation. The automatic detection of laughter occurrences in human speech can aid automatic speech recognition systems as well as paralinguistic tasks such as emotion detection. In this study we apply Deep Neural Networks (DNNs) for laughter detection, as this technology is nowadays considered state-of-the-art in similar tasks such as phoneme identification. We carry out our experiments on two corpora containing spontaneous speech in two languages (Hungarian and English). Also, as we find it reasonable that not all frequency regions are required for efficient laughter detection, we perform feature selection to find a sufficient feature subset.

    Automatic recognition of laughter using deep neural networks

    Nonverbal communication plays an important role in understanding speech. Depending on the speaking style, the type and frequency of nonverbal cues also vary. In spontaneous speech, for example, one of the most frequent nonverbal cues is laughter, which serves numerous communicative functions. Alongside analyses of the functions of laughter, research has also begun on automatically recognizing laughter from the acoustic signal alone [1,2,3,4,5,6]. In recent years, deep neural networks (DNNs) have become dominant in speech recognition for the task of frame-level phoneme classification, displacing the previously dominant GMMs [7,8,9]. In the present study we apply deep neural networks to frame-level laughter recognition. We conduct our experiments with three feature sets: in addition to the MFCC and PLP features traditionally used with GMMs, we apply the FBANK feature set, which consists of the energies of 40 mel filter banks together with their first- and second-order derivatives. We also examine to what extent individual frequency bands help the deep neural network identify frames containing laughter. To this end, in the second part of the paper we experimentally rank how much each band contributes to the accuracy achieved by the deep neural network.
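    The band-ranking experiment described above, measuring how much each frequency band contributes to the network's accuracy, can be sketched as a masking loop. The abstract does not specify the exact procedure, so `train_and_eval` is a hypothetical callback standing in for the actual DNN training and evaluation.

```python
def rank_bands(features, labels, train_and_eval, n_bands=40):
    """Rank filter-bank bands by their contribution to accuracy:
    zero out one band at a time and record the resulting drop.
    `train_and_eval` is a hypothetical callback returning an
    accuracy score for the given (features, labels) pair."""
    baseline = train_and_eval(features, labels)
    drops = []
    for b in range(n_bands):
        # Mask band b in every frame while leaving the rest intact.
        masked = [[0.0 if i == b else v for i, v in enumerate(frame)]
                  for frame in features]
        drops.append((baseline - train_and_eval(masked, labels), b))
    # Largest accuracy drop first: that band contributed most.
    return [b for _, b in sorted(drops, reverse=True)]
```

    Bands whose removal barely changes the accuracy are candidates for exclusion, which is the rationale behind the feature-subset selection mentioned in the previous abstract.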

    Feature extraction based on bio-inspired model for robust emotion recognition

    Emotional state identification is an important issue for achieving more natural interactive speech systems. Ideally, these systems should also work in real environments, where some kind of noise is generally present. Several bio-inspired representations have been applied to artificial systems for speech processing under noise conditions. In this work, an auditory signal representation is used to obtain a novel bio-inspired set of features for emotional speech signals. These characteristics, together with other spectral and prosodic features, are used for emotion recognition under noise conditions. Neural models were trained as classifiers, and results were compared to the well-known mel-frequency cepstral coefficients. Results show that using the proposed representations it is possible to significantly improve the robustness of an emotion recognition system. The results were also validated in a speaker-independent scheme and with two emotional speech corpora.

    Detection of Verbal and Nonverbal speech features as markers of Depression: results of manual analysis and automatic classification

    The present PhD project was the result of multidisciplinary work involving psychiatrists, computing scientists, social signal processing experts and psychology students, with the aim of analysing verbal and nonverbal behaviour in patients affected by Depression. Collaborations with several Clinical Health Centers were established to recruit a group of patients suffering from depressive disorders; a group of healthy controls was recruited as well. A collaboration with the School of Computing Science of Glasgow University was established to analyse the collected data. Depression was selected for this study because it is one of the most common mental disorders in the world (World Health Organization, 2017), associated with half of all suicides (Lecrubier, 2000). It requires prolonged and expensive medical treatment, resulting in a significant burden for both patients and society (Olesen et al., 2012). The use of objective and reliable measurements of depressive symptoms can support clinicians during the diagnosis, reducing the risk of subjective biases and disorder misclassification (see discussion in Chapter 1) while making the diagnosis quick and non-invasive. Given this, the present PhD project investigates verbal (i.e. speech content) and nonverbal (i.e. paralinguistic) behaviour in depressed patients in order to find speech parameters that can serve as objective markers of depressive symptoms. Verbal and nonverbal behaviour are investigated through two kinds of speech task: reading and spontaneous speech. Both manual feature extraction and automatic classification approaches are used for this purpose. Differences between acute and remitted patients in prosodic and verbal features have been investigated as well.
In addition, unlike other studies in the literature, this project investigates differences between subjects with and without Early Maladaptive Schemas (EMS: Young et al., 2003), independently of depressive symptoms, with respect to both verbal and nonverbal behaviour. The proposed analysis shows that patients differ from healthy subjects in several verbal and nonverbal features. Moreover, using both reading and spontaneous speech, it is possible to automatically detect Depression with a good level of accuracy (from 68% to 76%). These results demonstrate that the investigation of speech features can be a useful instrument, in addition to current self-reports and clinical interviews, for supporting the diagnosis of depressive disorders. Contrary to what was expected, patients in the acute and remitted phases do not differ in nonverbal features, and only a few differences emerge in verbal behaviour. Similarly, automatic classification using paralinguistic features does not discriminate well between subjects with and without EMS, and only a few differences between them have been found in verbal behaviour. Possible explanations and limitations of these results are discussed.

    Paralinguistic event detection in children's speech

    Paralinguistic events are useful indicators of the affective state of a speaker. In children's speech, these cues are used to form social bonds with caregivers. They have also been found useful for the very early detection of developmental disorders such as autism spectrum disorder (ASD) in children's speech. Prior work on children's speech has relied on a limited number of subjects, without sufficient diversity in the types of vocalizations produced. Moreover, the features necessary to understand the production of paralinguistic events are not fully understood. Since there is no off-the-shelf solution for detecting instances of laughter and crying in children's speech, the focus of this thesis is to investigate and develop signal processing algorithms to extract acoustic features and to apply machine learning algorithms to various corpora. Results obtained with baseline spectral and prosodic features indicate that a combination of spectral, prosodic, and dysphonation-related features is needed to detect laughter and whining in toddlers' speech across different age groups and recording environments. Long-term features were found useful for capturing the periodic properties of laughter in adults' and children's speech, and detected instances of laughter with a high degree of accuracy. Finally, the thesis considers the use of multi-modal information, combining acoustic features with computer-vision-based smile-related features, to detect instances of laughter and to reduce false positives in adults' and children's speech. The fusion of the features improved accuracy and recall rates compared to using either modality on its own.