723 research outputs found

    Emotion Recognition from Acted and Spontaneous Speech

    Get PDF
    This doctoral thesis deals with emotion recognition from speech signals. The thesis is divided into two main parts: the first part describes the proposed approaches for emotion recognition using two multilingual databases of acted emotional speech. The main contributions of this part are a detailed analysis of a large set of acoustic features extracted from the speech signal, new classification schemes for vocal emotion recognition such as “emotion coupling”, and a new method for mapping discrete emotional states into a two-dimensional space. The second part is devoted to emotion recognition using a database of spontaneous emotional speech, based on telephone recordings obtained from real call centers.
The knowledge gained from the experiments with emotion recognition from acted speech was exploited to design a new approach for classifying seven spontaneous emotional states. The core of the proposed approach is a complex classification architecture based on the fusion of different systems. The thesis also examines the influence of the speaker’s emotional state on gender recognition performance, and proposes a system for the automatic identification of successful phone calls in call centers by means of dialogue features.
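The fusion of different systems mentioned above can be illustrated with a minimal score-level fusion sketch. This is not the thesis's actual architecture; the emotion labels, the weighted-average combination rule, and the function names are illustrative assumptions only.

```python
# Minimal sketch of score-level fusion: each subsystem emits a score per
# emotion, and the fused decision takes a weighted average of those scores.
# Labels and weights are illustrative, not taken from the thesis.

EMOTIONS = ["anger", "joy", "sadness", "fear", "surprise", "disgust", "neutral"]

def fuse_scores(system_scores, weights):
    """Weighted average of per-emotion scores from several subsystems.

    system_scores: list of dicts {emotion: score}, one dict per subsystem
    weights: list of floats, one per subsystem (assumed to sum to 1)
    """
    fused = {e: 0.0 for e in EMOTIONS}
    for scores, w in zip(system_scores, weights):
        for e in EMOTIONS:
            fused[e] += w * scores.get(e, 0.0)
    return fused

def classify(system_scores, weights):
    """Pick the emotion with the highest fused score."""
    fused = fuse_scores(system_scores, weights)
    return max(fused, key=fused.get)

# Example: two subsystems disagree; the higher-weighted one prevails.
decision = classify([{"anger": 0.8}, {"joy": 1.0}], [0.6, 0.4])
```

Weighted score averaging is only one of many fusion strategies (majority voting and trained meta-classifiers are common alternatives); the abstract does not specify which the thesis uses.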

    How major depressive disorder affects the ability to decode multimodal dynamic emotional stimuli

    Get PDF
    Most studies investigating the processing of emotions in depressed patients reported impairments in the decoding of negative emotions. However, these studies adopted static stimuli (mostly stereotypical facial expressions corresponding to basic emotions), which do not reflect the way people experience emotions in everyday life. For this reason, this work investigates the decoding of emotional expressions in patients affected by Recurrent Major Depressive Disorder (RMDDs) using dynamic audio/video stimuli. RMDDs’ performance is compared with the performance of patients with Adjustment Disorder with Depressed Mood (ADs) and healthy control subjects (HCs). The experiments involve 27 RMDDs (16 with acute depression - RMDD-A, and 11 in a compensation phase - RMDD-C), 16 ADs and 16 HCs. The ability to decode emotional expressions is assessed through an emotion recognition task based on short audio (without video), video (without audio) and audio/video clips. The results show that AD patients are significantly less accurate than HCs in decoding fear, anger, happiness, surprise and sadness. RMDD-A patients are significantly less accurate than HCs in decoding happiness, sadness and surprise. Finally, no significant differences were found between HCs and RMDD-Cs. The different communication channels and the types of emotion play a significant role in limiting the decoding accuracy.

    Determination of Formant Features in Czech and Slovak for GMM Emotional Speech Classifier

    Get PDF
    The paper is aimed at the determination of formant features (FF), which describe vocal tract characteristics. It comprises an analysis of the first three formant positions together with their bandwidths and formant tilts. Subsequently, a statistical evaluation and comparison of the FF was performed. This experiment was realized with speech material in the form of sentences of male and female speakers expressing four emotional states (joy, sadness, anger, and a neutral state) in the Czech and Slovak languages. The statistical distribution of the analyzed formant frequencies and formant tilts shows good differentiation between neutral and emotional styles for both voices. By contrast, the values of the formant 3-dB bandwidths show no correlation with the type of speaking style or the type of voice. These spectral parameters, together with the values of other speech characteristics, were used in the feature vector for the Gaussian mixture model (GMM) emotional speech style classifier that is currently under development. The overall mean classification error rate is about 18 %, and the best obtained error rate is 5 % for the sadness style of the female voice. These values are acceptable in this first stage of development of the GMM classifier, which should be used for evaluation of synthetic speech quality after applied voice conversion and emotional speech style transformation.
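The classifier described above fits a model per emotional style and picks the style whose model best explains an utterance's feature vector. The sketch below is a deliberate simplification: it uses a single diagonal Gaussian per class instead of a full mixture, and the formant-like feature values in the usage example are invented for illustration, not taken from the paper's data.

```python
import math

# Simplified stand-in for a per-class GMM classifier: one diagonal Gaussian
# per emotion, decision by maximum log-likelihood. A real GMM would use
# several mixture components per class; the principle is the same.

def fit_gaussian(samples):
    """Per-dimension mean and variance (diagonal covariance) from training vectors."""
    n = len(samples)
    dims = len(samples[0])
    means = [sum(s[d] for s in samples) / n for d in range(dims)]
    variances = [max(sum((s[d] - means[d]) ** 2 for s in samples) / n, 1e-6)
                 for d in range(dims)]
    return means, variances

def log_likelihood(x, model):
    """Log-density of feature vector x under a diagonal Gaussian."""
    means, variances = model
    ll = 0.0
    for xi, m, v in zip(x, means, variances):
        ll += -0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
    return ll

def classify(x, models):
    """models: dict {emotion: (means, variances)}; pick the max-likelihood class."""
    return max(models, key=lambda e: log_likelihood(x, models[e]))

# Usage with invented 2-D "formant" vectors (F1, F2 in Hz):
models = {
    "neutral": fit_gaussian([[500.0, 1500.0], [510.0, 1490.0], [490.0, 1510.0]]),
    "anger":   fit_gaussian([[700.0, 1800.0], [710.0, 1790.0], [690.0, 1810.0]]),
}
label = classify([505.0, 1505.0], models)
```

Replacing each single Gaussian with a trained mixture (e.g. via expectation-maximization) turns this into the GMM classifier the paper refers to.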

    Development of Multimodal Interfaces: Active Listening and Synchrony

    Get PDF

    Effect of attachment and personality styles on the ability to interpret emotional vocal expressions: A cross-sectional study

    Get PDF
    Background: Taking attachment as its theoretical reference, the post-rationalist approach within cognitive theory has outlined two basic categories of the regulation of cognitive and emotional processes: the outward and inward personality orientations. Research on the role of attachment style in individuals' ability to decode emotions has never considered inward and outward orientations. Objective: This cross-sectional study was conducted to compare individuals with different attachment styles and different inward/outward personality organizations on their ability to decode vocal emotions. Methods: After being assessed for attachment and personality styles, a sample of university students performed an emotional-decoding task, and their accuracy (Study 1) and reaction times (Study 2) were measured. Gender effects were also examined. Results: No significant differences in emotion decoding accuracy emerged among individuals with either secure or insecure attachment styles and either inward or outward personality orientations. Both secure and inward individuals were significantly faster than insecure and outward ones in decoding vocal expressions of joy, whereas securely attached individuals were faster than insecure ones in decoding vocal expressions of anger. Conclusion: Considering that the recognition of emotion falls within the basic skills upon which typical social interactions are based, the findings can be useful to enhance the comprehension of personality-related factors involved in the context of daily social interactions.

    Undergraduate Catalogue 2002-2003

    Get PDF
    https://scholarship.shu.edu/undergraduate_catalogues/1007/thumbnail.jp

    Undergraduate Catalogue 2000-2001

    Get PDF
    https://scholarship.shu.edu/undergraduate_catalogues/1041/thumbnail.jp

    Undergraduate Catalogue 2001-2002

    Get PDF
    https://scholarship.shu.edu/undergraduate_catalogues/1040/thumbnail.jp

    Undergraduate Catalogue 2003-2004

    Get PDF
    https://scholarship.shu.edu/undergraduate_catalogues/1074/thumbnail.jp

    Undergraduate Catalogue 1992-1993

    Get PDF
    https://scholarship.shu.edu/undergraduate_catalogues/1049/thumbnail.jp