
    Fundamental frequency in Estonian emotional read-out speech

    Fundamental frequency (F0, perceived as pitch) is an important prosodic cue of emotion. The aim of the present study was to find out whether sentence emotion has a detectable influence on the F0 height and range of Estonian read-out speech. To that end, the F0 of each vowel in the Estonian read-out sentences was measured, and its median was calculated for three emotions (anger, joy, sadness) and for neutral speech. In addition, the F0 range was measured for emotional and neutral speech, as was the F0 height in sentence-initial and sentence-final positions. The results revealed that in the investigated material, F0 was highest for joy and lowest for anger. The F0 range, however, was widest for anger and narrowest for sadness. The differences in F0 height at the beginning versus the end of sentences were not statistically significant, either for pairs of emotions or for emotions compared with neutral speech.
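
    The kind of measurement described above can be sketched roughly as follows. This is a minimal illustration, assuming WAV files grouped by emotion label (the file names and grouping are hypothetical), with librosa's pYIN tracker standing in for whatever pitch extraction the study actually used, and the 5th–95th percentile span standing in for its F0 range measure.

```python
# Sketch: median F0 and an F0-range estimate per emotion from labelled recordings.
import numpy as np
import librosa

files_by_emotion = {
    "anger":   ["anger_01.wav"],     # hypothetical file names
    "joy":     ["joy_01.wav"],
    "sadness": ["sadness_01.wav"],
    "neutral": ["neutral_01.wav"],
}

for emotion, paths in files_by_emotion.items():
    f0_values = []
    for path in paths:
        y, sr = librosa.load(path, sr=None)
        f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                     fmax=librosa.note_to_hz("C6"), sr=sr)
        f0_values.extend(f0[voiced & ~np.isnan(f0)])   # keep voiced frames only
    f0_values = np.asarray(f0_values)
    print(f"{emotion}: median F0 = {np.median(f0_values):.1f} Hz, "
          f"range (5th-95th pct) = "
          f"{np.percentile(f0_values, 95) - np.percentile(f0_values, 5):.1f} Hz")
```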

    Emotion recognition from speech: tools and challenges

    Human emotion recognition from speech is studied frequently because of its importance in many applications, e.g. human-computer interaction. There is wide diversity and little agreement about the basic emotions or emotion-related states on the one hand, and about where the emotion-related information lies in the speech signal on the other. These diversities motivate our investigation into extracting Meta-features using the PCA approach, or using a non-adaptive random projection (RP), which significantly reduces the large-dimensional speech feature vectors that may contain a wide range of emotion-related information. Subsets of Meta-features are fused to increase the performance of the recognition model, which adopts the score-based LDC classifier. We demonstrate that our scheme outperforms state-of-the-art results when tested on non-prompted databases or acted databases (i.e. when subjects act specific emotions while uttering a sentence). However, the large gap between the accuracy rates achieved on the different types of speech datasets raises questions about the way emotions modulate speech. In particular, we argue that emotion recognition from speech should not be treated as a classification problem. We demonstrate the presence of a spectrum of different emotions in the same speech portion, especially in the non-prompted datasets, which tend to be more “natural” than the acted datasets, where the subjects attempt to suppress all but one emotion. © (2015) Society of Photo-Optical Instrumentation Engineers (SPIE).
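
    The pipeline described above, a dimensionality reducer followed by a linear classifier, can be sketched with scikit-learn. The feature matrix and labels below are random placeholders, PCA and a Gaussian random projection stand in for the paper's Meta-feature extraction, and scikit-learn's LinearDiscriminantAnalysis plays the role of the LDC classifier; none of this reproduces the paper's actual configuration.

```python
# Sketch: reduce a large acoustic feature vector with PCA or random projection,
# then classify emotions with a linear discriminant classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.random_projection import GaussianRandomProjection
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2000))     # placeholder for a high-dimensional feature set
y = rng.integers(0, 4, size=500)     # placeholder emotion labels (4 classes)

for name, reducer in [("PCA", PCA(n_components=50)),
                      ("RP", GaussianRandomProjection(n_components=50, random_state=0))]:
    model = make_pipeline(reducer, LinearDiscriminantAnalysis())
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```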

    On automatic emotion classification using acoustic features

    In this thesis, we describe extensive experiments on the classification of emotions from speech using acoustic features. This area of research has important applications in human-computer interaction. We have thoroughly reviewed the current literature and present our results on some of the contemporary emotional speech databases. The principal focus is on creating a large set of acoustic features descriptive of different emotional states and on finding methods for selecting a subset of the best-performing features. We examine several traditional feature selection methods and propose a novel scheme which employs a preferential Borda voting strategy for ranking features. The comparative results show that our proposed scheme can strike a balance between accurate but computationally intensive wrapper methods and less accurate but computationally cheaper filter methods for feature selection.
    Using the selected features, several schemes for extending binary classifiers to multiclass classification are tested. Some of these classifiers form serial combinations of binary classifiers, while others use a hierarchical structure to perform this task. We describe a new hierarchical classification scheme, which we call Data-Driven Dimensional Emotion Classification (3DEC), whose decision hierarchy is based on non-metric multidimensional scaling (NMDS) of the data. This method of creating a hierarchical structure for the classification of emotion classes gives significant improvements over the other methods tested. The NMDS representation of emotional speech data can be interpreted in terms of the well-known valence-arousal model of emotion. We find that this model does not give a particularly good fit to the data: although the arousal dimension can be identified easily, valence is not well represented in the transformed data. From the recognition results on these two dimensions, we conclude that the valence and arousal dimensions are not orthogonal to each other.
    In the last part of this thesis, we deal with the difficult but important topic of improving the generalisation of speech emotion recognition (SER) systems across different speakers and recording environments, a topic that has generally been overlooked in current research. First we apply the traditional methods used in automatic speech recognition (ASR) systems for improving generalisation to intra- and inter-database emotion classification; these methods do improve the average accuracy of the emotion classifier. We then identify the differences between training and test data, due to speakers and acoustic environments, as a covariate shift. This shift is minimised by using importance weighting algorithms from the emerging field of transfer learning to guide the learning algorithm towards the training data that better represents the test data. Our results show that importance weighting algorithms can be used to minimise the differences between training and test data. We also test the effectiveness of importance weighting algorithms on inter-database and cross-lingual emotion recognition, and from these results we draw conclusions about the universal nature of emotions across different languages.
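
    The Borda voting idea mentioned above, combining several feature rankings into one consensus ranking, can be sketched as follows. The individual rankers shown here (mutual information and ANOVA F-score) and the random data are stand-ins for illustration, not the criteria or corpora used in the thesis.

```python
# Sketch: Borda-count fusion of per-feature scores from several selection criteria.
import numpy as np
from sklearn.feature_selection import mutual_info_classif, f_classif

def borda_rank(score_lists):
    """Combine several per-feature score arrays by Borda count.

    Each ranker awards (rank position) points: the worst feature gets 0,
    the best gets n_features - 1. Features are ordered by total points."""
    n_features = len(score_lists[0])
    points = np.zeros(n_features)
    for scores in score_lists:
        order = np.argsort(scores)               # ascending: worst feature first
        points[order] += np.arange(n_features)   # best feature gets the most points
    return np.argsort(points)[::-1]              # feature indices, best first

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))     # placeholder feature matrix
y = rng.integers(0, 4, size=300)   # placeholder emotion labels

mi = mutual_info_classif(X, y, random_state=0)
f_scores, _ = f_classif(X, y)
top10 = borda_rank([mi, f_scores])[:10]
print("Top-10 features by Borda count:", top10)
```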

    Detecting emotions from speech using machine learning techniques

    D.Phil. (Electronic Engineering)

    A cross-cultural investigation of the vocal correlates of emotion

    PhD thesis. Universal and culture-specific properties of the vocal communication of human emotion are investigated in this balanced study focussing on the encoding and decoding of Happy, Sad, Angry, Fearful and Calm by English and Japanese participants (eight female encoders for each culture, and eight female and eight male decoders for each culture). Previous methodologies and findings are compared. This investigation is novel in its design of symmetrical procedures to facilitate cross-cultural comparison of the results of decoding tests and acoustic analysis; a simulation/self-induction method was used in which participants from both cultures produced, as far as possible, the same pseudo-utterances. All emotions were distinguished beyond chance irrespective of culture, except for Japanese participants’ decoding of English Fearful, which was decoded at a level borderline with chance. Angry and Sad were well recognised both in-group and cross-culturally, and Happy was identified well in-group. Confusions between emotions tended to follow dimensional lines of arousal or valence. Acoustic analysis found significant distinctions between all emotions for each culture, except between the two low-arousal emotions Sad and Calm. Evidence of ‘in-group advantage’ was found for English decoding of Happy, Fearful and Calm and for Japanese decoding of Happy; there is support for previous evidence of East/West cultural differences in display rules. A novel concept is suggested for the finding that Japanese decoders identified Happy, Sad and Angry more reliably from English than from Japanese expressions. Whilst duration, fundamental frequency and intensity all contributed to distinctions between emotions in English, only measures of fundamental frequency were found to significantly distinguish emotions in Japanese. Acoustic cues tended to be less salient in Japanese than in English when compared to the expected cues for high- and low-arousal emotions. In addition, new evidence was found of a cross-cultural influence of vowel quality upon emotion recognition.

    Emotion Recognition from Speech with Acoustic, Non-Linear and Wavelet-based Features Extracted in Different Acoustic Conditions

    ABSTRACT: In recent years, there has been great progress in automatic speech recognition. The challenge now is not only to recognize the semantic content of speech but also its so-called "paralinguistic" aspects, including the emotions and personality of the speaker. This research work aims at developing a methodology for automatic emotion recognition from speech signals in non-controlled noise conditions. For that purpose, different sets of acoustic, non-linear, and wavelet-based features are used to characterize emotions in databases created for this purpose.
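
    A rough sketch of combining conventional acoustic features with simple wavelet-based ones is shown below, assuming mean MFCCs as the acoustic part and per-band log-energies of a discrete wavelet decomposition as the wavelet part. The input file name, wavelet family, decomposition depth and statistics are illustrative choices, not the feature set used in the work above.

```python
# Sketch: build one utterance-level feature vector from MFCCs plus wavelet band energies.
import numpy as np
import librosa
import pywt

y, sr = librosa.load("utterance.wav", sr=None)   # hypothetical input file

# Acoustic features: MFCCs averaged over the utterance
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

# Wavelet features: log-energy of each sub-band of a 5-level Daubechies-4 decomposition
coeffs = pywt.wavedec(y, "db4", level=5)
wavelet_energy = np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

feature_vector = np.concatenate([mfcc, wavelet_energy])
print(feature_vector.shape)   # 13 MFCCs + 6 wavelet band energies
```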

    Põhiemotsioonid eestikeelses etteloetud kõnes: akustiline analüüs ja modelleerimine (Basic emotions in Estonian read-aloud speech: acoustic analysis and modelling)

    The electronic version of the dissertation does not include the publications.
    This doctoral dissertation had two major purposes: (a) to find out and describe the acoustic expression of three basic emotions – joy, sadness and anger – in read Estonian speech, and (b) to create, based on the resulting description, acoustic models of emotional speech designed to help parametric synthesis of Estonian speech recognizably express these emotions. Since synthetic speech has many applications in fields such as human-machine interaction, multimedia, and aids for the disabled, it is vital that it sound natural, that is, as human-like as possible. One way towards naturalness is to add emotions to the synthetic speech by means of models that feed the synthesiser the combinations of acoustic parameter values necessary for emotional expression. Creating such models of emotional speech first requires detailed knowledge of the vocal expression of emotions in human speech. For that purpose I investigated to what extent, if any, and in what direction emotions influence the values of acoustic speech parameters (e.g., fundamental frequency, intensity and speech rate), and which parameters enable discrimination of emotions from each other and from neutral speech.
    The results provided material for creating acoustic models of emotions*, which were presented to evaluators who were asked to decide which of the models helped produce synthetic speech with recognisable emotions. The experiment showed that, with models based on the acoustic results, an Estonian speech synthesiser can satisfactorily express sadness and anger, while joy was not as well recognised by listeners. The dissertation describes one possible way in which joy, sadness and anger are vocally expressed in Estonian speech and presents models for adding these emotions to Estonian synthetic speech. The study serves as a starting point for the further development of acoustic models for Estonian emotional synthetic speech. * Recorded examples of emotional speech synthesised using the test models can be accessed at https://www.eki.ee/heli/index.php?option=com_content&view=article&id=7&Itemid=494.
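
    The kind of rule-based acoustic model the dissertation describes, mapping each emotion to target adjustments of the synthesiser's baseline fundamental frequency, intensity and speech rate, can be sketched as below. The numerical values are illustrative placeholders only, not the parameter settings derived in the study, and the function names are hypothetical.

```python
# Sketch: per-emotion prosodic targets for a parametric speech synthesiser.
from dataclasses import dataclass

@dataclass
class ProsodyTarget:
    f0_scale: float        # multiplier on baseline F0
    intensity_db: float    # offset in dB relative to neutral
    rate_scale: float      # multiplier on baseline speech rate

EMOTION_MODELS = {          # hypothetical values for illustration only
    "joy":     ProsodyTarget(f0_scale=1.15, intensity_db=2.0,  rate_scale=1.10),
    "sadness": ProsodyTarget(f0_scale=0.90, intensity_db=-3.0, rate_scale=0.85),
    "anger":   ProsodyTarget(f0_scale=1.05, intensity_db=4.0,  rate_scale=1.05),
    "neutral": ProsodyTarget(f0_scale=1.00, intensity_db=0.0,  rate_scale=1.00),
}

def apply_emotion(baseline_f0_hz: float, baseline_rate_syl_s: float, emotion: str) -> dict:
    """Return the prosodic targets the synthesiser would be asked to realise."""
    m = EMOTION_MODELS[emotion]
    return {"f0_hz": baseline_f0_hz * m.f0_scale,
            "intensity_db_offset": m.intensity_db,
            "rate_syl_per_s": baseline_rate_syl_s * m.rate_scale}

print(apply_emotion(110.0, 5.0, "sadness"))
```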