2 research outputs found

    Optimization of automatic speech emotion recognition systems

    The basis for the successful integration of emotional intelligence into sophisticated artificial intelligence systems is the reliable recognition of emotional states, with the paralinguistic content of speech standing out as a particularly significant carrier of information about the speaker's emotional state. This paper presents a comparative analysis of the speech signal features and classification methods most often used for automatic recognition of speakers' emotional states, and then considers possibilities for improving the performance of automatic speech emotion recognition systems. Discrete hidden Markov models were improved by using the QQ plot to determine the codevectors for vector quantization, and further model improvements were also considered. The possibilities for a more faithful representation of the speech signal were examined, with the analysis extended to a large number of features from different groups. Forming large feature sets imposes the need for dimensionality reduction, for which an alternative method based on the Fibonacci sequence of numbers was analyzed alongside established methods. Finally, the possibilities of integrating the advantages of different approaches into a single automatic speech emotion recognition system are considered: a parallel multiclassifier structure is proposed with a combination rule that uses not only the classification results of the individual ensemble classifiers but also information about the classifiers' characteristics. A method is also proposed for automatically forming an ensemble of classifiers of arbitrary size using the Fibonacci-based dimensionality reduction.
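    The abstract does not spell out how the Fibonacci sequence drives the dimensionality reduction, so the following is only a minimal sketch of one plausible reading: features are ranked by an importance score and retained only at Fibonacci positions of that ranking, so the kept set thins out toward the less important features. The function names, the offset trick for generating distinct subsets per ensemble member, and the importance scores are all illustrative assumptions, not the author's method.

        import numpy as np

        def fib_positions(limit, offset=0):
            """Fibonacci numbers 1, 2, 3, 5, 8, ... shifted by `offset`,
            used as 0-based positions into an importance-ranked list."""
            pos, a, b = [], 1, 2
            while a + offset - 1 < limit:
                pos.append(a + offset - 1)
                a, b = b, a + b
            return pos

        def fibonacci_reduce(X, importance, offset=0):
            """Keep only the features at Fibonacci positions of the
            importance ranking (most important first). Varying `offset`
            yields a different subset for each ensemble member
            (assumption), allowing ensembles of arbitrary size."""
            order = np.argsort(importance)[::-1]   # feature indices, best first
            keep = order[fib_positions(X.shape[1], offset)]
            return X[:, keep], keep

        # Toy usage: 40-dimensional features with random importance scores.
        rng = np.random.default_rng(0)
        X, imp = rng.normal(size=(10, 40)), rng.random(40)
        X_red, kept = fibonacci_reduce(X, imp)
        print(X_red.shape)  # (10, 8): ranks 1, 2, 3, 5, 8, 13, 21, 34 survive

    Under this reading, the appeal of the scheme is that the subset size grows only logarithmically with the full feature count, while the densest sampling stays near the top of the ranking.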

    A Novel Hierarchical Speech Emotion Recognition Method Based on Improved DDAGSVM

    Abstract. In order to improve the accuracy of speech emotion recognition, this paper proposes a novel hierarchical method based on an improved Decision Directed Acyclic Graph SVM (improved DDAGSVM). The improved DDAGSVM is constructed according to the confusion degrees of emotion pairs. In addition, a geodesic distance-based testing algorithm is proposed for the improved DDAGSVM, giving test samples of differing distinguishability multiple decision chances. The informative features and optimized SVM parameters used at each node of the improved DDAGSVM are obtained simultaneously by a Genetic Algorithm (GA). Recognition experiments on the Chinese Speech Emotion Database (CSED) and the Audio-Video Emotion Database (AVED), recorded by our workgroup, reveal that, compared with multi-SVM, a binary decision tree, and the traditional DDAGSVM, the improved DDAGSVM achieves higher recognition accuracy for 7 emotions with few selected informative features and moderate computation time.
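    As a reference point for what the paper improves on, here is a minimal sketch of the standard DDAGSVM decision procedure, using scikit-learn binary SVMs and made-up class labels. The paper's own contributions, ordering the nodes by the confusion degrees of emotion pairs, the geodesic distance-based testing that grants samples extra decision chances, and the GA-driven selection of features and SVM parameters, are not reproduced here.

        from itertools import combinations
        import numpy as np
        from sklearn.svm import SVC

        def train_ddag_nodes(X, y, classes):
            """One binary SVM per emotion pair: the node classifiers of a DDAG."""
            nodes = {}
            for i, j in combinations(classes, 2):
                mask = np.isin(y, [i, j])
                nodes[(i, j)] = SVC(kernel="rbf").fit(X[mask], y[mask])
            return nodes

        def ddag_predict(x, nodes, classes):
            """Standard DDAG evaluation: each pairwise SVM eliminates one
            candidate class until a single emotion remains (N - 1 tests
            for N classes)."""
            remaining = list(classes)
            while len(remaining) > 1:
                i, j = remaining[0], remaining[-1]
                node = nodes[(i, j)] if (i, j) in nodes else nodes[(j, i)]
                winner = node.predict(x.reshape(1, -1))[0]
                remaining.remove(j if winner == i else i)
            return remaining[0]

    In the paper's variant, the pair examined at each node would follow the confusion-degree ordering rather than simply taking the two ends of the candidate list, so the most easily confused emotions are resolved where the graph gives them the most decision chances.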