
    Feature extraction based on bio-inspired model for robust emotion recognition

    Identifying a speaker's emotional state is important for building more natural speech-interactive systems. Ideally, such systems should also work in real environments, which generally contain some kind of noise. Several bio-inspired representations have been applied to artificial systems for speech processing under noisy conditions. In this work, an auditory signal representation is used to obtain a novel bio-inspired feature set for emotional speech signals. These features, together with other spectral and prosodic features, are used for emotion recognition under noise. Neural models were trained as classifiers, and the results were compared with the well-known mel-frequency cepstral coefficients (MFCCs). The results show that the proposed representations significantly improve the robustness of an emotion recognition system. They were also validated in a speaker-independent scheme and on two emotional speech corpora.
    Authors: Enrique Marcelo Albornoz, Diego Humberto Milone, and Hugo Leonardo Rufiner (Instituto de Investigación en Señales, Sistemas e Inteligencia Computacional, CONICET / Universidad Nacional del Litoral, Facultad de Ingeniería y Ciencias Hídricas; Argentina)
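The abstract above compares bio-inspired auditory features against the MFCC baseline. A minimal numpy sketch of MFCC-style extraction for one windowed frame is shown below; it reproduces only the standard baseline pipeline (mel filterbank plus DCT), not the paper's bio-inspired representation, and the sample rate and filter counts are illustrative assumptions.

```python
import numpy as np

def hz_to_mel(f):
    # Standard mel-scale conversion used in MFCC front ends.
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_like(frame, sr=16000, n_filters=26, n_ceps=13):
    """MFCC-style coefficients for one windowed frame (illustrative baseline)."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    n_fft = len(frame)
    # Triangular filterbank edges, equally spaced on the mel scale.
    mel_edges = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_edges) / sr).astype(int)
    fbank = np.zeros(n_filters)
    for i in range(n_filters):
        lo, mid, hi = bins[i], bins[i + 1], bins[i + 2]
        for k in range(lo, hi):
            if k < mid and mid > lo:
                w = (k - lo) / (mid - lo)          # rising slope of the triangle
            elif hi > mid:
                w = (hi - k) / (hi - mid)          # falling slope
            else:
                w = 0.0
            if k < len(spectrum):
                fbank[i] += w * spectrum[k]
    log_fbank = np.log(fbank + 1e-10)
    # DCT-II decorrelates log filterbank energies into cepstral coefficients.
    n = np.arange(n_filters)
    return np.array([np.sum(log_fbank * np.cos(np.pi * q * (2 * n + 1) / (2 * n_filters)))
                     for q in range(n_ceps)])

frame = np.hamming(512) * np.random.randn(512)
print(mfcc_like(frame).shape)  # (13,)
```

In a full system these per-frame vectors would be stacked over time and fed, alongside prosodic features, to a neural classifier, as the abstract describes.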

    Automatic Measurement of Affect in Dimensional and Continuous Spaces: Why, What, and How?

    This paper gives a brief overview of the current state of the art in automatic measurement of affect signals in dimensional and continuous spaces (a continuous scale from -1 to +1) by seeking answers to the following questions: i) Why has the field shifted towards dimensional and continuous interpretations of affective displays recorded in real-world settings? ii) What are the affect dimensions used, and the affect signals measured? iii) How has the current automatic measurement technology been developed, and how can we advance the field?
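Systems that predict affect on a continuous [-1, +1] scale are commonly evaluated with the concordance correlation coefficient (CCC), which penalises both poor correlation and systematic bias against the annotation trace. The abstract does not name a specific metric, so this is an illustrative sketch of that common choice:

```python
import numpy as np

def ccc(y_true, y_pred):
    """Concordance correlation coefficient between a continuous affect
    annotation trace and a prediction, both on a [-1, +1] scale."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mu_t) * (y_pred - mu_p))
    return 2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)

t = np.array([-0.5, 0.0, 0.3, 0.8])     # e.g. a valence annotation trace
print(round(ccc(t, t), 3))              # 1.0 -- perfect agreement
print(round(ccc(t, t - 0.4), 3))        # 0.736 -- penalised for constant bias
```

Unlike Pearson correlation, the CCC drops below 1 for a biased but perfectly correlated prediction, which matters when predictions must match annotators on an absolute scale.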

    Locally Discriminant Diffusion Projection and Its Application to Speech Emotion Recognition

    The existing Diffusion Maps method brings diffusion to data samples via a Markov random walk. In this paper, to provide a general solution form of Diffusion Maps, we first propose a generalized single-graph-diffusion embedding framework on the basis of the graph embedding framework. Second, by designing the embedding graph of this framework, we propose an algorithm, Locally Discriminant Diffusion Projection (LDDP), for speech emotion recognition. This algorithm is the projection form of the improved Diffusion Maps, which incorporates both discriminant and local information. The linear or kernelized form of LDDP (i.e., LLDDP or KLDDP) is used to reduce the dimensionality of the original speech emotion features. We validate the proposed algorithm on two widely used speech emotion databases, EMO-DB and eNTERFACE'05. The experimental results show that the proposed LDDP methods, including LLDDP and KLDDP, outperform other state-of-the-art dimensionality reduction methods based on graph embedding or discriminant analysis.
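LDDP builds on classical Diffusion Maps, whose core construction is a Gaussian affinity graph row-normalised into a Markov transition matrix, with the leading non-trivial eigenvectors giving the low-dimensional embedding. A minimal numpy sketch of that underlying method (not of LDDP itself, and with an arbitrary kernel width) follows:

```python
import numpy as np

def diffusion_maps(X, n_components=2, eps=1.0, t=1):
    """Classical Diffusion Maps embedding of rows of X (n_samples x n_features)."""
    # Pairwise squared distances -> Gaussian kernel affinities.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / eps)
    # Markov random-walk normalisation: each row sums to one.
    P = W / W.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Skip the trivial eigenvector (eigenvalue 1, constant over samples);
    # scale the rest by eigenvalue^t, the diffusion time parameter.
    return (vals[1:n_components + 1] ** t) * vecs[:, 1:n_components + 1]

X = np.random.RandomState(0).randn(30, 5)   # toy stand-in for emotion features
Y = diffusion_maps(X, n_components=2)
print(Y.shape)  # (30, 2)
```

LDDP, as the abstract describes it, replaces this purely unsupervised graph with an embedding graph carrying class-discriminant and locality constraints, and learns an explicit linear or kernel projection rather than a sample-wise embedding.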

    Recognising facial expressions in video sequences

    We introduce a system that processes a sequence of images of a front-facing human face and recognises a set of facial expressions. We use an efficient appearance-based face tracker to locate the face in the image sequence and estimate the deformation of its non-rigid components. The tracker works in real time. It is robust to strong illumination changes and factors out appearance changes caused by illumination from those due to face deformation. We adopt a model-based approach to facial expression recognition. In our model, an image of a face is represented by a point in a deformation space. The variability of the image classes associated with facial expressions is represented by a set of samples that model a low-dimensional manifold in the space of deformations. We introduce a probabilistic procedure based on a nearest-neighbour approach to combine the information provided by the incoming image sequence with the prior information stored in the expression manifold, computing a posterior probability for each facial expression. Our experiments show that this system works in an unconstrained environment with strong changes in illumination and face location. It achieves an 89% recognition rate on a set of 333 sequences from the Cohn-Kanade database.
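The nearest-neighbour step above can be illustrated with a simplified stand-in: given labelled samples on the expression manifold in deformation space, the class frequencies among the k nearest neighbours of a tracked deformation vector serve as a posterior over expressions. The paper's full procedure also fuses evidence over the image sequence; this sketch shows only the single-frame case, with made-up toy data.

```python
import numpy as np

def knn_posterior(x, samples, labels, n_classes, k=5):
    """Posterior over expression classes from the k nearest neighbours of
    deformation vector x among labelled manifold samples (simplified)."""
    d = np.linalg.norm(samples - x, axis=1)   # distances in deformation space
    nearest = np.argsort(d)[:k]
    counts = np.bincount(labels[nearest], minlength=n_classes)
    return counts / k                          # empirical class frequencies

# Toy manifold samples: two expression classes as separated Gaussian clusters.
rng = np.random.RandomState(1)
samples = np.vstack([rng.randn(20, 3), rng.randn(20, 3) + 3.0])
labels = np.array([0] * 20 + [1] * 20)
print(knn_posterior(np.ones(3) * 3.0, samples, labels, n_classes=2))
```

Over a sequence, these per-frame posteriors would be combined (e.g. multiplied and renormalised) so that evidence accumulates as more frames arrive, matching the probabilistic procedure the abstract describes.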