147 research outputs found

    Speaker-independent emotion recognition exploiting a psychologically-inspired binary cascade classification schema

    No full text
    In this paper, a psychologically-inspired binary cascade classification schema is proposed for speech emotion recognition. Performance is enhanced because commonly confused pairs of emotions become distinguishable from one another. Extracted features are related to statistics of pitch, formants, and energy contours, as well as spectrum, cepstrum, perceptual and temporal features, autocorrelation, MPEG-7 descriptors, Fujisaki's model parameters, voice quality, jitter, and shimmer. Selected features are fed as input to a K-nearest-neighbor classifier and to support vector machines. Two kernels are tested for the latter: linear and Gaussian radial basis function. The recently proposed speaker-independent experimental protocol is tested on the Berlin emotional speech database for each gender separately. The best emotion recognition accuracy, achieved by support vector machines with the linear kernel, equals 87.7%, outperforming state-of-the-art approaches. Statistical analysis is carried out first with respect to the classifiers' error rates and then to evaluate the information expressed by the classifiers' confusion matrices. © Springer Science+Business Media, LLC 2011
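    The cascade idea above can be illustrated with a minimal sketch: successive binary decisions route a sample towards one emotion, so each commonly confused pair is resolved by a classifier specialised for just that pair. The arousal-first split, the toy 2-D features, and the nearest-centroid stand-in for the paper's SVM/k-NN stages are all illustrative assumptions, not the authors' implementation.

```python
from math import dist

def nearest_centroid(neg, pos):
    """Binary decision function: 1 if x is closer to the centroid of
    `pos` than to that of `neg` (a stand-in for the paper's SVM / k-NN
    stages, used here only to keep the sketch self-contained)."""
    def centroid(vs):
        return tuple(sum(c) / len(vs) for c in zip(*vs))
    c0, c1 = centroid(neg), centroid(pos)
    return lambda x: int(dist(x, c1) < dist(x, c0))

# Toy 2-D "features" (think: mean pitch, mean energy) per emotion.
data = {
    "anger":   [(0.9, 0.9), (0.8, 1.0)],
    "joy":     [(0.9, 0.4), (1.0, 0.5)],
    "sadness": [(0.1, 0.2), (0.2, 0.1)],
    "neutral": [(0.4, 0.3), (0.5, 0.4)],
}

# Level 1 separates low- from high-arousal emotions; level 2 resolves
# each commonly confused pair with its own specialised classifier.
arousal    = nearest_centroid(data["sadness"] + data["neutral"],
                              data["anger"] + data["joy"])
high_split = nearest_centroid(data["joy"], data["anger"])
low_split  = nearest_centroid(data["neutral"], data["sadness"])

def classify(x):
    if arousal(x):
        return "anger" if high_split(x) else "joy"
    return "sadness" if low_split(x) else "neutral"

print(classify((0.85, 0.95)))  # lands in anger's region
```

    In the paper, each node of the cascade would instead be a trained SVM or k-NN model over the full acoustic feature set, but the routing logic is the same.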

    Real-Time Anger Detection in Arabic Speech Dialogs

    Get PDF


    Design of Emotion Recognition System

    Get PDF
    The chapter deals with a speech emotion recognition system as a complex solution, including a Czech speech database of emotion samples in the form of short sound recordings and a tool for evaluating the database samples using subjective methods. The chapter also covers the individual components of an emotion recognition system and briefly describes their functions. In order to create the database of emotion samples for learning and training the emotion classifier, it was necessary to extract short sound recordings from radio and TV broadcasts. In the second step, all records in the emotion database were assessed with the designed evaluation tool, and the results were automatically analysed for how credible and reliable they are and how well they represent the different emotional states. As a result, three final databases were formed. The chapter also describes a proposed model of a complex emotion recognition system as a whole unit.
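    The second step above, keeping only recordings whose subjective evaluations are credible, can be sketched as a simple agreement filter. The vote format, the 0.7 threshold, and the clip names are hypothetical; the chapter's actual evaluation tool and criteria are not specified here.

```python
from collections import Counter

def filter_reliable(annotations, min_agreement=0.7):
    """Keep only recordings whose listener labels agree strongly enough;
    the majority label becomes the sample's emotion."""
    kept = {}
    for rec_id, labels in annotations.items():
        label, votes = Counter(labels).most_common(1)[0]
        if votes / len(labels) >= min_agreement:
            kept[rec_id] = label
    return kept

votes = {
    "clip_01": ["anger", "anger", "anger", "joy"],      # 0.75 agreement
    "clip_02": ["sadness", "neutral", "joy", "anger"],  # no consensus
}
print(filter_reliable(votes))  # only clip_01 survives, labelled "anger"
```

    Running such a filter at several thresholds would naturally yield multiple databases of differing reliability, consistent with the three final databases the chapter describes.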

    Affect and Social Processes in Online Communication: Experiments with an Affective Dialog System

    Get PDF
    Abstract—This paper presents an integrated view on a series of experiments conducted with an affective dialog system, applied as a tool in studies of emotions and social processes in online communication. The different realizations of the system are evaluated in three experimental setups in order to verify the effects of affective profiles, as well as of fine-grained communication scenarios, on users' expressions of affective states, experienced emotional changes, and interaction patterns. Results demonstrate that the system, applied in virtual reality settings, matches a Wizard-of-Oz setup in terms of chatting enjoyment, dialog coherence, and realism. Variants of the system's affective profile significantly influence ratings of chatting enjoyment and emotional connection. Self-reported emotional changes experienced by participants during an interaction with the system are in line with the type of applied profile. Analysis of interaction patterns, i.e., usage of particular dialog act classes, word categories, and textual expressions of affective states in different scenarios, demonstrates that a communication scenario for the social sharing of emotions was successfully established. The experimental evidence provides valuable input for applications of affective dialog systems and strengthens them as valid tools for studying affect and social aspects of online communication.
    Index Terms—Affective dialog system, human-computer interaction, affect sensing and analysis, structuring affective interactions.

    The eyes have it

    Get PDF

    Recognising realistic emotions and affect in speech: State of the art and lessons learnt from the first challenge

    Get PDF
    More than a decade has passed since research on automatic recognition of emotion from speech became a field of research in its own right, alongside its 'big brothers' speech and speaker recognition. This article attempts to provide a short overview of where we are today, how we got there, and what this can tell us about where to go next and how we might arrive there. In a first part, we address the basic phenomenon, reflecting on the last fifteen years and commenting on databases, modelling and annotation, the unit of analysis, and prototypicality. We then shift to automatic processing, including discussions of features, classification, robustness, evaluation, and implementation and system integration. From there we go to the first comparative challenge on emotion recognition from speech, the INTERSPEECH 2009 Emotion Challenge, organised by (part of) the authors, including a description of the Challenge's database, Sub-Challenges, participants and their approaches, the winners, and the fusion of results, through to the lessons actually learnt, before we finally address the ever-lasting problems and promising future attempts. (C) 2011 Elsevier B.V. All rights reserved.
    Schuller B., Batliner A., Steidl S., Seppi D., "Recognising realistic emotions and affect in speech: state of the art and lessons learnt from the first challenge", Speech Communication, vol. 53, no. 9-10, pp. 1062-1087, November 2011.
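    The feature paradigm common to this overview and the INTERSPEECH 2009 Challenge, summarising a variable-length frame-level contour (pitch, energy, etc.) by a fixed-length vector of statistical functionals, can be sketched briefly. The particular functionals and the toy pitch contour below are illustrative assumptions, not the Challenge's official feature set.

```python
from statistics import mean, stdev

def functionals(contour):
    """Map a variable-length frame-level contour to a fixed-length
    vector of statistical functionals, so utterances of any duration
    become comparable inputs for a classifier."""
    deltas = [b - a for a, b in zip(contour, contour[1:])]
    return {
        "mean": mean(contour),
        "stdev": stdev(contour),
        "min": min(contour),
        "max": max(contour),
        "range": max(contour) - min(contour),
        "delta_mean": mean(deltas),  # average frame-to-frame change
    }

pitch_hz = [210, 215, 230, 228, 240, 236]  # toy per-frame pitch contour
print(functionals(pitch_hz))
```

    Real challenge feature sets apply many such functionals to dozens of low-level descriptors, yielding feature vectors with hundreds to thousands of dimensions, but the principle is the same.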
