
    Towards responsive Sensitive Artificial Listeners

    This paper describes work in the recently started project SEMAINE, which aims to build a set of Sensitive Artificial Listeners: conversational agents designed to sustain an interaction with a human user despite limited verbal skills, through robust recognition and generation of non-verbal behaviour in real time, both while the agent is speaking and while it is listening. We report on data collection and on the design of a system architecture aimed at real-time responsiveness.

    Employing Emotion Cues to Verify Speakers in Emotional Talking Environments

    Usually, people talk neutrally in environments free of abnormal talking conditions such as stress and emotion. Other emotional conditions, such as happiness, anger, and sadness, can also affect a person's speaking tone, and such emotions are directly influenced by the patient's health status. In neutral talking environments, speakers can be verified easily; in emotional talking environments, they cannot. Consequently, speaker verification systems do not perform as well in emotional talking environments as they do in neutral ones. In this work, a two-stage approach has been employed and evaluated to improve speaker verification performance in emotional talking environments. This approach employs speaker emotion cues (a text-independent, emotion-dependent speaker verification problem) based on both Hidden Markov Models (HMMs) and Suprasegmental Hidden Markov Models (SPHMMs) as classifiers. The approach comprises two cascaded stages that integrate an emotion recognizer and a speaker recognizer into one system. The architecture has been tested on two separate emotional speech databases: our collected database and the Emotional Prosody Speech and Transcripts database. The results show that the proposed approach is promising, with a significant improvement over previous studies and over other approaches such as emotion-independent speaker verification and emotion-dependent speaker verification based entirely on HMMs.
    Comment: Journal of Intelligent Systems, Special Issue on Intelligent Healthcare Systems, De Gruyter, 201
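    The two-stage cascade described in this abstract might be sketched as follows. All names, scores, and the decision threshold here are invented for illustration; a real system would score utterances against trained HMM/SPHMM models rather than use hand-picked numbers.

    ```python
    # Hypothetical sketch of the cascaded two-stage approach:
    # stage 1 recognizes the speaker's emotion, stage 2 verifies the
    # claimed identity using emotion-dependent speaker models.

    EMOTIONS = ["neutral", "happy", "angry", "sad"]

    def recognize_emotion(model_scores):
        """Stage 1: pick the emotion whose model scored highest
        (stand-in for HMM/SPHMM log-likelihoods)."""
        return max(model_scores, key=model_scores.get)

    def verify_speaker(emotion, claimed_score, impostor_score, threshold=0.0):
        """Stage 2: accept the claimed identity if the emotion-dependent
        log-likelihood ratio exceeds a threshold."""
        llr = claimed_score - impostor_score
        return llr > threshold

    # Usage with made-up scores for one test utterance:
    emo = recognize_emotion(
        {"neutral": -12.0, "happy": -9.5, "angry": -11.0, "sad": -13.2})
    accepted = verify_speaker(emo, claimed_score=-8.0, impostor_score=-10.5)
    ```

    The point of the cascade is that stage 2 consults only the speaker models trained on the emotion chosen in stage 1, rather than a single emotion-independent model.
    
    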

    Feature extraction based on bio-inspired model for robust emotion recognition

    Emotional state identification is an important step toward more natural interactive speech systems. Ideally, these systems should also work in real environments, which generally contain some level of noise. Several bio-inspired representations have been applied to artificial systems for speech processing under noisy conditions. In this work, an auditory signal representation is used to obtain a novel bio-inspired set of features for emotional speech signals. These features, together with other spectral and prosodic features, are used for emotion recognition under noisy conditions. Neural models were trained as classifiers, and results were compared to the well-known mel-frequency cepstral coefficients. The results show that the proposed representations significantly improve the robustness of an emotion recognition system. The results were also validated in a speaker-independent scheme and with two emotional speech corpora.
    Authors: Albornoz, Enrique Marcelo; Milone, Diego Humberto; Rufiner, Hugo Leonardo. Instituto de Investigación en Señales, Sistemas e Inteligencia Computacional, CONICET, Centro Científico Tecnológico Conicet - Santa Fe, Universidad Nacional del Litoral, Facultad de Ingeniería y Ciencias Hídricas; Argentina.
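    A minimal sketch of the noise-robustness evaluation setup this abstract implies: corrupt a clean signal with additive noise at a chosen SNR, then extract frame-level features. This is not the authors' code; the log-energy features below are a simple stand-in for the bio-inspired auditory representation, and all signal parameters are illustrative.

    ```python
    import numpy as np

    def add_noise(clean, noise, snr_db):
        """Scale the noise so the mixture has the requested SNR in dB."""
        p_clean = np.mean(clean ** 2)
        p_noise = np.mean(noise ** 2)
        scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
        return clean + scale * noise

    def log_frame_energies(signal, frame_len=256, hop=128):
        """Frame the signal and return the log energy of each frame."""
        frames = [signal[i:i + frame_len]
                  for i in range(0, len(signal) - frame_len + 1, hop)]
        return np.array([np.log(np.sum(f ** 2) + 1e-10) for f in frames])

    # Usage: one second of a synthetic 440 Hz tone at 8 kHz, mixed at 10 dB SNR.
    rng = np.random.default_rng(0)
    clean = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)
    noisy = add_noise(clean, rng.standard_normal(8000), snr_db=10)
    feats = log_frame_energies(noisy)
    ```

    A study like this one would sweep `snr_db` over several levels and compare classifier accuracy for each feature set at each level.
    
    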

    On the analysis of EEG power, frequency and asymmetry in Parkinson's disease during emotion processing

    Objective: While Parkinson’s disease (PD) has traditionally been described as a movement disorder, there is growing evidence of disrupted emotion information processing associated with the disease. The aim of this study was to investigate whether there are specific electroencephalographic (EEG) characteristics that discriminate PD patients from normal controls during emotion information processing. Method: EEG recordings from 14 scalp sites were collected from 20 PD patients and 30 age-matched normal controls. Multimodal (audio-visual) stimuli were presented to evoke specific targeted emotional states: happiness, sadness, fear, anger, surprise and disgust. Absolute and relative power, frequency and asymmetry measures derived from spectrally analyzed EEGs were subjected to repeated-measures ANOVA for group comparisons, as well as to discriminant function analysis to examine their utility as classification indices. In addition, subjective ratings were obtained for the emotional stimuli used. Results: Behaviorally, PD patients showed no impairments in emotion recognition as measured by subjective ratings. Compared with normal controls, PD patients showed smaller overall relative delta, theta, alpha and beta power, and, at bilateral anterior regions, smaller absolute theta, alpha and beta power and a higher mean total spectrum frequency across the different emotional states. Inter-hemispheric theta, alpha and beta power asymmetry index differences were noted, with controls exhibiting greater right- than left-hemisphere activation, whereas patients showed reduced intra-hemispheric alpha power asymmetry bilaterally at all regions. Discriminant analysis correctly classified 95.0% of the patients and controls during emotional stimuli. Conclusion: These distributed spectral powers in different frequency bands might provide meaningful information about emotional processing in PD patients.
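    The spectral measures this abstract relies on can be sketched as follows: band power in the classic EEG bands from a simple periodogram, plus the common (right − left) / (right + left) asymmetry index. The band edges and the single-channel periodogram are textbook conventions assumed for illustration, not the paper's exact pipeline (which may use Welch averaging and different band definitions).

    ```python
    import numpy as np

    # Conventional EEG band edges in Hz (an assumption, not from the paper).
    BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_powers(signal, fs):
        """Absolute power per EEG band from a simple periodogram."""
        freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
        psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
        return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
                for name, (lo, hi) in BANDS.items()}

    def asymmetry_index(right_power, left_power):
        """Inter-hemispheric asymmetry: positive means more right-side power."""
        return (right_power - left_power) / (right_power + left_power)

    # Usage: 4 s of a synthetic 10 Hz wave (alpha band) sampled at 128 Hz.
    fs = 128
    t = np.arange(fs * 4) / fs
    alpha_wave = np.sin(2 * np.pi * 10 * t)
    powers = band_powers(alpha_wave, fs)
    ```

    Feature vectors built this way, one value per band per electrode pair, are what a discriminant function analysis like the one reported would take as input.
    
    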