
    A Survey on Human Emotion Recognition Approaches, Databases and Applications

    This paper presents the various emotion classification and recognition systems that implement methods aimed at improving human-machine interaction. The modalities and approaches used for affect detection vary and contribute to the accuracy and efficacy of detecting human emotions. This paper examines them in a comparative and descriptive manner. Various applications that use these methodologies in different contexts to address real-time challenges are discussed. The survey also describes the databases that can serve as standard data sets in the process of emotion identification. Thus, an integrated discussion of the methods, databases, and applications pertaining to the emerging field of Affective Computing (AC) is presented.

    Smart recognition and synthesis of emotional speech for embedded systems with natural user interfaces

    The importance of emotion information in human speech has grown in recent years with the increasing use of natural user interfaces in embedded systems. Speech-based human-machine communication offers a high degree of usability, but it need not be limited to speech-to-text and text-to-speech capabilities. This research considers emotion recognition in uttered speech in order to integrate a speech recognizer/synthesizer with the capacity to recognize and synthesize emotion. The paper describes a complete framework for recognizing and synthesizing emotional speech based on smart logic (fuzzy logic and artificial neural networks). Time-domain signal-processing algorithms have been applied to reduce computational complexity at the feature-extraction level. A fuzzy-logic engine was modeled to infer the emotional content of uttered speech, and an artificial neural network was modeled to synthesize emotive speech. Both were designed for integration into an embedded handheld device that implements a speech-based natural user interface (NUI).
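    As an illustration of the kind of low-cost time-domain feature extraction the abstract refers to (this is not the paper's code, and the function and parameter names are assumptions), two classic time-domain features are short-time energy and zero-crossing rate, computed frame by frame without any frequency-domain transform:

```python
import numpy as np

def time_domain_features(signal, frame_len=256, hop=128):
    """Compute (short-time energy, zero-crossing rate) per frame.

    Energy is the mean squared amplitude of the frame; ZCR is the
    fraction of adjacent-sample pairs whose signs differ. Both avoid
    FFTs, keeping complexity low for embedded targets.
    """
    features = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energy = float(np.sum(frame ** 2) / frame_len)
        # Each sign change contributes |diff| = 2, hence the division by 2.
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2)
        features.append((energy, zcr))
    return features

# Example: a pure 440 Hz tone sampled at 8 kHz. For a sine wave the
# mean squared amplitude is about 0.5 and the ZCR is about 2*f/sr.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
feats = time_domain_features(tone)
```

    Per-frame features like these would then feed an inference stage (in the paper, a fuzzy-logic engine) that maps feature trajectories to emotion labels.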