
    Investigating spoken emotion : the interplay of language and facial expression

    This thesis investigates how spoken expressions of emotion are influenced by the characteristics of the spoken language and by facial emotion expression. The first three chapters examined how the production and perception of emotions differed between Cantonese (a tone language) and English (a non-tone language). The rationale for this contrast was that the acoustic property of fundamental frequency (F0) may be used differently in the production and perception of spoken expressions in tone languages, as F0 may be preserved as a linguistic resource for the production of lexical tones. To test this idea, I first developed the Cantonese Audio-visual Emotional Speech (CAVES) database, which was then used as stimuli in all the studies presented in this thesis (Chapter 1). An emotion perception study was then conducted to examine how three groups of participants (Australian English, Malaysian Malay and Hong Kong Cantonese speakers) identified spoken expressions of emotion produced in either English or Cantonese (Chapter 2). As one of the aims of this study was to disambiguate the effects of language from those of culture, these participants were selected on the basis that they shared either a language type (non-tone languages: Malay and English) or a culture (collectivist cultures: Cantonese and Malay). The results showed greater similarity in emotion perception between those who spoke the same type of language than between those who shared a similar culture, suggesting that some intergroup differences in emotion perception may be attributable to cross-language differences. Following up on these findings, an acoustic analysis study (Chapter 3) showed that, compared to English spoken expressions of emotion, Cantonese expressions carried weaker F0-related cues (a lower F0 median and a flatter F0 contour) and also used F0 cues differently.
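The F0 measures mentioned above (the F0 median and the flatness of the F0 contour) rest on per-frame pitch estimation. As a minimal sketch, an autocorrelation-based estimator can recover F0 from a single voiced frame; the `estimate_f0` function and its pitch-range bounds below are illustrative assumptions, not the analysis pipeline used in the thesis.

```python
import numpy as np

def estimate_f0(frame, sr, fmin=75.0, fmax=500.0):
    """Estimate the fundamental frequency of one voiced frame.

    Picks the autocorrelation peak within the lag range implied by a
    plausible pitch range [fmin, fmax]; a toy stand-in for a real
    pitch tracker."""
    frame = frame - frame.mean()
    # Non-negative lags of the full autocorrelation.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

sr = 16000
t = np.arange(int(0.04 * sr)) / sr          # one 40 ms frame
tone = np.sin(2 * np.pi * 220.0 * t)        # synthetic 220 Hz "voiced" frame
f0 = estimate_f0(tone, sr)                  # close to 220 Hz
```

The F0 median and contour flatness reported in the abstract would then be summary statistics over such per-frame estimates across an utterance.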
Taken together, these results show that language characteristics (in F0 usage) interact with the production and perception of spoken expressions of emotion. The expression of disgust was used to investigate how facial expressions of emotion affect speech articulation. The rationale for selecting disgust was that its facial expression involves changes to the mouth region, such as closure and retraction of the lips, and these changes are likely to affect speech articulation. To test this idea, an automatic lip segmentation and measurement algorithm was developed to quantify the configuration of the lips from images (Chapter 5). Comparing neutral to disgust-expressive speech, the results showed that disgust-expressive speech is produced with a significantly smaller vertical mouth opening, a greater horizontal mouth opening, and lower first and second formant frequencies (F1 and F2). Overall, this thesis provides insight into how aspects of expressive speech may be shaped by language-specific (language type) and universal (facial emotion expression) factors.
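The vertical and horizontal mouth-opening measures can be illustrated on a binary lip-region mask. The real algorithm first segments the lips from images; this hypothetical `mouth_opening` helper assumes the segmentation mask is already given and only performs the measurement step.

```python
import numpy as np

def mouth_opening(mask):
    """Vertical and horizontal extent (in pixels) of a binary lip mask.

    A toy stand-in for the measurement stage of a lip segmentation
    pipeline; takes the bounding extents of the True region."""
    rows, cols = np.nonzero(mask)
    vertical = rows.max() - rows.min() + 1
    horizontal = cols.max() - cols.min() + 1
    return vertical, horizontal

# Synthetic mask: a retracted, nearly closed mouth (as in disgust).
mask = np.zeros((20, 40), dtype=bool)
mask[8:12, 5:35] = True
v, h = mouth_opening(mask)   # small vertical, large horizontal opening
```

Comparing such (vertical, horizontal) pairs between neutral and disgust frames mirrors the smaller-vertical / greater-horizontal pattern reported above.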

    Machine learning and deep learning for emotion recognition

    Use of different deep learning techniques for emotion recognition from images and videos. The different techniques are applied, evaluated and compared, with the aim of using them together in a final application.

    Toward an affect-sensitive multimodal human-computer interaction

    This paper argues that next-generation human-computer interaction (HCI) designs need to include the essence of emotional intelligence -- the ability to recognize a user's affective states -- in order to become more human-like, more effective, and more efficient. Affective arousal modulates all nonverbal communicative cues (facial expressions, body movements, and vocal and physiological reactions). In face-to-face interaction, humans detect and interpret these interactive signals from their communicator with little or no effort, yet designing and developing an automated system that accomplishes the same tasks is rather difficult. This paper surveys past work on solving these problems by computer and provides a set of recommendations for developing the first part of an intelligent multimodal HCI system -- an automatic personalized analyzer of a user's nonverbal affective feedback.

    Multimodal emotion recognition

    Reading emotions from facial expressions and speech is a milestone in human-computer interaction. Recent sensing technologies, namely the Microsoft Kinect sensor, provide data from basic input modalities, such as RGB imaging, depth imaging and speech, that can be used in emotion recognition. Moreover, the Kinect can track a face in real time and report its fiducial points, as well as 6 basic Action Units (AUs). In this work we exploit this information by gathering a new and exclusive dataset, a new opportunity for the academic community and for progress on the emotion recognition problem. The database includes RGB, depth, audio, fiducial points and AUs for 18 volunteers and 7 emotions. We then present automatic emotion classification results on this dataset, employing k-Nearest Neighbors, Support Vector Machine and Neural Network classifiers in unimodal and multimodal approaches. Our conclusions show that multimodal approaches can attain better results.
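Feature-level (early) fusion with a k-Nearest-Neighbors classifier, one of the classifiers named above, can be sketched on toy data. The `knn_predict` helper, the feature dimensions, and the synthetic "modalities" below are assumptions for illustration only, not the dataset or pipeline described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_predict(train_X, train_y, X, k=3):
    """Plain k-nearest-neighbors classification by Euclidean distance."""
    preds = []
    for x in X:
        dists = np.linalg.norm(train_X - x, axis=1)
        nearest = train_y[np.argsort(dists)[:k]]   # labels of k closest points
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# Toy stand-ins for two modalities (e.g. facial fiducials/AUs and audio):
# 3 emotion classes, 20 samples each, class means one unit apart per dim.
n = 60
y = np.repeat([0, 1, 2], 20)
face = rng.normal(y[:, None], 1.0, (n, 4))
audio = rng.normal(y[:, None], 1.0, (n, 6))

# Early (feature-level) fusion: concatenate modalities, then classify.
fused = np.hstack([face, audio])
train = np.arange(n) % 2 == 0   # even indices train, odd indices test
acc_face = (knn_predict(face[train], y[train], face[~train]) == y[~train]).mean()
acc_fused = (knn_predict(fused[train], y[train], fused[~train]) == y[~train]).mean()
```

Concatenating features before classification is only one fusion strategy; decision-level (late) fusion, which combines per-modality classifier outputs, is the common alternative.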

    Recognizing emotions through facial expressions: a large-scale experimental study

    Experimental research examining emotional processes is typically based on the observation of images with affective content, including facial expressions. Future studies will benefit from databases of emotion-inducing stimuli in which stimulus characteristics that could influence results can be controlled. This study presents Portuguese normative data for the identification of seven facial expressions of emotion (plus a neutral face) from the Radboud Faces Database (RaFD). The effects of participants' gender and models' sex on emotion recognition were also examined. Participants (N = 1249) were exposed to 312 pictures of white adults displaying emotional and neutral faces with a frontal gaze. Recognition agreement between the displayed and the chosen expressions ranged from 69% (for anger) to 97% (for happiness). Recognition levels were significantly higher among women than among men only for anger and contempt, and, depending on the emotion, recognition was higher for either female or male models. Overall, the results show high recognition levels for the facial expressions presented, indicating that the RaFD provides adequate stimuli for studies examining the recognition of facial expressions of emotion among college students. Participants' gender had a limited influence on emotion recognition, but the sex of the model requires additional consideration.

    Speech Recognition

    Chapters in the first part of the book cover all the essential speech processing techniques for building robust automatic speech recognition systems: the representation of speech signals and methods for speech-feature extraction, acoustic and language modeling, efficient algorithms for searching the hypothesis space, and multimodal approaches to speech recognition. The last part of the book is devoted to other speech processing applications that can use the output of automatic speech recognition: speaker identification and tracking, prosody modeling in emotion-detection systems, and other applications able to operate in real-world environments, such as mobile communication services and smart homes.