Multi-Sensory Emotion Recognition with Speech and Facial Expression
Emotion plays an important role in human beings’ daily lives. Understanding emotions and recognizing how to react to others’ feelings are fundamental to successful social interaction. Emotion recognition is therefore not only significant in daily life but also an active topic in academic research, as new techniques such as emotion recognition from speech context offer insight into how emotions relate to the content being uttered.
The demand for and importance of emotion recognition have grown rapidly in recent years across many applications, such as video games, human-computer interaction, cognitive computing, and affective computing. Emotion can be recognized from many sources, including text, speech, hand and body gestures, and facial expression. At present, most emotion recognition methods use only one of these sources. Human emotion changes from moment to moment, and relying on a single modality may not capture it correctly. This research is motivated by the desire to understand and evaluate human emotion from multiple modalities, such as speech and facial expressions.
In this dissertation, multi-sensory emotion recognition is explored. The proposed framework can recognize emotion from speech, from facial expression, or from both together. The system design has three main parts: the facial emotion recognizer, the speech emotion recognizer, and the information fusion. The fusion stage takes the results of the speech and facial emotion recognizers, integrates them with a novel weighted method, and produces the final emotion decision.
The experiments show that the weighted fusion method improves accuracy by an average of 3.66% over fusion without weighting. The recognition rate improves by up to 18.27% and 5.66% compared to speech emotion recognition alone and facial expression recognition alone, respectively. By improving recognition accuracy, the proposed multi-sensory emotion recognition system can help make human-computer interaction more natural.
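The decision-level fusion described above can be sketched as a weighted combination of per-class scores from the two recognizers. This is a minimal illustrative sketch: the emotion classes, weights, and probability values are hypothetical assumptions, not the dissertation's actual weighting scheme.

```python
# Illustrative sketch of decision-level weighted fusion for two emotion
# recognizers. Class list, weights, and probabilities are hypothetical.

EMOTIONS = ["angry", "happy", "neutral", "sad"]

def weighted_fusion(speech_probs, face_probs, w_speech=0.4, w_face=0.6):
    """Combine per-class probabilities from the two modalities and
    return the emotion with the highest fused score."""
    fused = [w_speech * s + w_face * f
             for s, f in zip(speech_probs, face_probs)]
    return EMOTIONS[fused.index(max(fused))]

# Example: speech leans "sad", face leans "happy"; the higher-weighted
# facial modality dominates the fused decision.
speech = [0.1, 0.2, 0.2, 0.5]
face   = [0.1, 0.6, 0.2, 0.1]
print(weighted_fusion(speech, face))  # -> happy
```

The weights act as a confidence prior on each modality; in practice they would be tuned on validation data rather than fixed by hand.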
Contributions to the Modelling of Auditory Hallucinations, Social Robotics, and Multiagent Systems
165 p. The Thesis covers three diverse lines of work, united by the central endeavor of modeling and understanding the phenomena under consideration. First, the Thesis works on the problem of finding brain connectivity biomarkers of auditory hallucinations, a rather frequent phenomenon that can be related to some pathologies but is also present in the healthy population. We apply machine learning techniques to assess the significance of effective brain connections extracted by either dynamical causal modeling or Granger causality. Second, the Thesis examines the usefulness of social robotics storytelling as a therapeutic tool for children at risk of exclusion, reporting on observations gathered in several therapeutic sessions carried out in Spain and Bulgaria under the supervision of tutors and caregivers. Third, the Thesis deals with the spatio-temporal dynamic modeling of social agents, seeking to explain the phenomenon of opinion survival among social minorities. The Thesis proposes an eco-social model endowed with spatial mobility of the agents; such mobility, together with the agents' spatial perception, proves to be a strong mechanism for explaining opinion propagation and survival.
Recognising realistic emotions and affect in speech: State of the art and lessons learnt from the first challenge
More than a decade has passed since automatic recognition of emotion from speech became a research field in its own right, alongside its 'big brothers', speech and speaker recognition. This article attempts to provide a short overview of where we are today, how we got here, and what this can tell us about where to go next and how we might get there. In the first part, we address the basic phenomenon, reflecting on the last fifteen years and commenting on databases, modelling and annotation, the unit of analysis, and prototypicality. We then shift to automatic processing, including discussions of features, classification, robustness, evaluation, and implementation and system integration. From there we turn to the first comparative challenge on emotion recognition from speech, the INTERSPEECH 2009 Emotion Challenge, organised by (part of) the authors, describing the Challenge's database, Sub-Challenges, participants and their approaches, the winners, and the fusion of results, followed by the lessons learnt, before we finally address the ever-lasting problems and promising future directions. (C) 2011 Elsevier B.V. All rights reserved. Schuller B., Batliner A., Steidl S., Seppi D., ''Recognising realistic emotions and affect in speech: state of the art and lessons learnt from the first challenge'', Speech Communication, vol. 53, no. 9-10, pp. 1062-1087, November 2011.
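Among the acoustic features discussed in this literature, frame-level descriptors such as short-time energy and zero-crossing rate are classic building blocks. The sketch below illustrates these two features on a toy signal; the frame length and test tones are arbitrary choices for demonstration, not the features or settings of any specific system from the survey.

```python
# Two classic frame-level acoustic features (short-time energy and
# zero-crossing rate) of the kind used in speech emotion recognition.
# Frame length and the synthetic tones are illustrative assumptions.
import math

def frame_features(samples, frame_len=160):
    """Return a list of (energy, zero_crossing_rate) tuples per frame."""
    feats = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        zcr = sum(1 for a, b in zip(frame, frame[1:])
                  if a * b < 0) / (frame_len - 1)
        feats.append((energy, zcr))
    return feats

# Toy signal: a 200 Hz tone followed by a 1000 Hz tone at 8 kHz.
# The second frame shows a markedly higher zero-crossing rate.
sr = 8000
tone = [math.sin(2 * math.pi * 200 * t / sr) for t in range(160)] + \
       [math.sin(2 * math.pi * 1000 * t / sr) for t in range(160)]
for energy, zcr in frame_features(tone):
    print(f"energy={energy:.3f} zcr={zcr:.3f}")
```

Real systems aggregate many such low-level descriptors with functionals (means, extrema, slopes) before classification, as the challenge baseline feature sets do.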
Wavelet analysis: linking multi-scalar pattern detection to ecological monitoring
Wavelet analysis is an analytical and modeling tool for optimizing sampling efficiency and accuracy, particularly in the context of designing long-term, large-scale monitoring plans. As a pattern analysis method that accommodates and preserves non-stationarity, wavelet analysis provides novel visualization and analytical capabilities for increased insight into interactions between multi-scalar heterogeneous pattern and sampling design. Effective monitoring must involve sampling designs sufficiently detailed to detect ecologically significant patterns at multiple scales, yet logistically tractable and resource-efficient for sustained use. For this reason, methods that help optimize these objectives and contribute to the design of more efficient sampling prior to implementation are important for successful large-scale monitoring. The main objectives in this dissertation were: (1) to explore Complexity Theory as a framework for pattern analysis in ecological monitoring for conservation of species and habitat; (2) to examine the relative capabilities of the semivariogram, Fourier analysis, and one-dimensional wavelet analysis to detect and classify spatio-temporal pattern in a comparison of stochastic processes, deterministic simulations, and empirical species range data for Western Meadowlarks; (3) to illustrate the pattern detection and reconstruction capabilities of two-dimensional wavelet analysis in three bird species (Neotropical migrants) with varying degrees of heterogeneity (Field Sparrow, Brewer's Sparrow, and Red-eyed Vireo); and (4) to compare statistical and ecological inference and examine these approaches within the context of the statistical analyses in landscape ecology. The sampling properties and behavior of these spatial statistics are described and illustrated in a comparison of spatio-temporal patterns in species range data from the Breeding Bird Surveys. Both one- and two-dimensional wavelet analyses were better suited than the semivariogram and Fourier analysis to separating signal from noise in order to identify and characterize ecological pattern in the Neotropical migrants. Wavelet analysis accommodates non-stationarity, compares multi-scalar pattern, localizes detected pattern to the original data, provides flexibility in the choice of analyzing filter, and retains the context of the pattern, allowing the system to be viewed as a complex space-time volume. Monitoring within the framework of Complexity Theory for conservation of species and habitats will be increasingly important as we progress into the twenty-first century.
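The key property claimed for wavelets above, that detected pattern is localized back to the original data, can be seen in a one-level Haar transform. This is a generic minimal sketch of the technique, not the dissertation's analysis pipeline; the test signal is an invented example.

```python
# One level of the Haar wavelet transform of a 1-D signal, splitting it
# into a coarse approximation (pairwise averages) and localized detail
# (pairwise differences). Generic illustration with an invented signal.

def haar_step(signal):
    """Return (approximation, detail) coefficients for one Haar level."""
    approx = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

# A signal with one sharp local jump: the detail coefficients are zero
# everywhere except at the jump, localizing the pattern in space --
# something a global Fourier decomposition cannot do directly.
signal = [4, 4, 4, 4, 9, 1, 4, 4]
approx, detail = haar_step(signal)
print(approx)  # [4.0, 4.0, 5.0, 4.0]
print(detail)  # [0.0, 0.0, 4.0, 0.0]
```

Applying the same step recursively to the approximation yields the multi-scale decomposition that underlies the non-stationary pattern comparisons described in the abstract.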
Climbing and Walking Robots
Nowadays robotics is one of the most dynamic fields of scientific research. The shift of robotics research from manufacturing to service applications is clear. Over the last decades, interest in studying climbing and walking robots has increased across many areas, the most important of which are mechanics, electronics, medical engineering, cybernetics, controls, and computers. Today’s climbing and walking robots combine manipulative, perceptive, communicative, and cognitive abilities, and they are capable of performing many tasks in industrial and non-industrial environments. Surveillance, planetary exploration, emergency rescue operations, reconnaissance, petrochemical applications, construction, entertainment, personal services, intervention in severe environments, transportation, and medicine are some of the very diverse application fields of climbing and walking robots. With great progress in this area of robotics, it is anticipated that the next generation of climbing and walking robots will enhance lives and change the way humans work, think, and make decisions. This book presents state-of-the-art achievements, recent developments, applications, and future challenges of climbing and walking robots, presented in 24 chapters by authors throughout the world. The book serves as a reference, especially for researchers interested in mobile robots, and is also useful for industrial engineers and graduate students in advanced study.
Cognitive-developmental learning for a humanoid robot: a caregiver's gift
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 319-341). Building an artificial humanoid robot's brain, even at an infant's cognitive level, has been a long quest which still lies only in the realm of our imagination. Our efforts towards such a dimly imaginable task are developed according to two alternate and complementary views: cognitive and developmental. The goal of this work is to build a cognitive system for the humanoid robot, Cog, that exploits human caregivers as catalysts to perceive and learn about actions, objects, scenes, people, and the robot itself. This thesis addresses a broad spectrum of machine learning problems across several categorization levels. Actions by embodied agents are used to automatically generate training data for the learning mechanisms, so that the robot develops categorization autonomously. Taking inspiration from the human brain, a framework of algorithms and methodologies was implemented to emulate different cognitive capabilities on the humanoid robot Cog. This framework is effectively applied to a collection of AI, computer vision, and signal processing problems. Cognitive capabilities of the humanoid robot are developmentally created, starting from infant-like abilities for detecting, segmenting, and recognizing percepts over multiple sensing modalities. Human caregivers provide a helping hand for communicating such information to the robot. This is done by actions that create meaningful events (by changing the world in which the robot is situated), thus inducing the "compliant perception" of objects from these human-robot interactions. Self-exploration of the world extends the robot's knowledge concerning object properties.
This thesis argues for enculturating humanoid robots using infant development as a metaphor for building a humanoid robot's cognitive abilities. A human caregiver redesigns a humanoid's brain by teaching the humanoid robot as she would teach a child, using children's learning aids such as books, drawing boards, or other cognitive artifacts. Multi-modal object properties are learned using these tools and inserted into several recognition schemes, which are then applied to developmentally acquire new object representations. The humanoid robot therefore sees the world through the caregiver's eyes. by Artur Miguel Do Amaral Arsenio. Ph.D.