
    EEG-based emotion classification using innovative features and combined SVM and HMM classifier

    © 2017 IEEE. Emotion classification is an active topic in biomedical signal research, yet much about it remains unknown. This paper offers a novel approach with a combined classifier to recognise human emotion states from electroencephalogram (EEG) signals. The objective is to achieve high accuracy with the combined classifier, which categorises features extracted from time-domain measures and the Discrete Wavelet Transform (DWT). Two innovative designs are involved in this project: a novel variable is established as a new feature, and a combined SVM and HMM classifier is developed. The results show that the joined features raise accuracy by 5% on the valence axis and 1.5% on the arousal axis, and that the combined classifier improves accuracy by 3% compared with an SVM classifier alone. One important application of a high-accuracy emotion classification system is as a tool for psychologists to diagnose emotion-related mental diseases, and the system developed in this project has the potential to serve that purpose.
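
    Although the abstract does not spell out the fusion design, a minimal sketch of one plausible reading, per-window time-domain and DWT features, an SVM for window posteriors, and per-class HMMs for sequence likelihoods, might look as follows (all names and the fusion rule are assumptions, not the authors' implementation):

```python
# Illustrative sketch only: DWT plus time-domain features, with an assumed
# SVM/HMM fusion rule; not the paper's exact design.
import numpy as np
import pywt                      # PyWavelets, for the DWT
from sklearn.svm import SVC
from hmmlearn.hmm import GaussianHMM

def window_features(window, wavelet="db4", level=4):
    """Time-domain statistics plus DWT sub-band energies for one EEG window."""
    time_feats = [window.mean(), window.std(), np.ptp(window)]
    coeffs = pywt.wavedec(window, wavelet, level=level)
    band_energy = [float(np.sum(c ** 2)) for c in coeffs]
    return np.array(time_feats + band_energy)

def train(windows, labels, n_states=3):
    """Fit an SVM on individual windows and one HMM per emotion class."""
    X = np.vstack([window_features(w) for w in windows])
    svm = SVC(probability=True).fit(X, labels)
    hmms = {}
    for c in np.unique(labels):
        # Each class's windows are treated as one sequence (a simplification).
        hmms[c] = GaussianHMM(n_components=n_states).fit(X[np.asarray(labels) == c])
    return svm, hmms

def predict_trial(svm, hmms, windows):
    """Assumed fusion: summed SVM log-posteriors plus HMM sequence log-likelihood."""
    X = np.vstack([window_features(w) for w in windows])
    log_post = np.log(svm.predict_proba(X) + 1e-12).sum(axis=0)
    log_lik = np.array([hmms[c].score(X) for c in svm.classes_])
    return svm.classes_[np.argmax(log_post + log_lik)]
```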

    Face Emotion Recognition Based on Machine Learning: A Review

    Computers can now detect, understand, and evaluate emotions thanks to recent developments in machine learning and information fusion. Researchers across various sectors are increasingly interested in emotion identification, using facial expressions, words, body language, and posture as means of discerning an individual's emotions. Nevertheless, the effectiveness of the first three methods may be limited, as individuals can consciously or unconsciously suppress their true feelings. This article explores various feature extraction techniques, encompassing the development of machine learning classifiers such as k-nearest neighbour, naive Bayes, support vector machine, and random forest, in accordance with the established standard for emotion recognition. The paper has three primary objectives: first, to offer a comprehensive overview of affective computing by outlining essential theoretical concepts; second, to describe the current state of the art in emotion recognition in detail; and third, to highlight important findings and conclusions from the literature, with an emphasis on key obstacles and possible future directions, especially in the creation of state-of-the-art machine learning algorithms for emotion identification.
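
    As a concrete companion to the survey, the four classifier families it names can be benchmarked under one cross-validation protocol with scikit-learn; in this sketch the synthetic dataset is a placeholder for real facial-expression features:

```python
# Hedged sketch: cross-validated comparison of the four classifier families
# named in the review, on synthetic stand-in features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Placeholder for real facial-expression feature vectors and emotion labels.
X, y = make_classification(n_samples=500, n_features=40, n_classes=4,
                           n_informative=10, random_state=0)

models = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(kernel="rbf"),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```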

    Critical Analysis on Multimodal Emotion Recognition in Meeting the Requirements for Next Generation Human Computer Interactions

    Emotion recognition is a key gap in today's Human-Computer Interaction (HCI): current systems lack the ability to effectively recognize, express and respond to emotion, which limits their interaction with humans, and they remain insufficiently sensitive to human emotions. Multimodal emotion recognition attempts to address this gap by measuring emotional state from gestures, facial expressions, acoustic characteristics and textual expressions. Multimodal data acquired from video, audio, sensors and other sources are combined using various techniques to classify basic human emotions such as happiness, joy, neutrality, surprise, sadness, disgust, fear and anger. This work presents a critical analysis of multimodal emotion recognition approaches in meeting the requirements of next-generation human-computer interactions. The study first explores and defines the requirements of next-generation human-computer interactions and then critically analyzes the existing multimodal emotion recognition approaches in addressing those requirements.
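
    A common realisation of the combination step described above is decision-level (late) fusion of per-modality class posteriors; the sketch below is illustrative, with assumed emotion labels and weights rather than any scheme from the paper:

```python
# Illustrative decision-level (late) fusion of per-modality emotion posteriors.
import numpy as np

EMOTIONS = ["happiness", "surprise", "sadness", "disgust", "fear", "anger"]

def late_fusion(posteriors, weights=None):
    """Weighted average of class-probability vectors, one per modality
    (e.g. face, audio, text); the weights are assumed, not from the paper."""
    P = np.vstack(posteriors)                 # shape: (n_modalities, n_classes)
    w = np.ones(len(P)) if weights is None else np.asarray(weights, float)
    fused = (w[:, None] * P).sum(axis=0) / w.sum()
    return EMOTIONS[int(np.argmax(fused))], fused

# Toy example: the face channel is trusted more than audio or text.
face  = [0.60, 0.10, 0.05, 0.05, 0.10, 0.10]
audio = [0.30, 0.20, 0.20, 0.10, 0.10, 0.10]
text  = [0.25, 0.25, 0.20, 0.10, 0.10, 0.10]
label, fused = late_fusion([face, audio, text], weights=[2.0, 1.0, 1.0])
print(label, fused.round(3))
```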

    Cognitive behaviour analysis based on facial information using depth sensors

    Cognitive behaviour analysis is considered of high importance with many innovative applications in a range of sectors including healthcare, education, robotics and entertainment. In healthcare, cognitive and emotional behaviour analysis helps to improve the quality of life of patients and their families. Amongst all the different approaches for cognitive behaviour analysis, significant work has been focused on emotion analysis through facial expressions using depth and EEG data. Our work introduces an emotion recognition approach using facial expressions based on depth data and landmarks. A novel dataset was created that triggers emotions from long or short term memories. This work uses novel features based on a non-linear dimensionality reduction, t-SNE, applied on facial landmarks and depth data. Its performance was evaluated in a comparative study, proving that our approach outperforms other state-of-the-art features.
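
    The feature-extraction idea can be sketched as follows: concatenate facial landmark coordinates with a depth descriptor and embed them with t-SNE. The shapes and parameters are assumptions, and scikit-learn's t-SNE offers no transform() for unseen samples, so this only illustrates exploratory embedding of a fixed dataset:

```python
# Hedged sketch of the general idea: t-SNE applied to concatenated facial
# landmark and depth features. Data shapes are assumptions, not the paper's.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
n_frames = 300
landmarks = rng.normal(size=(n_frames, 68 * 2))   # assumed 68 (x, y) landmarks
depth = rng.normal(size=(n_frames, 64))           # assumed depth descriptor

X = np.hstack([landmarks, depth])
embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(X)
print(embedding.shape)  # (300, 2): one low-dimensional feature pair per frame
```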

    EEG-induced Fear-type Emotion Classification Through Wavelet Packet Decomposition, Wavelet Entropy, and SVM

    Among the most significant characteristics of human beings is their ability to feel emotions. In recent years, human-machine interface (HMI) research has centered on ways to improve the classification of emotions. In particular, human-computer interaction (HCI) research concentrates on methods that enable computers to reveal the emotional states of humans. In this research, an emotion detection system based on visually presented IAPS pictures and EMOTIV EPOC EEG signals is proposed. We employed EEG signals acquired from 14 channels (AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, AF4) for individuals in a visually induced setting (fear-arousing and neutral IAPS pictures). The wavelet packet transform (WPT) combined with the wavelet entropy algorithm was applied to the EEG signals, and entropy values were extracted for the two classes. Finally, these feature matrices were fed into a Support Vector Machine (SVM) classifier to build the classification model. We also evaluated the proposed algorithm using the area under the ROC (Receiver Operating Characteristic) curve, or simply AUC, as an alternative single-number measure. Overall classification accuracy was 91.0%, and the AUC for the SVM was 0.97. The calculations confirm that the proposed approach successfully detects fear-type emotion from EMOTIV EPOC EEG signals and that the classification accuracy is acceptable.
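
    A minimal sketch of the described pipeline, WPT, wavelet (Shannon) entropy per channel, an SVM, and AUC scoring, is given below; the decomposition settings and the random placeholder data are assumptions, not the study's recordings:

```python
# Hedged sketch: wavelet packet decomposition, wavelet entropy per channel,
# then an SVM scored by accuracy and AUC. Settings are assumptions.
import numpy as np
import pywt
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def wavelet_entropy(signal, wavelet="db4", level=4):
    """Shannon entropy of normalized sub-band energies from a WPT."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data ** 2)
                         for node in wp.get_level(level, order="natural")])
    p = energies / energies.sum()
    return -np.sum(p * np.log(p + 1e-12))

def trial_features(trial):
    """One entropy value per EEG channel; trial shape: (n_channels, n_samples)."""
    return np.array([wavelet_entropy(ch) for ch in trial])

# Random placeholder standing in for 14-channel fear/neutral EEG trials.
rng = np.random.default_rng(0)
trials = rng.normal(size=(120, 14, 512))
y = rng.integers(0, 2, size=120)

X = np.vstack([trial_features(t) for t in trials])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
svm = SVC().fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, svm.predict(X_te)))
print("AUC:", roc_auc_score(y_te, svm.decision_function(X_te)))
```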

    Advanced interfaces applied to musical interaction (Interfaces avanzados aplicados a la interacción musical)

    The latest advances in human-computer interaction technologies have brought forth changes in the way we interact with computing devices of any kind, from the standard desktop computer to the more recent smartphones. The development of these technologies has thus introduced new interaction metaphors that provide more enriching experiences for a wide range of different applications. Music is one of the most ancient forms of art and entertainment in our legacy, and constitutes a strongly interactive experience in itself. Applying new technologies to enhance computer-based music interaction paradigms can potentially provide all sorts of improvements: low-cost access to music rehearsal, lower knowledge barriers to music learning, virtual instrument simulation, etc. Yet, surprisingly, research on applying new interaction models and technologies to the specific field of music interaction has been rather limited compared with other areas. This thesis aims to address that need by presenting a set of studies covering the use of innovative interaction models for music-based applications, from interaction paradigms for music learning to more entertainment-oriented interfaces, such as virtual musical instruments and ensemble conductor simulation. The main contributions of this thesis are:
    · It is shown that the use of signal processing techniques on the music signal and music information retrieval techniques can create enticing interfaces for music learning. Concretely, the research conducted includes the implementation and experimental evaluation of a set of learning-oriented applications which use these techniques to implement inexpensive, easy-to-use human-computer interfaces that serve as support tools in music learning processes.
    · The thesis explores the use of tracking systems and machine learning techniques to achieve more sophisticated interfaces for innovative music interaction paradigms. Concretely, the studies conducted show that it is feasible to emulate the functionality of musical instruments such as the drum kit or the theremin. In a similar way, more complex musical roles can also be recreated through new interaction models, as in the case of the ensemble conductor or a step-aerobics application.
    · The benefits of using advanced human-computer interfaces in musical experiences are reviewed and assessed through experimental evaluation. The addition of these interfaces is shown to contribute positively to user perception, providing more satisfying and enriching experiences overall.
    · The thesis also illustrates that the use of machine learning algorithms and signal processing, along with new interaction devices, provides an effective framework for human gesture recognition and prediction, and even mood estimation.
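
    As one small illustration of the kind of inexpensive music-signal analysis such learning interfaces can build on, a generic autocorrelation pitch estimator (a textbook method, not taken from the thesis) fits in a few lines:

```python
# Generic autocorrelation-based pitch estimator, illustrating inexpensive
# music-signal analysis for learning interfaces; the method and parameters
# are textbook choices, not the thesis's own.
import numpy as np

def estimate_pitch(frame, sample_rate, fmin=50.0, fmax=1000.0):
    """Return the dominant fundamental frequency (Hz) of one audio frame."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sample_rate / fmax)          # smallest lag to consider
    hi = int(sample_rate / fmin)          # largest lag to consider
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sample_rate / lag

# Example: a 440 Hz sine should be recovered as roughly 440 Hz.
sr = 44100
t = np.arange(2048) / sr
print(round(estimate_pitch(np.sin(2 * np.pi * 440 * t), sr), 1))
```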

    A study of the temporal relationship between eye actions and facial expressions

    A dissertation submitted in fulfillment of the requirements for the degree of Master of Science in the School of Computer Science and Applied Mathematics, Faculty of Science, August 15, 2017. Facial expression recognition is one of the most common means of communication used for complementing the spoken word. However, people have grown to master ways of exhibiting deceptive expressions. Hence, it is imperative to understand differences in expressions, mostly for security purposes among others. Traditional methods employ machine learning techniques to differentiate real and fake expressions. However, this approach does not always work, as human subjects can easily mimic real expressions with a bit of practice. This study presents an approach that evaluates the time-related distance that exists between eye actions and an exhibited expression. The approach gives insights into some of the most fundamental characteristics of expressions. The study focuses on finding and understanding the temporal relationship that exists between eye blinks and smiles. It further looks at the relationship that exists between eye closure and pain expressions. The study incorporates active appearance models (AAM) for feature extraction and support vector machines (SVM) for classification. It also tests extreme learning machines (ELM) in both the smile and pain studies, which attain better results than predominant algorithms such as the SVM. The study shows that eye blinks are highly correlated with the beginning of a smile in posed smiles, while eye blinks are highly correlated with the end of a smile in spontaneous smiles. A high correlation is also observed between eye closure and pain in spontaneous pain expressions. Furthermore, this study suggests ideas that lead to potential applications such as lie detection systems, robust health care monitoring systems and enhanced animation design systems, among others.
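
    The core measurement, the temporal offset between an eye action and an expression, can be sketched as the cross-correlation lag between two per-frame intensity signals; the signals, frame rate and sign convention below are illustrative assumptions, not the dissertation's implementation:

```python
# Hedged sketch: estimating the lag between a per-frame eye-blink signal and
# a smile-intensity signal by cross-correlation. All values are illustrative.
import numpy as np

def temporal_offset(blink, smile, fps=30.0):
    """Lag (seconds) at which blink activity best aligns with smile intensity.
    Positive means blinks lead the smile; negative means they trail it."""
    b = (blink - blink.mean()) / (blink.std() + 1e-12)
    s = (smile - smile.mean()) / (smile.std() + 1e-12)
    corr = np.correlate(s, b, mode="full")
    lag = np.argmax(corr) - (len(b) - 1)    # lag in frames
    return lag / fps

# Toy example: a blink burst 15 frames (0.5 s) before smile onset.
n = 200
blink = np.zeros(n); blink[50:55] = 1.0
smile = np.zeros(n); smile[65:120] = 1.0
print(temporal_offset(blink, smile))  # ~0.5 s: the blink precedes the smile
```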

    Intelligent Biosignal Analysis Methods

    Get PDF
    This book describes recent efforts in improving intelligent systems for automatic biosignal analysis. It focuses on machine learning and deep learning methods used for classification of different organism states and disorders based on biomedical signals such as EEG, ECG, HRV, and others.
