
    Investigation of window size in classification of EEG-emotion signal with wavelet entropy and support vector machine

    © 2015 IEEE. When dealing with patients with psychological or emotional symptoms, medical practitioners are often faced with the problem of objectively recognizing their patients' emotional state. In this paper, we approach this problem using a computer program that automatically extracts emotions from EEG signals. We extend the findings of Koelstra et al. [IEEE Trans. Affective Comput., vol. 3, no. 1, pp. 18-31, 2012] using the same dataset (i.e., DEAP: a dataset for emotion analysis using electroencephalogram, physiological and video signals), where we observed that accuracy can be further improved using wavelet features extracted from shorter time segments. More precisely, we achieved an accuracy of 65% for both valence and arousal using the wavelet entropy of 3- to 12-second signal segments. This improvement in accuracy entails an important discovery: information on emotions contained in the EEG signal may be better described in terms of wavelets and in shorter time segments.
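The windowed wavelet-entropy feature described in this abstract can be sketched as follows. This is a minimal pure-Python illustration: the Haar wavelet, the number of decomposition levels, and the non-overlapping windowing are simplifying assumptions, since the paper's exact wavelet family and preprocessing are not given here.

```python
import math

def haar_dwt(x):
    # One level of the Haar discrete wavelet transform:
    # pairwise scaled sums (approximation) and differences (detail).
    approx = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
    return approx, detail

def wavelet_entropy(x, levels=4):
    # Decompose the segment, collect the energy of each detail band,
    # and return the Shannon entropy of the relative energy distribution.
    energies = []
    approx = list(x)
    for _ in range(levels):
        if len(approx) < 2:
            break
        approx, detail = haar_dwt(approx)
        energies.append(sum(d * d for d in detail))
    total = sum(energies)
    if total == 0:
        return 0.0  # flat signal: no detail energy, entropy defined as 0
    probs = [e / total for e in energies if e > 0]
    return -sum(p * math.log(p) for p in probs)

def segment(signal, fs, window_s):
    # Split a recording sampled at fs Hz into non-overlapping
    # windows of window_s seconds (e.g. the 3-12 s range studied).
    step = int(fs * window_s)
    return [signal[i:i + step] for i in range(0, len(signal) - step + 1, step)]
```

Each window's entropy value would then be fed, together with the other windows' values, to an SVM classifier for valence and arousal.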

    Voice Analysis Using PRAAT Software and Classification of User Emotional State

    Over the last decades, the field of IT has seen incredibly rapid development. This development has shown that it is important not only to push performance and functional boundaries but also to adapt human-computer interaction to modern needs. One possibility is voice control, which nowadays cannot be restricted to direct commands alone: the goal of adaptive interaction between human and computer is understanding human needs. This paper deals with classifying the user's emotional state from voice-track analysis. It describes our solution: the measurement and selection of appropriate voice characteristics using ANOVA, the use of the PRAAT software to analyze many aspects of the voice, and the implementation of our own application that classifies the user's emotional state from his or her voice. The paper presents the results of testing the created application and the possibilities for further expansion and improvement of this solution.
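The ANOVA-based feature selection mentioned above ranks candidate voice characteristics by how strongly their values separate the emotion classes. A minimal sketch of the one-way ANOVA F statistic it relies on (the helper `f_statistic` is illustrative, not code from the paper):

```python
def f_statistic(groups):
    # One-way ANOVA F statistic: ratio of between-group variance to
    # within-group variance. A feature (e.g. mean pitch measured by
    # PRAAT) whose per-emotion groups give a large F is a good
    # candidate for the emotion classifier.
    k = len(groups)                       # number of emotion classes
    n = sum(len(g) for g in groups)       # total number of samples
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)
```

Features would be kept or discarded by comparing F against the critical value for the chosen significance level.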

    Analysis of Textual and Non-Textual Sources of Sentiment in Github

    GitHub is a collaborative platform used primarily for software development. To gain more insight into how teams work on GitHub, we analyze the sentiment content of communication on the platform. To do so, we first use existing sentiment analysis classifiers and compare GitHub data to other social networks, Twitter and Reddit. Noting that users can attach reactions to other users' posts on GitHub, we use these reactions as a label of sentiment information. Using this, we first investigate whether repeated user interaction has an impact on sentiment and find that sentiment is positively correlated with both the amount of prior interaction and its directness. We also investigate whether metrics corresponding to a user's status or power in a project correlate with positive sentiment received, and find that they do. We then build sentiment classifiers using both textual and non-textual information, both of which outperform generic sentiment scorers. In addition, we show that a sentiment classifier built using only non-textual information can perform at a level comparable to a text-based classifier, indicating that significant sentiment information is contained in non-textual information on the GitHub network.
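The non-textual signals described above (prior interaction, directness, author status) could be turned into classifier inputs roughly as follows. The field names and feature set here are assumptions for illustration, not the paper's actual schema:

```python
def nontextual_features(comment, history):
    # Hypothetical, simplified feature extractor: count how often the
    # comment's author previously addressed the same recipient, how many
    # exchanges flowed back the other way (directness), and carry the
    # author's commit count as a crude project-status proxy.
    author, target = comment["author"], comment["target"]
    prior = sum(1 for c in history
                if c["author"] == author and c["target"] == target)
    direct = sum(1 for c in history
                 if c["author"] == target and c["target"] == author)
    return {
        "prior_interactions": prior,   # volume of earlier author->target posts
        "direct_exchanges": direct,    # replies in the opposite direction
        "author_commits": comment.get("author_commits", 0),  # status proxy
    }
```

A vector of such features, labeled by the reaction emoji on the post, could then train a standard classifier alongside or instead of text features.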

    Continuous Analysis of Affect from Voice and Face

    Human affective behavior is multimodal, continuous and complex. Despite major advances within the affective computing research field, modeling, analyzing, interpreting and responding to human affective behavior still remains a challenge for automated systems as affect and emotions are complex constructs, with fuzzy boundaries and with substantial individual differences in expression and experience [7]. Therefore, affective and behavioral computing researchers have recently invested increased effort in exploring how to best model, analyze and interpret the subtlety, complexity and continuity (represented along a continuum e.g., from −1 to +1) of affective behavior in terms of latent dimensions (e.g., arousal, power and valence) and appraisals, rather than in terms of a small number of discrete emotion categories (e.g., happiness and sadness). This chapter aims to (i) give a brief overview of the existing efforts and the major accomplishments in modeling and analysis of emotional expressions in dimensional and continuous space while focusing on open issues and new challenges in the field, and (ii) introduce a representative approach for multimodal continuous analysis of affect from voice and face, and provide experimental results using the audiovisual Sensitive Artificial Listener (SAL) Database of natural interactions. The chapter concludes by posing a number of questions that highlight the significant issues in the field, and by extracting potential answers to these questions from the relevant literature. The chapter is organized as follows. Section 10.2 describes theories of emotion, Sect. 10.3 provides details on the affect dimensions employed in the literature as well as how emotions are perceived from visual, audio and physiological modalities. 
Section 10.4 summarizes how current technology has been developed, in terms of data acquisition and annotation, and automatic analysis of affect in continuous space by bringing forth a number of issues that need to be taken into account when applying a dimensional approach to emotion recognition, namely, determining the duration of emotions for automatic analysis, modeling the intensity of emotions, determining the baseline, dealing with high inter-subject expression variation, defining optimal strategies for fusion of multiple cues and modalities, and identifying appropriate machine learning techniques and evaluation measures. Section 10.5 presents our representative system that fuses vocal and facial expression cues for dimensional and continuous prediction of emotions in valence and arousal space by employing bidirectional Long Short-Term Memory neural networks (BLSTM-NNs), and introduces an output-associative fusion framework that incorporates correlations between the emotion dimensions to further improve continuous affect prediction. Section 10.6 concludes the chapter.
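The output-associative fusion idea mentioned above can be sketched in a few lines: the final estimate for each affect dimension combines the model's raw predictions for both dimensions, exploiting the correlation between valence and arousal. The weights here are placeholders that would in practice come from a learned regression stage, not values from the chapter:

```python
def output_associative_fusion(valence, arousal, w):
    # Sketch of output-associative fusion over per-frame predictions.
    # Each fused dimension is a weighted combination of BOTH raw
    # dimension predictions; w holds the four (assumed learned) weights.
    fused_v = [w["vv"] * v + w["va"] * a for v, a in zip(valence, arousal)]
    fused_a = [w["av"] * v + w["aa"] * a for v, a in zip(valence, arousal)]
    return fused_v, fused_a
```

In the chapter's system the raw per-frame predictions would come from BLSTM-NNs over the fused audiovisual features.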

    Thermal Emotion Recognition

    To improve human-computer interaction in the areas of healthcare, e-learning and video games, many researchers have studied recognizing emotions from text, speech, facial expressions, emotion detection, or electroencephalography (EEG) signals. Among them, emotion recognition using EEG has achieved satisfying accuracy. However, wearing electroencephalography devices limits the user's range of movement, so a noninvasive method is needed to facilitate emotion detection and its applications. We therefore proposed using a thermal camera to capture skin temperature changes and then applying machine learning algorithms to classify emotion changes accordingly. This thesis contains two studies on thermal emotion detection, each compared with EEG-based emotion detection. The first was to find thermal emotion detection profiles in comparison with EEG-based emotion detection technology; the second was to implement an application with deep learning algorithms to visually display the accuracy and performance of both thermal and EEG-based emotion detection. In the first study, we applied HMMs to thermal emotion recognition and, after comparing with EEG-based emotion detection, identified emotion-related features of skin temperature in terms of intensity and rapidity. In the second study, we implemented an emotion detection application supporting both thermal and EEG-based emotion detection by applying the deep learning methods Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM). The accuracy of thermal-image-based emotion detection reached 52.59% and the accuracy of EEG-based detection reached 67.05%. In further study, we will do more research on adjusting the machine learning algorithms to improve thermal emotion detection accuracy.

    Dominant Lyapunov exponent and approximate entropy in heart rate variability during emotional visual elicitation

    In this work we characterized the non-linear complexity of Heart Rate Variability (HRV) in short time series. The complexity of the HRV signal was evaluated during emotional visual elicitation using Dominant Lyapunov Exponents (DLEs) and Approximate Entropy (ApEn). We adopted a simplified model of emotion derived from the Circumplex Model of Affect (CMA), in which emotional mechanisms are conceptualized along two dimensions, valence and arousal. Following the CMA model, a set of visual stimuli standardized in terms of arousal and valence, gathered from the International Affective Picture System (IAPS), was administered to a group of 35 healthy volunteers. The experimental protocol consisted of eight sessions alternating neutral images with high-arousal images. Several works in the literature show chaotic dynamics of HRV during rest or relaxed conditions. The outcomes of this work showed a clear switching mechanism between regular and chaotic dynamics when moving from neutral to arousal elicitation: the mean ApEn decreased with statistical significance during arousal elicitation and the DLE became negative. The results showed a clear distinction between neutral and arousal elicitation and could be profitably exploited to improve the accuracy of emotion recognition systems based on HRV time series analysis.
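The ApEn measure used above quantifies the regularity of a short time series: lower values mean more regular dynamics, which is why a drop in mean ApEn signals the switch away from chaotic HRV. A minimal pure-Python sketch of ApEn as defined by Pincus; the parameters m and r are common defaults, and the paper's exact settings are not stated here:

```python
import math

def approx_entropy(series, m=2, r=0.2):
    # Approximate Entropy: compares how often patterns of length m
    # recur versus patterns of length m + 1, within tolerance r.
    n = len(series)

    def phi(m):
        templates = [series[i:i + m] for i in range(n - m + 1)]
        # For each template, the fraction of templates (self-matches
        # included) within Chebyshev distance r of it.
        fracs = [sum(1 for t2 in templates
                     if max(abs(a - b) for a, b in zip(t1, t2)) <= r)
                 / len(templates)
                 for t1 in templates]
        return sum(math.log(f) for f in fracs) / len(templates)

    return phi(m) - phi(m + 1)
```

Applied to an RR-interval series per elicitation session, this yields one ApEn value per session for the neutral-versus-arousal comparison.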

    Clinimetrics and functional outcome one year after traumatic brain injury
