
    Affect Recognition Using Electroencephalography Features

    Affect is the psychological display of emotion, often described along three principal dimensions: (1) valence, (2) arousal, and (3) dominance. This thesis explores the ability of computers to recognize human emotions from electroencephalography (EEG) features. The development of computer systems that classify human emotions using physiological signals has recently gained pace in the research and technology communities, because analyzing cognitive state through EEG establishes a direct communication channel between a computer and the human brain. Other applications of recognizing affective states from EEG include identifying stress and cognitive workload in individuals and assisting them in relaxation. This thesis is an extensive study of the design of paradigms that help computer systems recognize emotional states given a multichannel EEG segment. Here, a paradigm refers to the process of first extracting features from the EEG signals using signal processing and then constructing a predictive model via machine learning. We first present a brief review of the state-of-the-art paradigms that have contributed to emotional affect recognition, and then detail the proposed paradigms for recognizing the principal dimensions of affect. Feature selection is also performed to retain only the relevant features. The models created to predict affective states are evaluated quantitatively, by calculating generalization accuracy, and qualitatively, by interpreting them.
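The extract-features-then-classify paradigm described above can be made concrete with a minimal sketch. This is not the thesis's actual pipeline: the band definitions, sampling rate, synthetic data, and choice of logistic regression are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

def band_power_features(eeg, fs=128, bands=((4, 8), (8, 13), (13, 30))):
    """Mean spectral power per channel in assumed theta/alpha/beta bands."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs, axis=-1)
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[..., mask].mean(axis=-1))  # (n_trials, n_channels)
    return np.concatenate(feats, axis=-1)           # (n_trials, n_channels * n_bands)

rng = np.random.default_rng(0)
X_raw = rng.standard_normal((40, 4, 256))  # 40 synthetic trials, 4 channels, 2 s at 128 Hz
y = rng.integers(0, 2, 40)                 # hypothetical binary valence labels
X = band_power_features(X_raw)             # signal-processing step
clf = LogisticRegression(max_iter=1000).fit(X, y)  # predictive-model step
```

Any feature extractor (wavelets, connectivity measures) and any classifier could be substituted; the two-stage structure is what defines the paradigm.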

    A mutual information based adaptive windowing of informative EEG for emotion recognition

    Emotion recognition using brain wave signals involves high-dimensional electroencephalogram (EEG) data. In this paper, a window selection method based on mutual information is introduced to select an appropriate signal window and reduce the length of the signals. The windowing method is motivated by the computational cost of EEG emotion recognition and by the low signal-to-noise ratio of the data. Its aim is to find a reduced signal where the emotions are strongest; the paper suggests that using only the signal section that best describes emotions improves emotion classification. This is achieved by iteratively comparing EEG signal windows of different lengths at different time locations, using the mutual information between the reduced signal and the emotion labels as the criterion. The reduced signal with the highest mutual information is used for extracting the features for emotion classification. In addition, a viable framework for emotion recognition is introduced. Experimental results on the publicly available DEAP and MAHNOB-HCI datasets show significant improvement in emotion recognition accuracy.
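The iterative window search can be sketched as a grid over window lengths and start positions, scoring each candidate by the mutual information between a per-trial window feature and the labels. The single-channel setup, log-variance feature, and specific grid below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_window(trials, labels, win_lens=(64, 128), step=32):
    """Return (start, length, MI) of the window whose simple per-trial
    feature (log-power) carries the most mutual information about labels."""
    n_trials, n_samples = trials.shape
    best = (0, win_lens[0], -np.inf)
    for length in win_lens:
        for start in range(0, n_samples - length + 1, step):
            feat = np.log(np.var(trials[:, start:start + length], axis=1) + 1e-12)
            mi = mutual_info_classif(feat.reshape(-1, 1), labels, random_state=0)[0]
            if mi > best[2]:
                best = (start, length, mi)
    return best

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 60)
trials = rng.standard_normal((60, 256))
trials[labels == 1, 96:160] *= 3.0  # synthetic "emotional" burst mid-signal
start, length, mi = select_window(trials, labels)
```

On this toy data the selected window overlaps the injected burst, illustrating how the criterion homes in on the most label-informative signal section.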

    A multiplex connectivity map of valence-arousal emotional model

    Many studies have already demonstrated electroencephalography (EEG)-based emotion recognition systems with moderate results. Emotions are classified into discrete and dimensional models; we focus on the latter, which incorporates the valence and arousal dimensions. The mainstream methodology extracts univariate measures of EEG activity at various frequencies to classify trials into low/high valence and arousal levels. Here, we evaluated brain connectivity within and between brain frequencies under the multiplexity framework. We analyzed the DEAP EEG database, which contains EEG responses to video stimuli and users' emotional self-assessments. We adopted a dynamic functional connectivity analysis under the notion of our dominant coupling model (DoCM). DoCM detects the dominant coupling mode per pair of EEG sensors, which can be either within-frequency (intra-frequency) or between-frequency (cross-frequency) coupling. DoCM yields an integrated dynamic functional connectivity graph (IDFCG) that keeps both the strength and the preferred dominant coupling mode. We aimed to create a connectomic mapping of the valence-arousal map by employing features derived from the IDFCG. Our results outperformed previous findings, predicting participants' ratings on the valence and arousal dimensions with high accuracy based on a flexibility index of dominant coupling modes.
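The core DoCM idea, picking, per sensor pair, the strongest coupling mode among all within- and cross-frequency combinations, can be sketched with amplitude-envelope correlation as a stand-in coupling estimator (the published model's actual estimator and band set may differ; everything below is an illustrative assumption).

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # assumed bands

def band_envelope(x, fs, lo, hi):
    """Amplitude envelope of x band-pass filtered to [lo, hi] Hz."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

def dominant_coupling(x1, x2, fs=128, bands=BANDS):
    """For one sensor pair, correlate band-limited envelopes for every
    (band_i, band_j) combination and return the strongest coupling mode:
    band_i == band_j is within-frequency, otherwise cross-frequency."""
    env1 = {name: band_envelope(x1, fs, lo, hi) for name, (lo, hi) in bands.items()}
    env2 = {name: band_envelope(x2, fs, lo, hi) for name, (lo, hi) in bands.items()}
    best_mode, best_r = None, -np.inf
    for n1, e1 in env1.items():
        for n2, e2 in env2.items():
            r = abs(np.corrcoef(e1, e2)[0, 1])
            if r > best_r:
                best_mode, best_r = (n1, n2), r
    return best_mode, best_r

rng = np.random.default_rng(2)
t = np.arange(0, 4, 1 / 128)
carrier = 1 + 0.8 * np.sin(2 * np.pi * 0.5 * t)  # shared slow amplitude modulation
x1 = carrier * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)  # alpha sensor
x2 = carrier * np.sin(2 * np.pi * 20 * t) + 0.1 * rng.standard_normal(t.size)  # beta sensor
mode, strength = dominant_coupling(x1, x2)
```

Here the two synthetic sensors share an envelope across different carrier frequencies, so the detected dominant mode is cross-frequency (alpha-beta); repeating this per pair and per time window would populate an IDFCG-like structure.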

    Brain Computer Interfaces and Emotional Involvement: Theory, Research, and Applications

    This reprint is dedicated to the study of brain activity related to emotional and attentional involvement, as measured by brain-computer interface (BCI) systems designed for different purposes. A BCI system can translate brain signals (e.g., electric or hemodynamic brain activity indicators) into a command to execute an action in the BCI application (e.g., a wheelchair, a cursor on a screen, a spelling device, or a game). These tools have the advantage of real-time access to the individual's ongoing brain activity, which can provide insight into the user's emotional and attentional states by training a classification algorithm to recognize mental states. The success of BCI systems in contemporary neuroscientific research relies on the fact that they allow one to "think outside the lab". The integration of technological solutions, artificial intelligence, and cognitive science has allowed, and will continue to allow, researchers to envision ever more applications. The clinical and everyday uses are described with the aim of inviting readers to open their minds to potential further developments.

    Physiological-based Driver Monitoring Systems: A Scoping Review

    A physiological-based driver monitoring system (DMS) has attracted research interest and has great potential for providing more accurate and reliable monitoring of the driver's state during a driving experience. Many driver monitoring systems are driver-behavior-based or vehicle-based. When these non-physiological DMS are coupled with physiological data analysis from electroencephalography (EEG), electrooculography (EOG), electrocardiography (ECG), and electromyography (EMG), the physical and emotional state of the driver may also be assessed. Drivers' wellness can also be monitored, and hence traffic collisions can be avoided. This paper highlights work published in the past five years related to physiological-based DMS. Specifically, we focus on the physiological indicators applied in DMS design and development. Work utilizing key physiological indicators related to driver identification, driver alertness, driver drowsiness, driver fatigue, and drunk driving is identified and described based on the PRISMA Extension for Scoping Reviews (PRISMA-Sc) framework. The relationships between selected papers are visualized using keyword co-occurrence. Findings are presented using a narrative review approach based on classifications of DMS. Finally, the challenges of physiological-based DMS are highlighted in the conclusion. DOI: 10.28991/CEJ-2022-08-12-020

    Brain Music: A Generative System for Creating Symbolic Music from Affective Neural Responses

    This master's thesis presents an innovative multimodal deep learning methodology that combines an emotion classification model with a music generator, aimed at creating music from electroencephalography (EEG) signals and thus delving into the interplay between emotions and music. The results achieve three specific objectives. First, since the performance of brain-computer interface (BCI) systems varies significantly among subjects, an approach based on knowledge transfer between subjects is introduced to improve the performance of individuals facing challenges in motor-imagery-based BCI systems. This approach combines labeled EEG data with structured information, such as psychological questionnaires, through a "Kernel Matching CKA" method. We employ a deep neural network (Deep&Wide) for motor imagery classification; the results underscore its potential to enhance motor skills in brain-computer interfaces. Second, we propose a technique called "Labeled Correlation Alignment" (LCA) to sonify neural responses to stimuli represented in unstructured data, such as affective music, generating musical features from emotion-induced brain activity. LCA addresses between-subject and within-subject variability through correlation analysis, enabling the creation of acoustic envelopes and the distinction of different sound information; this makes LCA a promising tool for interpreting neural activity and its response to auditory stimuli. Finally, we develop an end-to-end deep learning methodology for generating MIDI music content (symbolic data) from EEG signals induced by affectively labeled music. This methodology encompasses data preprocessing, feature-extraction model training, and a feature matching process using Deep Centered Kernel Alignment, enabling music generation from EEG signals. Together, these achievements represent significant advances in understanding the relationship between emotions and music and in applying artificial intelligence to music generation from brain signals, offering new perspectives and tools for musical creation and for research in emotional neuroscience. Our experiments used public databases such as GigaScience, Affective Music Listening, and the DEAP dataset. (Maestría: Magíster en Ingeniería - Automatización Industrial. Investigación en Aprendizaje Profundo y señales Biológicas. Eléctrica, Electrónica, Automatización y Telecomunicaciones. Sede Manizale)
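The Centered Kernel Alignment used above for feature matching has a simple linear form that measures how similar two feature spaces are, up to rotation and scaling. The sketch below is the standard linear-CKA formula applied to synthetic data, not the thesis's deep variant; the feature dimensions and sample sizes are arbitrary assumptions.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two feature matrices of
    shape (n_samples, d1) and (n_samples, d2): 1.0 means identical
    representations up to orthogonal transform/scaling, near 0 means unrelated."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(3)
eeg_feats = rng.standard_normal((100, 16))          # hypothetical EEG embedding
rot = np.linalg.qr(rng.standard_normal((16, 16)))[0]
music_feats_aligned = eeg_feats @ rot               # same representation, rotated
music_feats_random = rng.standard_normal((100, 16)) # unrelated representation
cka_hi = linear_cka(eeg_feats, music_feats_aligned)
cka_lo = linear_cka(eeg_feats, music_feats_random)
```

Rotation invariance is why CKA suits cross-modal matching: the aligned pair scores 1.0 even though no individual coordinate matches, while the unrelated pair scores low.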

    Improving Emotion Recognition Systems by Exploiting the Spatial Information of EEG Sensors

    Electroencephalography (EEG)-based emotion recognition is gaining increasing importance due to its potential applications in various scientific fields, ranging from psychophysiology to neuromarketing. A number of approaches have been proposed that use machine learning (ML) to achieve high recognition performance by engineering features from brain-activity dynamics. Since ML performance can be improved by a 2D feature representation that exploits the spatial relationships among the features, here we propose a novel input representation that re-arranges EEG features as an image reflecting the top view of the subject's scalp. This approach enables emotion recognition through image-based ML methods such as pre-trained deep neural networks or "trained-from-scratch" convolutional neural networks; we employ both techniques to demonstrate the effectiveness of the proposed input representation. We also compare the recognition performance of these methods against state-of-the-art tabular data analysis approaches, which do not utilize the spatial relationships between the sensors. We test our approach on two publicly available benchmark datasets for EEG-based emotion recognition, DEAP and MAHNOB-HCI. Our results show that the "trained-from-scratch" convolutional neural network outperforms the best approaches in the literature, achieving 97.8% and 98.3% accuracy in valence and arousal classification on MAHNOB-HCI, and 91% and 90.4% on DEAP, respectively.
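The scalp-image input representation amounts to scattering per-channel feature values onto a 2D grid whose layout mirrors electrode positions. The grid size and the coordinates of these few 10-20 electrodes are illustrative assumptions (a real system would map all channels, and possibly interpolate between them).

```python
import numpy as np

# Hypothetical 9x9 grid coordinates (row, col) approximating the top view
# of the scalp for a subset of 10-20 system electrodes.
GRID_POS = {
    "Fp1": (0, 3), "Fp2": (0, 5),
    "F3":  (2, 2), "Fz":  (2, 4), "F4": (2, 6),
    "C3":  (4, 2), "Cz":  (4, 4), "C4": (4, 6),
    "P3":  (6, 2), "Pz":  (6, 4), "P4": (6, 6),
    "O1":  (8, 3), "O2":  (8, 5),
}

def features_to_image(channel_feats, grid=GRID_POS, size=9):
    """Scatter per-channel feature values onto a size x size image whose
    layout mirrors electrode positions; unmapped cells stay at zero."""
    img = np.zeros((size, size), dtype=float)
    for ch, val in channel_feats.items():
        r, c = grid[ch]
        img[r, c] = val
    return img

feats = {ch: float(i) for i, ch in enumerate(GRID_POS)}  # dummy per-channel features
img = features_to_image(feats)  # 2D input for a CNN, one such image per feature/band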