5 research outputs found

    Sinyal Elektroensefalografi Untuk Deteksi Emosi Saat Mendengar Stimulus Pembacaan Al-Quran Menggunakan Wavelet Transform

    Get PDF
    Listening to recitation of the Al-Qur'an (Murottal) is known to be widely used to create a relaxed atmosphere. In this study, we therefore investigate the extent to which murottal audio stimulation affects the appearance of alpha waves in brain activity recorded with an Electroencephalography (EEG) sensor and analyzed using the Wavelet Transform. The brain waves detected in the EEG signal were analyzed for each wave phase in the alpha frequency band (8-13 Hz) to observe the relaxed state. We recorded EEG data under four conditions: a calm condition, a tense condition, and each of these with the murottal audio stimulus. Each condition was recorded for 2 minutes. The murottal recitations were selected at random to obtain variation in the data. Classification results using a Recurrent Neural Network (RNN) show that training on normal versus spike data achieved 52%~59% accuracy, normal versus normal murottal achieved 55%~56%, normal versus spike murottal obtained the lowest accuracy at 35%~46%, spike versus normal murottal achieved 57%~67%, spike versus spike murottal achieved 51%~60%, and normal murottal versus spike murottal reached the highest accuracy of 78%. This indicates that listening to Murottal Al-Quran has a significant effect.
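
    The abstract does not spell out the signal-processing details. As a rough, hedged illustration of how the alpha band (8-13 Hz) could be isolated with a discrete wavelet transform, the Python sketch below uses PyWavelets; the 128 Hz sampling rate, the db4 wavelet, and the decomposition depth are assumptions for illustration, not values taken from the paper.

    # Minimal sketch: isolating the alpha band (8-13 Hz) from one EEG channel
    # with a discrete wavelet transform. Sampling rate (128 Hz), wavelet (db4),
    # and decomposition depth are illustrative assumptions.
    import numpy as np
    import pywt

    FS = 128  # assumed sampling rate in Hz

    def alpha_band_power(signal, wavelet="db4"):
        """Estimate alpha-band power by keeping only the matching detail level."""
        # With fs = 128 Hz, a 3-level decomposition gives detail level D3
        # covering roughly 8-16 Hz, which approximates the alpha band.
        coeffs = pywt.wavedec(signal, wavelet, level=3)   # [A3, D3, D2, D1]
        kept = [np.zeros_like(c) for c in coeffs]
        kept[1] = coeffs[1]                                # keep D3 only
        alpha = pywt.waverec(kept, wavelet)[: len(signal)]
        return np.mean(alpha ** 2)                         # mean band power

    if __name__ == "__main__":
        t = np.arange(0, 2 * 60, 1 / FS)   # a 2-minute segment, as in the study
        demo = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
        print(f"alpha-band power: {alpha_band_power(demo):.4f}")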

    STILN: A Novel Spatial-Temporal Information Learning Network for EEG-based Emotion Recognition

    Full text link
    The spatial correlations and the temporal contexts are indispensable in Electroencephalogram (EEG)-based emotion recognition. However, learning the complex spatial correlations among multiple channels is a challenging problem. In addition, learning the temporal contexts helps emphasize the critical EEG frames, because subjects only reach the target emotion during part of the stimuli. Hence, we propose a novel Spatial-Temporal Information Learning Network (STILN) to extract discriminative features by capturing the spatial correlations and temporal contexts. Specifically, the generated 2D power topographic maps capture the dependencies among electrodes and are fed to a CNN-based spatial feature extraction network. Furthermore, a Convolutional Block Attention Module (CBAM) recalibrates the weights of the power topographic maps to emphasize the crucial brain regions and frequency bands. Meanwhile, Batch Normalization (BN) and Instance Normalization (IN) are appropriately combined to mitigate individual differences. For temporal context learning, we adopt a Bidirectional Long Short-Term Memory (Bi-LSTM) network to capture the dependencies among the EEG frames. To validate the effectiveness of the proposed method, subject-independent experiments were conducted on the public DEAP dataset. The proposed method achieved outstanding performance, with arousal and valence classification accuracies of 0.6831 and 0.6752, respectively.
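
    The abstract outlines the pipeline (2D power topographic maps, a CNN with CBAM, then a Bi-LSTM and classifier) but not the exact layer configuration. The PyTorch sketch below is a minimal assumed configuration of that pipeline; the map size, channel counts, number of frequency bands and frames, and the simplified channel-attention block standing in for full CBAM are illustrative assumptions, not the authors' implementation.

    # Minimal PyTorch sketch of a STILN-style pipeline: per-frame CNN features
    # from 2D power topographic maps, a simplified channel-attention block
    # standing in for CBAM, and a Bi-LSTM over the frame sequence.
    # All sizes (9x9 maps, 4 frequency bands, 10 frames) are illustrative assumptions.
    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        """Simplified channel attention (stand-in for full CBAM)."""
        def __init__(self, channels, reduction=4):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):                       # x: (B, C, H, W)
            w = self.mlp(x.mean(dim=(2, 3)))        # global average pool -> weights
            return x * w[:, :, None, None]

    class STILNSketch(nn.Module):
        def __init__(self, bands=4, n_classes=2):
            super().__init__()
            # BN and IN are combined, loosely following the abstract's description.
            self.cnn = nn.Sequential(
                nn.Conv2d(bands, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
                ChannelAttention(32),
                nn.Conv2d(32, 64, 3, padding=1), nn.InstanceNorm2d(64), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),            # -> (B*T, 64, 1, 1)
            )
            self.bilstm = nn.LSTM(64, 64, batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * 64, n_classes)

        def forward(self, maps):                    # maps: (B, T, bands, H, W)
            b, t, c, h, w = maps.shape
            feats = self.cnn(maps.reshape(b * t, c, h, w)).reshape(b, t, 64)
            out, _ = self.bilstm(feats)             # temporal context over frames
            return self.head(out[:, -1])            # classify from the last step

    if __name__ == "__main__":
        model = STILNSketch()
        x = torch.randn(8, 10, 4, 9, 9)             # batch of 8 trials, 10 frames
        print(model(x).shape)                       # torch.Size([8, 2])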

    A neurophysiological signature of dynamic emotion recognition associated with social communication skills and cortical gamma-aminobutyric acid levels in children

    Get PDF
    Introduction: Emotion recognition is a core feature of social perception. In particular, perception of dynamic facial emotional expressions is a major feature of the third visual pathway. However, the classical N170 visual evoked signal does not provide a pure correlate of such processing. Indeed, independent component analysis has demonstrated that the N170 component is already active at the time of the P100, and is therefore distorted by early components. Here we implemented a dynamic face emotion paradigm to isolate a purer face-expression-selective N170. We searched for a neural correlate of perception of dynamic facial emotional expressions by starting with a face baseline from which a facial expression evolved. This provided a specific facial expression contrast signal, which we aimed to relate to social communication abilities and cortical gamma-aminobutyric acid (GABA) levels.
    Methods: We recorded event-related potentials (ERPs) and Magnetic Resonance Spectroscopy (MRS) measures in 35 sex-matched, typically developing (TD) children (10–16 years) during emotion recognition of an avatar morphing/unmorphing from neutral to happy/sad expressions. This task eliminated the contribution of low-level visual components, in particular the P100, by morphing baseline isoluminant neutral faces into specific expressions, thereby isolating dynamic emotion recognition. It was therefore possible to isolate a dynamic face-sensitive N170 devoid of interactions with earlier components.
    Results: We found delayed N170 and P300 responses, with a hysteresis-like dependence on stimulus trajectory (morphing/unmorphing) and hemispheric lateralization. The delayed N170 is generated by an extrastriate source, which can be related to the third visual pathway specialized in biological motion processing. GABA levels in visual cortex were related to N170 amplitude and latency and were predictive of worse social communication performance (SCQ scores). N170 latencies reflected delayed processing speed of emotional expressions and were related to worse social communication scores.
    Discussion: In sum, we found a specific N170 electrophysiological signature of dynamic face processing related to social communication abilities and cortical GABA levels. These findings have potential clinical significance, supporting the hypothesis of a spectrum of social communication abilities and the identification of a specific face-expression-sensitive N170 that can potentially be used in the development of diagnostic and intervention tools.
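
    The paper's ERP analysis pipeline is not described at implementation level. As a generic, hedged illustration of how a face-sensitive N170 peak might be quantified from epoched EEG, the NumPy sketch below baseline-corrects and averages trials, then finds the most negative deflection in an assumed 140-200 ms post-stimulus window; the sampling rate, windows, and single-channel layout are assumptions, not the authors' method.

    # Generic sketch: measuring N170 amplitude and latency from epoched EEG.
    # Epoch layout, sampling rate (assumed 500 Hz), baseline window, and the
    # 140-200 ms search window are illustrative assumptions.
    import numpy as np

    FS = 500                  # assumed sampling rate, Hz
    BASELINE = (-0.2, 0.0)    # seconds relative to stimulus onset
    N170_WINDOW = (0.14, 0.20)

    def n170_peak(epochs, times):
        """epochs: (n_trials, n_samples) for one channel; times: (n_samples,) in s."""
        # Baseline-correct each trial, then average into an ERP.
        base = (times >= BASELINE[0]) & (times < BASELINE[1])
        erp = (epochs - epochs[:, base].mean(axis=1, keepdims=True)).mean(axis=0)
        # The N170 is a negative deflection: take the minimum in the search window.
        win = (times >= N170_WINDOW[0]) & (times <= N170_WINDOW[1])
        idx = np.where(win)[0][np.argmin(erp[win])]
        return erp[idx], times[idx]    # amplitude (uV) and latency (s)

    if __name__ == "__main__":
        times = np.arange(-0.2, 0.6, 1 / FS)
        # Synthetic trials with a negative bump near 170 ms plus noise.
        bump = -4.0 * np.exp(-((times - 0.17) ** 2) / (2 * 0.015 ** 2))
        trials = bump + 0.5 * np.random.randn(40, times.size)
        amp, lat = n170_peak(trials, times)
        print(f"N170: {amp:.2f} uV at {lat * 1000:.0f} ms")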

    Wearable Technology for Mental Wellness Monitoring and Feedback

    Get PDF
    This thesis investigates the transformative potential of wearable monitoring devices in empowering individuals to make positive lifestyle changes and enhance mental well-being. The primary objective is to assess the efficacy of these devices in addressing mental health issues, with a specific focus on stress and anxiety biomarkers. The research includes a systematic literature review that uniquely emphasizes integrating wearable technology into mental wellness, spanning diverse domains such as electronics, wearable technology, machine learning, and data analysis. This novel systematic literature review covers the period from 2010 to 2023, examining the profound impact of the Internet of Things (IoT) across various sectors, particularly healthcare. The thesis extensively explores wearable technologies capable of identifying a broad spectrum of human biomarkers and stress-related indicators, emphasizing their potential benefits for healthcare professionals. Challenges faced by participants and researchers in the practical implementation of wearable technology are addressed through survey analysis, providing substantial evidence for the potential of wearables in bolstering mental health within professional environments. Meticulous analysis of data gathered from the biosignals captured by wearables investigates the impact of stress factors and anxiety on individuals' mental well-being. The study concludes with a thorough discussion of the findings and their implications. Additionally, the integration of Photoplethysmography (PPG) devices is highlighted as a significant advancement in capturing vital biomarkers associated with stress and mental well-being. Through light-based technology, PPG devices monitor blood volume changes in microvascular tissue, providing real-time information on heart rate variability (HRV). This non-invasive approach enables continuous monitoring, offering a dynamic understanding of physiological responses to stressors. The reliability of wearable devices equipped with PPG and Electroencephalography (EEG) sensors is emphasized for capturing differences in subjects' biomarkers. EEG devices measure brainwave patterns, providing insights into neural activity associated with stress and emotional states. The combination of PPG and EEG data enhances the precision of stress and mental well-being assessments, offering a holistic approach that captures peripheral physiological responses and central nervous system activity. In conclusion, integrating PPG devices with subjective methods and EEG sensors significantly advances stress and mental well-being assessment. This multidimensional approach improves measurement accuracy, laying the foundation for personalized interventions and innovative solutions in mental health care. The thesis also evaluates body sensors and their correlation with medically established gold references, exploring the potential of wearable devices in advancing mental health and well-being.
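
    The thesis describes PPG-based HRV monitoring at a high level only. The sketch below is one common way such a step could be implemented, detecting pulse peaks with scipy.signal.find_peaks and computing RMSSD from the inter-beat intervals; the sampling rate, peak-detection parameters, and the choice of RMSSD as the HRV metric are assumptions for illustration, not details taken from the thesis.

    # Minimal sketch: heart-rate-variability (RMSSD) estimation from a PPG trace.
    # Sampling rate and peak-detection parameters are illustrative assumptions.
    import numpy as np
    from scipy.signal import find_peaks

    FS = 50  # assumed PPG sampling rate, Hz

    def rmssd_from_ppg(ppg, fs=FS):
        """Detect pulse peaks and return RMSSD of inter-beat intervals in ms."""
        # Require peaks at least 0.4 s apart (max ~150 bpm) and above the mean.
        peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), height=np.mean(ppg))
        ibi_ms = np.diff(peaks) / fs * 1000.0          # inter-beat intervals
        return float(np.sqrt(np.mean(np.diff(ibi_ms) ** 2)))

    if __name__ == "__main__":
        # Synthetic PPG-like signal: ~72 bpm pulse train plus noise.
        t = np.arange(0, 60, 1 / FS)
        ppg = np.sin(2 * np.pi * 1.2 * t) ** 21 + 0.05 * np.random.randn(t.size)
        print(f"RMSSD: {rmssd_from_ppg(ppg):.1f} ms")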

    The Fusion of Electroencephalography and Facial Expression for Continuous Emotion Recognition

    No full text