
    A Dual-Modality Emotion Recognition System of EEG and Facial Images and its Application in Educational Scene

    With the development of computer science, people's interactions with computers, or with each other through computers, have become more frequent. Human-computer and computer-mediated human-to-human interactions are now common in daily life: online chat, online banking, facial recognition, and so on. When communication is restricted to text alone, however, the effectiveness of information transfer can drop to around 30% of that of face-to-face communication. Communication becomes truly efficient only when we can see one another's reactions and sense each other's emotions. This issue is especially noticeable in education. In traditional offline teaching, a teacher can judge a student's current emotional state from facial expressions and adjust the teaching method accordingly. With the advancement of computing and the impact of Covid-19, a growing number of schools and educational institutions are adopting online or video-based instruction, and in such settings it is difficult for teachers to get feedback from students. This thesis therefore proposes an emotion recognition method for educational scenarios that helps teachers quantify students' emotional states in class and can guide them in exploring or adjusting teaching methods. Text, physiological signals, gestures, facial images, and other data types are commonly used for emotion recognition. Among these, data collection for facial-image emotion recognition is particularly convenient and fast, although people may deliberately conceal their true emotions, leading to inaccurate recognition results. Emotion recognition based on EEG signals can compensate for this drawback. Taking these issues into account, this thesis first employs an SVM with PCA (SVM-PCA) to classify emotions in EEG data, then employs a deep CNN to classify emotions in the subject's facial images.
Finally, Dempster-Shafer (D-S) evidence theory is used to fuse and analyze the two classification results, yielding a final emotion recognition accuracy of 92%. The specific research content of this thesis is as follows: 1) The background of emotion recognition systems in teaching scenarios is discussed, along with the use of various single-modality systems for emotion recognition. 2) EEG emotion recognition based on SVM is analyzed in detail. The theory of EEG signal generation, frequency-band characteristics, and emotional dimensions is introduced. The EEG signal is first filtered and processed for artifact removal, features are then extracted from the processed signal using wavelet transforms, and the result is fed into the proposed SVM-PCA for emotion recognition, achieving an accuracy of 64%. 3) The proposed deep CNN is used to recognize emotions in facial images. First, the Adaboost algorithm detects and crops the face region in the image, and gray-level equalization is applied to the cropped image. The preprocessed images are then trained and tested with the deep CNN, giving an average accuracy of 88%. 4) A fusion method at the decision-making layer is developed. Decision-level fusion is carried out on the results of EEG emotion recognition and facial-expression emotion recognition, and the final dual-modality result, with a system accuracy of 92%, is obtained using D-S evidence theory. 5) The dual-modality emotion recognition system's data collection procedure is designed; following this procedure, real data from an educational scene are collected and analyzed, and the final accuracy of the dual-modality system is 82%. Teachers can use the emotion recognition results as a guide and reference to improve their teaching efficacy.
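    The decision-level fusion can be sketched with Dempster's combination rule over singleton emotion classes. This is a minimal illustration, assuming each classifier outputs per-class confidences that already sum to one; the class names and scores below are hypothetical, not taken from the thesis:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the same singleton hypotheses
    using Dempster's rule: multiply agreeing assignments, then
    renormalize to discard the mass of conflicting pairs."""
    classes = m1.keys()
    joint = {c: m1[c] * m2[c] for c in classes}  # agreeing pairs only
    total = sum(joint.values())                  # 1 - conflict
    if total == 0:
        raise ValueError("total conflict: sources fully disagree")
    return {c: v / total for c, v in joint.items()}

# Hypothetical per-class confidences from the two modalities
eeg = {"happy": 0.5, "neutral": 0.3, "sad": 0.2}    # e.g. SVM-PCA on EEG
face = {"happy": 0.7, "neutral": 0.2, "sad": 0.1}   # e.g. deep CNN on face
fused = dempster_combine(eeg, face)
```

    When the two sources agree, the fused belief in the shared top class (roughly 0.81 here) exceeds either source's individual confidence, which is the behavior a decision-level fusion of a weaker EEG classifier and a stronger image classifier relies on.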

    EEG Based Emotion Monitoring Using Wavelet and Learning Vector Quantization

    Emotion identification is needed, for example, in brain-computer interface (BCI) applications and in emotional therapy and medical rehabilitation. Some emotional states, such as excitement, relaxation, and sadness, can be characterized by the frequency content of the EEG signal, so the signal extracted at certain frequencies is useful for distinguishing the three emotional states. Real-time classification of the EEG signal depends on extraction methods that increase class separability and on identification methods with fast computation. This paper proposes real-time human emotion monitoring using wavelets and learning vector quantization (LVQ). Before machine learning, training data were prepared from 10 subjects, 10 trials, 3 classes, and 16 segments (480 data sets in total). Each data set was processed in 10 seconds and decomposed into alpha, beta, and theta waves using wavelets; these became the input to the LVQ identification system for the three emotional states of excited, relaxed, and sad. The results show that using wavelets improved accuracy from 72% to 87%, and that increasing the amount of training data increased accuracy further. The system was integrated with a wireless EEG device to monitor emotional state in real time, updating every 10 seconds; processing takes 0.44 seconds, which is negligible relative to the 10-second window.
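    The LVQ identification stage can be sketched as follows, assuming the wavelet step has already been reduced to a per-segment feature vector (for example, energies of the alpha, beta, and theta bands). The LVQ1 update rule below is the standard textbook variant, not necessarily the exact configuration used in the paper:

```python
import numpy as np

def train_lvq1(X, y, n_classes, lr=0.1, epochs=50, seed=0):
    """LVQ1 with one prototype per class: the winning prototype is
    moved toward a same-class sample and away from a different-class
    sample. Prototypes start at the class means."""
    rng = np.random.default_rng(seed)
    protos = np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(protos - X[i], axis=1)
            w = int(d.argmin())                    # winning prototype
            step = lr * (X[i] - protos[w])
            protos[w] += step if w == y[i] else -step
    return protos

def predict_lvq(protos, X):
    """Classify each sample by its nearest prototype."""
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return d.argmin(axis=1)
```

    Nearest-prototype prediction is a handful of distance computations per window, which is consistent with the paper's emphasis on identification methods with fast computation for real-time use.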

    A Python-based Brain-Computer Interface Package for Neural Data Analysis

    Anowar, Md Hasan, A Python-based Brain-Computer Interface Package for Neural Data Analysis. Master of Science (MS), December, 2020, 70 pp., 4 tables, 23 figures, 74 references. Although a growing amount of research has been dedicated to neural engineering, only a handful of software packages are available for brain signal processing. Popular brain-computer interface packages depend on commercial software products such as MATLAB. Moreover, almost every brain-computer interface software package is designed for a specific neuro-biological signal; there is no single Python-based package that supports motor imagery, sleep, and stimulated brain signal analysis. The necessity of introducing a brain-computer interface package that can be a free alternative to commercial software motivated me to develop a toolbox on the Python platform. In this thesis, the structure of MEDUSA, a brain-computer interface toolbox, is presented, and its features are demonstrated with publicly available data sources. The MEDUSA toolbox provides a valuable tool for biomedical engineers and computational neuroscience researchers.

    Brain-computer interface of focus and motor imagery using wavelet and recurrent neural networks

    A brain-computer interface (BCI) is a technology that allows a device to be operated without muscles or sound, directly from the brain through processed electrical signals. It works by capturing electrical or magnetic signals from the brain, which are then processed to obtain the information they contain. Usually, a BCI uses information from electroencephalogram (EEG) signals based on various variables of interest. This study proposes a BCI to drive external devices, such as a drone simulator, from EEG signal information. Motor imagery (MI) and a focus variable were extracted from the EEG signal using wavelets and then classified by recurrent neural networks (RNNs); to overcome the vanishing-memory problem of plain RNNs, long short-term memory (LSTM) was used. The results show that the BCI with wavelets and an RNN can drive external devices on non-training data with an accuracy of 79.6%. The experiments also showed that the AdaDelta model is better than the Adam model in accuracy and loss values, whereas in training time the Adam model is faster than the AdaDelta model.
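    The LSTM mechanism that addresses the vanishing-memory problem can be illustrated with a single NumPy step. The gating structure below is the standard LSTM formulation; the dimensions and weights are placeholders, not the paper's trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. The forget/input gates make the cell state an
    additive, gated memory, which is what lets gradients survive over
    long sequences where a plain RNN's would vanish."""
    H = h.shape[0]
    z = W @ x + U @ h + b                 # stacked pre-activations, shape (4H,)
    i = sigmoid(z[0:H])                   # input gate
    f = sigmoid(z[H:2 * H])               # forget gate
    o = sigmoid(z[2 * H:3 * H])           # output gate
    g = np.tanh(z[3 * H:4 * H])           # candidate cell update
    c_new = f * c + i * g                 # gated cell state
    h_new = o * np.tanh(c_new)            # hidden state passed onward
    return h_new, c_new
```

    In a classifier such as the one described, this step would be applied across the wavelet-extracted sequence and the final hidden state fed to an output layer.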

    Emotion and Attention of Neuromarketing Using Wavelet and Recurrent Neural Networks

    Neuromarketing is one method for evaluating video advertisements. Its information comes from the viewer's mind, which minimizes subjectivity, and it can overcome the difficulty that respondents sometimes do not know their own response to the video ads they watch. Neuromarketing is based on neuropsychology, sourced from the human brain through electrical activity recorded by electroencephalogram; neuropsychological variables typically include emotion, attention, and concentration. This research proposes a wavelet method and recurrent neural networks to measure the emotion and attention variables of neuropsychology in real time, every two seconds, while a video ad is watched. The results show that wavelets and recurrent neural networks achieve an accuracy of 100% on training data and 89.73% on new data. The experiments also show that the RMSprop optimization model for weight correction yields accuracy 1.34% higher than the Adam model, while using wavelets for extraction increases accuracy by 4%.
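    The two-second real-time measurement implies segmenting the incoming EEG stream into fixed windows, one classification per window. A minimal sketch follows; the sampling rate is a placeholder, and the wavelet extraction and classifier that consume each window are omitted:

```python
import numpy as np

def segment_windows(signal, fs, win_sec=2.0):
    """Split a continuous 1-D signal into consecutive non-overlapping
    windows of win_sec seconds; trailing samples that do not fill a
    whole window are dropped."""
    step = int(fs * win_sec)
    n = len(signal) // step
    return signal[:n * step].reshape(n, step)
```

    Each row of the result would then be decomposed and scored independently, giving one emotion/attention estimate every two seconds as described.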

    Emotion brain-computer interface using wavelet and recurrent neural networks

    A Brain-Computer Interface (BCI) usually relies on intermediate information obtained from EEG signals. This paper proposes a BCI that controls a robot simulator based on three emotions over five-second windows, by first extracting features with a wavelet transform and then classifying with Recurrent Neural Networks (RNNs). Emotion is among the brain variables that can be used to drive external devices, and a BCI's success depends on its ability to recognize a person's emotions from their EEG signals. One method to recognize EEG signals appropriately as a control signal is the wavelet transform, which decomposes the EEG signal into theta, alpha, and beta waves that serve as input to the RNN; connectivity between sequence steps is handled with Long Short-Term Memory (LSTM). The study also compared a frequency extraction method using the Fast Fourier Transform (FFT). The results show that by extracting EEG signals with wavelet transforms we can achieve an accuracy of 100% on the training data and 70.54% on new data, whereas the same RNN configuration without preprocessing gave 39% accuracy, and adding FFT increased it only to 52%. Furthermore, by using frequency-filter features, accuracy increases from 70.54% to 79.3%. These results show the importance of feature selection, because RNNs are sensitive to the ordering and content of their input sequences. The use of emotion variables remains practical for instructing BCI-based external devices, with an average computing time of merely 0.235 seconds.
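    The decomposition into theta, alpha, and beta components can be illustrated with an ideal band-pass in the frequency domain. Note that this FFT-mask sketch is a simple stand-in for the wavelet decomposition the paper actually uses, and the band edges are conventional values, not necessarily the paper's:

```python
import numpy as np

# Conventional EEG band edges in Hz (an assumption of this sketch)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_split(x, fs):
    """Split a 1-D signal into theta/alpha/beta components by zeroing
    rFFT coefficients outside each band and inverting (ideal band-pass)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        out[name] = np.fft.irfft(X * mask, n=len(x))
    return out
```

    The per-band components (or their energies) then form the sequence features fed to the RNN; a wavelet decomposition additionally preserves time localization within each band, which is one reason it can outperform plain FFT features here.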

    Nonparametric Weight Initialization of Neural Networks via Integral Representation

    A new initialization method for hidden parameters in a neural network is proposed. Derived from the integral representation of the neural network, a nonparametric probability distribution of hidden parameters is introduced. In this proposal, hidden parameters are initialized by samples drawn from this distribution, and output parameters are fitted by ordinary linear regression. Numerical experiments show that backpropagation with the proposed initialization converges faster than uniformly random initialization. It is also shown that the proposed method achieves sufficient accuracy by itself, without backpropagation, in some cases.
    Comment: For ICLR 2014; revised into 9 pages; revised into 12 pages (with supplements)
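    The sample-then-regress structure of the proposal can be sketched as a random-features fit: draw hidden parameters from a distribution, then fit only the output layer by ordinary linear regression, with no backpropagation. The Gaussian/uniform sampling below is a generic placeholder; the paper derives a nonparametric, data-dependent distribution from the integral representation, which this sketch does not reproduce:

```python
import numpy as np

def fit_random_features(x, y, n_hidden=200, seed=0):
    """Sample hidden weights/biases (placeholder distributions), then
    fit the output layer by ordinary least squares -- no backprop."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n_hidden)          # hidden weights (1-D input)
    b = rng.uniform(-3, 3, n_hidden)           # hidden biases
    Phi = np.tanh(np.outer(x, w) + b)          # hidden activations, (N, n_hidden)
    beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w, b, beta

def predict(x, w, b, beta):
    return np.tanh(np.outer(x, w) + b) @ beta
```

    Even with generic sampling, the least-squares output fit can approximate smooth targets well; the paper's contribution is choosing the hidden-parameter distribution so that such samples are good from the start, which also gives backpropagation a faster-converging starting point.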