Detection of intention level in response to task difficulty from EEG signals
We present an approach for detecting subjects' intention levels in response to task difficulty using an electroencephalogram (EEG) based brain-computer interface (BCI). In particular, we use linear discriminant analysis (LDA) to classify event-related synchronization (ERS) and desynchronization (ERD) patterns associated with right elbow flexion and extension movements while lifting different weights. We observe that it is possible to classify tasks of varying difficulty based on EEG signals. Additionally, we present a correlation analysis between intention levels detected from EEG and surface electromyogram (sEMG) signals. Our experimental results suggest that intention-level information can be extracted from EEG signals in response to task difficulty, and they indicate some level of correlation between EEG and sEMG. With a view towards detecting patients' intention levels during rehabilitation therapies, the proposed approach has the potential to ensure active involvement of patients throughout exercise routines and to increase the efficacy of robot-assisted therapies.
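The classification step described above can be sketched with scikit-learn's LDA. This is a minimal illustration only: the paper's actual electrode montage, feature extraction, and trial counts are not given here, so the band-power features below are synthetic stand-ins.

```python
# Hedged sketch: LDA on ERD/ERS-style band-power features. The feature
# values are synthetic stand-ins, not the paper's real EEG features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Assumed setup: 40 trials x 4 band-power features per task-difficulty
# class, with the heavier load shifting band power slightly.
X_light = rng.normal(loc=0.0, scale=1.0, size=(40, 4))
X_heavy = rng.normal(loc=1.5, scale=1.0, size=(40, 4))
X = np.vstack([X_light, X_heavy])
y = np.array([0] * 40 + [1] * 40)  # 0 = light load, 1 = heavy load

clf = LinearDiscriminantAnalysis().fit(X, y)
acc = clf.score(X, y)  # training accuracy on the synthetic data
```

In practice one would cross-validate per subject rather than score on the training set; the snippet only shows where LDA sits in the pipeline.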
Preference Analysis Method Applying Relationship between Electroencephalogram Activities and Egogram in Prefrontal Cortex Activities: How to Collaborate between Engineering Techniques and Psychology
This paper introduces a method of preference analysis based on electroencephalogram (EEG) analysis of prefrontal cortex activity. The proposed method applies the relationship between EEG activity and the egogram. The EEG is sensed at a single point and recorded by means of a dry-type sensor with a small number of electrodes. The EEG analysis applies feature mining and clustering to EEG patterns using a self-organizing map (SOM). Because prefrontal EEG activity shows individual differences, we construct a feature vector that takes these differences into account as the input modality of the SOM. The input vector for the SOM consists of the extracted EEG feature vector and a human character vector, which quantifies the subject's character through ego analysis using psychological testing. In preprocessing, we extract the EEG feature vector by calculating the time average of each frequency band: θ, low-β, and high-β. To demonstrate the effectiveness of the proposed method, we perform experiments using real EEG data. The results show that the accuracy of the EEG pattern classification is higher than it was before the improvement of the input vector.
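The input-vector construction described above (EEG band averages concatenated with a quantified character vector) and the SOM clustering step can be sketched as follows. This is an assumption-laden toy: the dimensions, the random data, and the simplified SOM update (best-matching unit only, no neighborhood function) are illustrative, not the paper's implementation.

```python
# Illustrative sketch of the SOM input-vector construction; dimensions
# and data are hypothetical stand-ins for the paper's real features.
import numpy as np

rng = np.random.default_rng(1)

def make_input_vector(eeg_band_means, character_vector):
    """Concatenate EEG band averages (theta, low-beta, high-beta)
    with the psychological character vector from the ego analysis."""
    return np.concatenate([eeg_band_means, character_vector])

def train_som(samples, grid=4, epochs=50, lr=0.5):
    """Minimal SOM: move only the best-matching unit toward each
    sample (a full SOM would also update its grid neighbors)."""
    dim = samples.shape[1]
    weights = rng.normal(size=(grid * grid, dim))
    for _ in range(epochs):
        for x in samples:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            weights[bmu] += lr * (x - weights[bmu])
    return weights

# 3 band averages + a 5-element character vector per subject (assumed).
samples = np.array([make_input_vector(rng.random(3), rng.random(5))
                    for _ in range(20)])
weights = train_som(samples)
```

After training, each subject maps to the grid cell of their best-matching unit, and nearby cells correspond to similar EEG-plus-character profiles.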
A Dual-Modality Emotion Recognition System of EEG and Facial Images and its Application in Educational Scene
With the development of computer science, people's interactions with computers or through computers have become more frequent. Human-computer and computer-mediated human-to-human interactions are common in daily life: online chat, online banking services, facial recognition functions, and so on. When communication is limited to text messages, however, the effectiveness of information transfer can drop to around 30% of the original. Communication becomes truly efficient only when we can see each other's reactions and feel each other's emotions.
This issue is especially noticeable in education. Offline teaching is the classic style, in which teachers can gauge a student's current emotional state from their expressions and adjust teaching methods accordingly. With the advancement of computers and the impact of COVID-19, an increasing number of schools and educational institutions are adopting online or video-based instruction. In such circumstances it is difficult for teachers to get feedback from students. Therefore, this thesis proposes an emotion recognition method for educational scenarios that can help teachers quantify the emotional state of students in class and guide them in exploring or adjusting teaching methods.
Text, physiological signals, gestures, facial photographs, and other data types are commonly used for emotion recognition. Among these, data collection for facial-image emotion recognition is particularly convenient and fast, although people may subjectively conceal their true emotions, leading to inaccurate recognition results. Emotion recognition based on EEG signals can compensate for this drawback. Taking these issues into account, this thesis first employs SVM-PCA to classify emotions in EEG data, then employs the deep-CNN to classify emotions in the subjects' facial images. Finally, D-S evidence theory is used to fuse and analyze the two classification results, yielding a final emotion recognition accuracy of 92%. The specific research content of this thesis is as follows:
1) The background of emotion recognition systems used in teaching scenarios is discussed, along with the use of various single-modality systems for emotion recognition.
2) Detailed analysis of EEG emotion recognition based on SVM. The theory of EEG signal generation, frequency-band characteristics, and emotional dimensions is introduced. The EEG signal is first filtered and processed for artifact removal. Features are then extracted from the processed signal using wavelet transforms and fed into the proposed SVM-PCA classifier, achieving an accuracy of 64%.
3) Using the proposed deep-CNN to recognize emotions in facial images. First, the Adaboost algorithm is used to detect and crop the face region in each image, and gray-level equalization is applied to the cropped images. The preprocessed images are then used to train and test the deep-CNN, giving an average accuracy of 88%.
4) A fusion method based on the decision layer. Data fusion at the decision level is carried out on the results of EEG emotion recognition and facial-expression emotion recognition. The final dual-modality emotion recognition results, with a system accuracy of 92%, are obtained using D-S evidence theory.
5) The dual-modality emotion recognition system's data collection procedure is designed. Following this procedure, real data in an educational setting are collected and analyzed; the final accuracy of the dual-modality system is 82%. Teachers can use the emotion recognition results as a guide and reference to improve their teaching efficacy.
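The decision-level fusion step can be sketched with Dempster's rule of combination. This is a simplified illustration: it assigns mass only to singleton emotion hypotheses, whereas full D-S theory also allows mass on composite hypotheses, and the example probabilities are invented.

```python
# Hedged sketch of decision-level fusion via Dempster's rule, assuming
# each modality outputs a basic probability assignment over the same
# emotion classes (singletons only; a simplification of full D-S theory).
import numpy as np

def dempster_combine(m1, m2):
    """Combine two mass functions over singleton hypotheses."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    joint = np.outer(m1, m2)
    agreement = np.diag(joint).sum()   # mass where both modalities agree
    if agreement <= 0.0:
        raise ValueError("total conflict: Dempster's rule undefined")
    # Renormalize the agreeing mass by 1 - conflict (= agreement here).
    return np.diag(joint) / agreement

# Invented example masses over three emotion classes.
eeg_mass = [0.6, 0.3, 0.1]    # EEG classifier's belief per emotion
face_mass = [0.5, 0.4, 0.1]   # facial-image classifier's belief
fused = dempster_combine(eeg_mass, face_mass)
```

Note how agreement is amplified: the first class's fused mass exceeds either modality's individual belief in it, which is the behavior that lets the fused system outperform each single modality.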
EEG-Fest: Few-shot based Attention Network for Driver's Vigilance Estimation with EEG Signals
A lack of driver vigilance is the main cause of most vehicle crashes. Electroencephalography (EEG) has been a reliable and efficient tool for estimating driver drowsiness. Even though previous studies have developed accurate and robust vigilance detection algorithms, these methods still face challenges in the following areas: (a) training with small sample sizes, (b) anomalous signal detection, and (c) subject-independent classification. In this paper, we propose a generalized few-shot model, EEG-Fest, to address these drawbacks. The EEG-Fest model can (a) classify a query sample's drowsiness with only a few support samples, (b) identify whether a query sample is an anomalous signal, and (c) achieve subject-independent classification. The proposed algorithm achieves state-of-the-art results on the SEED-VIG and SADT datasets. Accuracy on the drowsy class reaches 92% and 94% for 1-shot and 5-shot support samples on SEED-VIG, and 62% and 78% for 1-shot and 5-shot support samples on SADT.
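The few-shot idea can be sketched in prototypical-network style: class prototypes are the means of the support embeddings, a query is classified by its nearest prototype, and a query far from every prototype is flagged as anomalous. EEG-Fest's actual attention-based encoder is not reproduced here; the features, radius, and class layout are assumptions for illustration.

```python
# Hedged sketch of few-shot classification with an anomaly check.
# Synthetic embeddings stand in for EEG-Fest's learned features.
import numpy as np

rng = np.random.default_rng(2)

def classify_query(support, support_labels, query, anomaly_radius=10.0):
    """Nearest-prototype classification; returns None for anomalies."""
    labels = np.unique(support_labels)
    prototypes = np.array([support[support_labels == c].mean(axis=0)
                           for c in labels])
    dists = np.linalg.norm(prototypes - query, axis=1)
    if dists.min() > anomaly_radius:
        return None  # anomalous: far from every class prototype
    return labels[np.argmin(dists)]

# 5-shot support set for two classes (alert = 0, drowsy = 1), assumed
# to form two well-separated clusters in an 8-dim embedding space.
support = np.vstack([rng.normal(0, 1, (5, 8)), rng.normal(4, 1, (5, 8))])
support_labels = np.array([0] * 5 + [1] * 5)

pred = classify_query(support, support_labels, rng.normal(4, 1, 8))
anom = classify_query(support, support_labels, np.full(8, 100.0))
```

Because prototypes are recomputed from whichever support samples are supplied, the same classifier generalizes to unseen subjects — the core of the subject-independent, small-sample claims.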
EEG analysis for understanding stress based on affective model basis function
Coping with stress has been shown to prevent many complications in medical conditions. In this paper we present an alternative method for analyzing and understanding stress using four basic emotions, happy, calm, sad, and fear, as our basis functions. Electroencephalogram (EEG) signals were captured from the scalp and measured in response to stimuli for the four basic emotions and to stress-inducing stimuli based on the IAPS emotion stimuli. Features were extracted from the EEG signals using kernel density estimation (KDE) and classified using a multilayer perceptron (MLP), a neural network classifier, to estimate the subject's emotion leading to stress. The results show the potential of using the basic-emotion basis functions to visualize stress perception as an alternative tool for engineers and psychologists.
Keywords: Electroencephalography (EEG), Kernel Density Estimation (KDE), Multilayer Perceptron (MLP), Valence (V), Arousal (A)
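The KDE-then-MLP pipeline can be sketched as follows: each trial's EEG samples are summarized by a Gaussian KDE evaluated on a fixed grid, and the density vectors are fed to an MLP. The channel setup, grid, class structure, and data below are illustrative assumptions, not the paper's protocol.

```python
# Hedged sketch of KDE feature extraction feeding an MLP classifier.
# Synthetic one-channel trials stand in for real EEG recordings.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
grid = np.linspace(-4, 4, 16)  # fixed evaluation grid for KDE features

def kde_features(trial_samples):
    """Evaluate a Gaussian KDE of one trial's samples on the grid."""
    return gaussian_kde(trial_samples)(grid)

# Assumed toy classes: "calm" trials have narrow amplitude spread,
# "fear" trials a wide one, so their density shapes differ.
trials = [rng.normal(0, 0.5, 200) for _ in range(20)] + \
         [rng.normal(0, 2.0, 200) for _ in range(20)]
X = np.array([kde_features(t) for t in trials])
y = np.array([0] * 20 + [1] * 20)

mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)
acc = mlp.score(X, y)  # training accuracy on the synthetic trials
```

The KDE step turns a variable-length sample stream into a fixed-length density vector, which is what makes it a convenient front end for a fixed-input classifier like the MLP.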
Deep-seeded Clustering for Unsupervised Valence-Arousal Emotion Recognition from Physiological Signals
Emotions play a significant role in the cognitive processes of the human brain, such as decision making, learning, and perception. The use of physiological signals has been shown to lead to more objective, reliable, and accurate emotion recognition when combined with emerging machine learning methods. Supervised learning methods have dominated the attention of the research community, but the difficulty of collecting the needed labels makes emotion recognition hard in large-scale semi-controlled or uncontrolled experiments. Unsupervised methods are increasingly being explored; however, sub-optimal signal feature selection and label identification challenge the accuracy and applicability of unsupervised methods. This article proposes an unsupervised deep clustering framework for emotion recognition from physiological and psychological data. Tests on the open benchmark dataset WESAD show that deep k-means and deep c-means distinguish the four quadrants of Russell's circumplex model of affect with an overall accuracy of 87%. Seeding the clusters with the subjects' subjective assessments helps to circumvent the need for labels.
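The seeding idea can be sketched with scikit-learn's k-means: clusters are initialized at the centroids of the trials each subject self-rated into a quadrant, so cluster identities map directly onto Russell's quadrants without post-hoc label matching. The deep encoder of the framework is omitted; the 2-D features and ratings below are synthetic assumptions.

```python
# Hedged sketch of cluster seeding from subjective assessments.
# Synthetic 2-D features stand in for the deep encoder's embeddings.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

# Four well-separated blobs, one per valence-arousal quadrant (assumed).
centers = np.array([[0, 0], [5, 0], [0, 5], [5, 5]], float)
X = np.vstack([rng.normal(c, 0.5, (30, 2)) for c in centers])
self_ratings = np.repeat(np.arange(4), 30)  # subjective quadrant labels

# Seed each cluster at the centroid of the trials self-rated into it.
seeds = np.array([X[self_ratings == q].mean(axis=0) for q in range(4)])
km = KMeans(n_clusters=4, init=seeds, n_init=1).fit(X)

acc = (km.labels_ == self_ratings).mean()  # cluster ids align with seeds
```

Because cluster k starts at quadrant k's centroid, `km.labels_` is directly comparable to the self-ratings, which is exactly the bookkeeping that seeding buys over randomly initialized clustering.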
Multimodal Emotion Recognition Model using Physiological Signals
As an important field of research in human-machine interaction, emotion recognition based on physiological signals has become a research hotspot. Motivated by the outstanding performance of deep learning approaches in recognition tasks, we propose a multimodal emotion recognition model that consists of a 3D convolutional neural network, a 1D convolutional neural network, and a biologically inspired multimodal fusion model that integrates multimodal information at the decision level. We use this model to classify four emotional regions of the arousal-valence plane, i.e., low arousal and low valence (LALV), high arousal and low valence (HALV), low arousal and high valence (LAHV), and high arousal and high valence (HAHV), on the DEAP and AMIGOS datasets. The 3D CNN and 1D CNN are used for emotion recognition based on electroencephalogram (EEG) signals and peripheral physiological signals respectively, achieving accuracies of 93.53% and 95.86% with the original EEG signals on these two datasets. Compared with single-modal recognition, the multimodal fusion model improves emotion recognition accuracy by 5% to 25%, and fusing EEG signals (decomposed into four frequency bands) with peripheral physiological signals achieves accuracies of 95.77% and 97.27% on one dataset and 91.07% and 99.74% on the other. By integrating EEG signals and peripheral physiological signals, the model reaches a highest accuracy of about 99% on both datasets, which shows that the proposed method offers certain advantages for emotion recognition tasks.
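The decision-level integration can be sketched as a weighted average of the per-modality class probabilities. This is a generic stand-in for the paper's biologically inspired fusion model, which is not specified here; the weights and probabilities are invented for illustration.

```python
# Hedged sketch of decision-level fusion of two modality classifiers
# (e.g. a 3D CNN on EEG, a 1D CNN on peripheral signals). The equal
# weighting is an assumption, not the paper's learned fusion rule.
import numpy as np

def fuse_decisions(prob_eeg, prob_peripheral, w_eeg=0.5):
    """Weighted average of two modality-level probability vectors."""
    fused = (w_eeg * np.asarray(prob_eeg, float)
             + (1.0 - w_eeg) * np.asarray(prob_peripheral, float))
    return fused / fused.sum()  # renormalize to a valid distribution

# Four quadrant classes: LALV, HALV, LAHV, HAHV (invented outputs).
p_eeg = [0.6, 0.2, 0.1, 0.1]
p_per = [0.5, 0.1, 0.3, 0.1]
fused = fuse_decisions(p_eeg, p_per)
predicted = int(np.argmax(fused))  # index 0 corresponds to LALV
```

Fusing at the decision level keeps the two networks independent, so either modality can be retrained or replaced without touching the other — one reason decision-level fusion is popular for heterogeneous physiological signals.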
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task