1,794 research outputs found

    AUTOMATED ARTIFACT REMOVAL AND DETECTION OF MILD COGNITIVE IMPAIRMENT FROM SINGLE CHANNEL ELECTROENCEPHALOGRAPHY SIGNALS FOR REAL-TIME IMPLEMENTATIONS ON WEARABLES

    Get PDF
    Electroencephalography (EEG) is a technique for recording the asynchronous activation of neuronal firing inside the brain with non-invasive scalp electrodes. The EEG signal is widely studied to evaluate cognitive state and to detect brain disorders such as epilepsy, dementia, coma, and autism spectrum disorder (ASD). In this dissertation, the EEG signal is studied for the early detection of Mild Cognitive Impairment (MCI). MCI is the preliminary stage of dementia that may ultimately lead to Alzheimer's disease (AD) in elderly people. Our goal is to develop a minimalistic MCI detection system that could be integrated into wearable sensors. This contribution has three major aspects: 1) cleaning the EEG signal, 2) detecting MCI, and 3) predicting the severity of MCI using data obtained from a single-channel EEG electrode. Artifacts such as eye-blink activity can corrupt EEG signals. We investigate unsupervised and effective removal of the ocular artifact (OA) from single-channel streaming raw EEG data. Wavelet transform (WT) decomposition was systematically evaluated for effectiveness of OA removal in a single-channel EEG system. The Discrete Wavelet Transform (DWT) and the Stationary Wavelet Transform (SWT) were studied with four WT basis functions: haar, coif3, sym3, and bior4.4. The performance of the artifact removal algorithm was evaluated by the correlation coefficient (CC), mutual information (MI), signal-to-artifact ratio (SAR), normalized mean square error (NMSE), and time-frequency analysis. We demonstrate that the WT can be an effective tool for unsupervised OA removal from single-channel EEG data in real-time applications. For MCI detection from the clean EEG data, we collected scalp EEG data while the subjects were stimulated with five auditory speech signals. We extracted 590 features from the Event-Related Potential (ERP) of the collected EEG signals, which included time- and spectral-domain characteristics of the response.
The top 25 features, ranked by the random forest method, were used in classification models to identify subjects with MCI. The robustness of our model was tested using leave-one-out cross-validation while training the classifiers. The best results (leave-one-out cross-validation accuracy 87.9%, sensitivity 84.8%, specificity 95%, and F-score 85%) were obtained using the support vector machine (SVM) method with a radial basis function (RBF) kernel (sigma = 10, cost = 102). Similar performance was also observed with logistic regression (LR), further validating the results. Our results suggest that single-channel EEG could provide a robust biomarker for the early detection of MCI. We also developed a single-channel EEG-based MCI severity monitoring algorithm that generates Montreal Cognitive Assessment (MoCA) scores from the features extracted from EEG. We performed multi-trial and single-trial analyses for the development of the MCI severity monitoring algorithm. We studied Multivariate Regression (MR), Ensemble Regression (ER), Support Vector Regression (SVR), and Ridge Regression (RR) for the multi-trial analysis, and deep neural regression for the single-trial analysis. In the multi-trial case, the best result was obtained from the ER. In our single-trial analysis, we constructed a time-frequency image from each trial and fed it to a deep convolutional neural network (CNN). The performance of the regression models was evaluated by the RMSE and residual analysis. We obtained the best accuracy with the deep neural regression method
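The wavelet-based ocular-artifact removal described above can be sketched as follows. This is a minimal sketch using a hand-rolled Haar DWT and an assumed clip-at-k-sigma threshold rule on the approximation band; the dissertation's actual pipeline evaluates several bases (haar, coif3, sym3, bior4.4) and both DWT and SWT.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail (high-pass)
    return a, d

def haar_idwt(a, d):
    """Inverse of one Haar DWT level (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def remove_ocular_artifact(x, levels=4, k=1.5):
    """Suppress slow, high-amplitude eye-blink activity.

    Decompose `levels` times, clip approximation coefficients that
    exceed k * their standard deviation (blinks dominate the
    low-frequency approximation band), then reconstruct. The
    threshold rule is an illustrative assumption, not the
    dissertation's exact criterion.
    """
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    thr = k * np.std(a)
    a = np.clip(a, -thr, thr)  # clip blink-dominated coefficients
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a
```

The input length must be divisible by 2**levels; a practical implementation would use a wavelet library supporting the other basis functions named in the abstract.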

    Noise Reduction of EEG Signals Using Autoencoders Built Upon GRU based RNN Layers

    Get PDF
    Understanding the cognitive and functional behaviour of the brain through its electrical activity is an important area of research. Electroencephalography (EEG) is a method that measures and records the electrical activity of the brain from the scalp. It has been used for pathology analysis, emotion recognition, clinical and cognitive research, diagnosing various neurological and psychiatric disorders, and other applications. Since EEG signals are sensitive to activities other than those of the brain, such as eye blinking, eye movement, and head movement, it is not possible to record EEG signals without any noise. Thus, it is very important to use an efficient noise reduction technique to obtain more accurate recordings. Numerous traditional techniques, such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), wavelet transformations, and machine learning techniques, have been proposed for reducing noise in EEG signals. The aim of this paper is to investigate the effectiveness of stacked autoencoders built upon Gated Recurrent Unit (GRU) based Recurrent Neural Network (RNN) layers (GRU-AE) against PCA. To achieve this, Harrell-Davis decile values of the reconstructed signals' signal-to-noise ratio distributions were compared, and it was found that the GRU-AE outperformed PCA for noise reduction of EEG signals
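The evaluation protocol above, comparing Harrell-Davis deciles of the reconstructed signals' SNR distributions, can be sketched with SciPy's built-in Harrell-Davis quantile estimator. The synthetic SNR samples below are placeholders standing in for the per-signal SNRs a GRU-AE and PCA would actually produce; they are not the paper's data.

```python
import numpy as np
from scipy.stats.mstats import hdquantiles

def snr_db(clean, reconstructed):
    """Signal-to-noise ratio of a reconstruction, in dB."""
    noise = clean - reconstructed
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

# Placeholder per-signal SNR samples for two hypothetical denoisers.
rng = np.random.default_rng(42)
snr_gru_ae = rng.normal(loc=8.0, scale=2.0, size=200)
snr_pca = rng.normal(loc=5.0, scale=2.0, size=200)

# Harrell-Davis estimates of the nine deciles of each distribution.
probs = np.arange(0.1, 1.0, 0.1)
d_gru = hdquantiles(snr_gru_ae, prob=probs)
d_pca = hdquantiles(snr_pca, prob=probs)

# If every GRU-AE decile exceeds the matching PCA decile, the GRU-AE
# dominates PCA across the whole SNR distribution, not just on average.
gru_dominates = bool(np.all(d_gru > d_pca))
```

Comparing deciles rather than a single mean is robust to skew and outliers in the SNR distribution, which is presumably why the paper adopts it.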

    A multi-artifact EEG denoising by frequency-based deep learning

    Full text link
    Electroencephalographic (EEG) signals are fundamental to neuroscience research and clinical applications such as brain-computer interfaces and neurological disorder diagnosis. These signals are typically a combination of neurological activity and noise, originating from various sources, including physiological artifacts like ocular and muscular movements. Under this setting, we tackle the challenge of distinguishing neurological activity from noise-related sources. We develop a novel EEG denoising model that operates in the frequency domain, leveraging prior knowledge about noise spectral features to adaptively compute optimal convolutional filters for noise separation. The model is trained to learn an empirical relationship connecting the spectral characteristics of the noise and the noisy signal to a non-linear transformation which allows signal denoising. Performance evaluation on the EEGdenoiseNet dataset shows that the proposed model achieves optimal results according to both temporal and spectral metrics. The model is found to remove physiological artifacts from input EEG data, thus achieving effective EEG denoising. Indeed, the model performance either matches or outperforms that achieved by benchmark models, proving to effectively remove both muscle and ocular artifacts without the need to perform any training on the particular type of artifact. Comment: Accepted at the Italian Workshop on Artificial Intelligence for Human-Machine Interaction (AIxHMI 2023), November 06, 2023, Rome, Italy
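The core idea of frequency-domain denoising with prior knowledge of the noise spectrum can be illustrated with a much simpler device than the paper's learned convolutional filters: a Wiener-style per-bin suppression gain. Everything below (white noise, the sampling rate, the single-sinusoid "brain rhythm") is an illustrative assumption, not the paper's model.

```python
import numpy as np

def spectral_denoise(noisy, noise_bin_power):
    """Wiener-style spectral gain: attenuate frequency bins where the
    expected noise power is a large share of the observed power."""
    X = np.fft.rfft(noisy)
    observed_power = np.abs(X) ** 2
    gain = np.clip(1.0 - noise_bin_power / np.maximum(observed_power, 1e-12),
                   0.0, 1.0)
    return np.fft.irfft(gain * X, n=len(noisy))

rng = np.random.default_rng(1)
n, fs = 1024, 256.0
t = np.arange(n) / fs
clean = np.sin(2 * np.pi * 10.0 * t)        # 10 Hz "brain" rhythm
sigma = 0.5
noisy = clean + rng.normal(0.0, sigma, n)   # white measurement noise
# For real white noise, each rfft bin carries roughly n * sigma**2 power,
# so the prior noise spectrum here is flat.
denoised = spectral_denoise(noisy, noise_bin_power=n * sigma ** 2)
```

Bins dominated by the sinusoid keep a gain near 1, while noise-only bins are strongly attenuated; the paper replaces this fixed gain rule with a learned, adaptive mapping from noise and signal spectra to the filter.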

    Explainable deep learning solutions for the artifacts correction of EEG signals

    Get PDF
    The brain's electrical activity can be acquired via electroencephalography (EEG) with electrodes placed on the subject's scalp. Recorded EEG signals contain artifacts arising from muscle movements, eye movements, cardiac activity, and the acquisition equipment itself; these artifacts can significantly degrade signal quality, so their removal is essential in many disciplines to obtain a clean, usable signal. Machine learning (ML) techniques are one family of methods used to classify and remove EEG artifacts. Deep learning (DL) is a branch of ML inspired by the architecture of the human cerebral cortex, a dense network of neurons acting as simple processing units. In this thesis we use ICLabel, a neural network developed within EEGLAB that classifies the independent components (ICs), obtained by independent component analysis (ICA), into seven classes: brain, eye, muscle, heart, channel noise, line noise, and other. ICLabel provides the probability that each IC belongs to one of the six artifact classes or is a pure brain component. We developed a simple convolutional neural network (CNN), similar to ICLabel's, that classifies ICs into two classes, brain and non-brain, so that we could then apply an explainability algorithm, GradCAM, to investigate how the network classifies the ICs. We compared the performance of our CNN with that of ICLabel, finding that the CNN reaches satisfactory accuracy on the two-class (brain/non-brain) task. We then applied GradCAM to the CNN to understand which parts of the IC spectrograms the network uses for classification, highlighting the most important regions of the signal. We can speculate that, as expected, the CNN is driven by components such as line noise (corresponding to 50 Hz and higher harmonics) to identify non-brain components, while it focuses on the 1-30 Hz range to identify brain components. Although promising, these results need further investigation. Moreover, GradCAM could later be applied to ICLabel, too, in order to explain the more sophisticated seven-class DL model
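The spectral evidence the CNN relies on can be illustrated without the network itself: compute an IC's spectrogram and compare its power in the line-noise band (around 50 Hz) with the 1-30 Hz brain band. The sampling rate, band edges, and synthetic IC below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 250.0                               # assumed sampling rate (Hz)
t = np.arange(int(10 * fs)) / fs         # 10 s of data
# A synthetic "line noise" IC: 50 Hz mains plus weak background activity.
rng = np.random.default_rng(7)
ic = np.sin(2 * np.pi * 50.0 * t) + 0.1 * rng.standard_normal(t.size)

# Time-frequency image of the kind fed to the classifier.
f, _, Sxx = spectrogram(ic, fs=fs, nperseg=128, noverlap=64)

# Band-power summary mirroring what GradCAM highlighted:
line_power = Sxx[(f >= 48) & (f <= 52)].mean()   # mains band
brain_power = Sxx[(f >= 1) & (f <= 30)].mean()   # typical EEG band
is_line_noise_like = line_power > brain_power
```

For a genuine brain component the comparison would flip, with most power concentrated below 30 Hz, which matches the regions GradCAM highlighted on the spectrograms.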

    A Dual-Modality Emotion Recognition System of EEG and Facial Images and its Application in Educational Scene

    Get PDF
    With the development of computer science, people's interactions with computers, or with each other through computers, have become more frequent. Human-computer and human-to-human interactions are often seen in daily life: online chat, online banking services, facial recognition functions, etc. Through text messaging alone, however, the effectiveness of information transfer can be reduced to around 30% of the original. Communication becomes truly efficient when we can see each other's reactions and feel each other's emotions. This issue is especially noticeable in education. Offline teaching is a classic teaching style in which teachers can determine a student's present emotional state from their expressions and adjust teaching methods accordingly. With the advancement of computers and the impact of Covid-19, an increasing number of schools and educational institutions are exploring online or video-based instruction. In such circumstances, it is difficult for teachers to get feedback from students. Therefore, this thesis proposes an emotion recognition method for educational scenarios, which can help teachers quantify the emotional state of students in class and guide them in exploring or adjusting teaching methods. Text, physiological signals, gestures, facial images, and other data types are commonly used for emotion recognition. Among these, data collection for facial-image emotion recognition is particularly convenient and fast, although people may subjectively conceal their true emotions, leading to inaccurate recognition results. Emotion recognition based on EEG signals can compensate for this drawback. Taking the above issues into account, this thesis first employs SVM-PCA to classify emotions in EEG data, then employs a deep CNN to classify the emotions in the subjects' facial images.
Finally, D-S evidence theory is used to fuse and analyze the two classification results, obtaining a final emotion recognition accuracy of 92%. The specific research content of this thesis is as follows: 1) The background of emotion recognition systems used in teaching scenarios is discussed, as well as the use of various single-modality systems for emotion recognition. 2) Detailed analysis of EEG emotion recognition based on SVM. The theory of EEG signal generation, frequency-band characteristics, and emotional dimensions is introduced. The EEG signal is first filtered and processed with artifact removal. The processed EEG signal is then used for feature extraction using wavelet transforms. It is finally fed into the proposed SVM-PCA for emotion recognition, and the accuracy is 64%. 3) The proposed deep CNN is used to recognize emotions in facial images. First, the Adaboost algorithm is used to detect and crop the face area in the image, and gray-level balancing is performed on the cropped image. Then the preprocessed images are trained and tested using the deep CNN, and the average accuracy is 88%. 4) A fusion method based on the decision-making layer. Data fusion at the decision level is carried out with the results of EEG emotion recognition and facial expression emotion recognition. The final dual-modality emotion recognition results and a system accuracy of 92% are obtained using D-S evidence theory. 5) The dual-modality emotion recognition system's data collection approach is designed. Based on this process, actual data in the educational scene is collected and analyzed. The final accuracy of the dual-modality system is 82%. Teachers can use the emotion recognition results as a guide and reference to improve their teaching efficacy
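The decision-level fusion step can be sketched with Dempster's rule of combination. When mass is assigned only to singleton emotion classes (no compound hypotheses), the rule reduces to a normalized product of the two classifiers' mass vectors. The emotion labels and mass values below are illustrative, not the thesis's actual outputs.

```python
import numpy as np

def ds_combine(m1, m2):
    """Dempster's rule for basic probability masses on singleton
    hypotheses only: multiply agreeing masses, discard the mass
    assigned to conflicting pairs, and renormalize."""
    joint = m1 * m2
    agreement = joint.sum()          # 1 - agreement is the conflict mass
    if agreement == 0.0:
        raise ValueError("total conflict: sources are incompatible")
    return joint / agreement

emotions = ["positive", "neutral", "negative"]   # illustrative frame
eeg_mass = np.array([0.6, 0.3, 0.1])    # e.g. from an EEG classifier
face_mass = np.array([0.7, 0.2, 0.1])   # e.g. from a face classifier
fused = ds_combine(eeg_mass, face_mass)
```

Here `fused` is approximately [0.857, 0.122, 0.020]: the hypothesis on which both modalities agree is reinforced beyond either classifier's individual confidence, which is the behavior that lets the fused system exceed the single-modality accuracies reported above.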

    Review of Artifact Rejection Methods for Electroencephalographic Systems

    Get PDF
    Technologies using electroencephalographic (EEG) signals have spread to the public with the development of EEG systems. During EEG system operation, recordings ought to be obtained with no restriction of movement for routine use in the real world. However, a lack of consideration of situational behavioral constraints causes technical/biological artifacts that are often mixed with EEG signals and make signal processing difficult in all respects by ingeniously disguising themselves as EEG components. EEG systems that integrate a gold standard or specialized device into their processing strategies could become daily tools in the future if they are robust to such obstructions. In this chapter, we describe algorithms for artifact rejection in multi-/single-channel settings. In particular, we summarize existing single-channel artifact rejection methods, focusing on the advantages and disadvantages of the algorithms, to provide information useful for improving their performance in online EEG systems