3 research outputs found
EEG-based image classification using an efficient geometric deep network based on functional connectivity
To ensure that the FC-GDN is properly calibrated for the EEG-ImageNet dataset, it is subjected to extensive training, following the FC-GDN pseudo-code, and all relevant parameter weights are gathered. In K-fold cross-validation the dataset is split into "train" and "test" sections; with ten folds, one fold is selected as the test split at each iteration, dividing the dataset into 90% training data and 10% test data. To train all ten folds without overfitting, this procedure is applied repeatedly over the whole dataset, and each training fold converges after several iterations. After all ten folds are trained, the results are analyzed. At each iteration, the FC-GDN weights are optimized with the SGD and ADAM optimizers, and the ideal network design parameters are chosen based on training convergence and test accuracy. This study offers a novel geometric deep learning-based network architecture for classifying visual-stimulus categories using electroencephalogram (EEG) data recorded from human participants while they watched various sorts of images. The primary goals of this study are to (1) eliminate hand-crafted feature extraction from GDL-based approaches and (2) extract brain states via functional connectivity. Tests with the EEG-ImageNet database validate the suggested method's efficacy: FC-GDN boosts classification accuracy more efficiently than other cutting-edge approaches, requiring fewer iterations. In computational neuroscience, neural decoding addresses the problem of mind reading. Because of its ease of use and temporal precision, electroencephalography (EEG) is commonly employed to monitor brain activity, and deep neural networks provide a variety of ways to decode it. Using a Functional Connectivity (FC) Geometric Deep Network (GDN) built on EEG channel functional connectivity, this work recovers hidden brain states directly from high-temporal-resolution data.
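The ten-fold split described above can be sketched as follows. This is a minimal illustration in plain NumPy; the sample count and random seed are placeholders, not values from the paper:

```python
# Hedged sketch of ten-fold cross-validation: each fold serves once as the
# 10% test split, leaving the remaining 90% for training.
import numpy as np

def ten_fold_splits(n_samples, n_folds=10, seed=0):
    """Yield (train_idx, test_idx) index pairs for K-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)          # shuffle once up front
    folds = np.array_split(idx, n_folds)      # ten roughly equal folds
    for k in range(n_folds):
        test_idx = folds[k]
        train_idx = np.concatenate(
            [folds[j] for j in range(n_folds) if j != k])
        yield train_idx, test_idx

# Illustrative dataset size (not from the paper).
splits = list(ten_fold_splits(2000))
```

Each `(train_idx, test_idx)` pair would drive one training run, with the optimizer (SGD or ADAM, per the abstract) applied per fold.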
The time samples taken from each channel represent graph signals on a topological connection network derived from EEG channel functional connectivity. A novel graph neural network architecture evaluates users' visual-perception state from extracted EEG patterns associated with various picture categories, using graph-structured EEG recordings as training data; an efficient graph representation of the EEG signals serves as the foundation for this design. The proposed FC-GDN is tested on EEG-ImageNet, in which each category has a maximum of 50 samples. Nine separate EEG recorders were used to acquire responses to these images. The FC-GDN approach yields 99.4% accuracy, which is 0.1% higher than the most sophisticated method presently available.
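A functional-connectivity graph over EEG channels, of the kind used above to define the graph signals, can be sketched like this. Pearson correlation with a hard threshold is one common FC measure; the paper's exact connectivity metric and threshold are not specified here, so both choices are illustrative:

```python
# Hedged sketch: functional-connectivity adjacency from multichannel EEG,
# using channel-wise Pearson correlation as the edge weight (illustrative;
# the paper's actual FC measure may differ).
import numpy as np

def fc_adjacency(eeg, threshold=0.3):
    """eeg: (n_channels, n_times) array -> thresholded |correlation| matrix."""
    corr = np.corrcoef(eeg)           # (n_channels, n_channels)
    adj = np.abs(corr)
    adj[adj < threshold] = 0.0        # drop weak connections to sparsify
    np.fill_diagonal(adj, 0.0)        # no self-loops
    return adj

# Illustrative sizes: 32 channels, 440 time samples per trial.
rng = np.random.default_rng(1)
eeg = rng.standard_normal((32, 440))
A = fc_adjacency(eeg)
```

The resulting symmetric adjacency matrix defines the graph on which each channel's time samples sit as node features.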
Visual Saliency and Image Reconstruction from EEG Signals via an Effective Geometric Deep Network-Based Generative Adversarial Network
Data Availability Statement: The EEG-ImageNet dataset used in this study is publicly available at https://tinyurl.com/eeg-visual-classification (accessed on 10 October 2022). Copyright © 2022 by the authors. Understanding how the brain perceives input from the outside world is one of the great targets of neuroscience. Neural decoding lets us model the connection between brain activity and visual stimuli, and the reconstruction of images from brain activity can be achieved through this modelling. Recent studies have shown that brain activity is influenced by visual saliency, the most important parts of an image stimulus. In this paper, a deep model is proposed to reconstruct image stimuli from electroencephalogram (EEG) recordings via visual saliency. To this end, the proposed geometric deep network-based generative adversarial network (GDN-GAN) is trained to map EEG signals to the visual saliency map corresponding to each image. The first part of the GDN-GAN consists of Chebyshev graph convolutional layers, whose input is the functional connectivity-based graph representation of the EEG channels. The output of the GDN part is fed to the GAN part of the network to reconstruct the image saliency. The GDN-GAN is trained on the Google Colaboratory Pro platform, and the saliency metrics validate the viability and efficiency of the proposed saliency reconstruction network. The weights of the trained network are then used as initial weights to reconstruct the grayscale image stimuli, so the proposed network realizes image reconstruction from EEG signals. This research received no external funding.
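The Chebyshev graph-convolutional layers mentioned above filter graph signals with a polynomial of the graph Laplacian. A minimal NumPy sketch of one such layer, assuming the common rescaling with lambda_max ≈ 2; node counts, feature sizes, and weights are all illustrative:

```python
# Hedged sketch of a single Chebyshev graph-convolution step, the building
# block of the GDN part described above (sizes/weights are illustrative).
import numpy as np

def normalized_laplacian(adj):
    """L = I - D^{-1/2} A D^{-1/2} for a weighted adjacency matrix."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    return np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

def cheb_conv(x, adj, weights):
    """x: (n_nodes, f_in); weights: list of K (f_in, f_out) arrays."""
    lap = normalized_laplacian(adj)
    lap_t = lap - np.eye(adj.shape[0])     # rescale assuming lambda_max ~= 2
    t_prev, t_curr = x, lap_t @ x          # T_0 X and T_1 X
    out = t_prev @ weights[0]
    if len(weights) > 1:
        out += t_curr @ weights[1]
    for k in range(2, len(weights)):
        t_next = 2 * lap_t @ t_curr - t_prev   # Chebyshev recurrence
        out += t_next @ weights[k]
        t_prev, t_curr = t_curr, t_next
    return out

# Illustrative demo: 8 nodes, 4 input features, 3 output features, K = 3.
rng = np.random.default_rng(0)
A = rng.random((8, 8)); A = (A + A.T) / 2.0
np.fill_diagonal(A, 0.0)
X = rng.standard_normal((8, 4))
W = [rng.standard_normal((4, 3)) for _ in range(3)]
Y = cheb_conv(X, A, W)
```

In the GDN-GAN, the nodes would be EEG channels and the adjacency the functional-connectivity graph; frameworks such as PyTorch Geometric provide an equivalent trainable layer.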
Object Extraction in Cluttered Environments via a P300-Based IFCE
One of the fundamental issues for robot navigation is extracting an object of interest from an image. The biggest challenges are how to use a machine to model the objects a human is interested in and to extract them quickly and reliably under varying illumination conditions. This article develops a novel method for segmenting an object of interest in a cluttered environment by combining a P300-based brain-computer interface (BCI) and an improved fuzzy color extractor (IFCE). The induced P300 potential identifies the corresponding region of interest and obtains the target of interest for the IFCE. The classification results not only represent the human's intent but also deliver the associated seed pixel and fuzzy parameters used to extract the specific objects of interest; the IFCE is then used to extract the corresponding objects. The results show that the IFCE delivers better performance than a BP network or the traditional FCE. The P300-based IFCE provides a reliable solution for assisting a computer in identifying an object of interest within images taken under varying illumination intensities.
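A fuzzy color extractor of the kind described grows a segmentation from a seed pixel using fuzzy membership of colour similarity. The following is a minimal sketch under that assumption; the triangular membership function and its spread parameter are illustrative stand-ins, not the IFCE's actual formulation:

```python
# Hedged sketch of seed-based fuzzy colour extraction (illustrative only):
# a pixel belongs to the object if its colour is "close enough" to the seed
# colour under a triangular fuzzy membership.
import numpy as np

def fuzzy_membership(pixel, seed, spread=40.0):
    """Triangular membership in [0, 1] from per-channel colour distance;
    min over channels acts as a fuzzy AND."""
    dist = np.abs(np.asarray(pixel, float) - np.asarray(seed, float))
    return float(np.clip(1.0 - dist / spread, 0.0, 1.0).min())

def extract_mask(image, seed_xy, spread=40.0, cut=0.5):
    """Binary mask of pixels whose membership exceeds the cut level."""
    seed = image[seed_xy]
    mu = np.stack(
        [np.clip(1.0 - np.abs(image[..., c] - float(seed[c])) / spread, 0.0, 1.0)
         for c in range(image.shape[-1])], axis=-1).min(axis=-1)
    return mu >= cut

# Tiny demo: a 2x2 red-ish object on a black background.
img = np.zeros((6, 6, 3))
img[2:4, 2:4] = [200.0, 60.0, 60.0]
mask = extract_mask(img, (2, 2))
```

In the article's pipeline, the seed pixel and fuzzy parameters are supplied by the P300-based BCI classification rather than chosen by hand.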