
    RLEEGNet: Integrating Brain-Computer Interfaces with Adaptive AI for Intuitive Responsiveness and High-Accuracy Motor Imagery Classification

    Current approaches to prosthetic control are limited by their reliance on traditional methods, which lack real-time adaptability and intuitive responsiveness. These limitations are particularly pronounced in assistive technologies designed for individuals with diverse cognitive states and motor intentions. In this paper, we introduce a framework that leverages Reinforcement Learning (RL) with Deep Q-Networks (DQN) for classification tasks. Additionally, we present a preprocessing technique using the Common Spatial Pattern (CSP) for multiclass motor imagery (MI) classification in a One-Versus-The-Rest (OVR) manner. The subsequent 'CSP space' transformation retains the temporal dimension of EEG signals, which is crucial for extracting discriminative features. The integration of DQN with a 1D-CNN-LSTM architecture optimizes the decision-making process in real time, thereby enhancing the system's adaptability to the user's evolving needs and intentions. We elaborate on the data processing methods for two EEG motor imagery datasets. Our model, RLEEGNet, incorporates a 1D-CNN-LSTM architecture as the Online Q-Network within the DQN, facilitating continuous adaptation and optimization of control strategies through feedback. This mechanism allows the system to learn optimal actions through trial and error, progressively improving its performance. RLEEGNet demonstrates high accuracy in classifying MI-EEG signals, achieving up to 100% accuracy in MI tasks on both the GigaScience (3-class) and BCI-IV-2a (4-class) datasets. These results highlight the potential of combining DQN with a 1D-CNN-LSTM architecture to significantly enhance the adaptability and responsiveness of BCI systems.
    Comment: 23 pages, 1 figure, 6 tables
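    The abstract describes a 1D-CNN-LSTM network serving as the Online Q-Network of a DQN, operating on CSP-space inputs that keep the time dimension. Below is a minimal sketch of such a network in Python (PyTorch); the layer sizes, the number of CSP components, the window length, and the reading of class labels as DQN actions are illustrative assumptions, not the authors' exact design.

import torch
import torch.nn as nn

class CNNLSTMQNetwork(nn.Module):
    """Maps a CSP-space EEG window (components x time) to per-class Q-values."""
    def __init__(self, n_csp_components=8, n_classes=4, hidden=64):
        super().__init__()
        # 1D convolutions extract local temporal features per CSP component.
        self.conv = nn.Sequential(
            nn.Conv1d(n_csp_components, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # The LSTM models the longer-range temporal structure preserved by
        # the 'CSP space' transform, which retains the time dimension.
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        # One Q-value per class: here "actions" are class predictions.
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, csp_components, time)
        feats = self.conv(x)               # (batch, 64, time')
        feats = feats.transpose(1, 2)      # (batch, time', 64) for the LSTM
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])       # Q-values from the last time step

# Usage: Q-values for a batch of hypothetical 2-second windows at 250 Hz.
q_net = CNNLSTMQNetwork()
q_values = q_net(torch.randn(16, 8, 500))
print(q_values.shape)                      # torch.Size([16, 4])

    In a full DQN loop this network would be paired with a target network and trained on transitions from an experience replay buffer, with the reward derived from classification feedback, consistent with the trial-and-error learning the abstract describes.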

    The classification of wink-based EEG signals by means of transfer learning models

    Stroke is one of the dominant causes of impairment. An estimated half of post-stroke survivors suffer from severe motor or cognitive deterioration that affects the functionality of the affected parts of the body, which in turn prevents the patients from carrying out Activities of Daily Living (ADL). EEG signals contain information on the activities carried out by a human and are widely used in BCI technologies, which offer a means of controlling exoskeletons or automated orthoses to facilitate ADL. Although motor imagery signals have been used to assist the hand-grasping motion, amongst other motions, such signals are often difficult to generate. It is worth noting that other EEG-based signals, for instance winking, could mitigate the aforesaid issue. Nevertheless, extracting significant features from EEG signals is also somewhat challenging. The utilization of deep learning, particularly Transfer Learning (TL), has been demonstrated in the literature to provide seamless extraction of such features in a myriad of applications. Hitherto, limited studies have investigated the classification of wink-based EEG signals through TL accompanied by classical Machine Learning (ML) pipelines. This study aimed to explore the performance of different pre-processing methods, namely the Fast Fourier Transform, Short-Time Fourier Transform, Discrete Wavelet Transform, and Continuous Wavelet Transform (CWT), that allow TL models to extract features from the generated images and classify them through selected classical ML algorithms. These pre-processing methods were utilized to convert the digital signals into images of the right-wink, left-wink, and no-wink EEG signals collected from ten subjects (6 males and 4 females, aged between 22 and 29). The pre-processing algorithms were demonstrated to mitigate the noise arising in the winking signals without the need for signal-filtering algorithms. A new form of input, consisting of scalogram and spectrogram images that represent both the time and frequency domains, is then introduced for the classification of wink-based EEG signals. Different TL models were exploited to extract features from the transformed EEG signals. The extracted features were then classified through three classical ML models, namely Support Vector Machine, k-Nearest Neighbour (k-NN), and Random Forest, to determine the best pipeline for wink-based EEG signals. The hyperparameters of the ML models were tuned through a 5-fold cross-validation technique via an exhaustive grid-search approach. The training, validation, and testing data were split with a stratified ratio of 60:20:20, respectively. The results obtained from the TL-ML pipelines were evaluated in terms of classification accuracy, precision, recall, F1-score, and confusion matrix. The simulation investigation demonstrated that CWT yielded the best signal transformation amongst the pre-processing algorithms. In addition, amongst the eighteen TL models evaluated on the CWT transformation, fourteen were found to extract the features reasonably well, i.e., VGG16, VGG19, ResNet101, ResNet101V2, ResNet152, ResNet152V2, InceptionV3, InceptionResNetV2, Xception, MobileNetV2, DenseNet121, DenseNet169, NASNetMobile, and NASNetLarge. Whilst the optimized k-NN models based on the aforesaid pipelines achieved a classification accuracy of 100% on the training, validation, and test data, a robustness test on new data demonstrated that the CWT-NASNetMobile-k-NN pipeline yielded the best performance. Therefore, it can be concluded that the proposed CWT-NASNetMobile-k-NN pipeline is suitable for classifying wink-based EEG signals in BCI applications, for instance a grasping exoskeleton.
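    The following is a minimal Python sketch of the best-performing pipeline described above (CWT scalograms -> NASNetMobile features -> grid-searched k-NN), assuming PyWavelets, TensorFlow/Keras, and scikit-learn. The wavelet choice ('morl'), the scale range, the 224x224 image size, the k-NN hyperparameter grid, and the random placeholder data are all illustrative assumptions; the abstract does not give the thesis's exact settings.

import numpy as np
import pywt
import tensorflow as tf
from tensorflow.keras.applications import NASNetMobile
from tensorflow.keras.applications.nasnet import preprocess_input
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

def eeg_to_scalogram(segment, scales=np.arange(1, 65), wavelet="morl", size=224):
    """Continuous Wavelet Transform of a 1D EEG segment into an RGB scalogram."""
    coeffs, _ = pywt.cwt(segment, scales, wavelet)
    img = np.abs(coeffs).astype("float32")
    img = (img - img.min()) / (np.ptp(img) + 1e-9) * 255.0   # scale to [0, 255]
    # Resize to the CNN input size and replicate across RGB channels.
    img = tf.image.resize(img[..., np.newaxis], (size, size)).numpy()[..., 0]
    return np.stack([img] * 3, axis=-1)

# Pretrained NASNetMobile (ImageNet weights) as a frozen feature extractor.
extractor = NASNetMobile(weights="imagenet", include_top=False, pooling="avg")

def extract_features(segments):
    imgs = np.stack([eeg_to_scalogram(s) for s in segments])
    return extractor.predict(preprocess_input(imgs), verbose=0)

# Hypothetical data: 120 one-second EEG segments at 256 Hz, 3 wink classes
# (left wink, right wink, no wink).
X_raw = np.random.randn(120, 256)
y = np.random.randint(0, 3, size=120)

# Exhaustive grid search over k-NN hyperparameters with 5-fold
# cross-validation, mirroring the tuning strategy described above.
X = extract_features(X_raw)
grid = GridSearchCV(
    KNeighborsClassifier(),
    {"n_neighbors": [3, 5, 7, 9], "weights": ["uniform", "distance"]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)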