84 research outputs found

    Home Automation Using SSVEP & Eye-Blink Detection Based Brain-Computer Interface

    In this paper, we present a novel brain-computer interface (BCI) based home automation system using two responses, the steady-state visually evoked potential (SSVEP) and the eye-blink artifact, augmented by a Bluetooth-based indoor localization system to greatly increase the number of controllable devices. The hardware implementation of this system to control a table lamp and a table fan using brain signals is also discussed, and state-of-the-art results have been achieved. Comment: 2 pages, 1 table, published at IEEE SMC 201
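
    The abstract above does not name its SSVEP detection algorithm, so the following is only a minimal sketch of one standard approach, canonical correlation analysis (CCA) against sinusoidal references; the sampling rate and stimulus frequencies are illustrative assumptions, not values from the paper.

        # Sketch of SSVEP target detection via canonical correlation
        # analysis (CCA); all parameters here are assumptions, not
        # values taken from the paper.
        import numpy as np
        from sklearn.cross_decomposition import CCA

        FS = 256                   # assumed sampling rate (Hz)
        FREQS = [8, 10, 12, 15]    # hypothetical stimulus frequencies (Hz)
        N_HARMONICS = 2

        def reference_signals(freq, n_samples, fs, n_harmonics):
            """Sin/cos references at the stimulus frequency and its harmonics."""
            t = np.arange(n_samples) / fs
            refs = []
            for h in range(1, n_harmonics + 1):
                refs.append(np.sin(2 * np.pi * h * freq * t))
                refs.append(np.cos(2 * np.pi * h * freq * t))
            return np.column_stack(refs)

        def detect_ssvep(eeg_window):
            """eeg_window: (n_samples, n_channels) EEG segment.
            Returns the stimulus frequency with the highest CCA score."""
            scores = []
            for f in FREQS:
                refs = reference_signals(f, eeg_window.shape[0], FS, N_HARMONICS)
                x_c, y_c = CCA(n_components=1).fit_transform(eeg_window, refs)
                scores.append(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])
            return FREQS[int(np.argmax(scores))]

    A detected frequency would then be mapped to a device command (for example, toggling the lamp or fan), with the Bluetooth localization restricting the candidate devices to those near the user.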

    Signal Processing Using Non-invasive Physiological Sensors

    Non-invasive biomedical sensors monitor physiological parameters from the human body for potential future therapies and healthcare solutions. Today, a critical factor in providing a cost-effective healthcare system is improving patients' quality of life and mobility, which can be achieved by developing non-invasive sensor systems that can be deployed at the point of care, used at home, or integrated into wearable devices for long-term data collection. Another factor that plays an integral part in a cost-effective healthcare system is the signal processing of the data recorded with non-invasive biomedical sensors. In this book, we aimed to attract researchers who are interested in the application of signal processing methods to different biomedical signals, such as the electroencephalogram (EEG), electromyogram (EMG), functional near-infrared spectroscopy (fNIRS), electrocardiogram (ECG), galvanic skin response, pulse oximetry, and photoplethysmogram (PPG). We encouraged new signal processing methods, or novel applications of existing signal processing methods to physiological signals, to help healthcare providers make better decisions.

    A hybrid environment control system combining EMG and SSVEP signal based on brain-computer interface technology

    Patients impaired by neurodegenerative disorders cannot command their muscles through the usual neural pathways. Brain-Computer Interface (BCI) systems offer these patients an alternative to those pathways by making direct use of brain signals, without requiring any muscular or vocal activity. Nowadays, the steady-state visual evoked potential (SSVEP) modality offers a robust communication pathway for non-invasive BCIs. Several crucial factors, including the window length of the SSVEP response, the number of electrodes in the acquisition device, and system accuracy, determine the performance of any SSVEP-based BCI system. In this study, a real-time hybrid BCI system combining SSVEP and EMG is proposed for environmental control. Common spatial pattern (CSP) features were extracted from four classes of SSVEP responses, and the extracted features were classified using a K-nearest neighbors (k-NN) algorithm. The classification accuracy obtained over eight participants was 97.41%. Finally, a control mechanism intended for an environmental control system was also developed. The proposed system can identify 18 commands (16 control commands using SSVEP and two commands using EMG). This result represents very encouraging performance for a real-time SSVEP-based BCI system with a small number of electrodes. The proposed framework can offer a convenient user interface and a reliable control method for practical BCI technology.
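
    As a rough illustration of the CSP feature extraction and k-NN classification described above, here is a hedged sketch using MNE-Python and scikit-learn; the channel count, window length, number of CSP components, and k are assumptions rather than the study's reported settings.

        # Sketch of a CSP + k-NN pipeline for four-class SSVEP epochs;
        # the placeholder data and hyperparameters are illustrative only.
        import numpy as np
        from mne.decoding import CSP
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import Pipeline
        from sklearn.model_selection import cross_val_score

        # X: epoched EEG of shape (n_epochs, n_channels, n_samples);
        # y: one of four SSVEP stimulus classes per epoch.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((120, 8, 512))       # placeholder epochs
        y = np.repeat([0, 1, 2, 3], 30)

        clf = Pipeline([
            ("csp", CSP(n_components=4, log=True)),  # spatial filters + log-variance features
            ("knn", KNeighborsClassifier(n_neighbors=5)),
        ])
        print(cross_val_score(clf, X, y, cv=5).mean())

    In the study itself, the classifier output drives the environmental control commands; the cross-validated accuracy on placeholder data above simply demonstrates the shape of the pipeline.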

    Study of non-invasive cognitive tasks and feature extraction techniques for brain-computer interface (BCI) applications

    A brain-computer interface (BCI) provides an important alternative for disabled people, enabling a non-muscular communication pathway between individual thoughts and different assistive appliances. A BCI system essentially consists of data acquisition, pre-processing, feature extraction, classification, and device command. Despite the valuable and promising achievements already obtained in every component of BCI, the field is still relatively young, and much remains to be done before BCI becomes a mature technology. To mitigate these impediments, cognitive tasks have been studied together with EEG feature extraction and classification frameworks. Four distinct experiments were conducted to determine the optimum solution to these specific issues. In the first experiment, three cognitive tasks, namely quick math solving, relaxing, and playing games, were investigated. Features were extracted using power spectral density (PSD), log-energy entropy, and spectral centroid, and the extracted features were classified with a support vector machine (SVM), K-nearest neighbors (K-NN), and linear discriminant analysis (LDA). In this experiment, the best classification accuracies for the single-channel and five-channel datasets were 86% and 91.66% respectively, both obtained by the PSD-SVM approach. Wink-based facial expressions, namely left wink, right wink, and no wink, were studied using fast Fourier transform (FFT) and sample-range features, and the extracted features were classified using SVM, K-NN, and LDA; the best accuracy (98.6%) was achieved by the sample range-SVM approach. Eye-blink-based facial expressions were investigated following the same methodology as the wink study. Moreover, a peak-detection approach was employed to count the number of blinks, achieving an optimum accuracy of 99%. Additionally, two-class motor imagery hand movement was classified using SVM, K-NN, and LDA, with features extracted through PSD, spectral centroid, and the continuous wavelet transform (CWT); the optimum accuracy of 74.7% was achieved by the PSD-SVM approach. Finally, two device-command prototypes were designed to translate the classifier output: one translates four types of cognitive tasks into the switching of four differently colored 5 W bulbs, while the other can control a DC motor using cognitive tasks. This study delineates the implementation of every BCI component to facilitate brainwave-assisted assistive appliances. The thesis closes by drawing future directions regarding the current issues of BCI technology; these directions may significantly enhance usability for commercial applications, not only for the disabled but also for a significant number of healthy users.
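
    Since the PSD-SVM combination performed best in two of the experiments above, the following is a minimal sketch of such a pipeline using Welch's method for the PSD; the sampling rate, frequency bands, and SVM settings are illustrative assumptions, not the thesis's exact configuration.

        # Sketch of a PSD + SVM pipeline; bands, rate and kernel are assumptions.
        import numpy as np
        from scipy.signal import welch
        from sklearn.svm import SVC

        FS = 256                                             # assumed sampling rate (Hz)
        BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

        def psd_features(epoch):
            """epoch: (n_channels, n_samples) -> band-power feature vector."""
            freqs, pxx = welch(epoch, fs=FS, nperseg=FS)     # PSD per channel
            feats = [pxx[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                     for lo, hi in BANDS.values()]           # mean power per band
            return np.concatenate(feats)

        # Hypothetical usage: five-channel epochs labelled by cognitive task
        rng = np.random.default_rng(1)
        epochs = rng.standard_normal((90, 5, 512))           # placeholder data
        labels = np.repeat([0, 1, 2], 30)                    # math / relaxed / gaming
        X = np.array([psd_features(e) for e in epochs])
        clf = SVC(kernel="rbf").fit(X, labels)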

    Machine Learning-Based Classification of Hybrid BCI Signals using Mayfly-Optimized Multiclass Weighted Random Forest

    Brain-Computer Interface (BCI) technologies have excellent clinical and non-clinical uses. Among the most popular imaging methods adopted in BCI technologies is electroencephalography (EEG). EEG signals, however, are typically quite complicated, so analyzing them requires significant effort. With the help of machine learning (ML), this research investigates the feasibility of a BCI platform based on the motor imagery (MI) concept. The steps of pre-processing, feature extraction, and classification underpin any conventional ML model; training such a model, however, requires a large amount of data. To address this gap, this work introduces a new mayfly-optimized multiclass weighted random forest (MFO-MWRF) technique that uses the retrieved features as input to mitigate the need for this supplementary data. In this study, we gather a dataset of hybrid EEG and fNIRS motor imagery recordings, which are pre-processed with a Wiener filter (WF) to suppress noise without degrading the high-quality signals. Features are extracted using the discrete wavelet transform (DWT). The results indicate that the proposed approach achieves the best performance compared to existing approaches for classifying motor imagery signals.
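
    The following is a rough sketch of the DWT feature extraction feeding a class-weighted random forest; the paper's mayfly optimization of hyperparameters is replaced here by fixed values, and the wavelet choice, decomposition depth, and forest size are assumptions.

        # Sketch of DWT features feeding a class-weighted random forest;
        # the mayfly hyperparameter optimization is not reproduced here.
        import numpy as np
        import pywt
        from sklearn.ensemble import RandomForestClassifier

        def dwt_features(signal, wavelet="db4", level=4):
            """Summary statistics of each DWT sub-band of a 1-D signal."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            return np.array([stat for c in coeffs
                             for stat in (c.mean(), c.std(), np.abs(c).max())])

        # Hypothetical usage on placeholder single-channel MI epochs
        rng = np.random.default_rng(2)
        epochs = rng.standard_normal((80, 1024))
        labels = np.repeat([0, 1], 40)
        X = np.array([dwt_features(e) for e in epochs])
        clf = RandomForestClassifier(n_estimators=200,
                                     class_weight="balanced").fit(X, labels)

    The "balanced" class weighting stands in for the paper's weighted-forest scheme; in the published method those weights, along with the forest hyperparameters, are tuned by the mayfly optimizer.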

    Brain-computer interface for robot control with eye artifacts for assistive applications

    Human-robot interaction is a rapidly developing field, and robots have been taking more active roles in our daily lives. Patient care is one of the fields in which robots are becoming more present, especially for people with disabilities. People with neurodegenerative disorders might not consciously or voluntarily produce movements other than those involving the eyes or eyelids. In this context, Brain-Computer Interface (BCI) systems present an alternative way to communicate or interact with the external world. In order to improve the lives of people with disabilities, this paper presents a novel BCI to control an assistive robot with the user's eye artifacts. In this study, the eye artifacts that contaminate electroencephalogram (EEG) signals are treated as a valuable source of information thanks to their high signal-to-noise ratio and intentional generation. The proposed methodology detects eye artifacts from EEG signals through the characteristic shapes that occur during these events. Lateral movements are distinguished by their ordered peak-and-valley formation and the opposite phase of the signals measured at the F7 and F8 channels; to the best of the authors' knowledge, this is the first method to use this behavior to detect lateral eye movements. For blink detection, the authors propose a double-thresholding method to catch weak blinks as well as regular ones, differentiating it from other algorithms in the literature, which normally use only one threshold. Events detected in real time, with their virtual time stamps, are fed into a second algorithm that further distinguishes double and quadruple blinks from single blinks based on occurrence frequency. After testing the algorithm offline and in real time, it is implemented on the device. The created BCI was used to control an assistive robot through a graphical user interface. Validation experiments involving 5 participants prove that the developed BCI is able to control the robot.
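
    As an illustration of the double-thresholding idea described above, here is a minimal sketch; the concrete thresholds, minimum peak spacing, and channel choice are assumptions, not the authors' published parameters.

        # Sketch of double-threshold blink detection on a frontal channel;
        # thresholds and minimum spacing are hypothetical values.
        import numpy as np
        from scipy.signal import find_peaks

        FS = 256              # assumed sampling rate (Hz)
        HIGH_THR = 100e-6     # hypothetical amplitude for regular blinks (V)
        LOW_THR = 40e-6       # hypothetical amplitude for weak blinks (V)

        def detect_blinks(frontal_eeg):
            """Return (sample_index, 'regular' | 'weak') for each blink.
            frontal_eeg: 1-D EEG from a frontal channel such as Fp1."""
            # One pass with the low threshold finds all candidates; the
            # high threshold then separates regular blinks from weak ones.
            peaks, props = find_peaks(frontal_eeg, height=LOW_THR,
                                      distance=int(0.3 * FS))  # >= 300 ms apart
            return [(i, "regular" if h >= HIGH_THR else "weak")
                    for i, h in zip(peaks, props["peak_heights"])]

    Detected blinks, with their time stamps, could then be passed to the second-stage algorithm mentioned above, which groups them into single, double, or quadruple blinks by their occurrence frequency.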