42 research outputs found

    Study of non-invasive cognitive tasks and feature extraction techniques for brain-computer interface (BCI) applications

    A brain-computer interface (BCI) provides an important alternative for disabled people by enabling a non-muscular communication pathway between individual thoughts and different assistive appliances. A BCI system essentially consists of data acquisition, pre-processing, feature extraction, classification and device command. Despite the valuable and promising achievements already obtained in every component of BCI, the field is still relatively young and much remains to be done before BCI becomes a mature technology. To mitigate these impediments, cognitive tasks together with EEG feature extraction and classification frameworks have been investigated, and four distinct experiments have been conducted to determine the optimum solution to those specific issues. In the first experiment, three cognitive tasks, namely quick math solving, a relaxed state and playing games, have been investigated. Features have been extracted using power spectral density (PSD), log-energy entropy and spectral centroid, and the extracted features have been classified with a support vector machine (SVM), K-nearest neighbor (K-NN) and linear discriminant analysis (LDA). In this experiment, the best classification accuracies for the single-channel and five-channel datasets were 86% and 91.66% respectively, both obtained by the PSD-SVM approach. Wink-based facial expressions, namely left wink, right wink and no wink, have been studied using fast Fourier transform (FFT) and sample-range features, and the extracted features have been classified using SVM, K-NN and LDA; the best accuracy (98.6%) has been achieved by the sample range-SVM approach. Eye-blinking-based facial expressions have been investigated following the same methodology as the wink study. Moreover, a peak detection approach has also been employed to count the number of blinks, reaching an optimum accuracy of 99%. Additionally, two-class motor imagery hand movement has been classified using SVM, K-NN and LDA, with features extracted through PSD, spectral centroid and the continuous wavelet transform (CWT); the optimum accuracy of 74.7% has been achieved by the PSD-SVM approach. Finally, two device-command prototypes have been designed to translate the classifier output: one translates four types of cognitive tasks into four differently colored 5 W bulbs, while the other controls a DC motor using cognitive tasks. This study has delineated the implementation of every BCI component to facilitate brainwave-assisted assistive appliances. The thesis closes by drawing future directions regarding the current issues of BCI technology, directions that may significantly enhance usability for commercial applications, not only for disabled users but also for a significant number of healthy users.
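
    As a rough illustration of the kind of pipeline this abstract describes (Welch PSD features fed to an SVM), the sketch below uses scipy and scikit-learn as stand-in libraries; the epochs array, labels, 256 Hz sampling rate and hyperparameters are hypothetical placeholders, not the thesis's actual data or code.

        import numpy as np
        from scipy.signal import welch
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        FS = 256  # assumed sampling rate in Hz (placeholder)

        def psd_features(epochs, fs=FS, fmax=40.0):
            """Welch PSD per channel, keeping bins up to fmax Hz as features."""
            freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
            keep = freqs <= fmax
            # Flatten (channels x frequency bins) into one vector per trial.
            return psd[..., keep].reshape(len(epochs), -1)

        # Hypothetical data: 60 trials, 5 channels, 2 s windows, 3 task labels.
        rng = np.random.default_rng(0)
        epochs = rng.standard_normal((60, 5, 2 * FS))
        labels = rng.integers(0, 3, size=60)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        scores = cross_val_score(clf, psd_features(epochs), labels, cv=5)
        print("Cross-validated accuracy:", scores.mean())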

    A hybrid environment control system combining EMG and SSVEP signal based on brain-computer interface technology

    Patients impaired by neurodegenerative disorders cannot command their muscles through the usual neural pathways. Brain-Computer Interface (BCI) systems offer these patients an alternative to that neural path by making direct use of brain signals without requiring any peripheral nerve or muscle activity. Nowadays, the steady-state visual evoked potential (SSVEP) modality offers a robust communication pathway for a non-invasive BCI. Several crucial factors, including the window length of the SSVEP response, the number of electrodes in the acquisition device and the system accuracy, are critical performance components in any SSVEP-based BCI system. In this study, a real-time hybrid BCI system combining SSVEP and EMG has been proposed for an environmental control system. Common spatial pattern (CSP) features have been extracted from four classes of SSVEP responses, and the extracted features have been classified using a K-nearest neighbors (k-NN) algorithm. The classification accuracy obtained across eight participants was 97.41%. Finally, a control mechanism aimed at the environmental control system has also been developed. The proposed system can identify 18 commands (i.e., 16 control commands using SSVEP and two commands using EMG). This result represents very encouraging performance for a real-time SSVEP-based BCI system that uses a small number of electrodes. The proposed framework can offer a convenient user interface and a reliable control method for realistic BCI technology.
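
    The CSP-plus-k-NN stage could be sketched as follows, using MNE-Python and scikit-learn as stand-in libraries. The trial array, the 250 Hz rate, the eight channels and the two-class toy labels are all hypothetical (the paper's system discriminates four SSVEP classes plus an EMG switch), so this is only a minimal sketch of the feature-extraction and classification idea, not the paper's pipeline.

        import numpy as np
        from mne.decoding import CSP
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        # Hypothetical SSVEP epochs: 80 trials, 8 channels, 2 s at 250 Hz.
        rng = np.random.default_rng(1)
        X = rng.standard_normal((80, 8, 500))
        y = rng.integers(0, 2, size=80)  # two targets for simplicity

        # CSP learns spatial filters that maximize between-class variance
        # differences; the log-variance of the filtered signals is the feature.
        pipe = make_pipeline(CSP(n_components=4, log=True),
                             KNeighborsClassifier(n_neighbors=5))
        print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())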

    A brain-machine interface for assistive robotic control

    Brain-machine interfaces (BMIs) are the only currently viable means of communication for many individuals suffering from locked-in syndrome (LIS), profound paralysis that results in severely limited or total loss of voluntary motor control. By inferring user intent from task-modulated neurological signals and then translating those intentions into actions, BMIs can afford LIS patients increased autonomy. Significant effort has been devoted to developing BMIs over the last three decades, but only recently have the combined advances in hardware, software, and methodology provided a setting to realize the translation of this research from the lab into practical, real-world applications. Non-invasive methods, such as those based on the electroencephalogram (EEG), offer the only feasible solution for practical use at the moment, but suffer from limited communication rates and susceptibility to environmental noise. Maximization of the efficacy of each decoded intention, therefore, is critical. This thesis addresses the challenge of implementing a BMI intended for practical use with a focus on an autonomous assistive robot application. First, an adaptive EEG-based BMI strategy is developed that relies upon code-modulated visual evoked potentials (c-VEPs) to infer user intent. As voluntary gaze control is typically not available to LIS patients, c-VEP decoding methods under both gaze-dependent and gaze-independent scenarios are explored. Adaptive decoding strategies in both offline and online task conditions are evaluated, and a novel approach to assess ongoing online BMI performance is introduced. Next, an adaptive neural network-based system for assistive robot control is presented that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. Exploratory learning, or “learning by doing,” is an unsupervised method in which the robot is able to build an internal model for motor planning and coordination based on real-time sensory inputs received during exploration. Finally, a software platform intended for practical BMI application use is developed and evaluated. Using online c-VEP methods, users control a simple 2D cursor control game, a basic augmentative and alternative communication tool, and an assistive robot, both manually and via high-level goal-oriented commands.
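
    For readers unfamiliar with c-VEP decoding, a common baseline (distinct from the adaptive strategies this thesis develops) is template matching against circularly shifted copies of a single pseudo-random code response, since each target flickers with the same sequence at a different lag. The Python sketch below is purely illustrative; the template, lag spacing, and trial are made-up placeholders.

        import numpy as np

        def classify_cvep(trial, template, n_targets, shift):
            """Pick the target whose circularly shifted template best
            correlates with the (spatially filtered) EEG trial."""
            scores = []
            for k in range(n_targets):
                shifted = np.roll(template, k * shift)
                scores.append(np.corrcoef(trial, shifted)[0, 1])
            return int(np.argmax(scores)), scores

        # Toy demonstration: a noisy, shifted copy of the template is
        # decoded as target 3.
        rng = np.random.default_rng(2)
        template = rng.choice([-1.0, 1.0], size=504)  # pseudo-random code response
        trial = np.roll(template, 3 * 8) + 0.5 * rng.standard_normal(504)
        print(classify_cvep(trial, template, n_targets=32, shift=8)[0])  # -> 3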

    Defining brain–machine interface applications by matching interface performance with device requirements

    Interaction with machines is mediated by human-machine interfaces (HMIs). Brain-machine interfaces (BMIs) are a particular class of HMIs and have so far been studied as a communication means for people who have little or no voluntary control of muscle activity. In this context, even low-performing interfaces are useful, and such uses can be considered prosthetic applications. On the other hand, for able-bodied users, a BMI would only be practical if conceived as an augmenting interface. In this paper, a method is introduced for pointing out effective combinations of interfaces and devices for creating real-world applications. First, devices for domotics, rehabilitation and assistive robotics, and their requirements in terms of throughput and latency, are described. Second, HMIs are classified and their performance described, again in terms of throughput and latency. Then device requirements are matched with the performance of available interfaces. Simple rehabilitation and domotics devices can be easily controlled by means of BMI technology. Prosthetic hands and wheelchairs are suitable applications but do not attain optimal interactivity. Regarding humanoid robotics, the head and the trunk can be controlled by means of BMIs, while other parts require too much throughput. Robotic arms, which have been controlled by means of cortical invasive interfaces in animal studies, could be the next frontier for non-invasive BMIs. Combining smart controllers with BMIs could improve interactivity and boost BMI applications. © 2007 Elsevier B.V. All rights reserved.
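
    The matching exercise the paper describes can be pictured as a simple compatibility check between device requirements and interface performance, both expressed as throughput and latency. The sketch below is only illustrative; the interface and device numbers are invented placeholders, not figures from the paper.

        from dataclasses import dataclass

        @dataclass
        class Interface:
            name: str
            throughput_bps: float  # information transfer rate, bits/s
            latency_s: float       # time to issue one command, s

        @dataclass
        class Device:
            name: str
            min_throughput_bps: float
            max_latency_s: float

        def compatible(iface: Interface, dev: Device) -> bool:
            """An interface can drive a device if it is both fast and
            responsive enough for that device's requirements."""
            return (iface.throughput_bps >= dev.min_throughput_bps
                    and iface.latency_s <= dev.max_latency_s)

        interfaces = [Interface("non-invasive BMI", 0.5, 4.0),
                      Interface("joystick", 20.0, 0.2)]
        devices = [Device("domotics switch", 0.2, 10.0),
                   Device("robotic arm", 5.0, 0.5)]

        for dev in devices:
            ok = [i.name for i in interfaces if compatible(i, dev)]
            print(dev.name, "->", ok or ["no suitable interface"])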