38 research outputs found

    Subject-independent P300 BCI using ensemble classifier, dynamic stopping and adaptive learning

    © 2017 IEEE. Brain-computer interfaces (BCIs) assist people, especially those with verbal or physical disabilities, in communicating with a computer to indicate selections, control a device, or answer questions using only their thoughts. Due to the noisy nature of brain signals, each experimental session must be lengthened to reach satisfactory accuracy; this is the trade-off between the speed and the precision of a BCI system. In this paper, we propose a unified method that integrates an ensemble classifier, dynamic stopping, and adaptive learning. We are able both to increase the accuracy and to reduce the spelling time of the P300 speller. Another merit of our study is that it requires no training phase for a new subject, eliminating an extensively time-consuming learning process. Experimental results show an average bit-rate improvement of 182% across 15 subjects. Our best accuracy is 95.95% using 7.49 flashing iterations, and our best bit rate is 40.87 bits/min with 83.99% accuracy and 3.64 iterations. To the best of our knowledge, these results outperform most related P300-based BCI studies.
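    The dynamic-stopping idea above can be sketched in a few lines: classifier scores are accumulated over flashing iterations, and the trial ends early once the leading character is sufficiently ahead of the runner-up. The margin threshold, the 36-character (6x6) matrix, and the iteration cap below are illustrative assumptions, not the authors' exact stopping rule.

        # Hedged sketch of dynamic stopping for a P300 speller (Python/NumPy).
        import numpy as np

        def dynamic_stopping(scores_per_iteration, margin=1.5, max_iterations=15):
            """scores_per_iteration yields, for each flashing iteration, a
            length-36 array of classifier scores (one per matrix character)."""
            accumulated = np.zeros(36)
            iterations_used = 0
            for scores in scores_per_iteration:
                iterations_used += 1
                accumulated += scores
                runner_up, leader = np.sort(accumulated)[-2:]
                # Stop as soon as the leader's margin is large enough.
                if leader - runner_up >= margin or iterations_used >= max_iterations:
                    break
            return int(np.argmax(accumulated)), iterations_used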

    Subject-Independent ERP-Based Brain-Computer Interfaces

    © 2001-2011 IEEE. Brain-computer interfaces (BCIs) enable people, especially those with profound communication disabilities, to express their thoughts. Classifying brain patterns for each subject normally requires an extensively time-consuming learning stage specific to that person in order to reach satisfactory accuracy. The training session can also be infeasible for disabled patients, as they may not fully understand the training instructions. In this paper, we propose a unified classification scheme based on an ensemble classifier, dynamic stopping, and adaptive learning. We apply this scheme to a P300-based BCI in a subject-independent manner, where no learning session is required for new experimental users. According to our theoretical analysis and empirical results, the harmonized integration of these three methods significantly boosts the average accuracy from 75.00% to 91.26%, while reducing the average spelling time from 12.62 to 6.78 iterations, approximately two-fold faster. The experiments were conducted on a large public dataset that has been used in other related studies, and direct comparisons with those studies are reported in detail.
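    As a rough illustration of the subject-independent aspect, an ensemble can be built from one classifier per previously recorded subject, with decision scores averaged for a new, unseen user so that no training session is needed from that user. The linear SVM members and their settings below are assumptions for the sketch, not the paper's exact classifiers.

        # Minimal subject-independent ensemble sketch (scikit-learn).
        import numpy as np
        from sklearn.svm import LinearSVC

        class SubjectIndependentEnsemble:
            def __init__(self):
                self.members = []

            def fit(self, per_subject_data):
                # per_subject_data: list of (X, y) pairs, one per existing subject.
                for X, y in per_subject_data:
                    self.members.append(LinearSVC(C=0.01).fit(X, y))
                return self

            def decision_scores(self, X_new):
                # Average member decision functions; the new subject contributes
                # no training data of their own.
                return np.mean([m.decision_function(X_new) for m in self.members], axis=0)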

    Support vector machines to detect physiological patterns for EEG and EMG-based human-computer interaction: a review

    Support vector machines (SVMs) are widely used classifiers for detecting physiological patterns in human-computer interaction (HCI). Their success is due to their versatility, robustness, and the wide availability of free dedicated toolboxes. The literature frequently reports insufficient details about the SVM implementation and/or parameter selection, making it impossible to reproduce the analysis and results. In order to perform an optimized classification and report a proper description of the results, a comprehensive critical overview of SVM applications is necessary. The aim of this paper is to review the use of SVMs in determining brain and muscle patterns for HCI, focusing on electroencephalography (EEG) and electromyography (EMG) techniques. In particular, the basic principles of SVM theory are outlined, together with a description of several relevant literature implementations. Furthermore, details of the reviewed papers are listed in tables, and statistics on SVM use in the literature are presented. The suitability of SVMs for HCI is discussed and critical comparisons with other classifiers are reported.
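    The kind of implementation detail the review asks authors to report (kernel, regularization, feature scaling, validation scheme) can be captured in a few lines of scikit-learn; the feature matrix, parameter grid, and cross-validation settings below are placeholders for illustration only.

        # Illustrative EEG/EMG SVM pipeline with explicitly reported parameters.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC
        from sklearn.model_selection import GridSearchCV, StratifiedKFold

        X = np.random.randn(200, 32)             # placeholder feature matrix (trials x features)
        y = np.random.randint(0, 2, size=200)    # placeholder binary labels

        pipeline = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        grid = {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 1e-2, 1e-3]}
        search = GridSearchCV(pipeline, grid,
                              cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
        search.fit(X, y)
        print(search.best_params_, search.best_score_)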

    Classification of Frequency and Phase Encoded Steady State Visual Evoked Potentials for Brain Computer Interface Speller Applications using Convolutional Neural Networks

    Over the past decade there have been substantial improvements in vision-based Brain-Computer Interface (BCI) spellers for quadriplegic patient populations. This thesis contains a review of the numerous bio-signals available to BCI researchers, as well as a brief chronology of the foremost decoding methodologies used to date. Recent advances in classification accuracy and information transfer rate can be attributed primarily to time-consuming, patient-specific parameter optimization procedures. The aim of the current study was to develop analysis software with potential ‘plug-in-and-play’ functionality. To this end, convolutional neural networks, presently established as state-of-the-art analytical techniques for image processing, were utilized. The thesis defines a deep convolutional neural network architecture for the offline classification of phase- and frequency-encoded SSVEP bio-signals. Networks were trained using an extensive 35-participant open-source electroencephalographic (EEG) benchmark dataset (Department of Biomedical Engineering, Tsinghua University, Beijing). Average classification accuracies of 82.24% and information transfer rates of 22.22 bits/min were achieved on a BCI-naïve participant dataset for a 40-target alphanumeric display, in the absence of any patient-specific parameter optimization.
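    A compact convolutional network of the kind described can be sketched in PyTorch; the layer sizes and the assumed input shape (9 EEG channels x 250 samples per trial) are illustrative assumptions and do not reproduce the thesis architecture.

        # Hedged sketch of a CNN for 40-class SSVEP classification (PyTorch).
        import torch
        import torch.nn as nn

        class SSVEPNet(nn.Module):
            def __init__(self, n_channels=9, n_samples=250, n_classes=40):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=(n_channels, 1)),             # spatial filter across channels
                    nn.ReLU(),
                    nn.Conv2d(16, 16, kernel_size=(1, 25), padding=(0, 12)),   # temporal filter
                    nn.ReLU(),
                    nn.AvgPool2d(kernel_size=(1, 5)),
                    nn.Flatten(),
                )
                self.classifier = nn.Linear(16 * (n_samples // 5), n_classes)

            def forward(self, x):        # x: (batch, 1, n_channels, n_samples)
                return self.classifier(self.features(x))

        logits = SSVEPNet()(torch.randn(4, 1, 9, 250))   # -> shape (4, 40)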

    EEG-based brain-computer interfaces using motor-imagery: techniques and challenges.

    Electroencephalography (EEG)-based brain-computer interfaces (BCIs), particularly those using motor-imagery (MI) data, have the potential to become groundbreaking technologies in both clinical and entertainment settings. MI data is generated when a subject imagines the movement of a limb. This paper reviews state-of-the-art signal processing techniques for MI EEG-based BCIs, with a particular focus on the feature extraction, feature selection, and classification techniques used. It also summarizes the main applications of EEG-based BCIs, particularly those based on MI data, and finally presents a detailed discussion of the most prevalent challenges impeding the development and commercialization of EEG-based BCIs.
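    A representative motor-imagery baseline covered by such reviews is Common Spatial Patterns (CSP) for feature extraction followed by linear discriminant analysis for classification; the sketch below uses MNE-Python and scikit-learn on placeholder epoched data, with the epoch dimensions chosen only for illustration.

        # CSP + LDA motor-imagery pipeline sketch (MNE-Python, scikit-learn).
        import numpy as np
        from mne.decoding import CSP
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.pipeline import Pipeline
        from sklearn.model_selection import cross_val_score

        epochs = np.random.randn(80, 22, 500)         # trials x channels x samples (placeholder)
        labels = np.random.randint(0, 2, size=80)     # e.g. left- vs right-hand imagery

        clf = Pipeline([
            ("csp", CSP(n_components=4, log=True)),   # spatial filters, log-variance features
            ("lda", LinearDiscriminantAnalysis()),
        ])
        print(cross_val_score(clf, epochs, labels, cv=5).mean())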

    Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges

    In recent years, new research has brought the field of EEG-based Brain-Computer Interfacing (BCI) out of its infancy and into a phase of relative maturity through many demonstrated prototypes such as brain-controlled wheelchairs, keyboards, and computer games. With this proof-of-concept phase in the past, the time is now ripe to focus on the development of practical BCI technologies that can be brought out of the lab and into real-world applications. In particular, we focus on the prospect of improving the lives of countless disabled individuals by combining BCI technology with existing assistive technologies (AT). In pursuit of more practical BCIs for use outside of the lab, we identify four application areas where disabled individuals could greatly benefit from advancements in BCI technology, namely “Communication and Control”, “Motor Substitution”, “Entertainment”, and “Motor Recovery”. We review the current state of the art and possible future developments while discussing the main research issues in these four areas. In particular, we expect the most progress in the development of technologies such as hybrid BCI architectures, user-machine adaptation algorithms, the exploitation of users’ mental states for BCI reliability and confidence measures, the incorporation of human-computer interaction (HCI) principles to improve BCI usability, and the development of novel BCI technology including better EEG devices.

    A Brain Controlled Wheelchair to Navigate in Familiar Environments

    Ph.D. thesis (Doctor of Philosophy)

    Advanced Biometrics with Deep Learning

    Biometrics such as fingerprint, iris, face, hand print, hand vein, speech, and gait recognition have become commonplace as a means of identity management in a variety of applications. Biometric systems typically follow a pipeline composed of separate preprocessing, feature extraction, and classification stages. Deep learning, as a data-driven representation learning approach, has been shown to be a promising alternative to conventional data-agnostic, handcrafted preprocessing and feature extraction for biometric systems. Furthermore, deep learning offers an end-to-end learning paradigm that unifies preprocessing, feature extraction, and recognition based solely on biometric data. This Special Issue has collected 12 high-quality, state-of-the-art research papers that deal with challenging issues in advanced biometric systems based on deep learning. The 12 papers can be divided into four categories according to biometric modality: face biometrics, medical electronic signals (EEG and ECG), voice print, and others.
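    The end-to-end paradigm mentioned above can be illustrated with a single network that maps a raw biometric signal directly to identity logits, replacing separate handcrafted preprocessing and feature extraction; the one-dimensional input, layer sizes, and subject count below are assumptions for the sketch, not any of the Special Issue systems.

        # Hedged end-to-end biometric recognition sketch (PyTorch).
        import torch
        import torch.nn as nn

        class EndToEndBiometricNet(nn.Module):
            def __init__(self, n_subjects=50):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv1d(1, 32, kernel_size=15, stride=2), nn.ReLU(),
                    nn.Conv1d(32, 64, kernel_size=15, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                    nn.Linear(64, n_subjects),
                )

            def forward(self, x):        # x: (batch, 1, signal_length), e.g. a raw ECG segment
                return self.net(x)

        logits = EndToEndBiometricNet()(torch.randn(8, 1, 1000))   # -> shape (8, 50)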