
    Wavelets and ensemble of FLDs for P300 classification

    Over the last few years, various P300 classification algorithms have been assessed using the P300 data provided by the Wadsworth Center for brain-computer interface (BCI) competitions II and III. In this paper, a novel method of P300 classification is presented and compared to the state-of-the-art results obtained for BCI competition II data set IIb and BCI competition III data set II. The novel classification method combines discrete wavelet transform (DWT) preprocessing with an ensemble of Fisher's linear discriminants (FLDs) for classification. The performance of the proposed method matches that of the state-of-the-art method on the BCI competition II data set and is only slightly worse on the BCI competition III data set. Furthermore, the proposed method is far less computationally expensive than the current state-of-the-art method and could be modified for adaptive behavior in an online system.
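    A minimal sketch of this kind of pipeline is given below, assuming PyWavelets and scikit-learn. The wavelet family, decomposition level, bootstrap-bagging scheme, and the helper names dwt_features and FLDEnsemble are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of a DWT + ensemble-of-FLDs pipeline for P300 vs. non-P300 epochs.
# Assumptions (not from the paper): Daubechies-4 wavelet, 3 decomposition
# levels, and a bagged ensemble of LDA classifiers whose scores are averaged.
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis


def dwt_features(epochs, wavelet="db4", level=3):
    """epochs: (n_epochs, n_channels, n_samples) -> (n_epochs, n_features)."""
    feats = []
    for epoch in epochs:
        # Decompose each channel and concatenate all coefficient vectors.
        coeffs = [pywt.wavedec(ch, wavelet, level=level) for ch in epoch]
        feats.append(np.concatenate([np.concatenate(c) for c in coeffs]))
    return np.array(feats)


class FLDEnsemble:
    """Bag of Fisher's linear discriminants; decisions use the mean score."""

    def __init__(self, n_members=10, seed=0):
        self.n_members = n_members
        self.rng = np.random.default_rng(seed)
        self.members = []

    def fit(self, X, y):
        # X, y must be NumPy arrays; each member sees a bootstrap resample.
        n = len(X)
        for _ in range(self.n_members):
            idx = self.rng.choice(n, size=n, replace=True)
            self.members.append(LinearDiscriminantAnalysis().fit(X[idx], y[idx]))
        return self

    def decision_function(self, X):
        # Average the members' discriminant scores.
        return np.mean([m.decision_function(X) for m in self.members], axis=0)

    def predict(self, X):
        return (self.decision_function(X) > 0).astype(int)
```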

    Visual modifications on the P300 speller BCI paradigm

    The best known P300 speller brain-computer interface (BCI) paradigm is the Farwell and Donchin paradigm. In this paper, various changes to the visual aspects of this protocol are explored, along with their effects on classification. Changes to the dimensions of the symbols, the distance between the symbols and the colours used were tested. The purpose of the present work was not to achieve the highest possible accuracy results, but to ascertain whether these simple modifications to the visual protocol produce differences in classification performance and, if so, what those differences are. Eight subjects took part, with each subject carrying out a total of six different experiments. In each experiment, the user spelt a total of 39 characters. Two types of classifiers were trained and tested to determine whether the results were classifier dependent: a support vector machine (SVM) with a radial basis function (RBF) kernel and Fisher's linear discriminant (FLD). Single-trial and multiple-trial classification results were recorded and compared. Although no visual protocol was the best for all subjects, the best performances, across both classifiers, were obtained with the white background (WB) visual protocol. The worst performance was obtained with the small symbol size (SSS) visual protocol. © 2009 IOP Publishing Ltd
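    As a rough illustration of the classifier comparison described above, the sketch below trains an RBF-kernel SVM and a Fisher's linear discriminant (LDA in scikit-learn) on the same single-trial feature matrix. The feature data, the cross-validation scheme, and the helper name compare_classifiers are placeholders, not the paper's exact setup.

```python
# Hypothetical comparison of the two classifiers named in the abstract:
# an RBF-kernel SVM and Fisher's linear discriminant (LDA in scikit-learn).
# Feature extraction and the cross-validation split are illustrative only.
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score


def compare_classifiers(X, y, folds=5):
    """X: (n_epochs, n_features) single-trial features, y: P300/no-P300 labels."""
    svm = SVC(kernel="rbf", C=1.0, gamma="scale")
    fld = LinearDiscriminantAnalysis()
    svm_acc = cross_val_score(svm, X, y, cv=folds).mean()
    fld_acc = cross_val_score(fld, X, y, cv=folds).mean()
    return {"SVM-RBF": svm_acc, "FLD": fld_acc}


# Example call with random data, just to show the interface.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))
y = rng.integers(0, 2, 200)
print(compare_classifiers(X, y))
```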

    Classification effects of real and imaginary movement selective attention tasks on a P300-based brain-computer interface

    Brain-computer interfaces (BCIs) rely on various electroencephalography methodologies that allow the user to convey their desired control to the machine. Common approaches include the use of event-related potentials (ERPs) such as the P300 and modulation of the beta and mu rhythms. All of these methods have their benefits and drawbacks. In this paper, three different selective attention tasks were tested in conjunction with a P300-based protocol: the standard counting of target stimuli, as well as the performance of real and imaginary movements in sync with the target stimuli. The three tasks were performed by 10 participants, the majority of whom (7 out of 10) had never before taken part in imaginary-movement BCI experiments. The channels and methods used were optimized for the P300 ERP, and no sensory-motor rhythms were explicitly used. The classifier used was a simple Fisher's linear discriminant. Results were encouraging, showing that the imaginary movement task achieved an average P300 versus no-P300 classification accuracy of 84.53%. In comparison, mental counting, the standard selective attention task used in previous studies, achieved 78.9%, and real movement 90.3%. Furthermore, multiple-trial classification results were recorded and compared, with real movement reaching 99.5% accuracy after four trials (12.8 s), imaginary movement reaching 99.5% after five trials (16 s) and counting reaching 98.2% after ten trials (32 s). © 2010 IOP Publishing Ltd
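    The multiple-trial figures quoted above come from accumulating evidence over repeated presentations of the same stimulus. One generic way to do this, sketched below under the assumption that each single trial yields a linear classifier score, is to average the scores over the first k repetitions before thresholding; the function name multi_trial_decision and the zero threshold are illustrative, not the authors' exact procedure.

```python
# Generic multi-trial decision: average single-trial classifier scores over
# the first k repetitions of the same stimulus, then threshold the mean.
# The scoring convention and threshold are assumptions, not the paper's method.
import numpy as np


def multi_trial_decision(scores, k):
    """scores: (n_repetitions,) single-trial FLD scores for one stimulus.
    Returns True (target / P300 present) if the mean of the first k scores
    exceeds zero."""
    return np.mean(scores[:k]) > 0


# Example: individual-trial scores are noisy, but averaging stabilises the call.
scores = np.array([0.4, -0.1, 0.7, 0.2, -0.3])
for k in (1, 3, 5):
    print(k, multi_trial_decision(scores, k))
```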