
    Recorded data epochs.

    <p>The total number of data epochs recorded for each subject, used as artificial neural network training and test inputs for the left hand-squeeze, right hand-squeeze, and no-movement signal classes.</p>

    Classification performance of an artificial neural network (ANN) using backpropagation (BP) and simulated annealing augmented backpropagation (SA).

    <p>Two randomly generated one-, two- or three-layer ANNs were created, both with the same number of hidden layers and neurons. Each ANN was then trained using either BP or SA and tested on the same unseen test set. This process was repeated 50 times. Each bar represents the mean Cohen’s kappa score for its group and the error bars are the standard deviation of the 50 kappa scores. The outcome for chance performance is shown as a red dashed line (kappa score of zero). The asterisks indicate the confidence level with which to reject the hypothesis that the two bars are drawn from the same distribution using a two-tailed <i>t</i>-test (** for <i>p</i> < 0.01). For exact <i>p</i>-values see <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0131328#pone.0131328.s003" target="_blank">S2 Appendix</a>. Datasets are from the BCI II competition, dataset 4 and the BCI IV competition, dataset 3. The results for the two- and three-class hand-squeeze datasets are taken as the average values across five participants.</p>
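The comparison in this caption rests on two pieces of machinery: Cohen’s kappa as the chance-corrected performance metric, and a two-tailed unpaired <i>t</i>-test on the 50 kappa scores per group. A minimal sketch of both, in Python (not the study’s code; all labels and score distributions below are invented for illustration):

```python
# Illustrative sketch of Cohen's kappa plus a two-tailed t-test between two
# groups of kappa scores, as in the figure. Data below are made up.
import numpy as np
from scipy import stats

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    p_obs = np.mean(y_true == y_pred)
    # Chance agreement from the marginal class frequencies.
    p_exp = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in classes)
    return (p_obs - p_exp) / (1.0 - p_exp)

# Two hypothetical groups of 50 kappa scores (e.g. BP runs vs SA runs),
# compared with a two-tailed unpaired t-test as in the significance bars.
rng = np.random.default_rng(0)
kappa_bp = rng.normal(0.45, 0.05, 50)
kappa_sa = rng.normal(0.55, 0.05, 50)
t_stat, p_value = stats.ttest_ind(kappa_bp, kappa_sa)
print(f"mean BP = {kappa_bp.mean():.2f}, mean SA = {kappa_sa.mean():.2f}, "
      f"p = {p_value:.2e}")
```

Kappa of zero corresponds to the red dashed chance line in the figure; perfect agreement gives kappa of one.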

    Results for BCI IV data set 3.

    <p>The classification accuracy for the current study is compared to previous work by (1) Hajipour et al. (2) Li et al. (3) Montazeri et al. (4) Wang et al. The red dashed line represents chance outcome (25%). White dashed lines indicate minor gridlines.</p>

    Temporal filtering of artificial neural networks (ANNs).

    <p>Subplots are the estimated power spectral densities (PSDs) from the periodogram of the temporal weights. Temporal weights are taken between the input layer and the first hidden layer averaged over space (channels) and across the neurons in the first hidden layer to obtain a single value for each point in time. The initial weight values were obtained from the EEG epoch corresponding to each channel, with the DC component removed. The solid line represents the mean PSD over five folds of cross validation. The shaded region represents the distance between the minimum and maximum obtained values. <b>A</b> Subject A. <b>B</b> Subject B. <b>C</b> Subject C. <b>D</b> Subject D. <b>E</b> Subject E.</p>
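The pipeline this caption describes (periodogram PSD of a weight time-series, mean over five cross-validation folds, min–max band) can be sketched as follows. This is illustrative Python only; the sampling rate and the sinusoid-plus-noise "weights" are assumptions standing in for the study’s actual temporal weights:

```python
# Illustrative sketch: periodogram PSD of a per-fold weight time-series,
# averaged over folds, with a min-max band. Data are synthetic stand-ins.
import numpy as np
from scipy.signal import periodogram

fs = 100.0                       # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)

psds = []
for fold in range(5):            # one weight vector per cross-validation fold
    # Stand-in "temporal weights": a 10 Hz component plus noise.
    w = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
    w -= w.mean()                # remove the DC component, as in the caption
    freqs, psd = periodogram(w, fs=fs)
    psds.append(psd)

psds = np.vstack(psds)
mean_psd = psds.mean(axis=0)                 # solid line in the figure
band_lo, band_hi = psds.min(axis=0), psds.max(axis=0)  # shaded region
peak_hz = freqs[np.argmax(mean_psd)]
print(f"peak of mean PSD at {peak_hz:.1f} Hz")
```

With a genuine oscillatory component in the weights, the mean PSD peaks at that component’s frequency, which is the kind of structure the figure’s subplots are probing.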

    Comparison of accuracy between hand-squeeze detection and left vs. right detection of three-class hand squeeze classifier.

    <p>The first bar for each participant is the accuracy for correct classification of the side of a hand squeeze (left or right), given that a hand squeeze had occurred. The second bar for each participant is the accuracy for detection of a hand squeeze regardless of laterality. The bars show the mean accuracy across the resulting confusion matrices after five-fold cross validation for each participant. The error bars are the standard deviation over the five confusion matrices. The asterisks represent the confidence level with which to reject the null hypothesis that the two accuracies are drawn from the same distribution, using a two-tailed <i>t</i>-test (** for <i>p</i> < 0.01). White dashed lines indicate minor gridlines. For the exact <i>p</i>-values see <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0131328#pone.0131328.s003" target="_blank">S2 Appendix</a>. For the full confusion matrices, see <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0131328#pone.0131328.s002" target="_blank">S1 Appendix</a>.</p>
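Both accuracies in this figure can be read off a single three-class confusion matrix. A hypothetical sketch (the matrix entries and the class ordering are invented; one plausible reading of the caption is that laterality accuracy is computed among trials where a squeeze truly occurred and was detected as a squeeze):

```python
# Hypothetical three-class confusion matrix (rows = true, cols = predicted):
# class 0 = left squeeze, 1 = right squeeze, 2 = no movement.
import numpy as np

cm = np.array([[40,  5,  5],    # true left
               [ 6, 38,  6],    # true right
               [ 4,  4, 42]])   # true no-movement

# Laterality accuracy: among trials where a squeeze occurred and a squeeze
# was predicted, how often was the side (left vs right) correct?
squeeze_block = cm[:2, :2]
laterality_acc = np.trace(squeeze_block) / squeeze_block.sum()

# Detection accuracy: squeeze vs no-movement, ignoring laterality.
correct_detect = cm[:2, :2].sum() + cm[2, 2]
detection_acc = correct_detect / cm.sum()

print(f"laterality accuracy = {laterality_acc:.3f}")
print(f"detection accuracy  = {detection_acc:.3f}")
```

In the paper these quantities are averaged over the five confusion matrices from cross-validation, one pair of bars per participant.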

    Algorithm parameters for artificial neural network (ANN) and genetic algorithm (GA).

    <p>The parameters used to initialize and run the algorithm for the competition data sets and hand-squeeze task. (Note that SA refers to the simulated annealing augmented backpropagation used for ANN training.)</p>

    Number of layers of the artificial neural networks (ANNs).

    <p>The total number of layers in the final ANN classifier after each fold of cross-validation for every participant. A dot is placed in the relevant row for each classifier, where the number of hidden layers is two less than the total number of layers, since there is always an input and output layer. <b>A</b> Results for two-class dataset. <b>B</b> Results for three-class dataset. The networks trained on the three-class dataset have a higher median number of layers than the networks trained on two-class data (Wilcoxon rank sum test, <i>p</i> = 0.0094).</p>
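The layer-count comparison uses a Wilcoxon rank-sum test, which is appropriate for small samples of discrete counts. A sketch with invented layer counts (the paper reports <i>p</i> = 0.0094 for its own data, not for these numbers):

```python
# Wilcoxon rank-sum test between selected layer counts for the two- and
# three-class networks. The counts below are hypothetical illustrations.
from scipy.stats import ranksums

layers_two_class   = [3, 3, 4, 3, 3, 4, 3, 3, 4, 3]   # invented
layers_three_class = [4, 5, 4, 4, 5, 4, 5, 4, 4, 5]   # invented

stat, p = ranksums(layers_two_class, layers_three_class)
print(f"rank-sum statistic = {stat:.2f}, p = {p:.4f}")
```

The rank-sum test makes no normality assumption, which matters here since layer counts are small integers rather than continuous scores.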

    Inter-subject performance of two-class classifier.

    <p>The classification performance (reported as the Cohen’s kappa score) of the artificial neural network (ANN) from Participant A is compared with test data from Participant B and vice versa. The bar plot shows the mean kappa value across the five-fold cross validation and the error bars are the standard deviation. The asterisks represent the confidence level with which to reject the null hypothesis that the results for the subject’s own dataset and the alternative dataset are drawn from the same distribution, using a two-tailed <i>t</i>-test (** for <i>p</i> < 0.01).</p>

    Illustration of method.

    <p>Signals acquired from the brain-computer interface (BCI) user are initially used to train an artificial neural network (ANN). The number of hidden layers and neurons is determined using a genetic algorithm (GA). At the termination of the GA, the network found to have the fittest structure is used in the BCI. The ANN is a fully-interconnected multi-layer perceptron. The input layer consists of every time-point of each channel. Hence, each neuron in the first hidden layer is able to generate features based on both spatial and temporal inferences. The hidden layers then feed into the output neurons, which determine the classifier output.</p>
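The GA-over-structure idea in this caption can be sketched with a toy example. Everything below is an assumption for illustration: the genome encoding, the operators, and especially the fitness function, which here is a cheap stand-in rewarding a particular network size, whereas the paper evaluates the performance of the trained classifier itself:

```python
# Highly simplified sketch of a genetic algorithm searching over MLP
# hidden-layer structures. Genome = list of hidden-layer sizes. The fitness
# function is a toy surrogate, NOT the study's trained-classifier fitness.
import random

random.seed(0)

def random_genome():
    """One to three hidden layers of 2-20 neurons each (assumed bounds)."""
    return [random.randint(2, 20) for _ in range(random.randint(1, 3))]

def fitness(genome):
    # Toy surrogate: prefer roughly two layers of roughly ten neurons.
    size_penalty = sum(abs(n - 10) for n in genome)
    return -(abs(len(genome) - 2) * 10 + size_penalty)

def crossover(a, b):
    cut = random.randint(0, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def mutate(genome):
    g = [max(2, n + random.randint(-2, 2)) for n in genome]
    if random.random() < 0.1 and len(g) < 3:
        g.append(random.randint(2, 20))   # occasionally grow a layer
    return g

pop = [random_genome() for _ in range(20)]
for generation in range(30):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                    # truncation selection with elitism
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    pop = parents + children

best = max(pop, key=fitness)
print("fittest structure (hidden-layer sizes):", best)
```

At termination the fittest genome fixes the hidden-layer structure, and that network is the one carried forward, mirroring the flow described in the caption.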

    Results for BCI II data set 4.

    <p>The classification accuracy for the current study is compared to previous work by (1) Zhang et al. (2) Neal (3) Hoffmann (4) Huang et al. (5) Mensh (6) Brugger et al. Only the top 6 competition entrants are shown. The red dashed line represents chance outcome (50%). White dashed lines indicate minor gridlines.</p>