The division of feature samples using SWCV and 10-fold CV.
<p>The red rectangle denotes the training set, whereas the green rectangle denotes the testing set in the SWCV division; the training set is further divided into sub-training and sub-validation sets by common 10-fold CV.</p>
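The nested division described above (an outer training/testing split, then common 10-fold CV on the training set alone) can be sketched as follows. This is a minimal illustration, not the authors' code: the sample counts and random data are assumptions.

```python
import numpy as np

# Synthetic feature samples standing in for the extracted P300 features
# (100 trials x 29 features; the counts are illustrative, not the paper's).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 29))
y = rng.integers(0, 2, size=100)

# Outer division (the SWCV split in the figure): training vs. testing set.
idx = rng.permutation(len(X))
test_idx, train_idx = idx[:20], idx[20:]

# Inner division: common 10-fold CV on the training set only, yielding
# sub-training and sub-validation sets for parameter tuning.
folds = np.array_split(train_idx, 10)
for k, sub_val_idx in enumerate(folds):
    sub_train_idx = np.concatenate([f for i, f in enumerate(folds) if i != k])
    X_sub_train, X_sub_val = X[sub_train_idx], X[sub_val_idx]
    # ... tune classifier parameters on the sub-sets; X[test_idx] is held out ...
```

Keeping the testing set outside the inner 10-fold loop is what prevents the tuned parameters from leaking information about the held-out data.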
A Novel Algorithm to Enhance P300 in Single Trials: Application to Lie Detection Using F-Score and SVM
<div><p>The investigation of lie-detection methods based on P300 potentials has drawn much interest in recent years. We presented a novel algorithm to enhance the signal-to-noise ratio (SNR) of the P300 and applied it in lie detection to increase classification accuracy. Thirty-four subjects were randomly divided into guilty and innocent groups, and EEG signals were recorded on 14 electrodes. A novel spatial denoising algorithm (SDA), based on independent component analysis, was proposed to reconstruct the P300 with a high SNR. The differences between the proposed method and our earlier published methods, as well as those of others, lie mainly in the P300 extraction and feature-selection steps. Three groups of features were extracted from the denoised waves; the optimal features were then selected by the F-score method. The selected feature samples were finally fed into three classical classifiers for a performance comparison. The optimal parameter values in the SDA and the classifiers were tuned by a grid-search training procedure with cross-validation. The support vector machine (SVM) was combined with the F-score because this approach performed best. The presented model, F-score_SVM, reaches a significantly higher classification accuracy for P300 (specificity of 96.05%) and non-P300 (sensitivity of 96.11%) than the results obtained without SDA and those obtained by the other classification models. Moreover, a higher individual diagnosis rate can be obtained than with previous methods, and the presented method requires only a small number of stimuli in real testing applications.</p></div>
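The F-score plus SVM pipeline in the abstract can be sketched as below. This is a hedged illustration, not the authors' implementation: the F-score here is the standard two-class formula (ratio of between-class to within-class scatter), the data are synthetic, and the parameter grids are placeholders rather than the ranges the paper searched.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def f_score(X, y):
    """Per-feature two-class F-score: between-class scatter over the
    sum of within-class variances; higher = more discriminative."""
    pos, neg = X[y == 1], X[y == 0]
    numerator = (pos.mean(0) - X.mean(0)) ** 2 + (neg.mean(0) - X.mean(0)) ** 2
    denominator = pos.var(0, ddof=1) + neg.var(0, ddof=1)
    return numerator / denominator

# Synthetic stand-in for the 29 extracted P300 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 29))
y = rng.integers(0, 2, size=100)

# Rank features by F-score and keep the top 10 (the cutoff is illustrative).
scores = f_score(X, y)
top = np.argsort(scores)[::-1][:10]

# Grid-search the RBF-SVM parameters with cross-validation, as the
# abstract describes; the grids below are small placeholder ranges.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [1, 4, 16, 64, 256],
                     "gamma": [2 ** -5, 2 ** -3, 2 ** -1, 2, 32]},
                    cv=10)
grid.fit(X[:, top], y)
```

In practice the F-score cutoff itself would also be chosen inside the cross-validation loop rather than fixed in advance.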
The detection error rates of two groups of subjects.
<p>10A: The detection error rate of the guilty and innocent groups for the BAD method. 10B: The detection error rate of the guilty and innocent groups for the BCD method.</p>
The sketch map of stimuli protocol.
<p>The left and right parts of the dashed line represent the experimental protocols for guilty and innocent subjects, respectively. The pictures with red, blue, and green rectangles represent the P, T, and I stimuli, respectively.</p>
All data for 14 electrodes and 30 subjects
One can epoch each txt file using a toolbox such as EEGLAB.
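Epoching a single electrode's continuous signal from a txt file can also be done directly, as in this minimal sketch. The sampling rate, epoch window, file name, and stimulus onsets below are all assumptions for illustration, since the dataset description does not state them.

```python
import numpy as np

# Assumed acquisition parameters (not given in the dataset description):
# 500 Hz sampling; epochs from 100 ms before to 700 ms after each stimulus.
fs = 500
pre, post = 50, 350  # samples before/after onset at 500 Hz

# signal = np.loadtxt("Pz_subject01.txt")  # hypothetical file name
signal = np.random.default_rng(0).normal(size=10_000)  # stand-in data
onsets = np.arange(500, 9_000, 600)  # hypothetical stimulus sample indices

# Cut one fixed-length window around each onset: (n_trials, pre + post).
epochs = np.stack([signal[t - pre:t + post] for t in onsets])
```

The resulting `epochs` array (trials x samples) is the usual starting point for single-trial averaging or feature extraction.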
The results of feature selection on original 29 features using F-score.
<p>The results of feature selection on original 29 features using F-score.</p>
Coefficients of the truncated decomposition filters <i>h</i>, <i>g</i> (IIR) and reconstruction filters H, G (FIR) for quadratic spline filters.
<p>Coefficients of the truncated decomposition filters <i>h</i>, <i>g</i> (IIR) and reconstruction filters H, G (FIR) for quadratic spline filters.</p>
Response waveforms and reconstructed waveforms on Pz after applying SDA for a guilty and an innocent subject.
<p>7A: Single trials (solid lines) and averaged waveform (dashed line) on Pz for a guilty subject before applying SDA. 7B: Single trials (solid lines) and averaged waveform (dashed line) on Pz for an innocent subject before applying SDA. 7C: Reconstructed waveforms (a P300 for the guilty subject and a non-P300 for the innocent subject) obtained by applying SDA to the averaged datasets.</p>
Sensitivity/specificity on the training and testing sets for different classification models with the optimal parameter combination.
<p>“▴” denotes a p-value &lt;0.001 obtained by ANOVA between F-score_FDA and F-score_SVM; “<b>*</b>” denotes a p-value &lt;0.001 obtained by ANOVA between F-score_BPNN and F-score_SVM. For BPNN, the number of hidden nodes = 5 and the learning rate = 0.03; for SVM, the radial basis parameter = 32 and the penalty parameter <i>C</i> = 2<sup>8</sup>.</p><p>Sensitivity/specificity on the training and testing sets for different classification models with the optimal parameter combination.</p>