5-fold cross validation accuracies on the training set.
<p>The accuracies are obtained with an RBF SVM (over a range of sigma values) on the training portion of the ADHD-200 dataset, using functional images plus personal characteristic data. This figure is best viewed in color.</p>
HOG bins in 2D and 3D space.
<p>Left and right panels show HOG bins in 2D and 3D space, respectively.</p>
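As a rough illustration of what a 3D orientation histogram involves, the sketch below bins the gradient directions of a voxel volume by azimuth and elevation, weighting each voxel by gradient magnitude. This is only a minimal NumPy sketch; the bin counts, cell layout, and normalization used for the paper's 3D-HOG features may differ.

```python
import numpy as np

def hog3d_histogram(volume, n_azimuth=8, n_elevation=4):
    """Bin 3D gradient orientations into a magnitude-weighted histogram.

    Illustrative sketch only; the paper's exact 3D-HOG binning may differ.
    """
    # Gradients along the three voxel axes.
    gz, gy, gx = np.gradient(volume.astype(float))
    mag = np.sqrt(gx**2 + gy**2 + gz**2)
    # Spherical direction of each gradient vector.
    azimuth = np.arctan2(gy, gx)                                        # [-pi, pi]
    elevation = np.arcsin(np.clip(gz / np.maximum(mag, 1e-12), -1, 1))  # [-pi/2, pi/2]
    # 2D histogram over (azimuth, elevation), weighted by gradient magnitude.
    hist, _, _ = np.histogram2d(
        azimuth.ravel(), elevation.ravel(),
        bins=[n_azimuth, n_elevation],
        range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]],
        weights=mag.ravel(),
    )
    return hist.ravel()  # n_azimuth * n_elevation bins

# Toy volume standing in for a preprocessed MRI image.
vol = np.random.default_rng(0).random((8, 8, 8))
h = hog3d_histogram(vol)
```

In practice such histograms are computed per cell over a grid of cells and concatenated, which is what yields the roughly 100,000-dimensional feature vectors described in the pipeline.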
ADHD-200, personal characteristic data with structural images.
<p>ADHD-200, personal characteristic data with structural images.</p>
Summary of ABIDE dataset classification results.
<p>Conventions are the same as for <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0166934#pone.0166934.g007" target="_blank">Fig 7</a>. This figure is best viewed in color.</p>
Summary of ADHD-200 dataset classification results.
<p>The black horizontal dotted line shows the baseline chance accuracy on the test set. Each vertical bar shows the mean and range of the cross-validation results for the selected base learner (L) and feature set (FS*(L)) on the training set, as produced by MHPC (Algorithm 1). Each blue asterisk (*) shows the accuracy of the corresponding classifier on the hold-out set. The classifiers on the x-axis are ordered by the types of features they use, covering various combinations of structural MRI, functional MRI, and personal characteristic data; the legend also identifies the actual classifier used. This figure is best viewed in color.</p>
Summary of the learning pipeline.
<p>1) Each image in the datasets is preprocessed (see section Preprocessing and <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0166934#pone.0166934.g002" target="_blank">Fig 2</a>), reducing the dimensionality from about 100,000,000 (79 × 95 × 68 × 200) to about 500,000. 2) The MHPC system then extracts the 3D-HOG features of each image, reducing the number of dimensions to about 100,000; see section Histogram of oriented gradients (HOG) features. 3) The last step selects the best learner (from the initial set of base learners) and feature set, based on 5-fold cross validation over the training set with different combinations of feature counts and base learners. This step reduces the number of dimensions to fewer than 1000; see section Results. HOG feature extraction, minimum redundancy maximum relevance (MRMR) feature selection, and base learner selection are all parts of the MHPC algorithm (shown in the red box above). See Algorithm 1 for details. This figure is best viewed in color.</p>
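The selection step described above, choosing a base learner and feature-set size by 5-fold cross validation on the training set, can be sketched as follows. This is a hedged illustration with toy data: the learners, candidate feature counts, and the univariate F-test scorer (standing in for MRMR, which scikit-learn does not provide) are all assumptions, not the paper's exact configuration.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy stand-ins for the extracted HOG feature matrix and diagnostic labels.
X, y = make_classification(n_samples=100, n_features=200,
                           n_informative=10, random_state=0)

# A small pool of candidate base learners (illustrative choices).
base_learners = {
    "rbf_svm": SVC(kernel="rbf"),
    "logreg": LogisticRegression(max_iter=1000),
}

best = None  # (mean CV accuracy, learner name, number of features)
for name, learner in base_learners.items():
    for k in (10, 50, 100):  # candidate feature-set sizes
        # Feature selection inside the pipeline so CV folds stay honest.
        pipe = make_pipeline(SelectKBest(f_classif, k=k), learner)
        acc = cross_val_score(pipe, X, y, cv=5).mean()  # 5-fold CV
        if best is None or acc > best[0]:
            best = (acc, name, k)

print(best)  # the (learner, feature count) pair with the best CV accuracy
```

Placing `SelectKBest` inside the pipeline means the features are re-selected on each training fold, so the reported cross-validation accuracy is not inflated by selection leakage.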