
    For all algorithms, the separation of class probability distributions became clearer when semantic and phonological data were combined.

    The normalized discriminative score distribution was evaluated over all 1000 test datasets produced during the nested cross-validation. The discriminative score is given by the logarithmic odds ratio and is normalized here by the maximum value over all test data.
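
    A minimal sketch of how such a score could be computed, assuming each model returns class probabilities for its test set; the function name and the synthetic probabilities below are illustrative, not taken from the paper:

```python
# Sketch (assumption, not the authors' code): the discriminative score as the
# logarithmic odds ratio log(p / (1 - p)) of the predicted class probability,
# normalized by the maximum absolute score over all test sets.
import numpy as np

def normalized_log_odds(test_set_probabilities):
    """test_set_probabilities: list of 1-D arrays of P(subject is depressed)."""
    scores = [np.log(p / (1.0 - p)) for p in test_set_probabilities]
    max_abs = max(np.max(np.abs(s)) for s in scores)
    return [s / max_abs for s in scores]

# Hypothetical example: 1000 test sets, as in a hundred 10-fold cross-validations.
rng = np.random.default_rng(0)
test_probs = [rng.uniform(0.05, 0.95, size=20) for _ in range(1000)]
normalized_scores = normalized_log_odds(test_probs)
```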

    The relative performance of gLASSO, sgLASSO, and SVM depended on the dataset, while sLASSO and Random Forest were generally outperformed by the other algorithms.

    (a) Semantic verbal fluency, (b) phonological verbal fluency, and (c) combined datasets. Classification performance was significantly different between all algorithms and significantly higher for each algorithm with the combined dataset (p < 0.001, U-test).
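
    As a sketch of how such a comparison could be carried out, the following runs a Mann-Whitney U test on two sets of per-model performance scores; the accuracy values are synthetic placeholders, and the assumption that per-model accuracies were the compared quantity is mine, not the paper's:

```python
# Sketch of a pairwise Mann-Whitney U test between two algorithm/dataset
# conditions. The accuracy arrays below are synthetic, for illustration only.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
acc_combined = rng.normal(loc=0.80, scale=0.05, size=1000)  # hypothetical per-model accuracies
acc_semantic = rng.normal(loc=0.72, scale=0.05, size=1000)

stat, p_value = mannwhitneyu(acc_combined, acc_semantic, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3g}")
```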

    In the combined evaluation, model stability is higher than in the separate evaluation.

    This is indicated by the selection frequency of the same brain areas during cross-validation. The frequency is generally lower for the (a) separate evaluation than for the (b) combined evaluation. Only brain areas selected in more than 80% of all cross-validated models are shown. Numbers indicate the selection frequency. Colors indicate average values over all positive and negative weights (normalized by the highest average positive and negative value over all brain areas, respectively). Areas more highly activated in control subjects than in depressed subjects are shown in blue; areas more highly activated in depressed subjects than in control subjects are shown in red. For gLASSO, the negatively and positively weighted voxels together cover 100% of the brain area of interest, so the selection frequency is the same.
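
    A minimal sketch of how a selection-frequency map of this kind could be derived from the fitted models, assuming a weight matrix (models × voxels) and a per-voxel brain-area labeling; the 80% threshold comes from the description above, everything else is illustrative:

```python
# Sketch (illustrative): selection frequency of each brain area, i.e. the
# fraction of cross-validated models in which any voxel of that area received
# a non-zero weight. Only areas above the threshold (80%) are returned.
import numpy as np

def area_selection_frequency(weights, area_labels, threshold=0.8):
    """weights: (n_models, n_voxels) fitted weights;
    area_labels: (n_voxels,) brain-area label of each voxel."""
    frequencies = {}
    for area in np.unique(area_labels):
        voxels = area_labels == area
        selected_per_model = np.any(weights[:, voxels] != 0, axis=1)
        frequencies[area] = selected_per_model.mean()
    return {area: freq for area, freq in frequencies.items() if freq >= threshold}
```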

    gLASSO constrains the number of groups of voxels (example: combined dataset).

    This avoids the problem seen with sLASSO, where single voxels may erroneously suggest that the brain areas containing those voxels distinguish between depressed and non-depressed patients. (a) For sLASSO, the number of non-zero weights is naturally sparse. Contributing voxels tend to be scattered, thus jeopardizing predictions in the case of even slight data distortion. (b) In gLASSO this is prevented by assigning voxels to groups. (c) sgLASSO further sparsifies the contributing voxels within these groups, eliminating redundant weights while preserving topological continuity. (d) The SVM algorithm uses all voxels to estimate the classification model (plotted with MRIcron [30]). Positive weights are displayed in red, indicating voxels typically more highly activated in depressed subjects than in control subjects; negative weights are displayed in blue, indicating voxels typically more highly activated in control subjects. Only voxels selected in more than 80% of the 1000 models evaluated in the hundred 10-fold nested cross-validations are displayed. Contributions in the evaluation of the separate datasets were similar to those of the combined evaluation, but less stable with respect to selection frequency across the cross-validation models (see Fig 8).
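
    For reference, the penalty terms that distinguish the three LASSO variants can be sketched as follows; these are the textbook formulations of the (sparse) group LASSO, and the regularization parameters lam and alpha are illustrative, not the values used in the paper:

```python
# Textbook penalty terms (sketch, not the paper's exact objective):
#   sLASSO  : lam * sum_j |w_j|                         -> isolated, scattered voxels
#   gLASSO  : lam * sum_g sqrt(n_g) * ||w_g||_2         -> whole voxel groups selected
#   sgLASSO : lam * (alpha * L1 + (1 - alpha) * group)  -> sparse within selected groups
import numpy as np

def lasso_penalties(w, groups, lam=1.0, alpha=0.5):
    """w: (n_voxels,) weight vector; groups: list of voxel-index arrays, one per brain area."""
    l1 = np.sum(np.abs(w))
    group = sum(np.sqrt(len(g)) * np.linalg.norm(w[g]) for g in groups)
    return {
        "sLASSO": lam * l1,
        "gLASSO": lam * group,
        "sgLASSO": lam * (alpha * l1 + (1 - alpha) * group),
    }
```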