
    Interpretable High-Dimensional Inference Via Score Projection With an Application in Neuroimaging

    <p>In the fields of neuroimaging and genetics, a key goal is testing the association of a single outcome with a very high-dimensional imaging or genetic variable. Often, summary measures of the high-dimensional variable are created to sequentially test and localize the association with the outcome. In some cases, the associations between the outcome and summary measures are significant, but subsequent tests used to localize differences are underpowered and do not identify regions associated with the outcome. Here, we propose a generalization of Rao’s score test based on projecting the score statistic onto a linear subspace of a high-dimensional parameter space. The approach provides a way to localize signal in the high-dimensional space by projecting the scores to the subspace where the score test was performed. This allows for inference in the high-dimensional space to be performed on the same degrees of freedom as the score test, effectively reducing the number of comparisons. Simulation results demonstrate the test has competitive power relative to others commonly used. We illustrate the method by analyzing a subset of the Alzheimer’s Disease Neuroimaging Initiative dataset. Results suggest cortical thinning of the frontal and temporal lobes may be a useful biological marker of Alzheimer’s disease risk. Supplementary materials for this article are available online.</p>
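The projection idea in the abstract can be sketched numerically. A minimal sketch, assuming an identity information matrix and a hypothetical contrast matrix `C` spanning the low-dimensional subspace (neither is specified in the abstract; the paper's test uses the full information metric):

```python
import numpy as np

rng = np.random.default_rng(1)
p, k = 100, 3                    # high dimension p, subspace dimension k
U = rng.normal(size=p)           # score vector in R^p (simulated)
C = rng.normal(size=(p, k))      # basis of the k-dimensional subspace (hypothetical)

# Orthogonal projection P = C (C^T C)^{-1} C^T applied to the scores
P = C @ np.linalg.solve(C.T @ C, C.T)
U_proj = P @ U

# Under the identity-information simplification, a chi-squared statistic
# on k (rather than p) degrees of freedom
stat = U_proj @ U_proj
```

The projected scores `U_proj` live in the original p-dimensional space, which is what allows signal to be localized there while the test itself spends only k degrees of freedom.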

    A Comparison of Supervised Machine Learning Algorithms and Feature Vectors for MS Lesion Segmentation Using Multimodal Structural MRI

    <div><p>Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance.</p></div>
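The recommended setup, a simple classifier on voxel-level features, can be sketched with scikit-learn. The features and labels below are simulated stand-ins (the real inputs would be the T1-w, T2-w, and FLAIR intensities per voxel and the manual segmentation labels):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated voxel-level training data: rows are voxels, columns are
# feature-vector entries (e.g. normalized T1-w, T2-w, FLAIR intensities);
# labels would come from a manual lesion segmentation in practice.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))
y = (X[:, 2] + 0.5 * rng.normal(size=5000) > 1.0).astype(int)

# A simple, fast, interpretable classifier, as the abstract recommends
clf = LogisticRegression(max_iter=1000).fit(X, y)
probs = clf.predict_proba(X)[:, 1]   # voxel-level lesion probabilities
```

Thresholding `probs` (for example, at a threshold chosen to hit a target false positive rate on a validation set) yields a binary segmentation.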

    The scaled partial Area Under the Curve (pAUC) for each algorithm on each feature vector.

    <p>The differences in scaled pAUC come more from differences in the feature vectors than from differences in the classification algorithms. The scaled pAUC values of the simpler classification algorithms on the developed feature vectors are larger than those of the more complex classifiers on the original, unnormalized feature vector.</p>
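A scaled partial AUC of this kind can be computed with scikit-learn's `roc_auc_score`, which applies the McClish standardization (mapping the partial area onto [0.5, 1]) when `max_fpr` is set. The labels and scores below are simulated, not the paper's data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Simulated binary labels and classifier scores
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 1000)
scores = labels + rng.normal(0, 1, 1000)

# Partial AUC for false positive rates up to 10%, standardized so that
# a chance-level classifier scores 0.5 and a perfect one scores 1.0
pauc = roc_auc_score(labels, scores, max_fpr=0.10)
```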

    The Dice similarity coefficient (DSC) for all pairs of classification algorithm segmentations and manual segmentations.

    <p>The binary segmentations for each classification algorithm are at a threshold of false positive rate  = 0.5% in the validation set. A plot is presented for each of the six feature vectors: (A) unnormalized, (B) normalized, (C) voxel selection, (D) smoothed, (E) moments, and (F) smoothed and moments. On the developed feature vectors, the class labels assigned to the voxels for each algorithm are similar. This shows that not only are the overall predictive performances of the methods similar on these vectors, but the resulting segmentations from each method are also similar.</p>
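The Dice similarity coefficient between two binary segmentations is twice the overlap divided by the sum of the two mask sizes. A minimal implementation:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks.

    Returns 1.0 for two empty masks by convention.
    """
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom
```

For example, `dice([1, 1, 0, 0], [0, 1, 1, 0])` is 0.5: one overlapping voxel, four voxels segmented in total.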

    The impact of downsampling the training set on computational time and classification performance.

    <p>Time in hours to fit the algorithm (left column) and scaled pAUC for false positive rates up to 10% (right column) versus the number of voxels the algorithm is fit on, for the unnormalized (A,B) and smoothed and moments (C,D) feature vectors. Here we see the effectiveness of downsampling the training set: the performance of the algorithms is not impacted and the computational time is significantly lowered.</p>

    The super learner coefficient versus the number of voxels the algorithm is fit on for the (A) unnormalized and the (B) smoothed and moments feature vectors.

    <p>As the number of voxels used to fit the algorithm changes, the super learner consistently assigns large weights to the same small number of algorithms. For the unnormalized feature vector, high coefficient weights are selected for the logistic regression, one of the random forest tuning parameters, and the Gaussian mixture model. On the smoothed and moments feature vector, the super learner favors the less complex algorithms: logistic regression, the quadratic discriminant analysis, and the linear discriminant analysis. Some weight is also assigned to the Gaussian mixture model and the random forest.</p>

    Figure 6

    <p>(A) The time in hours required to fit the algorithm on each feature vector and (B) the time in minutes required to make a prediction for a single MRI study from the fitted algorithms. Both of the bar plots are partitioned into the six feature vectors on the horizontal axis. The simpler algorithms without tuning parameters require significantly less computational time than more complex methods.</p>

    A summary of the training set, training set after the voxel selection procedure has been applied, and the validation set.

    <p>Subjects were randomly assigned to the training or validation set. All training, including tuning of algorithm parameters with 10-fold cross validation, was performed on the training set.</p>

    Scatter plots of the T1-w, T2-w and FLAIR voxel intensities and functions of these intensities for 10,000 randomly sampled voxels from 5 randomly sampled subjects' MRI studies.

    <p>Each point in the plot represents a single voxel from a study. (A–C) Color key for these plots: (A) the FLAIR volume for an axial slice from a single subject's MRI study, (B) the technician's manual segmentation for this slice and (C) the colors that are used in the plots corresponding to this slice. Lesion voxels are pink, voxels within 1 mm of a lesion voxel are orange, voxels within 2 mm of a lesion voxel are blue and all other voxels in the brain are colored grey. The arrows in the figure indicate the order in which the features are created. For the unnormalized intensities there is no plane that can separate lesion voxels from non-lesion voxels, but after normalization and with the addition of features that include neighborhood information, a plane is able to separate lesion and non-lesion voxels with improved accuracy.</p>
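One common way to build neighborhood features of the kind described, local smoothing and local moments, is with moving-window filters. A sketch using `scipy.ndimage` (the window size and exact moments here are illustrative assumptions, not the paper's specification):

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Simulated 3D intensity volume standing in for an MRI channel
rng = np.random.default_rng(2)
vol = rng.normal(size=(16, 16, 16))

# Smoothed feature: local mean over a 3x3x3 neighborhood (window size assumed)
local_mean = uniform_filter(vol, size=3)

# Moment feature: local variance from the first two local moments
local_sq_mean = uniform_filter(vol**2, size=3)
local_var = local_sq_mean - local_mean**2
```

Stacking `vol`, `local_mean`, and `local_var` per voxel gives each voxel's feature vector access to information from its neighbors, which is what the scatter plots show makes lesion and non-lesion voxels more nearly separable.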