
    Label-aligned multi-task feature learning for multimodal classification of Alzheimer’s disease and mild cognitive impairment

    Multimodal classification methods using different modalities of imaging and non-imaging data have recently shown great advantages over traditional single-modality methods for the diagnosis and prognosis of Alzheimer’s disease (AD) and its prodromal stage, mild cognitive impairment (MCI). However, to the best of our knowledge, most existing methods focus on mining the relationships across multiple modalities of the same subjects while ignoring the potentially useful relationships across different subjects. Accordingly, in this paper we propose a novel learning method for multimodal classification of AD/MCI that fully explores the relationships across both modalities and subjects. Specifically, the proposed method consists of two sequential components: label-aligned multi-task feature selection and multimodal classification. In the first step, feature selection from each modality is treated as a separate learning task, and a group sparsity regularizer is imposed to jointly select a subset of relevant features. Furthermore, to exploit the discriminative information among labeled subjects, a new label-aligned regularization term is added to the objective function of standard multi-task feature selection, where label alignment means that all multi-modality subjects with the same class label should lie closer together in the reduced feature space. In the second step, a multi-kernel support vector machine (SVM) is adopted to fuse the selected features from the multi-modality data for final classification. To validate our method, we perform experiments on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database using baseline MRI and FDG-PET imaging data. The experimental results demonstrate that the proposed method achieves better classification performance than several state-of-the-art methods for multimodal classification of AD/MCI.
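
    The group-sparse selection step described above admits a short sketch. The NumPy code below is a minimal illustration under stated assumptions, not the authors' implementation: it treats each modality as a task, runs proximal gradient descent with an l2,1 penalty that discards whole feature rows jointly across modalities, and omits the label-aligned regularization term for brevity; all variable names and hyperparameter values are illustrative.

        import numpy as np

        def l21_prox(W, t):
            # Row-wise soft-thresholding: the proximal operator of t * ||W||_{2,1}.
            norms = np.linalg.norm(W, axis=1, keepdims=True)
            return W * np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))

        def multitask_select(Xs, y, lam=0.1, lr=1e-4, iters=1000):
            # Xs: one (n_subjects, n_features) matrix per modality (e.g. MRI, PET).
            # Each column of W is one modality's weight vector; the l2,1 penalty
            # zeroes entire rows, i.e. drops a feature in all modalities at once.
            W = np.zeros((Xs[0].shape[1], len(Xs)))
            for _ in range(iters):
                G = np.column_stack([X.T @ (X @ W[:, m] - y)
                                     for m, X in enumerate(Xs)])
                W = l21_prox(W - lr * G, lr * lam)
            return np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-8)  # kept features

    In the full method, a graph term pulling same-label subjects together in the reduced space would be added to the smooth part of this objective before the proximal step.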

    Deep Learning for Multiclass Classification, Predictive Modeling and Segmentation of Disease Prone Regions in Alzheimer’s Disease

    One of the challenges facing accurate diagnosis and prognosis of Alzheimer’s Disease (AD) is identifying the subtle changes that define the early onset of the disease. This dissertation investigates three of the main challenges confronted when such subtle changes are to be identified in the most meaningful way: (1) the missing-data challenge, (2) longitudinal modeling of disease progression, and (3) the segmentation and volumetric calculation of disease-prone brain areas in medical images. Data scarcity, compounded by missing values in many longitudinal samples, exacerbates the problem of achieving statistical meaningfulness in multiclass classification and regression analysis. Although there are many participants in the AD Neuroimaging Initiative (ADNI) study, many of the observations have numerous missing features, which often leads to the exclusion of potentially valuable data points from ongoing experiments. Motivated by the necessity of examining all participants, even those with missing tests or imaging modalities, multiple techniques for handling missing data in this domain have been explored. Specific attention was given to the Gradient Boosting (GB) algorithm, which has an inherent capability of addressing missing values. The impact of imputing data in common datasets with numerical techniques, prior to applying state-of-the-art classifiers such as the Support Vector Machine (SVM) and Random Forest (RF), has also been investigated and compared with the GB algorithm. Furthermore, to discriminate AD subjects from healthy control individuals and those with Mild Cognitive Impairment (MCI), longitudinal multimodal heterogeneous data was modeled using recurrent neural networks (RNNs). For the segmentation and volumetric calculation challenge, this dissertation focuses on one of the most relevant disease-prone areas in many neurological and neurodegenerative diseases: the hippocampus. Changes in hippocampus shape and volume are considered significant biomarkers for AD diagnosis and prognosis. Thus, a two-stage model integrating a Vision Transformer and a Convolutional Neural Network (CNN) is developed to automatically locate, segment, and estimate the hippocampus volume from 3D brain MRI. The proposed architecture was trained and tested on a dataset containing 195 brain MRIs from the 2019 Medical Segmentation Decathlon Challenge, against the manually segmented regions provided therein, and was deployed on 326 MRIs from our own data collected through Mount Sinai Medical Center as part of the 1Florida Alzheimer Disease Research Center (ADRC).
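
    The missing-data point can be made concrete with a small sketch. The snippet below is a minimal example on synthetic data, not the dissertation's experimental setup: it contrasts a histogram-based gradient-boosting model, which routes NaN values natively at each split, with imputation pipelines feeding SVM and RF classifiers (scikit-learn 1.0 or later is assumed; the imputation strategy and data are illustrative).

        import numpy as np
        from sklearn.ensemble import HistGradientBoostingClassifier, RandomForestClassifier
        from sklearn.impute import SimpleImputer
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)
        X[rng.random(X.shape) < 0.2] = np.nan  # simulate missing clinical features

        # Gradient boosting handles the NaNs inherently, no preprocessing needed.
        gb = HistGradientBoostingClassifier(random_state=0)
        # SVM and random forest are given median-imputed inputs instead.
        svm = make_pipeline(SimpleImputer(strategy="median"), SVC())
        rf = make_pipeline(SimpleImputer(strategy="median"),
                           RandomForestClassifier(random_state=0))

        for name, model in [("GB", gb), ("SVM", svm), ("RF", rf)]:
            print(name, cross_val_score(model, X, y, cv=5).mean().round(3))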

    Predicting Future Clinical Changes of MCI Patients Using Longitudinal and Multimodal Biomarkers

    Accurate prediction of clinical changes in mild cognitive impairment (MCI) patients, including both qualitative change (i.e., conversion to Alzheimer's disease (AD)) and quantitative change (i.e., cognitive scores) at future time points, is important for early diagnosis of AD and for monitoring disease progression. In this paper, we propose to predict future clinical changes of MCI patients by using both baseline and longitudinal multimodality data. To do this, we first develop a longitudinal feature selection method to jointly select brain regions across multiple time points for each modality. Specifically, for each time point we train a sparse linear regression model using the imaging data and the corresponding clinical scores, with an extra group regularization that ties together the weights of the same brain region across time points, so that brain regions are selected based on their joint strength across all time points. Then, to further reflect the longitudinal changes on the selected brain regions, we extract a set of longitudinal features from the original baseline and longitudinal data. Finally, we combine all features on the selected brain regions, from the different modalities, for prediction using our previously proposed multi-kernel SVM. We validate our method on 88 ADNI MCI subjects with both MRI and FDG-PET data and the corresponding clinical scores (i.e., MMSE and ADAS-Cog) at 5 different time points. We first predict the clinical scores (MMSE and ADAS-Cog) at 24 months using the multimodality data from previous time points, and then predict the conversion of MCI to AD using the multimodality data from time points at least 6 months ahead of the conversion. The results of both sets of experiments show that our proposed method achieves better performance in predicting future clinical changes of MCI patients than conventional methods.
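
    The multi-kernel SVM fusion step admits a compact sketch. The snippet below is a hedged illustration rather than the authors' code: it fuses per-modality RBF kernels with fixed weights (in practice the weights would be tuned on validation data) and trains a precomputed-kernel SVM; function and variable names are assumptions for this example.

        import numpy as np
        from sklearn.metrics.pairwise import rbf_kernel
        from sklearn.svm import SVC

        def multikernel_svm(Xtr_mods, Xte_mods, y_train, betas, C=1.0):
            # Xtr_mods / Xte_mods: per-modality train/test feature matrices
            # (e.g. longitudinal MRI and FDG-PET features on selected regions).
            # The fused kernel is a convex combination K = sum_m beta_m * K_m.
            K_train = sum(b * rbf_kernel(Xtr) for b, Xtr in zip(betas, Xtr_mods))
            K_test = sum(b * rbf_kernel(Xte, Xtr)
                         for b, Xte, Xtr in zip(betas, Xte_mods, Xtr_mods))
            clf = SVC(C=C, kernel="precomputed").fit(K_train, y_train)
            return clf.predict(K_test)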

    Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives

    Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning, assuming that the training and testing data are drawn from the same distribution. Unfortunately, this assumption may not always hold in practice. To address these issues, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the different tasks they perform. We also discuss the respective datasets used in the studies to assess the divergence between the different domains. Finally, we discuss emerging areas and offer insights on future research directions to conclude this survey.
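
    Of the methodology groups the survey covers, feature alignment is perhaps the simplest to illustrate. The snippet below is a generic sketch, not any specific surveyed method: it computes a squared maximum mean discrepancy (MMD) between source and target features, a statistic such methods minimize so that a classifier trained on the labeled domain transfers to the unlabeled one (the RBF bandwidth gamma is an illustrative choice).

        import numpy as np
        from sklearn.metrics.pairwise import rbf_kernel

        def mmd2(Xs, Xt, gamma=1.0):
            # Squared MMD between source features Xs and target features Xt;
            # it approaches zero when the two feature distributions match.
            Kss = rbf_kernel(Xs, Xs, gamma=gamma)
            Ktt = rbf_kernel(Xt, Xt, gamma=gamma)
            Kst = rbf_kernel(Xs, Xt, gamma=gamma)
            return Kss.mean() + Ktt.mean() - 2.0 * Kst.mean()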