Cross-cohort generalizability of deep and conventional machine learning for MRI-based diagnosis and prediction of Alzheimer's disease
This work validates the generalizability of MRI-based classification of Alzheimer's disease (AD) patients and controls (CN) to an external data set and to the task of predicting conversion to AD in individuals with mild cognitive impairment (MCI). We used a conventional support vector machine (SVM) and a deep convolutional neural network (CNN) approach based on structural MRI scans that underwent either minimal pre-processing or more extensive pre-processing into modulated gray matter (GM) maps. Classifiers were optimized and evaluated using cross-validation in the Alzheimer's Disease Neuroimaging Initiative (ADNI; 334 AD, 520 CN). Trained classifiers were subsequently applied to predict conversion to AD in ADNI MCI patients (231 converters, 628 non-converters) and in the independent Health-RI Parelsnoer Neurodegenerative Diseases Biobank data set. From this multi-center study representing a tertiary memory clinic population, we included 199 AD patients, 139 participants with subjective cognitive decline, 48 MCI patients converting to dementia, and 91 MCI patients who did not convert to dementia. AD-CN classification based on modulated GM maps resulted in a similar area under the curve (AUC) for SVM (0.940; 95% CI: 0.924–0.955) and CNN (0.933; 95% CI: 0.918–0.948). Application to conversion prediction in MCI yielded significantly higher performance for SVM (AUC = 0.756; 95% CI: 0.720–0.788) than for CNN (AUC = 0.742; 95% CI: 0.709–0.776) (p<0.01, McNemar's test). In external validation, performance decreased slightly. For AD-CN, AUCs were again similar for SVM (0.896; 95% CI: 0.855–0.932) and CNN (0.876; 95% CI: 0.836–0.913). For prediction in MCI, performance decreased for both SVM (AUC = 0.665; 95% CI: 0.576–0.760) and CNN (AUC = 0.702; 95% CI: 0.624–0.786). With both SVM and CNN, classification based on modulated GM maps significantly outperformed classification based on minimally processed images (p=0.01).
Deep and conventional classifiers performed equally well for AD classification, and their performance decreased only slightly when applied to the external cohort. We expect that this work on external validation contributes toward the translation of machine learning to clinical practice.
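The SVM arm of the comparison above can be sketched in a few lines. This is a minimal illustration, not the study's pipeline: synthetic arrays stand in for the modulated GM maps, and the feature dimension, sample size, and effect size are all invented for the demo.

```python
# Minimal sketch of AD-CN classification with a linear SVM on flattened
# gray-matter maps, evaluated with cross-validated AUC. Synthetic data
# stands in for the ADNI GM maps; labels 1 = AD, 0 = CN (all sizes invented).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, n_voxels = 200, 500                       # stand-ins for subjects x GM voxels
y = rng.integers(0, 2, size=n)               # 1 = AD, 0 = CN
X = rng.normal(size=(n, n_voxels)) + 0.4 * y[:, None]   # weak group signal

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
auc = roc_auc_score(y, scores)
print(f"cross-validated AUC: {auc:.3f}")
```

Out-of-fold probabilities from `cross_val_predict` give an honest AUC estimate, mirroring the cross-validated evaluation described in the abstract.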
Bayesian inference and role of astrocytes in amyloid-beta dynamics with modelling of Alzheimer's disease using clinical data
Alzheimer's disease (AD) is a prominent, worldwide, age-related
neurodegenerative disease that currently has no systemic treatment. Strong
evidence suggests that permeable amyloid-beta peptide (Abeta) oligomers,
astrogliosis and reactive astrocytosis cause neuronal damage in AD. A large
amount of Abeta is secreted by astrocytes, which contributes to the total Abeta
deposition in the brain. This suggests that astrocytes may also play a role in
AD, leading to increased attention to their dynamics and associated mechanisms.
Therefore, in the present study, we developed and evaluated novel stochastic
models for Abeta growth using ADNI data to predict the effect of astrocytes on
AD progression in a clinical trial. In the AD case, accurate prediction is
required for a successful clinical treatment plan. Given that AD studies are
observational in nature and involve routine patient visits, stochastic models
provide a suitable framework for modelling AD. Using the approximate Bayesian
computation (ABC) approach, the AD etiology may be modelled as a multi-state
disease process. We therefore use this approach to examine the weak and
strong influence of astrocytes at multiple disease progression stages, using
ADNI data from the baseline to 2-year visits for AD patients aged
50 to 90 years. Based on ADNI data, we discovered that a strong
astrocyte effect (i.e., a higher concentration of astrocytes as compared to
Abeta) could help to lower or clear the growth of Abeta, which is a key to
slowing down AD progression.
Comment: 10 figures and 30 pages
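The core idea, ABC rejection sampling for a stochastic amyloid-growth model, can be sketched as follows. This is a toy model under assumed dynamics, not the authors' formulation: the SDE dA = (r − cS)·A·dt + σ·dW, the astrocyte clearance parameter c, its prior, and the tolerance are all illustrative choices.

```python
# Hedged sketch of ABC rejection sampling for a toy stochastic Abeta growth
# model (NOT the paper's exact model): dA = (r - c*S)*A*dt + sigma*dW, where
# S is astrocyte concentration and c its clearance effect. A larger c*S
# (a "strong astrocyte effect") shrinks or clears the Abeta load.
import numpy as np

rng = np.random.default_rng(1)

def simulate(c, S=1.0, r=0.5, sigma=0.05, A0=1.0, dt=0.1, steps=20):
    """Euler-Maruyama path of Abeta load over ~2 years of visits (toy units)."""
    A, path = A0, [A0]
    for _ in range(steps):
        A += (r - c * S) * A * dt + sigma * np.sqrt(dt) * rng.normal()
        A = max(A, 0.0)
        path.append(A)
    return np.array(path)

observed = simulate(c=0.8)          # pretend this is an observed trajectory

# ABC rejection: draw c from a uniform prior, keep draws whose simulated
# trajectory lies within tolerance eps of the observed one.
prior_draws = rng.uniform(0.0, 2.0, size=5000)
eps = 0.8
accepted = [c for c in prior_draws
            if np.linalg.norm(simulate(c) - observed) < eps]
print(f"accepted {len(accepted)} draws; "
      f"approximate posterior mean c = {np.mean(accepted):.2f}")
```

The accepted draws approximate the posterior over the clearance parameter without ever evaluating a likelihood, which is what makes ABC attractive for stochastic models fit to observational visit data.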
Tissue-Based MRI Intensity Standardization: Application to Multicentric Datasets
Intensity standardization in MRI aims at correcting scanner-dependent intensity variations. Existing simple and robust techniques aim at matching the input image histogram onto a standard, whereas we argue that standardization should aim at matching spatially corresponding tissue intensities. In this study, we present a novel automatic technique, called STI for STandardization of Intensities, which not only shares the simplicity and robustness of histogram-matching techniques, but also incorporates tissue spatial intensity information. STI uses joint intensity histograms to determine intensity correspondence in each tissue between the input and standard images. We compared STI to an existing histogram-matching technique on two multicentric datasets, Pilot E-ADNI and ADNI, by measuring the intensity error with respect to the standard image after performing nonlinear registration. The Pilot E-ADNI dataset consisted of 3 subjects, each scanned at 7 different sites. The ADNI dataset consisted of 795 subjects scanned at more than 50 different sites. STI was superior to the histogram-matching technique, showing significantly better intensity matching for the brain white matter with respect to the standard image.
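The difference between global histogram matching and tissue-wise standardization can be illustrated with a simplified per-tissue mapping. This is not the STI algorithm (which uses joint intensity histograms); it is a deliberately simple stand-in that maps each tissue class linearly so its mean and spread agree with the standard image, just to show why tissue-level correspondence beats a single global match.

```python
# Illustrative sketch (NOT the paper's STI algorithm): standardize an input
# image by matching, per tissue class, its intensities to a standard image,
# rather than matching one global histogram. Tissue labels are assumed to
# correspond spatially between the two images (e.g. after registration).
import numpy as np

def tissue_standardize(img, std_img, labels):
    """Map each tissue's intensities linearly to the standard image's
    mean/std for that same tissue. labels: integer tissue map."""
    out = img.astype(float).copy()
    for t in np.unique(labels):
        m = labels == t
        mu_in, sd_in = img[m].mean(), img[m].std() + 1e-8
        mu_st, sd_st = std_img[m].mean(), std_img[m].std() + 1e-8
        out[m] = (img[m] - mu_in) / sd_in * sd_st + mu_st
    return out

rng = np.random.default_rng(2)
labels = rng.integers(0, 3, size=(16, 16))            # toy 3-tissue map
std_img = labels * 100.0 + rng.normal(0, 5, labels.shape)
img = labels * 80.0 + 20 + rng.normal(0, 5, labels.shape)  # scanner shift+scale

fixed = tissue_standardize(img, std_img, labels)
err_before = np.abs(img - std_img).mean()
err_after = np.abs(fixed - std_img).mean()
print(f"mean abs intensity error before: {err_before:.1f}, after: {err_after:.1f}")
```

Because each tissue is corrected separately, a scanner whose gray/white contrast differs from the standard is fixed per tissue, which a single global histogram transform cannot guarantee.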
The effect of network template from normal subjects in the detection of network impairment
This study aimed to provide a simple way to approach group differences by independent component analysis when researching functional connectivity changes of the resting-state network in brain disorders. We used baseline resting-state functional magnetic resonance imaging from the Alzheimer's Disease Neuroimaging Initiative dataset and performed independent component analysis based on different kinds of subject selection, including two downloaded templates and a single-subject independent component analysis method. All conditions were used to calculate the functional connectivity of the default mode network, to test group differences, and to evaluate correlation with cognitive measurements and hippocampal volume. The default mode network functional connectivity results that best fit clinical evaluations came from templates based on young healthy subjects, and the worst results came from heterogeneous or more severely diseased groups or from the single-subject independent component analysis method. Using independent component analysis network maps derived from normal young subjects to extract all individual functional connectivities provides significant correlations with clinical evaluations.
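Extracting an individual's network connectivity from a group template can be sketched in the spirit of dual regression; the details below (spatial regression then voxelwise correlation, plus all array sizes) are assumptions for illustration, not the study's exact pipeline.

```python
# Hedged sketch of template-based functional connectivity extraction, in the
# spirit of dual regression (NOT the study's exact pipeline): a group network
# template map is regressed against each fMRI volume to get a network time
# series, whose correlation with every voxel gives the subject's FC map.
import numpy as np

rng = np.random.default_rng(3)
n_vox, n_t = 300, 120
template = (rng.random(n_vox) < 0.2).astype(float)   # toy binary DMN template
net_ts = rng.normal(size=n_t)                        # latent network signal
data = rng.normal(size=(n_vox, n_t))                 # toy voxel x time fMRI data
data[template > 0] += 0.8 * net_ts                   # template voxels carry it

# Step 1: spatial regression of the template onto each volume -> time series
X = np.c_[np.ones(n_vox), template]
ts = np.linalg.lstsq(X, data, rcond=None)[0][1]      # network time series

# Step 2: temporal correlation of every voxel with the network time series
ts_z = (ts - ts.mean()) / ts.std()
d_z = (data - data.mean(1, keepdims=True)) / data.std(1, keepdims=True)
fc_map = d_z @ ts_z / n_t                            # per-voxel correlation

fc_in = fc_map[template > 0].mean()
fc_out = fc_map[template == 0].mean()
print(f"mean FC inside template: {fc_in:.2f}, outside: {fc_out:.2f}")
```

The choice of template matters precisely because step 1 projects the data onto it: a clean map from young healthy subjects recovers the network time series better than a noisy map from a heterogeneous or diseased group, which is the effect the study quantifies.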
MRI-based Multi-task Decoupling Learning for Alzheimer's Disease Detection and MMSE Score Prediction: A Multi-site Validation
Accurately detecting Alzheimer's disease (AD) and predicting mini-mental
state examination (MMSE) scores from magnetic resonance imaging (MRI) are
important tasks in elderly healthcare. Most previous methods for these two
tasks are based on single-task learning and rarely consider the correlation
between them. Since the MMSE score, which is an important basis for AD
diagnosis, can also reflect the progress of cognitive impairment, some studies
have begun to apply multi-task learning methods to these two tasks. However,
how to exploit feature correlation remains a challenging problem for these
methods. To comprehensively address this challenge, we propose an MRI-based
multi-task decoupled learning method for AD detection and MMSE score
prediction. First, a multi-task learning network is proposed to implement AD
detection and MMSE score prediction, which exploits feature correlation by
adding three multi-task interaction layers between the backbones of the two
tasks. Each multi-task interaction layer contains two feature decoupling
modules and one feature interaction module. Furthermore, to improve the
cross-task generalization of the features selected by the feature decoupling
modules, we constrain them with a feature consistency loss. Finally, to
exploit the distribution of MMSE scores across diagnostic groups, a
distribution loss is proposed to further
enhance the model performance. We evaluate our proposed method on multi-site
datasets. Experimental results show that our proposed multi-task decoupled
representation learning method achieves good performance, outperforming
single-task learning and other existing state-of-the-art methods.
Comment: 15 pages
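The kind of joint objective described above, a detection loss plus a regression loss plus a feature consistency term, can be sketched numerically. The names, weighting, and the exact form of the consistency term are assumptions for illustration; the paper's interaction layers and distribution loss are not reproduced here.

```python
# Illustrative numpy sketch of a multi-task objective of the kind described:
# L = L_cls(AD) + L_reg(MMSE) + lambda_c * L_consistency, where the
# consistency term pulls the task-shared features of the two branches
# together. All names and the weight lambda_c are assumed, not the paper's.
import numpy as np

def joint_loss(logits, y_cls, mmse_pred, mmse_true,
               shared_feat_cls, shared_feat_reg, lam=0.1):
    # AD detection branch: binary cross-entropy on logits
    p = 1.0 / (1.0 + np.exp(-logits))
    l_cls = -np.mean(y_cls * np.log(p + 1e-12)
                     + (1 - y_cls) * np.log(1 - p + 1e-12))
    # MMSE prediction branch: mean squared error
    l_reg = np.mean((mmse_pred - mmse_true) ** 2)
    # consistency: shared features from the two branches should agree
    l_con = np.mean((shared_feat_cls - shared_feat_reg) ** 2)
    return l_cls + l_reg + lam * l_con

rng = np.random.default_rng(4)
loss = joint_loss(logits=rng.normal(size=8),
                  y_cls=rng.integers(0, 2, 8),
                  mmse_pred=rng.normal(27, 2, 8),
                  mmse_true=rng.normal(27, 2, 8),
                  shared_feat_cls=rng.normal(size=(8, 16)),
                  shared_feat_reg=rng.normal(size=(8, 16)))
print(f"joint loss: {loss:.3f}")
```

Minimizing the shared terms jointly is what lets the MMSE branch act as a soft regularizer on the AD branch and vice versa, which single-task training forgoes.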
A Weighted Prognostic Covariate Adjustment Method for Efficient and Powerful Treatment Effect Inferences in Randomized Controlled Trials
A crucial task for a randomized controlled trial (RCT) is to specify a
statistical method that can yield an efficient estimator and powerful test for
the treatment effect. A novel and effective strategy to obtain efficient and
powerful treatment effect inferences is to incorporate predictions from
generative artificial intelligence (AI) algorithms into covariate adjustment
for the regression analysis of an RCT. Training a generative AI algorithm on
historical control data enables one to construct a digital twin generator (DTG)
for RCT participants, which utilizes a participant's baseline covariates to
generate a probability distribution for their potential control outcome.
Summaries of the probability distribution from the DTG are highly predictive of
the trial outcome, and adjusting for these features via regression can thus
improve the quality of treatment effect inferences while satisfying regulatory
guidelines on statistical analyses for an RCT. However, a critical assumption
in this strategy is homoskedasticity, or constant variance of the outcome
conditional on the covariates. In the case of heteroskedasticity, existing
covariate adjustment methods yield inefficient estimators and underpowered
tests. We propose to address heteroskedasticity via a weighted prognostic
covariate adjustment methodology (Weighted PROCOVA) that adjusts for both the
mean and variance of the regression model using information obtained from the
DTG. We prove that our method yields unbiased treatment effect estimators, and
demonstrate via comprehensive simulation studies and case studies from
Alzheimer's disease that it can reduce the variance of the treatment effect
estimator, maintain the Type I error rate, and increase the power of the test
for the treatment effect from 80% to 85–90% when the variances from the DTG
can explain 5–10% of the variation in the RCT participants' outcomes.
Comment: 49 pages, 6 figures, 12 tables
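The mean-and-variance adjustment can be illustrated with a stripped-down weighted least squares fit. This is a simplification of the idea, not the authors' Weighted PROCOVA estimator: the simulated DTG outputs, sample size, and effect size are all invented, and the weighting scheme (inverse predicted variance) is the textbook WLS choice.

```python
# Hedged sketch of the idea behind weighted prognostic covariate adjustment
# (a simplification, NOT the authors' exact estimator): regress the outcome
# on treatment and the DTG's predicted control mean, weighting each
# participant by the inverse of the DTG's predicted outcome variance to
# account for heteroskedasticity.
import numpy as np

rng = np.random.default_rng(5)
n = 400
treat = rng.integers(0, 2, n)                # 1:1 randomized assignment
m_dtg = rng.normal(0, 1, n)                  # DTG predicted control mean
v_dtg = rng.uniform(0.5, 2.0, n)             # DTG predicted outcome variance
tau = 1.0                                    # true treatment effect (assumed)
y = tau * treat + m_dtg + rng.normal(0, np.sqrt(v_dtg))  # heteroskedastic

# Weighted least squares: X = [1, treat, m_dtg], weights w = 1 / v_dtg
X = np.c_[np.ones(n), treat, m_dtg]
w = 1.0 / v_dtg
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
tau_hat = beta[1]
print(f"estimated treatment effect: {tau_hat:.2f} (true {tau})")
```

Down-weighting high-variance participants is what recovers efficiency when the homoskedasticity assumption behind ordinary covariate adjustment fails; randomization is what keeps the estimator unbiased either way.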