Dementia risk prediction in individuals with mild cognitive impairment: a comparison of Cox regression and machine learning models
Abstract
Background
Cox proportional hazards regression models and machine learning models are widely used for predicting the risk of dementia. Existing comparisons of these models have mostly been based on empirical datasets and have yielded mixed results. This study examines the accuracy of various machine learning models and of Cox regression models for predicting time-to-event outcomes in people with mild cognitive impairment (MCI), using Monte Carlo simulation.
Methods
The predictive accuracy of nine time-to-event regression and machine learning models was investigated. These models include Cox regression, penalized Cox regression (with ridge, LASSO, and elastic net penalties), survival trees, random survival forests, survival support vector machines, artificial neural networks, and extreme gradient boosting. Simulation data were generated using the study design and data characteristics of a clinical registry and a large community-based registry of patients with MCI. The predictive performance of these models was evaluated with three-fold cross-validation via Harrell's concordance index (c-index), the integrated calibration index (ICI), and the integrated Brier score (IBS).
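Harrell's c-index, the discrimination metric named above, can be sketched in a few lines: among all "comparable" pairs (subject i had an event before subject j's observed, possibly censored, time), it counts the fraction in which the model assigned the earlier-failing subject the higher predicted risk. The implementation below is a minimal illustration of that definition, not the code used in the study.

```python
# Minimal pure-Python sketch of Harrell's concordance index (c-index).
# A pair (i, j) is comparable when subject i had an event (events[i] == 1)
# strictly before subject j's observed time; it is concordant when the
# model gave subject i the higher predicted risk. Ties in predicted risk
# count as half-concordant, as is conventional.

def harrell_c_index(times, events, risk_scores):
    """times: observed follow-up times; events: 1 = event, 0 = censored;
    risk_scores: higher score = higher predicted risk of the event."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Perfectly ranked risks (shorter survival <-> higher risk) give c = 1.0,
# even with one censored subject (index 2) in the sample.
print(harrell_c_index([2, 4, 6, 8], [1, 1, 0, 1], [4.0, 3.0, 2.0, 1.0]))  # -> 1.0
```

A c-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why values around 0.64-0.70, as reported below, indicate modest but useful discrimination.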
Results
Cox regression and machine learning models had comparable predictive accuracy across the three performance metrics and data-analytic conditions. The estimated c-index values for Cox regression, random survival forests, and extreme gradient boosting were 0.70, 0.69, and 0.70, respectively, when the data were generated from a Cox regression model under the large-sample-size condition. In contrast, the estimated c-index values for these models were 0.64, 0.64, and 0.65 when the data were generated from a random survival forest under the large-sample-size condition. Both Cox regression and random survival forests had the lowest ICI values (0.12 for the large sample size and 0.18 for the small sample size) among all the investigated models, regardless of sample size and data-generating model.
Conclusion
Cox regression models have comparable, and sometimes better, predictive performance than more complex machine learning models. We recommend that the choice among these models be guided by the research hypotheses, model interpretability, and the type of data.
A pragmatic dementia risk score for patients with mild cognitive impairment in a memory clinic population: Development and validation of a dementia risk score using routinely collected data
Abstract
Introduction
This study aimed to develop and validate a 3-year dementia risk score for individuals with mild cognitive impairment (MCI) based on variables collected in routine clinical care.
Methods
The prediction score was trained and developed using data from the National Alzheimer's Coordinating Center (NACC). Selection criteria included age 55 years and older with MCI. Cox models were validated externally using two independent cohorts: the Prospective Registry of Persons with Memory Symptoms (PROMPT) registry and the Alzheimer's Disease Neuroimaging Initiative (ADNI) database.
Results
Our Mild Cognitive Impairment to Dementia Risk (CIDER) score predicted dementia risk with c-indices of 0.69 (95% confidence interval [CI] 0.66–0.72), 0.61 (95% CI 0.59–0.63), and 0.72 (95% CI 0.69–0.75) for the internal validation, external validation PROMPT, and ADNI cohorts, respectively.
Discussion
The CIDER score could be used to inform clinicians and patients about the relative probability of developing dementia in patients with MCI.
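A Cox-model-based risk score of this kind converts a patient's covariates into an absolute risk over a fixed horizon: the covariates enter a linear predictor, which is exponentiated into a hazard ratio, and the 3-year risk is 1 - S0(3)^exp(lp), where S0(3) is the baseline 3-year survival. The abstract does not report the published CIDER coefficients, so the sketch below uses made-up weights, made-up reference values, and a made-up baseline survival purely to show the arithmetic.

```python
import math

# Illustrative sketch of how a Cox-model risk score turns covariates into
# a 3-year dementia risk. The coefficients, reference values, and baseline
# survival below are HYPOTHETICAL placeholders, not the published CIDER weights.
BASELINE_SURV_3Y = 0.80  # assumed S0(3): event-free probability at reference values

def three_year_risk(age, mmse):
    # Linear predictor, centered at illustrative reference values
    # (age 70, MMSE 27); a positive lp means above-reference risk.
    lp = 0.05 * (age - 70) - 0.15 * (mmse - 27)
    # Cox model: S(3 | x) = S0(3) ** exp(lp); risk = 1 - S(3 | x)
    return 1.0 - BASELINE_SURV_3Y ** math.exp(lp)

print(round(three_year_risk(70, 27), 3))  # reference patient -> baseline risk of 0.2
print(round(three_year_risk(80, 24), 3))  # older patient with lower MMSE -> higher risk
```

The same arithmetic underlies any published Cox risk score; only the fitted coefficients and the estimated baseline survival differ.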