25 research outputs found

    Artificial intelligence for dementia research methods optimization

    Artificial intelligence (AI) and machine learning (ML) approaches are increasingly being used in dementia research. However, several methodological challenges exist that may limit the insights we can obtain from high-dimensional data and our ability to translate these findings into improved patient outcomes. To improve reproducibility and replicability, researchers should make their well-documented code and modeling pipelines openly available. Data should also be shared where appropriate. To enhance the acceptability of models and AI-enabled systems to users, researchers should prioritize interpretable methods that provide insights into how decisions are generated. Models should be developed using multiple, diverse datasets to improve robustness and generalizability and to reduce potentially harmful bias. To improve clarity and reproducibility, researchers should adhere to reporting guidelines that are co-produced with multiple stakeholders. If these methodological challenges are overcome, AI and ML hold enormous promise for changing the landscape of dementia research and care.

    Vascular cognitive impairment in the mouse reshapes visual, spatial network functional connectivity

    Connectome analysis of neuroimaging data is a rapidly expanding approach for identifying disease-specific biomarkers. Structural diffusion MRI connectivity has been useful in individuals with radiological features of small vessel disease, such as white matter hyperintensities. Global efficiency, a network metric calculated from the structural connectome, is an excellent predictor of cognitive decline. To dissect the biological underpinnings of these changes, animal models are required. We tested whether the structural connectome is altered in a mouse model of vascular cognitive impairment. White matter damage was more pronounced at 6 months than at 3 months. Global efficiency remained intact, but the visual association cortex exhibited increased structural connectivity with other brain regions. Exploratory resting-state functional MRI connectivity analysis revealed diminished default mode network activity in the model compared to shams. Further perturbations were observed in a primarily cortical hub, and the retrosplenial and visual cortices and the hippocampus were the most affected nodes. Behavioural deficits were observed in the cued water maze, supporting the suggestion that the visual and spatial memory networks are affected. We demonstrate that specific circuitry is rendered vulnerable to vascular stress in the mouse, and the model will be useful for examining the pathophysiological mechanisms of small vessel disease.
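
    As a minimal sketch of the network metric this abstract refers to, the snippet below computes global efficiency from a region-by-region structural connectivity matrix. This is not the authors' pipeline; the matrix, region count, and threshold are synthetic placeholders, and the networkx library is assumed to be available.

```python
# Minimal sketch (not the authors' pipeline): global efficiency of a
# structural connectome built from a synthetic region-by-region matrix.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_regions = 20                                    # hypothetical number of brain regions
counts = rng.integers(0, 50, size=(n_regions, n_regions))
counts = np.triu(counts, 1)
counts = counts + counts.T                        # symmetric "streamline count" matrix

adjacency = (counts > 10).astype(int)             # threshold and binarize (arbitrary cut-off)
G = nx.from_numpy_array(adjacency)

# Global efficiency: average inverse shortest-path length over all node pairs.
print("global efficiency:", nx.global_efficiency(G))

# Per-node degree flags regions with unusually high connectivity, analogous to
# asking which nodes (e.g., visual association cortex) show altered connectivity.
degree = adjacency.sum(axis=0)
print("node with highest degree:", int(degree.argmax()))
```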

    Artificial intelligence for diagnostic and prognostic neuroimaging in dementia: a systematic review

    Introduction: Artificial intelligence (AI) and neuroimaging offer new opportunities for the diagnosis and prognosis of dementia. Methods: We systematically reviewed studies reporting AI for neuroimaging in the diagnosis and/or prognosis of cognitive neurodegenerative diseases. Results: A total of 255 studies were identified. Most studies relied on the Alzheimer's Disease Neuroimaging Initiative dataset. Algorithmic classifiers were the most commonly used AI method (48%), and discriminative models performed best for differentiating Alzheimer's disease from controls. The accuracy of algorithms varied with the patient cohort, imaging modalities, and stratifiers used. Few studies performed validation in an independent cohort. Discussion: The literature has several methodological limitations, including a lack of sufficient descriptions of algorithm development and of standard definitions. We make recommendations to improve model validation, including addressing key clinical questions, providing sufficient descriptions of AI methods, and validating findings in independent datasets. Collaborative approaches between experts in AI and medicine will help achieve the promising potential of AI tools in practice. Highlights: There has been a rapid expansion in the use of machine learning for diagnosis and prognosis in neurodegenerative disease. Most studies (71%) relied on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, with no other individual dataset used more than five times. There has been a recent rise in the use of more complex discriminative models (e.g., neural networks) that performed better than other classifiers for classification of AD versus healthy controls. We make recommendations to address methodological considerations, key clinical questions, and validation. We also make recommendations for the field more broadly: to standardize outcome measures, address gaps in the literature, and monitor sources of bias.
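
    The review's central validation recommendation can be illustrated with a hedged sketch: fit a discriminative model on one cohort and evaluate it, unchanged, on an independent cohort. The data, features, and model below are synthetic placeholders and do not correspond to ADNI or to any study in the review; scikit-learn is assumed to be available.

```python
# Sketch of independent-cohort validation with synthetic data (not a reviewed model).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

def synthetic_cohort(n, shift=0.0):
    """Simulate imaging-derived features (e.g., regional volumes) and diagnostic labels."""
    X = rng.normal(size=(n, 10)) + shift                     # cohort-specific site/scanner shift
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)
    return X, y

X_dev, y_dev = synthetic_cohort(300)                         # development cohort
X_ext, y_ext = synthetic_cohort(150, shift=0.3)              # independent validation cohort

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_dev, y_dev)

# Reporting both numbers exposes the performance drop on external data,
# which is what validating in an independent cohort is meant to reveal.
print("internal AUC:", roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1]))
print("external AUC:", roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1]))
```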

    Same data, different conclusions: Radical dispersion in empirical results when independent analysts operationalize and test the same hypothesis

    In this crowdsourced initiative, independent analysts used the same dataset to test two hypotheses regarding the effects of scientists’ gender and professional status on verbosity during group meetings. Not only the analytic approach but also the operationalizations of key variables were left unconstrained and up to individual analysts. For instance, analysts could choose to operationalize status as job title, institutional ranking, citation counts, or some combination. To maximize transparency regarding the process by which analytic choices are made, the analysts used a platform we developed called DataExplained to justify both preferred and rejected analytic paths in real time. Analyses lacking sufficient detail, reproducible code, or with statistical errors were excluded, resulting in 29 analyses in the final sample. Researchers reported radically different analyses and dispersed empirical outcomes, in a number of cases obtaining significant effects in opposite directions for the same research question. A Boba multiverse analysis demonstrates that decisions about how to operationalize variables explain variability in outcomes above and beyond statistical choices (e.g., covariates). Subjective researcher decisions play a critical role in driving the reported empirical results, underscoring the need for open data, systematic robustness checks, and transparency regarding both analytic paths taken and not taken. Implications for organizations and leaders, whose decision making relies in part on scientific findings, consulting reports, and internal analyses by data scientists, are discussed.
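
    A toy multiverse-style illustration of the point made here: the same hypothesis tested under different operationalizations of "status" and different covariate sets produces a spread of effect estimates. The data, variable names, and model specifications below are invented for illustration and are not the DataExplained or Boba pipelines used in the study; statsmodels is assumed to be available.

```python
# Toy multiverse over operationalization choices (synthetic data, not the study's analyses).
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "job_rank": rng.integers(1, 5, n),           # one way to operationalize status
    "citations": rng.lognormal(3, 1, n),         # another operationalization
    "meeting_size": rng.integers(3, 15, n),
})
df["words_spoken"] = (
    50 + 5 * df["job_rank"] - 3 * df["female"]
    + 0.1 * df["citations"] + rng.normal(0, 20, n)
)

status_choices = ["job_rank", "np.log(citations)"]
covariate_choices = ["", " + meeting_size"]

# Fit every combination of operationalization and covariate set, then compare
# the estimated gender effect across specifications.
for status, covs in itertools.product(status_choices, covariate_choices):
    formula = f"words_spoken ~ female + {status}{covs}"
    fit = smf.ols(formula, data=df).fit()
    print(f"{formula:55s} beta(female)={fit.params['female']:6.2f} p={fit.pvalues['female']:.3f}")
```

    Even in this small grid of four specifications, the coefficient of interest moves; with real data and dozens of defensible choices, the dispersion the abstract describes follows naturally.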

    Spectral time-lapse (STL) Toolbox

    The spectral time-lapse (STL) algorithm is designed to be a simple and efficient technique for analyzing and presenting both the spatial and temporal information of animal movements within a two-dimensional image representation. The STL algorithm re-codes an animal's position at every time point with a time-specific color and overlays it on a reference frame of the video to produce a summary image. It additionally incorporates automated motion tracking, so that the animal's position can be extracted and summary statistics such as path length and duration, as well as instantaneous velocity and acceleration, can be calculated. This toolbox implements the STL algorithm in MATLAB and allows a large degree of end-user control and flexibility.
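
    The toolbox itself is written in MATLAB; the sketch below is only a Python illustration of the idea described above, overlaying time-colored positions on a reference frame and computing path statistics from simulated tracking data rather than from real video.

```python
# Illustrative sketch of the STL idea (the actual toolbox is MATLAB, not this code):
# color-code tracked positions by time, overlay them on a reference frame,
# and derive simple path statistics. Positions here are simulated.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
n_frames, fps = 300, 30
xy = np.cumsum(rng.normal(0, 2, size=(n_frames, 2)), axis=0) + 100   # random-walk track

reference_frame = np.full((200, 200), 0.8)       # stand-in for the first video frame

fig, ax = plt.subplots()
ax.imshow(reference_frame, cmap="gray", vmin=0, vmax=1)
# Time-specific color per sample: early points map to one end of the colormap,
# late points to the other, giving the "time-lapse" spectrum over the path.
ax.scatter(xy[:, 0], xy[:, 1], c=np.arange(n_frames), cmap="viridis", s=4)

# Summary statistics analogous to those the toolbox reports.
step = np.diff(xy, axis=0)
speed = np.linalg.norm(step, axis=1) * fps       # instantaneous speed per frame
path_length = np.linalg.norm(step, axis=1).sum()
print(f"path length: {path_length:.1f} px over {n_frames / fps:.1f} s")
print(f"mean speed: {speed.mean():.1f} px/s")

plt.savefig("stl_summary.png", dpi=150)
```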

    Bone loss from Wnt inhibition mitigated by concurrent alendronate therapy

    Bone Research 6, 11. DOI: 10.1038/s41413-018-0017-8