
    MRI-based classification of IDH mutation and 1p/19q codeletion status of gliomas using a 2.5D hybrid multi-task convolutional neural network

    Isocitrate dehydrogenase (IDH) mutation and 1p/19q codeletion status are important prognostic markers for glioma. Currently, they are determined using invasive procedures. Our goal was to develop artificial intelligence-based methods to non-invasively determine these molecular alterations from MRI. For this purpose, pre-operative MRI scans of 2648 patients with gliomas (grade II-IV) were collected from Washington University School of Medicine (WUSM; n = 835) and publicly available datasets, viz. Brain Tumor Segmentation (BraTS; n = 378), LGG 1p/19q (n = 159), Ivy Glioblastoma Atlas Project (Ivy GAP; n = 41), The Cancer Genome Atlas (TCGA; n = 461), and the Erasmus Glioma Database (EGD; n = 774). A 2.5D hybrid convolutional neural network was proposed to simultaneously localize the tumor and classify its molecular status by leveraging imaging features from MR scans and prior knowledge features from clinical records and tumor location. The models were tested on one internal (TCGA) and two external (WUSM and EGD) test sets. For IDH, the best-performing model achieved areas under the receiver operating characteristic curve (AUROC) of 0.925, 0.874, and 0.933, and areas under the precision-recall curve (AUPRC) of 0.899, 0.702, and 0.853 on the internal, WUSM, and EGD test sets, respectively. For 1p/19q, the best model achieved AUROCs of 0.782, 0.754, and 0.842, and AUPRCs of 0.588, 0.713, and 0.782 on those three test sets, respectively. The high accuracy of the model on unseen data showcases its generalization capabilities and suggests its potential to perform a 'virtual biopsy' for tailoring treatment planning and overall clinical management of gliomas.
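    The AUROC figures reported above reduce to a rank statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal stdlib-only sketch (illustrative, not the authors' evaluation code):

```python
from itertools import product

def auroc(scores, labels):
    """AUROC via the Mann-Whitney statistic: fraction of
    (positive, negative) pairs ranked correctly (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

# e.g. predicted mutation probabilities vs. true IDH labels
print(auroc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]))  # → 0.75
```

    In practice a library routine (e.g. scikit-learn's `roc_auc_score`) would be used; the pairwise form above is just the definition made explicit.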

    β-amyloid PET harmonisation across longitudinal studies: Application to AIBL, ADNI and OASIS3

    INTRODUCTION: The Centiloid scale was developed to harmonise the quantification of β-amyloid (Aβ) PET images across tracers, scanners, and processing pipelines. However, several groups have reported differences across tracers and scanners even after Centiloid conversion. In this study, we aim to evaluate the impact of different pre- and post-processing harmonisation steps on the robustness of longitudinal Centiloid data across three large international cohort studies. METHODS: All Aβ PET data in AIBL (N = 3315), ADNI (N = 3442), and OASIS3 (N = 1398) were quantified using the MRI-based Centiloid standard SPM pipeline and the PET-only pipeline CapAIBL. SUVRs were converted into Centiloids using each tracer's respective transform. Global Aβ burden from pre-defined target cortical regions, in Centiloid units, was quantified for both raw PET scans and PET scans smoothed to a uniform 8 mm full width at half maximum (FWHM) effective smoothness. For Florbetapir, we assessed the performance of using both the standard Whole Cerebellum (WCb) and a composite white matter (WM)+WCb reference region. Additionally, our recently proposed quantification based on Non-negative Matrix Factorisation (NMF) was applied to all spatially and SUVR normalised images. Correlation with clinical severity measured by the Mini-Mental State Examination (MMSE), effect size, and tracer agreement were assessed. RESULTS: The smoothing to a uniform resolution partially reduced longitudinal variability, but did not improve inter-tracer agreement, effect size, or correlation with MMSE. Using a Composite reference region for Florbetapir improved longitudinal consistency. CONCLUSIONS: FWHM smoothing has limited impact on longitudinal consistency or outliers. A Composite reference region including subcortical WM should be used for computing both cross-sectional and longitudinal Florbetapir Centiloid. NMF improves Centiloid quantification on all metrics examined.
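    The Centiloid conversion referenced above is, per the Centiloid Project specification, a linear rescaling of SUVR that anchors a young-control mean at 0 CL and a typical-AD mean at 100 CL. A minimal sketch with hypothetical anchor values (each tracer and pipeline has its own published anchor pair, which would be substituted here):

```python
def to_centiloid(suvr, suvr_yc, suvr_ad):
    """Linear Centiloid transform: maps the young-control mean SUVR
    to 0 CL and the typical-AD mean SUVR to 100 CL."""
    return 100.0 * (suvr - suvr_yc) / (suvr_ad - suvr_yc)

# Hypothetical anchors for illustration only (not real tracer values)
print(to_centiloid(1.40, suvr_yc=1.05, suvr_ad=2.05))
```

    Because the transform is affine, any residual tracer- or scanner-specific bias in SUVR survives conversion, which is the motivation for the extra harmonisation steps the study evaluates.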

    Backward masked fearful faces enhance contralateral occipital cortical activity for visual targets within the spotlight of attention

    Spatial attention has been argued to be adaptive by enhancing the processing of visual stimuli within the ‘spotlight of attention’. We previously reported that crude threat cues (backward masked fearful faces) facilitate spatial attention through a network of brain regions consisting of the amygdala, anterior cingulate and contralateral visual cortex. However, results from previous functional magnetic resonance imaging (fMRI) dot-probe studies have been inconclusive regarding a fearful face-elicited contralateral modulation of visual targets. Here, we tested the hypothesis that the capture of spatial attention by crude threat cues would facilitate processing of subsequently presented visual stimuli within the masked fearful face-elicited ‘spotlight of attention’ in the contralateral visual cortex. Participants performed a backward masked fearful face dot-probe task while brain activity was measured with fMRI. Masked fearful face left visual field trials enhanced activity for spatially congruent targets in the right superior occipital gyrus, fusiform gyrus and lateral occipital complex, while masked fearful face right visual field trials enhanced activity in the left middle occipital gyrus. These data indicate that crude threat-elicited spatial attention enhances the processing of subsequent visual stimuli in the contralateral occipital cortex, which may occur by lowering neural activation thresholds in this retinotopic location.

    A feasibility study to evaluate early treatment response of brain metastases one week after stereotactic radiosurgery using perfusion weighted imaging

    BACKGROUND: To explore if early perfusion-weighted magnetic resonance imaging (PWI) may be a promising imaging biomarker to predict local recurrence (LR) of brain metastases after stereotactic radiosurgery (SRS). METHODS: This is a prospective pilot study of adult brain metastasis patients who were treated with SRS and imaged with PWI before and 1 week later. Relative cerebral blood volume (rCBV) parameter maps were calculated by normalizing to the mean value of the contralateral white matter on PWI. Cox regression was conducted to explore factors associated with time to LR, with Bonferroni-adjusted p < 0.0006 for multiple testing correction. LR rates were estimated with the Kaplan-Meier method and compared using the log-rank test. RESULTS: Twenty-three patients were enrolled from 2013 through 2016, with 22 evaluable lesions from 16 patients. After a median follow-up of 13.1 months (range: 3.0-53.7), 5 lesions (21%) developed LR after a median of 3.4 months (range: 2.3-5.7). On univariable analysis, larger tumor volume (HR 1.48, 95% CI 1.02-2.15, p = 0.04), lower SRS dose (HR 0.45, 95% CI 0.21-0.97, p = 0.04), and higher rCBV at week 1 (HR 1.07, 95% CI 1.003-1.14, p = 0.04) had borderline association with shorter time to LR. Tumors > 2.0 cm3 had significantly higher LR than tumors ≤ 2.0 cm3: 54% vs. 0% at 1 year, respectively, p = 0.008. A future study to confirm the association of early PWI and LR of the high-risk cohort of lesions > 2.0 cm3 is estimated to require 258 patients. CONCLUSIONS: PWI at week 1 after SRS may have borderline association with LR. Tumors < 2.0 cm3 have low risk of LR after SRS and may be low-yield for predictive biomarker studies. Information regarding sample size and potential challenges for future imaging biomarker studies may be gleaned from this pilot study.
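    The LR rates above come from the Kaplan-Meier product-limit construction, which discounts the at-risk pool as lesions are censored. A stdlib-only sketch with hypothetical event times (illustrative, not the study's analysis code):

```python
def kaplan_meier(times, events):
    """Product-limit estimate of the recurrence-free fraction.
    `events[i]` is 1 if `times[i]` is an LR, 0 if censored.
    Returns (time, survival) points at each event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, surv, curve = len(times), 1.0, []
    for i in order:
        if events[i]:  # an observed recurrence shrinks the estimate
            surv *= 1.0 - 1.0 / at_risk
            curve.append((times[i], surv))
        at_risk -= 1  # censored lesions leave the risk set silently
    return curve

# 5 lesions: LR at 2.3 and 3.4 months, three censored later
print(kaplan_meier([2.3, 3.4, 6.0, 12.0, 13.1], [1, 1, 0, 0, 0]))
```

    This simple version assumes no tied event times; production analyses would use a survival library (e.g. `lifelines`) that also provides the log-rank test used in the study.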

    Integrative Imaging Informatics for Cancer Research: Workflow Automation for Neuro-oncology (I3CR-WANO)

    Efforts to utilize growing volumes of clinical imaging data to generate tumor evaluations continue to require significant manual data wrangling owing to the data heterogeneity. Here, we propose an artificial intelligence-based solution for the aggregation and processing of multisequence neuro-oncology MRI data to extract quantitative tumor measurements. Our end-to-end framework i) classifies MRI sequences using an ensemble classifier, ii) preprocesses the data in a reproducible manner, iii) delineates tumor tissue subtypes using convolutional neural networks, and iv) extracts diverse radiomic features. Moreover, it is robust to missing sequences and adopts an expert-in-the-loop approach, where the segmentation results may be manually refined by radiologists. Following the implementation of the framework in Docker containers, it was applied to two retrospective glioma datasets collected from the Washington University School of Medicine (WUSM; n = 384) and the M.D. Anderson Cancer Center (MDA; n = 30) comprising preoperative MRI scans from patients with pathologically confirmed gliomas. The scan-type classifier yielded an accuracy of over 99%, correctly identifying sequences from 380/384 and 30/30 sessions from the WUSM and MDA datasets, respectively. Segmentation performance was quantified using the Dice Similarity Coefficient between the predicted and expert-refined tumor masks. Mean Dice scores were 0.882 (±0.244) and 0.977 (±0.04) for whole tumor segmentation for WUSM and MDA, respectively. This streamlined framework automatically curated, processed, and segmented raw MRI data of patients with varying grades of gliomas, enabling the curation of large-scale neuro-oncology datasets and demonstrating a high potential for integration as an assistive tool in clinical practice.
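    The Dice Similarity Coefficient used for the segmentation evaluation has a compact definition, 2|A ∩ B| / (|A| + |B|). A minimal sketch over flattened binary masks (illustrative only; real pipelines operate on 3D label volumes):

```python
def dice(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks,
    given as flattened 0/1 voxel lists of equal length."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks agree perfectly
    return 2.0 * inter / total if total else 1.0

pred = [1, 1, 1, 0, 0]   # predicted tumor voxels
truth = [0, 1, 1, 1, 0]  # expert-refined tumor voxels
print(dice(pred, truth))  # → 0.6666666666666666
```

    Dice rewards overlap relative to the combined mask sizes, so it is less forgiving of small-structure errors than voxel accuracy, which is why it is the standard segmentation metric here.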

    Heterogeneity Diffusion Imaging of gliomas: Initial experience and validation

    OBJECTIVES: Primary brain tumors are composed of tumor cells, neural/glial tissues, edema, and vasculature tissue. Conventional MRI has a limited ability to evaluate heterogeneous tumor pathologies. We developed a novel diffusion MRI-based method-Heterogeneity Diffusion Imaging (HDI)-to simultaneously detect and characterize multiple tumor pathologies and capillary blood perfusion using a single diffusion MRI scan. METHODS: Seven adult patients with primary brain tumors underwent standard-of-care MRI protocols and HDI protocol before planned surgical resection and/or stereotactic biopsy. Twelve tumor sampling sites were identified using a neuronavigational system and recorded for imaging data quantification. Metrics from both protocols were compared between World Health Organization (WHO) II and III tumor groups. Cerebral blood volume (CBV) derived from dynamic susceptibility contrast (DSC) perfusion imaging was also compared with the HDI-derived perfusion fraction. RESULTS: The conventional apparent diffusion coefficient did not identify differences between WHO II and III tumor groups. HDI-derived slow hindered diffusion fraction was significantly elevated in the WHO III group as compared with the WHO II group. There was a non-significantly increasing trend of HDI-derived tumor cellularity fraction in the WHO III group, and both HDI-derived perfusion fraction and DSC-derived CBV were found to be significantly higher in the WHO III group. Both HDI-derived perfusion fraction and slow hindered diffusion fraction strongly correlated with DSC-derived CBV. Neither HDI-derived cellularity fraction nor HDI-derived fast hindered diffusion fraction correlated with DSC-derived CBV. CONCLUSIONS: Conventional apparent diffusion coefficient, which measures averaged pathology properties of brain tumors, has compromised accuracy and specificity. 
HDI holds great promise to accurately separate and quantify the tumor cell fraction, the tumor cell packing density, edema, and capillary blood perfusion, thereby leading to an improved microenvironment characterization of primary brain tumors. Larger studies will further establish HDI's clinical value and use for facilitating biopsy planning, treatment evaluation, and noninvasive tumor grading.
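    The conventional ADC that HDI is compared against comes from a monoexponential signal model, S_b = S_0 · exp(−b · ADC). A minimal two-point sketch (this is the baseline measure only, not the HDI method, which resolves multiple diffusion compartments from one scan):

```python
import math

def adc(s0, sb, b=1000.0):
    """Apparent diffusion coefficient from a b=0 signal and one
    diffusion-weighted signal, assuming monoexponential decay."""
    return math.log(s0 / sb) / b

# e.g. 40% signal remaining lost to 60% at b = 1000 s/mm^2
print(round(adc(1.0, 0.6) * 1e3, 4), "x10^-3 mm^2/s")
```

    Because this single coefficient averages over cellularity, edema, and perfusion, it blurs exactly the distinctions the abstract reports HDI can separate.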

    Brain extraction on MRI scans in presence of diffuse glioma: Multi-institutional performance evaluation of deep learning methods and robust modality-agnostic training

    Brain extraction, or skull-stripping, is an essential pre-processing step in neuro-imaging that has a direct impact on the quality of all subsequent processing and analysis steps. It is also a key requirement in multi-institutional collaborations to comply with privacy-preserving regulations. Existing automated methods, including Deep Learning (DL) based methods that have obtained state-of-the-art results in recent years, have primarily targeted brain extraction without considering pathologically-affected brains. Accordingly, they perform sub-optimally when applied on magnetic resonance imaging (MRI) brain scans with apparent pathologies such as brain tumors. Furthermore, existing methods focus on using only T1-weighted MRI scans, even though multi-parametric MRI (mpMRI) scans are routinely acquired for patients with suspected brain tumors. In this study, we present a comprehensive performance evaluation of recent deep learning architectures for brain extraction, training models on mpMRI scans of pathologically-affected brains, with a particular focus on seeking a practically-applicable, low computational footprint approach, generalizable across multiple institutions, further facilitating collaborations. We identified a large retrospective multi-institutional dataset of n=3340 mpMRI brain tumor scans, with manually-inspected and approved gold-standard segmentations, acquired during standard clinical practice under varying acquisition protocols, both from private institutional data and public (TCIA) collections. To facilitate optimal utilization of rich mpMRI data, we further introduce and evaluate a novel “modality-agnostic training” technique that can be applied using any available modality, without need for model retraining. Our results indicate that the modality-agnostic approach obtains accurate results, providing a generic and practical tool for brain extraction on scans with brain tumors.
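    The abstract does not spell out the training mechanics. One plausible reading of "can be applied using any available modality" (a hypothetical sketch, not necessarily the authors' exact scheme) is that each training example feeds the network a single randomly sampled modality, so that at inference time any one modality suffices:

```python
import random

def sample_training_input(mpmri_session):
    """Hypothetical modality-agnostic sampling: pick one available
    modality at random from a session dict of modality -> scan path
    (None marks a missing acquisition)."""
    available = [m for m, scan in mpmri_session.items() if scan is not None]
    modality = random.choice(available)
    return modality, mpmri_session[modality]

# A session missing its post-contrast T1
session = {"t1": "t1.nii.gz", "t1ce": None,
           "t2": "t2.nii.gz", "flair": "flair.nii.gz"}
print(sample_training_input(session)[0])  # one of: t1, t2, flair
```

    Under this reading, a single trained model never needs retraining for a site that acquires a different subset of sequences, which matches the practicality goal stated above.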

    Select Atrophied Regions in Alzheimer disease (SARA): An improved volumetric model for identifying Alzheimer disease dementia

    INTRODUCTION: Volumetric biomarkers for Alzheimer disease (AD) are attractive due to their wide availability and ease of administration, but have traditionally shown lower diagnostic accuracy than measures of neuropathological contributors to AD. Our purpose was to optimize the diagnostic specificity of structural MRIs for AD using quantitative, data-driven techniques. METHODS: This retrospective study assembled several non-overlapping cohorts (total n = 1287) with publicly available data and clinical patients from Barnes-Jewish Hospital (data gathered 1990-2018). The Normal Aging Cohort (n = 383) contained amyloid biomarker negative, cognitively normal (CN) participants, and provided a basis for determining age-related atrophy in other cohorts. The Training (n = 216) and Test (n = 109) Cohorts contained participants with symptomatic AD and CN controls. Classification models were developed in the Training Cohort and compared in the Test Cohort using areas under the receiver operating characteristic curve (AUCs). Additional model comparisons were done in the Clinical Cohort (n = 579), which contained patients who were diagnosed with dementia due to various etiologies in a tertiary care outpatient memory clinic. RESULTS: While the Normal Aging Cohort showed regional age-related atrophy, classification models were not improved by including age as a predictor or by using volumetrics adjusted for age-related atrophy. The optimal model used multiple regions (hippocampal volume, inferior lateral ventricle volume, amygdala volume, entorhinal thickness, and inferior parietal thickness) and was able to separate AD and CN controls in the Test Cohort with an AUC of 0.961. In the Clinical Cohort, this model separated AD from non-AD diagnoses with an AUC of 0.820, an incrementally greater separation of the cohort than by hippocampal volume alone (AUC of 0.801, p = 0.06). Greatest separation was seen for AD vs. frontotemporal dementia and for AD vs. non-neurodegenerative diagnoses. CONCLUSIONS: Volumetric biomarkers distinguished individuals with symptomatic AD from CN controls and other dementia types but were not improved by controlling for normal aging.
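    One common way to express "volumetrics adjusted for age-related atrophy" is a W-score: the residual of an observed regional volume against the value a normal-aging regression predicts for that age, in SD units. A hypothetical sketch (the regression coefficients below are invented for illustration, not values from the study):

```python
def age_adjust(volume, age, slope, intercept, resid_sd):
    """W-score style adjustment: deviation of an observed regional
    volume from its age-expected value, scaled by the residual SD
    of a regression fit on a cognitively normal cohort."""
    expected = slope * age + intercept
    return (volume - expected) / resid_sd

# Invented hippocampal model: ~30 mm^3 lost per year of aging
print(age_adjust(6600.0, age=70, slope=-30.0,
                 intercept=10500.0, resid_sd=600.0))  # → -3.0
```

    The study's finding is notable precisely because this kind of adjustment, despite clear age-related atrophy in the Normal Aging Cohort, did not improve the AD vs. CN classifiers.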

    Gene-SGAN: a method for discovering disease subtypes with imaging and genetic signatures via multi-view weakly-supervised deep clustering

    Disease heterogeneity has been a critical challenge for precision diagnosis and treatment, especially in neurologic and neuropsychiatric diseases. Many diseases can display multiple distinct brain phenotypes across individuals, potentially reflecting disease subtypes that can be captured using MRI and machine learning methods. However, biological interpretability and treatment relevance are limited if the derived subtypes are not associated with genetic drivers or susceptibility factors. Herein, we describe Gene-SGAN, a multi-view, weakly-supervised deep clustering method, which dissects disease heterogeneity by jointly considering phenotypic and genetic data, thereby conferring genetic correlations to the disease subtypes and associated endophenotypic signatures. We first validate the generalizability, interpretability, and robustness of Gene-SGAN in semi-synthetic experiments. We then demonstrate its application to real multi-site datasets from 28,858 individuals, deriving subtypes of Alzheimer's disease and brain endophenotypes associated with hypertension, from MRI and SNP data. Derived brain phenotypes displayed significant differences in neuroanatomical patterns, genetic determinants, biological and clinical biomarkers, indicating potentially distinct underlying neuropathologic processes, genetic drivers, and susceptibility factors. Overall, Gene-SGAN is broadly applicable to disease subtyping and endophenotype discovery, and is herein tested on disease-related, genetically-driven neuroimaging phenotypes.