50 research outputs found

    Brain Morphometry Estimation: From Hours to Seconds Using Deep Learning.

    Motivation: Brain morphometry from magnetic resonance imaging (MRI) is a promising neuroimaging biomarker for the non-invasive diagnosis and monitoring of neurodegenerative and neurological disorders. Current tools for brain morphometry often come with a high computational burden, making them hard to use in the clinical routine, where time is often an issue. We propose a deep learning-based approach to predict the volumes of anatomically delineated subcortical regions of interest (ROI), and the mean thicknesses and curvatures of cortical parcellations, directly from T1-weighted MRI. Advantages include the timely availability of results while maintaining clinically relevant accuracy. Materials and Methods: An anonymized dataset of 574 subjects (443 healthy controls and 131 patients with epilepsy) was used for the supervised training of a convolutional neural network (CNN). A silver-standard ground truth was generated with FreeSurfer 6.0. Results: The CNN predicts a total of 165 morphometric measures directly from raw MR images. Analysis of the results using intraclass correlation coefficients showed, in general, good agreement with the FreeSurfer-generated ground truth, with some regions nearly reaching human inter-rater performance (ICC > 0.75). Cortical thicknesses predicted by the CNN showed cross-sectional annual age-related gray matter atrophy rates, both globally (thickness change of -0.004 mm/year) and regionally, in agreement with the literature. A statistical test to dichotomize patients with epilepsy from healthy controls revealed effect sizes for structures affected across all subtypes similar to those reported in a large-scale epilepsy study. Conclusions: We demonstrate the general feasibility of using deep learning to estimate human brain morphometry directly from T1-weighted MRI within seconds. A comparison with other publications shows accuracies of comparable magnitude for subcortical volumes and cortical thicknesses.
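    The agreement analysis above relies on intraclass correlation coefficients. Below is a minimal sketch of a two-way random-effects, absolute-agreement ICC(2,1); the abstract does not state which ICC variant was used, so this choice is an assumption for illustration:

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """Two-way random-effects, absolute-agreement ICC(2,1).

    ratings: (n_subjects, k_raters) array, e.g. column 0 = FreeSurfer
    volumes, column 1 = CNN-predicted volumes for the same region.
    """
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Mean squares from the two-way ANOVA decomposition
    ms_rows = k * ((row_means - grand_mean) ** 2).sum() / (n - 1)
    ms_cols = n * ((col_means - grand_mean) ** 2).sum() / (k - 1)
    resid = ratings - row_means[:, None] - col_means[None, :] + grand_mean
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
```

    Stacking the silver-standard and CNN measurements as two "raters" per subject, one region at a time, yields per-region agreement scores of the kind reported above.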

    A Quantitative Imaging Biomarker Supporting Radiological Assessment of Hippocampal Sclerosis Derived From Deep Learning-Based Segmentation of T1w-MRI.

    Purpose: Hippocampal volumetry is an important biomarker to quantify atrophy in patients with mesial temporal lobe epilepsy. We investigate the sensitivity of automated segmentation methods to support radiological assessments of hippocampal sclerosis (HS). Results from FreeSurfer and FSL-FIRST are contrasted with a deep learning (DL)-based segmentation method. Materials and Methods: We used T1-weighted MRI scans from 105 patients with epilepsy and 354 healthy controls. FreeSurfer, FSL, and a DL-based method were applied for brain anatomy segmentation. We calculated effect sizes (Cohen's d) between left/right HS and healthy controls based on the asymmetry of hippocampal volumes. Additionally, we derived 14 shape features from the segmentations and determined the most discriminating feature for identifying patients with hippocampal sclerosis with a support vector machine (SVM). Results: Deep learning-based segmentation of the hippocampus was the most sensitive in detecting HS. The effect sizes of the volume asymmetries were larger with the DL-based segmentations (HS left d = -4.2, right d = 4.2) than with FreeSurfer (left d = -3.1, right d = 3.7) and FSL (left d = -2.3, right d = 2.5). For the classification based on the shape features, the surface-to-volume ratio was identified as the most important feature. Its absolute asymmetry yielded a higher area under the curve (AUC) for the deep learning-based segmentation (AUC = 0.87) than for FreeSurfer (AUC = 0.85) and FSL (AUC = 0.78) in dichotomizing HS from other epilepsy cases. The robustness estimated from repeated scans was statistically significantly higher with DL than with all other methods. Conclusion: Our findings suggest that deep learning-based segmentation methods yield a higher sensitivity to quantify hippocampal sclerosis than atlas-based methods and that the derived shape features are more robust. We propose an increased asymmetry in the surface-to-volume ratio of the hippocampus as an easy-to-interpret quantitative imaging biomarker for HS.
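    The asymmetry-based effect sizes above can be sketched as follows. The asymmetry index (left minus right, normalized by their mean) is a common convention and an assumption here, since the abstract does not spell out the exact definition:

```python
import numpy as np

def asymmetry_index(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Volume asymmetry: (L - R) / mean(L, R); negative when left is smaller."""
    return (left - right) / ((left + right) / 2.0)

def cohens_d(group: np.ndarray, controls: np.ndarray) -> float:
    """Cohen's d with pooled standard deviation."""
    n1, n2 = len(group), len(controls)
    pooled_var = ((n1 - 1) * group.var(ddof=1)
                  + (n2 - 1) * controls.var(ddof=1)) / (n1 + n2 - 2)
    return float((group.mean() - controls.mean()) / np.sqrt(pooled_var))
```

    With this sign convention, a left-sided HS (atrophic left hippocampus) yields a negative asymmetry index and hence a negative d against controls, matching the sign pattern of the reported effect sizes.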

    Growing importance of brain morphometry analysis in the clinical routine: The hidden impact of MR sequence parameters.

    Volumetric assessment based on structural MRI is increasingly recognized as an auxiliary tool to visual reading, also in examinations acquired in the clinical routine. However, MRI acquisition parameters can significantly influence these measures, which must be considered when interpreting the results on an individual patient level. This Technical Note demonstrates the problem. Using data from a dedicated experiment, we show the influence of two crucial sequence parameters on the GM/WM contrast and their impact on the measured volumes. A simulated contrast derived from the acquisition parameters TI and TR may serve as a surrogate and is highly correlated (r = 0.96) with the measured contrast.
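    A TI/TR-derived contrast surrogate can be illustrated with the textbook inversion-recovery signal equation. The nominal T1 values and the contrast definition below are assumptions for illustration only, not the paper's exact model:

```python
import math

T1_GM_MS = 1300.0  # nominal gray-matter T1 at 3 T (assumed)
T1_WM_MS = 830.0   # nominal white-matter T1 at 3 T (assumed)

def ir_signal(t1_ms: float, ti_ms: float, tr_ms: float) -> float:
    """Longitudinal magnetization of a simple inversion-recovery sequence."""
    return 1.0 - 2.0 * math.exp(-ti_ms / t1_ms) + math.exp(-tr_ms / t1_ms)

def simulated_contrast(ti_ms: float, tr_ms: float) -> float:
    """Surrogate GM/WM contrast derived only from TI and TR."""
    s_wm = ir_signal(T1_WM_MS, ti_ms, tr_ms)
    s_gm = ir_signal(T1_GM_MS, ti_ms, tr_ms)
    return abs(s_wm - s_gm)
```

    Sweeping TI at a fixed TR shows how strongly the achievable GM/WM contrast, and thus the downstream volume estimates, depends on these two parameters.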

    Hippocampal volume in patients with bilateral and unilateral peripheral vestibular dysfunction.

    Previous studies have found that peripheral vestibular dysfunction is associated with altered volumes in different brain structures, especially in the hippocampus. However, published evidence is conflicting. Based on previous findings, we compared the volumes of the hippocampus, as well as of the supramarginal, superior temporal, and postcentral gyri, in a sample of 55 patients with different conditions of peripheral vestibular dysfunction (bilateral, chronic unilateral, acute unilateral) to 39 age- and sex-matched healthy controls. In addition, we explored deviations in gray-matter volumes in hippocampal subfields. We also analysed correlations between morphometric data and visuo-spatial performance. Patients with vestibular dysfunction did not differ in total hippocampal volume from healthy controls. However, a reduced volume in the right presubiculum of the hippocampus and the left supramarginal gyrus was observed in patients with chronic and acute unilateral vestibular dysfunction, but not in patients with bilateral vestibular dysfunction. No association of altered volumes with visuo-spatial performance was found. An asymmetric vestibular input due to unilateral vestibular dysfunction might lead to reduced volumes in central brain structures involved in vestibular processing.

    Large-scale transient peri-ictal perfusion magnetic resonance imaging abnormalities detected by quantitative image analysis.

    Epileptic seizures require a rapid and safe diagnosis to minimize the time from onset to adequate treatment. Some epileptic seizures can be diagnosed clinically with the respective expertise. For more subtle seizures, imaging is mandatory to rule out treatable structural lesions and potentially life-threatening conditions. Perfusion abnormalities associated with epileptic seizures have been reported in CT and MRI studies. However, the interpretation of transient peri-ictal MRI abnormalities is routinely based on qualitative visual analysis and is therefore reader dependent. In this retrospective study, we investigated the diagnostic yield of visual analysis of perfusion MRI during ictal and postictal states based on comparative expert ratings in 51 patients. We further propose an automated semi-quantitative method for perfusion analysis to determine perfusion abnormalities observed during ictal and postictal MRI using dynamic susceptibility contrast MRI, which we validated on a subcohort of 27 patients. The semi-quantitative method parcellates 3D T1-weighted images into 32 standardized cortical regions of interest and subcortical grey matter structures, based on a recently proposed deep learning-based method for brain anatomy segmentation, cortex parcellation, and direct cortical thickness estimation. Standard perfusion maps from a Food and Drug Administration-approved image analysis tool (Olea Sphere 3.0) were co-registered and investigated for region-wise differences between ictal and postictal states. These results were compared against the visual analysis of two readers experienced in functional image analysis in epilepsy. In the ictal group, cortical hyperperfusion was present in 17/18 patients (94% sensitivity), whereas in the postictal cohort, cortical hypoperfusion was present in only 9/33 (27%) patients, while 24/33 (73%) showed normal perfusion. The (semi-)quantitative dynamic susceptibility contrast MRI perfusion analysis indicated increased thalamic perfusion in the ictal cohort and hypoperfusion in the postictal cohort. Visual ratings between expert readers agreed well on the patient level, but agreement was low for subregions of the brain. The asymmetry of the automated image analysis correlated significantly with the visual consensus ratings of both readers. We conclude that expert analysis of dynamic susceptibility contrast MRI effectively discriminates ictal versus postictal perfusion patterns. Automated perfusion evaluation showed favourable interpretability and correlated well with the classification of the visual ratings. It may therefore be employed for high-throughput, large-scale perfusion analysis in extended cohorts, especially for research questions with limited expert rater capacity.
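    The region-wise asymmetry used by the automated analysis can be sketched as a left/right comparison of mean perfusion per paired ROI. The abstract does not give the exact formula, so the index below is an assumption for illustration:

```python
import numpy as np

def regional_asymmetry(perfusion: np.ndarray, labels: np.ndarray,
                       roi_pairs: dict) -> dict:
    """Left/right asymmetry of mean perfusion per paired ROI.

    perfusion: voxel-wise perfusion map (e.g. CBF from DSC-MRI)
    labels:    integer parcellation of the same shape
    roi_pairs: ROI name -> (left label id, right label id)
    """
    result = {}
    for name, (left_id, right_id) in roi_pairs.items():
        left = perfusion[labels == left_id].mean()
        right = perfusion[labels == right_id].mean()
        result[name] = (left - right) / ((left + right) / 2.0)
    return result
```

    Applied to co-registered perfusion maps and the 32-region parcellation, this produces one asymmetry value per paired region, which can then be compared between ictal and postictal states or against visual ratings.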

    QU-BraTS: MICCAI BraTS 2020 challenge on quantifying uncertainty in brain tumor segmentation -- analysis of ranking metrics and benchmarking results

    Deep learning (DL) models have provided state-of-the-art performance in a wide variety of medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder the translation of DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way towards clinical translation. Recently, a number of uncertainty estimation methods have been introduced for DL medical image segmentation tasks. Developing metrics to evaluate and compare the performance of uncertainty measures will assist the end-user in making more informed decisions. In this study, we explore and evaluate a metric developed during the BraTS 2019-2020 task on uncertainty quantification (QU-BraTS), designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This metric (1) rewards uncertainty estimates that produce high confidence in correct assertions and low confidence in incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent participating teams of QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, and hence highlight the need for uncertainty quantification in medical image analyses.
    Finally, in favor of transparency and reproducibility, our evaluation code is made publicly available at https://github.com/RagMeh11/QU-BraTS. Research reported in this publication was partly supported by the Informatics Technology for Cancer Research (ITCR) program of the National Cancer Institute (NCI) of the National Institutes of Health (NIH), under award numbers NIH/NCI/ITCR:U01CA242871 and NIH/NCI/ITCR:U24CA189523, and partly by the National Institute of Neurological Disorders and Stroke (NINDS) of the NIH, under award number NIH/NINDS:R01NS042645. The preprint (arXiv:2112.10074) is signed by 92 authors.
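    The core idea of the metric (retain voxels whose uncertainty falls below a threshold, reward high Dice on the retained voxels, and penalize filtering out correct predictions) can be sketched as follows. This is a simplified illustration of the principle, not the official QU-BraTS evaluation code:

```python
import numpy as np

def filtered_dice_curve(pred, truth, uncertainty, thresholds):
    """Dice on voxels whose uncertainty is at or below each threshold,
    plus the fraction of correctly predicted voxels that were filtered out.

    pred, truth: boolean arrays; uncertainty: same-shaped floats in [0, 1].
    """
    correct = pred == truth
    dices, filtered_correct = [], []
    for tau in thresholds:
        keep = uncertainty <= tau              # confident voxels only
        tp = (pred & truth & keep).sum()
        denom = (pred & keep).sum() + (truth & keep).sum()
        dices.append(2.0 * tp / denom if denom else 1.0)
        filtered_correct.append((correct & ~keep).mean())
    return np.array(dices), np.array(filtered_correct)
```

    A well-behaved uncertainty measure keeps the Dice curve high as the threshold drops while filtering out few correct voxels; the published score aggregates such curves over thresholds to rank the participating teams.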

    Reliable brain morphometry from contrast-enhanced T1w-MRI in patients with multiple sclerosis.

    Brain morphometry is usually based on non-enhanced (pre-contrast) T1-weighted MRI. However, such dedicated protocols are sometimes missing in clinical examinations. Instead, an image with a contrast agent is often available. Existing tools such as FreeSurfer yield unreliable results when applied to contrast-enhanced (CE) images. Consequently, these acquisitions are excluded from retrospective morphometry studies, which reduces the sample size. We hypothesize that deep learning (DL)-based morphometry methods can extract morphometric measures also from contrast-enhanced MRI. We have extended DL+DiReCT to cope with contrast-enhanced MRI. Training data for our DL-based model were enriched with non-enhanced and CE image pairs from the same session. The segmentations were derived with FreeSurfer from the non-enhanced image and used as ground truth for the co-registered CE image. A longitudinal dataset of patients with multiple sclerosis (MS), comprising relapsing-remitting (RRMS) and primary progressive (PPMS) subgroups, was used for the evaluation. Global and regional cortical thicknesses derived from non-enhanced and CE images were compared with results from FreeSurfer. Correlation coefficients of global mean cortical thickness between non-enhanced and CE images were significantly larger with DL+DiReCT (r = 0.92) than with FreeSurfer (r = 0.75). When comparing the longitudinal atrophy rates between the two MS subgroups, the effect sizes between PPMS and RRMS were higher with DL+DiReCT for both non-enhanced (d = -0.304) and CE images (d = -0.169) than for FreeSurfer (non-enhanced d = -0.111, CE d = 0.085). In conclusion, brain morphometry can be derived reliably from contrast-enhanced MRI using DL-based morphometry tools, making additional cases available for analysis and for potential future diagnostic morphometry tools.