
    NiftyNet: a deep-learning platform for medical imaging

    Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this application requires substantial implementation effort. As a result, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. NiftyNet provides a modular deep-learning pipeline for a range of medical imaging applications, including segmentation, regression, image generation and representation learning. Components of the NiftyNet pipeline, including data loading, data augmentation, network architectures, loss functions and evaluation metrics, are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D and 3D images and computational graphs by default. We present three illustrative medical image analysis applications built using NiftyNet: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses. NiftyNet enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or to extend the platform to new applications.
    Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6 figures; update includes additional applications, updated author list and formatting for journal submission.
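
    As a rough illustration of the medical-imaging-specific components such a platform provides, the following is a minimal soft Dice loss in TensorFlow (the framework NiftyNet builds on). The function name and exact formulation here are illustrative assumptions, not NiftyNet's actual API.

```python
import tensorflow as tf

def soft_dice_loss(y_true, y_pred, eps=1e-6):
    """Soft Dice loss for one-hot labels and softmax predictions.

    y_true, y_pred: tensors of shape (batch, x, y, z, num_classes).
    """
    spatial_axes = tuple(range(1, len(y_pred.shape) - 1))  # sum over x, y, z only
    intersection = tf.reduce_sum(y_true * y_pred, axis=spatial_axes)
    denominator = tf.reduce_sum(y_true + y_pred, axis=spatial_axes)
    dice_per_class = (2.0 * intersection + eps) / (denominator + eps)
    # Mean over batch and classes; subtract from 1 so that lower is better.
    return 1.0 - tf.reduce_mean(dice_per_class)
```

    Soft Dice is a common choice for medical segmentation losses because it is robust to the strong foreground/background class imbalance typical of organ and lesion masks.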

    Medical Image Segmentation Review: The success of U-Net

    Automatic medical image segmentation is a crucial topic in the medical domain and a critical component of the computer-aided diagnosis paradigm. U-Net is the most widespread image segmentation architecture due to its flexibility, optimized modular design, and success across all medical imaging modalities. Over the years, the U-Net model has attracted tremendous attention from academic and industrial researchers, and several extensions of the network have been proposed to address the scale and complexity of medical tasks. Understanding the deficiencies of the naive U-Net model is the first step for vendors seeking the proper U-Net variant for their use case, and having a compendium of the different variants in one place makes it easier for builders to identify the relevant research; for ML researchers, it clarifies the challenges that biological tasks pose to the model. To address this, we discuss the practical aspects of the U-Net model and suggest a taxonomy to categorize each network variant. Moreover, to measure the performance of these strategies in clinical applications, we propose fair evaluations of some unique and prominent designs on well-known datasets. We provide a comprehensive implementation library with trained models for future research. In addition, for ease of future studies, we have compiled an online list of U-Net papers with their official implementations where available. All information is gathered in the https://github.com/NITR098/Awesome-U-Net repository.
    Comment: Submitted to the IEEE Transactions on Pattern Analysis and Machine Intelligence journal.
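
    For readers unfamiliar with the architecture this review centers on, below is a minimal 2D U-Net sketch in Keras: an encoder of repeated convolution blocks with downsampling, a bottleneck, and a decoder that upsamples and concatenates the matching encoder features via skip connections. The depths and filter counts are illustrative assumptions, not any specific variant from the review.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 1), num_classes=2):
    inputs = tf.keras.Input(shape=input_shape)
    # Encoder: conv blocks with max-pooling, saving features for skips.
    skips, x = [], inputs
    for f in (64, 128, 256):
        x = conv_block(x, f)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, 512)  # bottleneck
    # Decoder: upsample, concatenate the matching encoder features, convolve.
    for f, skip in zip((256, 128, 64), reversed(skips)):
        x = layers.Conv2DTranspose(f, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])
        x = conv_block(x, f)
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)
```

    The skip connections are what the review credits for much of U-Net's success: they let the decoder recover fine spatial detail that pooling in the encoder would otherwise discard.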

    Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries

    This two-volume set, LNCS 12962 and 12963, constitutes the thoroughly refereed proceedings of the 7th International MICCAI Brainlesion Workshop, BrainLes 2021, as well as the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge, the Federated Tumor Segmentation (FeTS) Challenge, the Cross-Modality Domain Adaptation (CrossMoDA) Challenge, and the challenge on Quantification of Uncertainties in Biomedical Image Quantification (QUBIQ). These were held jointly at the 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021, in September 2021. The 91 revised papers presented in these volumes were selected from 151 submissions. Due to the COVID-19 pandemic, the conference was held virtually. This is an open access book.

    TISS-net: Brain tumor image synthesis and segmentation using cascaded dual-task networks and error-prediction consistency

    Accurate segmentation of brain tumors from medical images is important for diagnosis and treatment planning, and it often requires multi-modal or contrast-enhanced images. In practice, however, some modalities for a given patient may be absent. Synthesizing the missing modality has the potential to fill this gap and achieve high segmentation performance. Existing methods often treat the synthesis and segmentation tasks separately, or consider them jointly but without effective regularization of the complex joint model, leading to limited performance. We propose a novel brain Tumor Image Synthesis and Segmentation network (TISS-Net) that obtains the synthesized target modality and the segmentation of brain tumors end-to-end with high performance. First, we propose a dual-task-regularized generator that simultaneously produces a synthesized target modality and a coarse segmentation, leveraging a tumor-aware synthesis loss with perceptibility regularization to minimize the high-level semantic domain gap between synthesized and real target modalities. Based on the synthesized image and the coarse segmentation, we further propose a dual-task segmentor that simultaneously predicts a refined segmentation and the errors in the coarse segmentation, with a consistency between these two predictions introduced for regularization. TISS-Net was validated on two applications: synthesizing FLAIR images for whole glioma segmentation, and synthesizing contrast-enhanced T1 images for vestibular schwannoma segmentation. Experimental results showed that TISS-Net substantially improved segmentation accuracy compared with direct segmentation from the available modalities, and that it outperformed state-of-the-art image-synthesis-based segmentation methods.
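
    One plausible way to read the training objective described above is as a weighted sum of a synthesis term, two segmentation terms, and a consistency term tying the predicted error map to the disagreement between the coarse and refined segmentations. The sketch below is that reading only; the weights, names, and exact loss formulations are assumptions, not the paper's definitions.

```python
import tensorflow as tf

def joint_loss(synth, real_target, coarse_seg, refined_seg, pred_error, gt_seg,
               w_synth=1.0, w_coarse=0.5, w_refine=1.0, w_consist=0.1):
    """All inputs are tensors of matching shape with values in [0, 1]."""
    bce = tf.keras.losses.BinaryCrossentropy()
    l_synth = tf.reduce_mean(tf.abs(synth - real_target))  # image synthesis (L1)
    l_coarse = bce(gt_seg, coarse_seg)                     # coarse segmentation
    l_refine = bce(gt_seg, refined_seg)                    # refined segmentation
    # Consistency: the predicted error map should match where the refined
    # segmentation actually disagrees with the coarse one.
    actual_error = tf.abs(refined_seg - coarse_seg)
    l_consist = tf.reduce_mean(tf.square(pred_error - actual_error))
    return (w_synth * l_synth + w_coarse * l_coarse
            + w_refine * l_refine + w_consist * l_consist)
```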

    QU-BraTS: MICCAI BraTS 2020 challenge on quantifying uncertainty in brain tumor segmentation -- analysis of ranking metrics and benchmarking results

    Deep learning (DL) models have provided state-of-the-art performance in a wide variety of medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder the translation of DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way towards clinical translation. Recently, a number of uncertainty estimation methods have been introduced for DL medical image segmentation tasks. Developing metrics to evaluate and compare the performance of uncertainty measures will assist the end-user in making more informed decisions. In this study, we explore and evaluate a metric developed during the BraTS 2019-2020 task on uncertainty quantification (QU-BraTS), designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This metric (1) rewards uncertainty estimates that assign high confidence to correct assertions and low confidence to incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent participating teams of QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, and hence highlight the need for uncertainty quantification in medical image analyses. Finally, in favor of transparency and reproducibility, our evaluation code is made publicly available at https://github.com/RagMeh11/QU-BraTS. Research reported in this publication was partly supported by the Informatics Technology for Cancer Research (ITCR) program of the National Cancer Institute (NCI) of the National Institutes of Health (NIH), under award numbers NIH/NCI/ITCR:U01CA242871 and NIH/NCI/ITCR:U24CA189523. It was also partly supported by the National Institute of Neurological Disorders and Stroke (NINDS) of the NIH, under award number NIH/NINDS:R01NS042645.
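
    The filtering idea behind such metrics can be sketched in a few lines: sweep an uncertainty threshold, keep only voxels whose uncertainty falls below it, and track both the Dice score on the retained voxels and the fraction of correct predictions that get filtered away. This is a deliberately simplified sketch, not the official QU-BraTS score; see the linked repository for the exact evaluation.

```python
import numpy as np

def dice(pred, gt):
    """Dice overlap between two boolean arrays."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / max(pred.sum() + gt.sum(), 1)

def sweep_uncertainty(pred, gt, uncertainty, thresholds=(0.25, 0.5, 0.75, 1.0)):
    """pred, gt: boolean masks; uncertainty: per-voxel values in [0, 1].

    Returns (threshold, dice_on_retained, fraction_of_correct_filtered) rows.
    """
    correct = (pred == gt)
    rows = []
    for t in thresholds:
        keep = uncertainty <= t  # retain only voxels the model is confident about
        d = dice(pred[keep], gt[keep])
        # A good uncertainty measure filters out mistakes, not correct voxels.
        frac_correct_filtered = (correct & ~keep).sum() / max(correct.sum(), 1)
        rows.append((t, d, frac_correct_filtered))
    return rows
```

    A well-calibrated uncertainty measure makes Dice rise as the threshold tightens while filtering away few correct voxels, which is exactly the trade-off the metric's reward and penalty terms capture.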