3D Deep Learning for Anatomical Structure Segmentation in Multiple Imaging Modalities

Abstract

Accurate, automated quantitative segmentation of anatomical structures in radiological scans, such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), can produce significant biomarkers and can be integrated into computer-aided diagnosis (CADx) systems to support the interpretation of medical images from multi-protocol scanners. However, there are serious challenges in developing robust automated segmentation techniques, including high variations in anatomical structure and size, varying image spatial resolutions resulting from different scanner protocols, and the presence of blurring artefacts. This paper presents a novel computing approach for automated organ and muscle segmentation in medical images from multiple modalities by harnessing the advantages of deep learning techniques in a two-part process: (1) a 3D encoder-decoder, Rb-UNet, builds a localisation model, and a 3D Tiramisu network generates a boundary-preserving segmentation model for each target structure; (2) the fully trained Rb-UNet predicts a 3D bounding box encapsulating the target structure of interest, after which the fully trained Tiramisu model performs segmentation to reveal organ or muscle boundaries for every protrusion and indentation. The proposed approach is evaluated on six different datasets, including MRI, Dynamic Contrast Enhanced (DCE) MRI and CT scans targeting the pancreas, liver, kidneys and iliopsoas muscles. We achieve mean Dice similarity coefficients (DSC) that surpass or are comparable with the state-of-the-art, and we demonstrate statistical stability. A qualitative evaluation performed by two independent experts in radiology and radiography verified the preservation of detailed organ and muscle boundaries.
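The two-stage pipeline described above (coarse localisation to a 3D bounding box, then fine boundary segmentation within that region, scored by DSC) can be illustrated with a minimal sketch. The masks and bounding box below are synthetic stand-ins, not outputs of the actual Rb-UNet or Tiramisu networks; only the cropping and Dice computation reflect the described evaluation.

```python
import numpy as np

def crop_to_bbox(volume, bbox):
    """Crop a 3D volume to a bounding box given as (z0, z1, y0, y1, x0, x1)."""
    z0, z1, y0, y1, x0, x1 = bbox
    return volume[z0:z1, y0:y1, x0:x1]

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient (DSC) between two binary 3D masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy example: ground-truth structure occupying a 16x16x16 cube.
gt_mask = np.zeros((32, 32, 32), dtype=bool)
gt_mask[8:24, 8:24, 8:24] = True

# Stage 1 (localisation): a hypothetical predicted bounding box with margin.
bbox = (6, 26, 6, 26, 6, 26)
roi = crop_to_bbox(gt_mask.astype(np.float32), bbox)  # stage 2 would see only this ROI

# Stage 2 (segmentation): a stand-in prediction filling the whole box.
pred_mask = np.zeros_like(gt_mask)
pred_mask[6:26, 6:26, 6:26] = True

print(roi.shape)                                   # (20, 20, 20)
print(round(dice_coefficient(pred_mask, gt_mask), 3))  # 0.677
```

Cropping to the predicted box before segmentation shrinks the input the second network must process, which is the practical motivation for cascading a localiser in front of a boundary-preserving segmenter.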
