
    A 3D Coarse-to-Fine Framework for Volumetric Medical Image Segmentation

    In this paper, we adopt 3D Convolutional Neural Networks to segment volumetric medical images. Although deep neural networks have been proven to be very effective on many 2D vision tasks, it is still challenging to apply them to 3D tasks due to the limited amount of annotated 3D data and limited computational resources. We propose a novel 3D-based coarse-to-fine framework to effectively and efficiently tackle these challenges. The proposed 3D-based framework outperforms its 2D counterpart by a large margin since it can leverage the rich spatial information along all three axes. We conduct experiments on two datasets which include healthy and pathological pancreases respectively, and achieve the current state-of-the-art in terms of the Dice-Sørensen Coefficient (DSC). On the NIH pancreas segmentation dataset, we outperform the previous best by an average of over 2%, and the worst case is improved by 7% to reach almost 70%, which indicates the reliability of our framework in clinical applications.
    Comment: 9 pages, 4 figures, Accepted to 3D
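    The coarse-to-fine idea described above can be sketched as follows. This is not the authors' exact pipeline, only the hand-off between the two stages: the coarse stage's binary mask defines a padded bounding box that becomes the fine stage's input region. The function name and the margin value are illustrative assumptions.

    ```python
    import numpy as np

    def crop_to_coarse_mask(volume, coarse_mask, margin=8):
        """Crop a CT volume to the bounding box of a coarse segmentation mask,
        padded by `margin` voxels per side and clipped to the volume bounds.
        Returns the cropped sub-volume and its offset in the original volume."""
        coords = np.argwhere(coarse_mask > 0)
        if coords.size == 0:
            # Coarse stage found nothing: fall back to the full volume.
            return volume, (0, 0, 0)
        lo = np.maximum(coords.min(axis=0) - margin, 0)
        hi = np.minimum(coords.max(axis=0) + margin + 1, volume.shape)
        crop = volume[tuple(slice(l, h) for l, h in zip(lo, hi))]
        return crop, tuple(int(v) for v in lo)
    ```

    The fine-stage network then runs on the much smaller crop, which is what makes the two-stage scheme tractable under the memory limits of 3D CNNs.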

    Deep Learning for Volumetric Medical Image Segmentation

    Over the past few decades, medical imaging techniques, e.g., computed tomography (CT) and positron emission tomography (PET), have been widely used to improve the diagnosis, prognosis, and treatment of diseases. However, reading medical images and making diagnoses or treatment plans requires well-trained medical specialists, which is labor-intensive, time-consuming, costly and error-prone. With the emergence of deep learning, doctors and researchers have started to benefit from medical image analysis in various applications, e.g., medical image registration, classification, detection and segmentation. Among these tasks, segmentation is the most common application of deep learning to medical imaging. How to improve medical diagnosis by advancing segmentation in computer-aided diagnosis systems has become an active research topic. In this dissertation, we address this topic in the following aspects. (i) We propose a 3D-based coarse-to-fine framework to effectively and efficiently tackle the challenges of the limited amount of annotated 3D data and limited computational resources in volumetric medical image segmentation. (ii) We extend the 3D coarse-to-fine framework to a multi-scale one for early detection of small but clinically important pancreatic ductal adenocarcinoma (PDAC) tumors, and provide radiologists with interpretable abnormality locations via segmentation-for-classification. (iii) We extend segmentation-for-classification to screen for pancreatic neuroendocrine tumors (PNETs) by incorporating dual-phase information and the dilated pancreatic duct, which is regarded as a sign of high risk for pancreatic cancer. (iv) Going further, we investigate the mainstream methodology in the segmentation area and explore AutoML in the medical imaging field to automatically search for neural network architectures tailored to the segmentation task, which further advances the medical image segmentation field.
(v) Moving beyond pancreatic tumors, we are the first to address the clinically critical task of detecting, identifying and characterizing suspicious metastasized lymph nodes (LNs) by proposing a 3D distance stratification strategy that simulates and simplifies, in a divide-and-conquer manner, the high-level reasoning protocols conducted by radiation oncologists. (vi) The 3D distance stratification strategy is upgraded by our proposed multi-branch detection-by-segmentation, which further advances the detection, identification and segmentation of metastasis-suspicious LNs.
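The 3D distance stratification idea in (v) can be illustrated with a minimal sketch: lymph-node candidates are binned by their distance (in millimetres, using the scan's voxel spacing) to the nearest tumor voxel, so that each distance band can be handled separately, divide-and-conquer style. The thresholds, function name, and use of node centroids here are illustrative assumptions, not the dissertation's actual protocol.

```python
import numpy as np

def stratify_by_distance(node_centroids, tumor_mask,
                         spacing=(1.0, 1.0, 1.0),
                         thresh_mm=(30.0, 70.0)):
    """Assign each lymph-node centroid (voxel coordinates) to a distance band:
    0 = proximal, 1 = middle, 2 = distal, measured in mm to the nearest
    tumor voxel under the given voxel spacing."""
    spacing = np.asarray(spacing)
    tumor_mm = np.argwhere(tumor_mask > 0) * spacing  # tumor voxels in mm
    bands = []
    for c in node_centroids:
        d = np.linalg.norm(tumor_mm - np.asarray(c) * spacing, axis=1).min()
        bands.append(0 if d < thresh_mm[0] else (1 if d < thresh_mm[1] else 2))
    return bands
```

Each band can then feed its own detector, mirroring how oncologists reason differently about nodes near versus far from the primary tumor.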

    Recurrent Saliency Transformation Network: Incorporating Multi-Stage Visual Cues for Small Organ Segmentation

    We aim at segmenting small organs (e.g., the pancreas) from abdominal CT scans. As the target often occupies a relatively small region of the input image, deep neural networks can be easily confused by the complex and variable background. To alleviate this, researchers proposed a coarse-to-fine approach, which uses the prediction from the first (coarse) stage to indicate a smaller input region for the second (fine) stage. Despite its effectiveness, this algorithm dealt with the two stages individually, which lacked a globally optimized energy function and limited its ability to incorporate multi-stage visual cues. Missing contextual information led to unsatisfactory convergence across iterations, and the fine stage sometimes produced even lower segmentation accuracy than the coarse stage. This paper presents a Recurrent Saliency Transformation Network. The key innovation is a saliency transformation module, which repeatedly converts the segmentation probability map from the previous iteration into spatial weights and applies these weights to the current iteration. This brings two-fold benefits. In training, it allows joint optimization over the deep networks dealing with different input scales. In testing, it propagates multi-stage visual information throughout the iterations to improve segmentation accuracy. Experiments on the NIH pancreas segmentation dataset demonstrate state-of-the-art accuracy, outperforming the previous best by an average of over 2%. Much higher accuracies are also reported on several small organs in a larger dataset collected by ourselves. In addition, our approach enjoys better convergence properties, making it more efficient and reliable in practice.
    Comment: Accepted to CVPR 2018 (10 pages, 6 figures)
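    The saliency-transformation loop can be illustrated with a minimal numerical sketch. In the paper this transform is learned end-to-end with convolutions; here it is reduced to its core mechanic, with the floor value `eps` and all function names as illustrative assumptions: the previous iteration's probability map becomes spatial weights that re-focus the next iteration's input, while `eps` keeps enough background context to avoid erasing everything outside the current prediction.

    ```python
    import numpy as np

    def saliency_weighted_input(image, prev_prob, eps=0.1):
        """Convert the previous probability map into spatial weights in
        [eps, 1] and apply them to the current input image."""
        weights = eps + (1.0 - eps) * prev_prob
        return image * weights

    def refine(image, segment_fn, n_iters=3, eps=0.1):
        """Run the segmenter repeatedly, re-weighting its input each
        iteration with the previous iteration's probability map."""
        prob = np.ones_like(image)  # first pass: uniform saliency, full image
        for _ in range(n_iters):
            prob = segment_fn(saliency_weighted_input(image, prob, eps))
        return prob
    ```

    Because the same weighting step links every iteration, gradients can flow across iterations during training, which is what allows the joint optimization the abstract describes.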