
    MDS-Net: A Model-Driven Stack-Based Fully Convolutional Network for Pancreas Segmentation

    The irregular geometry and high inter-slice variability of the human pancreas in computed tomography (CT) scans make accurate segmentation of this crucial organ a challenging task for existing data-driven deep learning methods. To address this problem, we present MDS-Net, a novel model-driven stack-based fully convolutional network with a bi-directional convolutional long short-term memory network for pancreas segmentation. The MDS-Net cost function combines a data approximation term and a prior-knowledge regularization term with a stack scheme for capturing and fusing two-dimensional (2D) and local three-dimensional (3D) context information. Specifically, 3D CT scans are divided into multiple stacks, and each multi-slice stack serves as the basic unit for network training and for modeling the local spatial context. To highlight the importance of single slices in segmentation, the inter-slice relationships within the stack data are also incorporated into the MDS-Net framework. To implement this model-driven method, we create a stack-based U-Net architecture and derive its back-propagation procedure for end-to-end training. Furthermore, a bi-directional convolutional long short-term memory (BiCLSTM) network integrates information from the slices above and below, ensuring consistency between adjacent CT slices and within each stack. Finally, extensive quantitative assessments on the NIH Pancreas-CT dataset demonstrate that MDS-Net achieves higher pancreatic segmentation accuracy and reliability than other state-of-the-art methods.
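
    The BiCLSTM fusion step can be made concrete. Below is a minimal sketch, assuming PyTorch, of a bi-directional ConvLSTM that runs over the slice axis of a stack of per-slice feature maps and sums the two directions; module names and sizes are illustrative, not the authors' implementation.

```python
# Hypothetical sketch: bi-directional ConvLSTM over the slice axis of a stack,
# fusing upper- and lower-slice context as in the MDS-Net description.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, channels, hidden, kernel_size=3):
        super().__init__()
        # One convolution produces all four gates at once.
        self.gates = nn.Conv2d(channels + hidden, 4 * hidden,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = f.sigmoid() * c + i.sigmoid() * g.tanh()
        h = o.sigmoid() * c.tanh()
        return h, c

class BiConvLSTM(nn.Module):
    """Runs a ConvLSTM over the slices in both directions and sums the results."""
    def __init__(self, channels, hidden):
        super().__init__()
        self.fwd = ConvLSTMCell(channels, hidden)
        self.bwd = ConvLSTMCell(channels, hidden)
        self.hidden = hidden

    def _run(self, cell, seq):                 # seq: (batch, slices, C, H, W)
        b, _, _, hgt, wid = seq.shape
        h = seq.new_zeros(b, self.hidden, hgt, wid)
        c = seq.new_zeros(b, self.hidden, hgt, wid)
        outs = []
        for t in range(seq.shape[1]):
            h, c = cell(seq[:, t], (h, c))
            outs.append(h)
        return torch.stack(outs, dim=1)

    def forward(self, seq):
        return self._run(self.fwd, seq) + self._run(self.bwd, seq.flip(1)).flip(1)
```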

    A deep level set method for image segmentation

    This paper proposes a novel image segmentation approach that integrates fully convolutional networks (FCNs) with a level set model. Compared with an FCN, the integrated method can incorporate smoothing and prior information to achieve an accurate segmentation. Furthermore, rather than using the level set model as a post-processing tool, we integrate it into the training phase to fine-tune the FCN. This allows the use of unlabeled data during training in a semi-supervised setting. Using two types of medical imaging data (liver CT and left ventricle MRI data), we show that the integrated method achieves good performance even when little training data is available, outperforming the FCN or the level set model alone.
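
    To illustrate how a level-set energy can enter the training objective, here is a minimal sketch, assuming PyTorch, of a Chan-Vese-style loss with a region (data) term and a length (smoothness) term computed from the predicted foreground probability. It is an illustration in the spirit of the paper, not its exact loss.

```python
# Hypothetical sketch: level-set style energy used as a training loss term.
import torch

def level_set_loss(prob, image, lam=1e-4):
    """prob: (B,1,H,W) sigmoid output of the FCN; image: (B,1,H,W) intensities."""
    # Region term: encourage homogeneous intensities inside and outside the mask.
    c_in = (prob * image).sum() / prob.sum().clamp(min=1e-6)
    c_out = ((1 - prob) * image).sum() / (1 - prob).sum().clamp(min=1e-6)
    region = (prob * (image - c_in) ** 2
              + (1 - prob) * (image - c_out) ** 2).mean()
    # Length term: total variation of the soft mask penalizes ragged contours.
    tv = ((prob[:, :, 1:, :] - prob[:, :, :-1, :]).abs().mean()
          + (prob[:, :, :, 1:] - prob[:, :, :, :-1]).abs().mean())
    return region + lam * tv
```

    Neither term needs ground-truth labels, which is what lets unlabeled images contribute to training in the semi-supervised setting described above.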

    Isointense Infant Brain Segmentation with a Hyper-dense Connected Convolutional Neural Network

    Neonatal brain segmentation in magnetic resonance (MR) imaging is a challenging problem due to poor image quality and low contrast between white and gray matter regions. Most existing approaches for this problem are based on multi-atlas label fusion strategies, which are time-consuming and sensitive to registration errors. As an alternative to these methods, we propose a hyper-densely connected 3D convolutional neural network that takes MR T1 and T2 images as input and processes them independently in two separate paths. An important difference from previous densely connected networks is the use of direct connections between layers from the same and from different paths. Adopting such dense connectivity helps the learning process by providing deep supervision and improving gradient flow. We evaluated our approach on data from the MICCAI Grand Challenge on 6-month infant Brain MRI Segmentation (iSEG), obtaining very competitive results: among 21 teams, our approach ranked first or second on most metrics, translating into state-of-the-art performance.
    Comment: Oral presentation at ISBI 2018. The last version of the paper is updated with the reference of the iSEG comparative study, published in 2019 at IEEE TM
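
    The hyper-dense connectivity pattern is compact to state in code. The sketch below, assuming PyTorch, lets each layer of each modality path receive the concatenated outputs of all previous layers from both paths; depth and channel counts are illustrative, not the published configuration.

```python
# Hypothetical sketch: hyper-dense connections between a T1 path and a T2 path.
import torch
import torch.nn as nn

class HyperDenseStream(nn.Module):
    def __init__(self, in_ch=1, growth=16, depth=4):
        super().__init__()
        # Layer d sees this path's input plus d outputs from each of the two paths.
        self.layers = nn.ModuleList(
            nn.Sequential(nn.Conv3d(in_ch + 2 * d * growth, growth, 3, padding=1),
                          nn.BatchNorm3d(growth), nn.ReLU(inplace=True))
            for d in range(depth))

class HyperDenseNet(nn.Module):
    def __init__(self, growth=16, depth=4):
        super().__init__()
        self.t1 = HyperDenseStream(1, growth, depth)
        self.t2 = HyperDenseStream(1, growth, depth)
        self.depth = depth

    def forward(self, x_t1, x_t2):
        feats_t1, feats_t2 = [x_t1], [x_t2]
        for d in range(self.depth):
            # Cross-path dense input: everything computed so far in *both* paths.
            joint1 = torch.cat(feats_t1 + feats_t2[1:], dim=1)
            joint2 = torch.cat(feats_t2 + feats_t1[1:], dim=1)
            feats_t1.append(self.t1.layers[d](joint1))
            feats_t2.append(self.t2.layers[d](joint2))
        return torch.cat(feats_t1[1:] + feats_t2[1:], dim=1)  # to a classifier head
```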

    DeepNAT: Deep Convolutional Neural Network for Segmenting Neuroanatomy

    We introduce DeepNAT, a 3D deep convolutional neural network for the automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance images. DeepNAT is an end-to-end learning-based approach to brain segmentation that jointly learns an abstract feature representation and a multi-class classification. We propose a 3D patch-based approach in which we predict not only the center voxel of the patch but also its neighbors, formulated as multi-task learning. To address the class imbalance problem, we arrange two networks hierarchically: the first separates foreground from background, and the second identifies 25 brain structures within the foreground. Since patches lack spatial context, we augment them with coordinates. To this end, we introduce a novel intrinsic parameterization of the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As the network architecture, we use three convolutional layers with pooling, batch normalization, and non-linearities, followed by fully connected layers with dropout. The final segmentation is inferred from the probabilistic output of the network with a 3D fully connected conditional random field, which ensures label agreement between nearby voxels. The roughly 2.7 million parameters of the network are learned with stochastic gradient descent. Our results show that DeepNAT compares favorably to state-of-the-art methods. Finally, this purely learning-based method has high potential for adaptation to young, old, or diseased brains by fine-tuning the pre-trained network with a small training sample on the target application, and the availability of larger datasets with manual annotations may boost overall segmentation accuracy in the future.
    Comment: Accepted for publication in NeuroImage, special issue "Brain Segmentation and Parcellation", 201
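
    Two of these ideas translate directly into code: multi-task prediction of the center voxel together with its neighbors, and coordinate augmentation of the patch features. The sketch below, assuming PyTorch, uses plain normalized xyz coordinates in place of the paper's Laplace-Beltrami parameterization; all layer sizes are illustrative.

```python
# Hypothetical sketch: 3D patch network with coordinate augmentation and one
# classification head per target voxel (center + 6 neighbors).
import torch
import torch.nn as nn

class PatchNet(nn.Module):
    def __init__(self, n_classes=25, n_targets=7, coord_dim=3, patch=23):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 32, 3), nn.BatchNorm3d(32), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3), nn.BatchNorm3d(64), nn.ReLU(), nn.MaxPool3d(2),
            nn.Flatten())
        with torch.no_grad():  # probe the flattened feature size once
            feat_dim = self.features(torch.zeros(1, 1, patch, patch, patch)).shape[1]
        self.fc = nn.Sequential(
            nn.Linear(feat_dim + coord_dim, 256), nn.ReLU(), nn.Dropout(0.5))
        self.heads = nn.ModuleList(
            nn.Linear(256, n_classes) for _ in range(n_targets))

    def forward(self, patch, coords):
        # coords: (B, 3) spatial context appended to the patch features.
        z = self.fc(torch.cat([self.features(patch), coords], dim=1))
        return [head(z) for head in self.heads]  # multi-task logits
```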

    Deep Embedding Convolutional Neural Network for Synthesizing CT Image from T1-Weighted MR Image

    Recently, increasing attention has been drawn to the field of medical image synthesis across modalities. Among these tasks, the synthesis of computed tomography (CT) images from T1-weighted magnetic resonance (MR) images is of great importance, although the mapping between them is highly complex due to the large appearance gap between the two modalities. In this work, we tackle MR-to-CT synthesis with a novel deep embedding convolutional neural network (DECNN). Specifically, we generate feature maps from MR images and transform these feature maps forward through the convolutional layers of the network. We can further compute a tentative CT synthesis midway through the flow of feature maps and then embed this tentative CT synthesis back into the feature maps. This embedding operation yields better feature maps, which are further transformed forward in the DECNN. After repeating this embedding procedure several times in the network, we eventually synthesize the final CT image at the end of the DECNN. We validated the proposed method on both brain and prostate datasets, comparing against state-of-the-art methods. Experimental results suggest that our DECNN (with repeated embedding operations) achieves superior performance in terms of both the perceptual quality of the synthesized CT images and the run-time cost of synthesizing a CT image.
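
    The embedding operation reduces to a small block: synthesize a tentative CT from the current feature maps, concatenate it back, and keep convolving. The following sketch, assuming PyTorch and 2D slices for brevity, is illustrative of the idea rather than the published architecture.

```python
# Hypothetical sketch: DECNN-style repeated embedding of midway CT estimates.
import torch
import torch.nn as nn

class EmbeddingBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.to_ct = nn.Conv2d(channels, 1, 1)   # tentative CT from feature maps
        self.fuse = nn.Sequential(               # refine the fused maps
            nn.Conv2d(channels + 1, channels, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, feats):
        ct_mid = self.to_ct(feats)                               # midway CT estimate
        return self.fuse(torch.cat([feats, ct_mid], 1)), ct_mid  # embed it back

class DECNN(nn.Module):
    def __init__(self, channels=64, n_embeds=3):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.blocks = nn.ModuleList(EmbeddingBlock(channels) for _ in range(n_embeds))
        self.head = nn.Conv2d(channels, 1, 1)

    def forward(self, mr):
        feats, mids = self.stem(mr), []
        for block in self.blocks:                # repeated embedding procedure
            feats, ct_mid = block(feats)
            mids.append(ct_mid)
        return self.head(feats), mids            # final CT plus midway estimates
```

    One natural design choice, under the same assumptions, is to also supervise the midway estimates against the target CT, giving the network a form of deep supervision.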

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.
    Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201

    Liver segmentation using 3D CT scans.

    Master of Science in Computer Science. University of KwaZulu-Natal, Durban, 2018. Abstract available in PDF file.

    A Cross-Stitch Architecture for Joint Registration and Segmentation in Adaptive Radiotherapy

    Recently, joint registration and segmentation has been formulated in a deep learning setting by the definition of joint loss functions. In this work, we investigate joining these tasks at the architectural level. We propose a registration network that integrates segmentation propagation between images, and a segmentation network that predicts the segmentation directly. These networks are connected into a single joint architecture via so-called cross-stitch units, allowing information to be exchanged between the tasks in a learnable manner. The proposed method is evaluated in the context of adaptive image-guided radiotherapy, using daily prostate CT imaging. Two datasets from different institutes and manufacturers were involved in the study. The first dataset was used for training (12 patients) and validation (6 patients), while the second dataset was used as an independent test set (14 patients). In terms of mean surface distance, our approach achieved 1.06 ± 0.3 mm, 0.91 ± 0.4 mm, 1.27 ± 0.4 mm, and 1.76 ± 0.8 mm on the validation set and 1.82 ± 2.4 mm, 2.45 ± 2.4 mm, 2.45 ± 5.0 mm, and 2.57 ± 2.3 mm on the test set for the prostate, bladder, seminal vesicles, and rectum, respectively. The proposed multi-task network outperformed single-task networks, as well as a network joined only through the loss function, demonstrating its ability to leverage the individual strengths of the segmentation and registration tasks. The obtained performance, as well as the inference speed, makes this a promising candidate for daily re-contouring in adaptive radiotherapy, potentially reducing treatment-related side effects and improving quality of life after treatment.
    Comment: Accepted to MIDL 202
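
    The cross-stitch unit itself is a small learnable mixing of the two tasks' activations. Below is a minimal sketch, assuming PyTorch and 3D feature maps, in which a per-channel 2x2 matrix, initialized near the identity, recombines registration and segmentation features; shapes and the initialization are illustrative.

```python
# Hypothetical sketch: a cross-stitch unit mixing two tasks' feature maps.
import torch
import torch.nn as nn

class CrossStitch(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Per-channel 2x2 mixing matrix, near-identity at initialization so each
        # task starts out relying mostly on its own features.
        init = torch.tensor([[0.9, 0.1], [0.1, 0.9]])
        self.alpha = nn.Parameter(init.repeat(channels, 1, 1))  # (C, 2, 2)

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (B, C, D, H, W) from the two task networks.
        def w(i, j):  # broadcastable per-channel weight
            return self.alpha[:, i, j].view(1, -1, 1, 1, 1)
        out_a = w(0, 0) * feat_a + w(0, 1) * feat_b
        out_b = w(1, 0) * feat_a + w(1, 1) * feat_b
        return out_a, out_b  # fed onward into each task's next layer
```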

    Deep Geodesic Learning for Segmentation and Anatomical Landmarking

    In this paper, we propose a novel deep learning framework for anatomy segmentation and automatic landmarking. Specifically, we focus on the challenging problem of mandible segmentation from cone-beam computed tomography (CBCT) scans and the identification of 9 anatomical landmarks of the mandible in the geodesic space. The overall approach employs three inter-related steps. In step 1, we propose a deep neural network architecture with carefully designed regularization and network hyper-parameters that performs image segmentation without the need for data augmentation or complex post-processing refinement. In step 2, we formulate the landmark localization problem directly in the geodesic space for sparsely-spaced anatomical landmarks. In step 3, we propose a long short-term memory (LSTM) network to identify closely-spaced landmarks, which are difficult to obtain with other standard detection networks. The proposed fully automated method showed superior efficacy compared to state-of-the-art mandible segmentation and landmarking approaches in craniofacial anomalies and diseased states. We used a very challenging CBCT dataset of 50 patients with a high degree of craniomaxillofacial (CMF) variability that is realistic in clinical practice. Complementary to the quantitative analysis, qualitative visual inspection was conducted for distinct CBCT scans from 250 patients with high anatomical variability. We also showed the feasibility of the proposed work on an independent dataset from the MICCAI Head-Neck Challenge (2015), achieving state-of-the-art performance. Lastly, we present an in-depth analysis of the proposed deep networks with respect to the choice of hyper-parameters such as pooling and activation functions.
    Comment: 14 pages, 12 Figures, IEEE Transactions on Medical Imaging 2018, TMI-2018-0898.R
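
    As a rough illustration of what operating on the geodesic space of a segmented structure involves, the sketch below, assuming NumPy, computes within-mask geodesic distances from a seed voxel by breadth-first search over the 6-neighborhood. It is a simplification for intuition, not the paper's formulation.

```python
# Hypothetical sketch: geodesic (within-mask) distance from a seed voxel.
import numpy as np
from collections import deque

def geodesic_distance(mask, seed):
    """BFS distance from `seed`, constrained to True voxels of a 3D boolean mask."""
    dist = np.full(mask.shape, np.inf)
    dist[seed] = 0.0
    queue = deque([seed])
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in steps:
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= n[i] < mask.shape[i] for i in range(3))
                    and mask[n] and np.isinf(dist[n])):
                dist[n] = dist[z, y, x] + 1.0   # unit step inside the mask
                queue.append(n)
    return dist  # inf outside the mask and for unreachable voxels
```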

    Improving nuclear medicine with deep learning and explainability: two real-world use cases in parkinsonian syndrome and safety dosimetry

    Computer vision in the area of medical imaging has rapidly improved during recent years as a consequence of developments in deep learning and explainability algorithms. In addition, imaging in nuclear medicine is becoming increasingly sophisticated, with the emergence of targeted radiotherapies that enable treatment and imaging on a molecular level ("theranostics"), where radiolabeled targeted molecules are injected directly into the bloodstream. Based on our recent work, we present two use cases in nuclear medicine: first, the impact of automated organ segmentation required for personalized dosimetry in patients with neuroendocrine tumors, and second, purely data-driven identification and verification of brain regions for the diagnosis of Parkinson's disease. A convolutional neural network was used for automated organ segmentation on computed tomography images. The segmented organs were used to calculate the energy deposited in the organs-at-risk of patients treated with a radiopharmaceutical. Our method resulted in faster and cheaper dosimetry that differed by only 7% from dosimetry performed by two medical physicists. The identification of brain regions, in turn, was analyzed on dopamine-transporter single-photon emission computed tomography images using a convolutional neural network and an explainability algorithm, namely layer-wise relevance propagation. Our findings confirm that extra-striatal brain regions, i.e., the insula, amygdala, ventromedial prefrontal cortex, thalamus, anterior temporal cortex, superior frontal lobe, and pons, contribute to the interpretation of images beyond the striatal regions. In current diagnostic practice, however, only the striatum serves as the reference region, while extra-striatal regions are neglected. We further demonstrate that deep learning-based diagnosis combined with an explainability algorithm can be recommended to support the interpretation of this imaging modality in clinical routine for parkinsonian syndromes, with a total computation time of three seconds, which is compatible with a busy clinical workflow. Overall, this thesis shows for the first time that deep learning with explainability can achieve results competitive with human performance and generate novel hypotheses, thus paving the way towards improved diagnosis and treatment in nuclear medicine.
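
    For the explainability component, the core of layer-wise relevance propagation is a conservation rule applied layer by layer, from the prediction back to the input voxels. Below is a minimal NumPy sketch of the epsilon-rule for a single fully connected layer; variable names are illustrative.

```python
# Hypothetical sketch: LRP epsilon-rule for one fully connected layer.
import numpy as np

def lrp_epsilon_linear(a, w, b, relevance_out, eps=1e-6):
    """Redistribute relevance from a layer's outputs to its inputs.
    a: (n_in,) input activations; w: (n_in, n_out); b: (n_out,);
    relevance_out: (n_out,) relevance arriving at the output neurons."""
    z = a @ w + b                                  # forward pre-activations
    z = np.where(z >= 0, z + eps, z - eps)         # epsilon stabilizer
    s = relevance_out / z                          # per-output relevance ratios
    return a * (w @ s)                             # R_i = a_i * sum_j w_ij * s_j
```

    Applying such a rule through every layer yields a voxel-level relevance map, which is how contributions of striatal versus extra-striatal regions can be compared.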