
    Uncertainty in multitask learning: joint representations for probabilistic MR-only radiotherapy planning

    Multi-task neural network architectures provide a mechanism for jointly integrating information from distinct sources. They are ideal in the context of MR-only radiotherapy planning, since a single network can jointly regress a synthetic CT (synCT) scan and segment organs-at-risk (OAR) from MRI. We propose a probabilistic multi-task network that estimates: 1) intrinsic uncertainty through a heteroscedastic noise model for spatially-adaptive task loss weighting and 2) parameter uncertainty through approximate Bayesian inference. This allows sampling of multiple segmentations and synCTs that share their network representation. We test our model on prostate cancer scans and show that it produces more accurate and consistent synCTs with a better estimate of the variance of the errors, state-of-the-art results in OAR segmentation, and a methodology for quality assurance in radiotherapy treatment planning. Comment: Early accept at MICCAI 2018, 8 pages, 4 figures.
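    The two uncertainty mechanisms in this abstract are straightforward to sketch. Below is a minimal, hedged PyTorch illustration of a heteroscedastic multi-task loss (spatially-adaptive weighting of the synCT regression alongside the OAR segmentation) together with MC-dropout sampling for parameter uncertainty; the tensor names and the two-headed model are hypothetical stand-ins, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def heteroscedastic_multitask_loss(synct_pred, synct_log_var, synct_target,
                                   seg_logits, seg_target):
    """Joint loss: per-voxel heteroscedastic L2 for synCT regression plus
    cross-entropy for OAR segmentation. Voxels with high predicted variance
    are down-weighted; the log-variance term keeps uncertainty bounded."""
    precision = torch.exp(-synct_log_var)
    reg_loss = (precision * (synct_pred - synct_target) ** 2
                + synct_log_var).mean()
    seg_loss = F.cross_entropy(seg_logits, seg_target)
    return reg_loss + seg_loss

def mc_dropout_samples(model, x, n_samples=20):
    """Approximate Bayesian inference via MC dropout: keep dropout active
    at test time and draw multiple joint (synCT, segmentation) samples
    that share the network representation."""
    model.train()  # leave dropout layers stochastic
    with torch.no_grad():
        return [model(x) for _ in range(n_samples)]
```

    The spread of the sampled synCTs then provides the per-voxel variance estimate that the abstract proposes for quality assurance.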

    Brain MRI Tumor Segmentation with Adversarial Networks

    Deep Learning is a promising approach to automating or simplifying several tasks in the healthcare domain. In this work, we introduce SegAN-CAT, an approach to brain tumor segmentation in Magnetic Resonance Images (MRI) based on Adversarial Networks. In particular, we extend SegAN, successfully applied to the same task in previous work, in two respects: (i) we use a different model input and (ii) we employ a modified loss function to train the model. We tested our approach on two large datasets made available by the Brain Tumor Image Segmentation Benchmark (BraTS). First, we trained and tested segmentation models assuming the availability of all four major MRI contrast modalities, i.e., T1-weighted, T1-weighted contrast-enhanced, T2-weighted, and T2-FLAIR. However, as these four modalities are not always available for each patient, we also trained and tested four segmentation models that take as input MRIs acquired with only a single contrast modality. Finally, we applied transfer learning across contrast modalities to improve the performance of these single-modality models. Our results are promising and show not only that SegAN-CAT outperforms SegAN when all four modalities are available, but also that transfer learning can lead to better performance when only a single modality is available.
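    To make the adversarial setup concrete, here is a hedged PyTorch sketch of one SegAN-style training step, in which a critic compares the image weighted by the predicted mask against the image weighted by the ground truth; `S` and `C` are hypothetical segmentor and critic modules, and SegAN-CAT itself differs in how inputs are concatenated and in its loss terms.

```python
import torch
import torch.nn.functional as F

def adversarial_step(S, C, opt_S, opt_C, image, gt_mask):
    """One alternating update: the critic maximises the feature distance
    between prediction-masked and ground-truth-masked images, while the
    segmentor minimises it."""
    # Critic update (segmentor frozen for this half-step).
    with torch.no_grad():
        pred_mask = torch.sigmoid(S(image))
    loss_c = -F.l1_loss(C(image * pred_mask), C(image * gt_mask))
    opt_C.zero_grad()
    loss_c.backward()
    opt_C.step()

    # Segmentor update (gradients accumulated on the critic here are
    # discarded by the zero_grad at the start of the next half-step).
    pred_mask = torch.sigmoid(S(image))
    loss_s = F.l1_loss(C(image * pred_mask), C(image * gt_mask))
    opt_S.zero_grad()
    loss_s.backward()
    opt_S.step()
    return loss_s.item()
```

    The transfer-learning experiments described above then amount to initialising a single-modality segmentor from weights trained on another modality (e.g. via `load_state_dict`) before fine-tuning.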

    A 3D Coarse-to-Fine Framework for Volumetric Medical Image Segmentation

    In this paper, we adopt 3D Convolutional Neural Networks to segment volumetric medical images. Although deep neural networks have proven very effective on many 2D vision tasks, it is still challenging to apply them to 3D tasks due to the limited amount of annotated 3D data and limited computational resources. We propose a novel 3D-based coarse-to-fine framework to tackle these challenges effectively and efficiently. The proposed 3D-based framework outperforms its 2D counterpart by a large margin since it can leverage the rich spatial information along all three axes. We conduct experiments on two datasets which include healthy and pathological pancreases respectively, and achieve the current state-of-the-art in terms of Dice-Sørensen Coefficient (DSC). On the NIH pancreas segmentation dataset, we outperform the previous best by an average of over 2%, and the worst case is improved by 7% to reach almost 70%, which indicates the reliability of our framework in clinical applications. Comment: 9 pages, 4 figures, Accepted to 3D
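    The coarse-to-fine pipeline can be sketched in a few lines: segment the whole volume with a coarse model, crop a margin-padded bounding box around that prediction, and refine only the crop. This NumPy sketch assumes two already-trained callables, `coarse_net` and `fine_net` (hypothetical names), each mapping a volume to a probability map; the DSC used for evaluation is included.

```python
import numpy as np

def coarse_to_fine(volume, coarse_net, fine_net, margin=16):
    """Refine the coarse prediction inside a padded bounding box only."""
    coarse_mask = coarse_net(volume) > 0.5          # (D, H, W) boolean
    idx = np.argwhere(coarse_mask)
    if idx.size == 0:                               # nothing found: fall back
        return coarse_mask
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
    crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    fine_mask = np.zeros_like(coarse_mask)
    fine_mask[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine_net(crop) > 0.5
    return fine_mask

def dsc(a, b):
    """Dice-Sørensen Coefficient between two binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

    Restricting the fine model to the cropped region is what makes the approach efficient: most of a pancreas CT volume is background, so the fine network only ever sees the organ and a small margin around it.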

    Influence of head positioning during cone-beam CT imaging on the accuracy of virtual 3D models

    Objective: Cone beam computed tomography (CBCT) images are increasingly used to acquire three-dimensional (3D) models of the skull for additive manufacturing purposes. However, the accuracy of such models remains a challenge, especially in the orbital area. The aim of this study is to assess the impact of four different CBCT imaging positions on the accuracy of the resulting 3D models in the orbital area. Methods: An anthropomorphic head phantom was manufactured by submerging a dry human skull in silicone to mimic the soft tissue attenuation and scattering properties of the human head. The phantom was scanned on a ProMax 3D MAX CBCT scanner using 90 and 120 kV for four different field-of-view positions: standard, elevated, backwards tilted, and forward tilted. All CBCT images were subsequently converted into 3D models and geometrically compared with a "gold-standard" optical scan of the dry skull. Results: Mean absolute deviations of the 3D models ranged between 0.15 +/- 0.11 mm and 0.56 +/- 0.28 mm. The elevated imaging position in combination with a 120 kV tube voltage resulted in an improved representation of the orbital walls in the resulting 3D model without compromising accuracy. Conclusions: Head positioning during CBCT imaging can influence the accuracy of the resulting 3D model. The accuracy of such models may be improved by positioning the region of interest (e.g. the orbital area) in the focal plane of the CBCT X-ray beam. Peer reviewed.
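    The reported accuracy metric (mean absolute deviation between each CBCT-derived model and the optical reference) can be approximated with nearest-neighbour surface distances. A minimal sketch using SciPy, assuming both surfaces have already been sampled to (N, 3) point arrays in millimetres:

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_absolute_deviation(model_pts, reference_pts):
    """For every point on the CBCT-derived model, find the closest point
    on the gold-standard optical scan and average the distances (mm)."""
    tree = cKDTree(reference_pts)
    dists, _ = tree.query(model_pts)
    return dists.mean(), dists.std()
```

    A full mesh-to-mesh comparison, as done in dedicated inspection software, would use point-to-surface rather than point-to-point distances, but the idea is the same.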

    Coronary Artery Centerline Extraction in Cardiac CT Angiography Using a CNN-Based Orientation Classifier

    Coronary artery centerline extraction in cardiac CT angiography (CCTA) images is a prerequisite for evaluation of stenoses and atherosclerotic plaque. We propose an algorithm that extracts coronary artery centerlines in CCTA using a convolutional neural network (CNN). A 3D dilated CNN is trained to predict the most likely direction and radius of an artery at any given point in a CCTA image based on a local image patch. Starting from a single seed point placed manually or automatically anywhere in a coronary artery, a tracker follows the vessel centerline in two directions using the predictions of the CNN. Tracking is terminated when no direction can be identified with high certainty. The CNN was trained using 32 manually annotated centerlines in a training set consisting of 8 CCTA images provided in the MICCAI 2008 Coronary Artery Tracking Challenge (CAT08). Evaluation using 24 test images of the CAT08 challenge showed that extracted centerlines had an average overlap of 93.7% with 96 manually annotated reference centerlines. Extracted centerline points were highly accurate, with an average distance of 0.21 mm to reference centerline points. In a second test set consisting of 50 CCTA scans, 5,448 markers in the coronary arteries were used as seed points to extract single centerlines. This showed strong correspondence between extracted centerlines and manually placed markers. In a third test set containing 36 CCTA scans, fully automatic seeding and centerline extraction led to extraction of on average 92% of clinically relevant coronary artery segments. The proposed method is able to accurately and efficiently determine the direction and radius of coronary arteries. The method can be trained with limited training data, and once trained allows fast automatic or interactive extraction of coronary artery trees from CCTA images. Comment: Accepted in Medical Image Analysis.
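    The tracking loop itself is compact: from a seed point, repeatedly query the CNN for the local direction, radius, and a confidence score, step along the predicted direction, and stop once the classifier is no longer certain. In this hedged sketch, `direction_cnn` is a hypothetical stand-in for the trained 3D dilated CNN; the full centerline is obtained by tracking both ways from the seed and joining the two halves.

```python
import numpy as np

def track_centerline(seed, direction_cnn, step_mm=0.5, min_conf=0.5,
                     max_steps=2000):
    """Follow a coronary artery from `seed` in one direction."""
    point = np.asarray(seed, dtype=float)
    path, radii = [point.copy()], []
    prev_dir = None
    for _ in range(max_steps):
        direction, radius, conf = direction_cnn(point)  # unit vector, mm, [0, 1]
        if conf < min_conf:           # terminate when no direction is certain
            break
        if prev_dir is not None and np.dot(direction, prev_dir) < 0:
            direction = -direction    # keep moving forward, never double back
        point = point + step_mm * direction
        path.append(point.copy())
        radii.append(radius)
        prev_dir = direction
    return np.stack(path), np.asarray(radii)
```

    The sign flip against the previous step direction is one simple way to keep the tracker from oscillating, since a direction classifier on a local patch cannot distinguish "forward" from "backward" along the vessel.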