Improving Patch-Based Convolutional Neural Networks for MRI Brain Tumor Segmentation by Leveraging Location Information.
The manual brain tumor annotation process is time-consuming and resource-intensive; therefore, an automated and accurate brain tumor segmentation tool is greatly in demand. In this paper, we introduce a novel method to integrate location information with state-of-the-art patch-based neural networks for brain tumor segmentation. This is motivated by the observation that lesions are not uniformly distributed across different brain parcellation regions and that a locality-sensitive segmentation is likely to obtain better segmentation accuracy. Toward this, we use an existing brain parcellation atlas in the Montreal Neurological Institute (MNI) space and map this atlas to the individual subject data. This mapped atlas in the subject data space is integrated with structural Magnetic Resonance (MR) imaging data, and patch-based neural networks, including 3D U-Net and DeepMedic, are trained to classify the different brain lesions. Multiple state-of-the-art neural networks are trained and integrated with XGBoost fusion in the proposed two-level ensemble method. The first level reduces the uncertainty of the same type of models with different seed initializations, and the second level leverages the advantages of different types of neural network models. The proposed location information fusion method improves the segmentation performance of state-of-the-art networks including 3D U-Net and DeepMedic. Our proposed ensemble also achieves better segmentation performance compared to the state-of-the-art networks in BraTS 2017 and rivals state-of-the-art networks in BraTS 2018. Detailed results are provided on the public Multimodal Brain Tumor Segmentation (BraTS) benchmarks.
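The two-level ensemble described above can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the function names, array shapes, and the voxel-wise feature layout fed to the XGBoost fusion classifier are all assumptions.

```python
import numpy as np

def level_one_average(seed_probs):
    """Level 1 (sketch): average the softmax probability maps of the
    same architecture trained with different random seeds, reducing
    seed-initialization variance."""
    # seed_probs: list of (classes, D, H, W) arrays, one per seed
    return np.mean(seed_probs, axis=0)

def level_two_features(model_probs):
    """Level 2 (sketch): stack per-voxel class probabilities from
    different architectures (e.g. 3D U-Net, DeepMedic) as the feature
    matrix for a voxel-wise fusion classifier such as XGBoost."""
    # model_probs: list of (classes, D, H, W) arrays, one per model type
    stacked = np.stack(model_probs, axis=0)  # (models, classes, D, H, W)
    n_models, n_classes = stacked.shape[:2]
    # One row per voxel, one column per (model, class) probability.
    return stacked.reshape(n_models * n_classes, -1).T
```

The resulting `(voxels, models * classes)` matrix would then be passed to a gradient-boosted classifier for the final per-voxel label.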
MRI brain tumor segmentation and uncertainty estimation using 3D-UNet architectures
Automation of brain tumor segmentation in 3D magnetic resonance images (MRIs) is key to assessing the diagnosis and treatment of the disease. In recent years, convolutional neural networks (CNNs) have shown improved results in the task. However, high memory consumption is still a problem in 3D-CNNs. Moreover, most methods do not include uncertainty information, which is especially critical in medical diagnosis. This work studies 3D encoder-decoder architectures trained with patch-based techniques to reduce memory consumption and decrease the effect of unbalanced data. The different trained models are then used to create an ensemble that leverages the properties of each model, thus increasing the performance. We also introduce voxel-wise uncertainty information, both epistemic and aleatoric, using test-time dropout (TTD) and test-time data augmentation (TTA), respectively. In addition, a hybrid approach is proposed that helps increase the accuracy of the segmentation. The model and uncertainty estimation measurements proposed in this work have been used in the BraTS'20 Challenge for tasks 1 and 3 regarding tumor segmentation and uncertainty estimation. This work has been partially supported by the project MALEGRA TEC2016-75976-R financed by the Spanish Ministerio de Economía y Competitividad.
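The voxel-wise uncertainty estimation described above can be illustrated with a minimal sketch: run repeated stochastic forward passes (dropout kept active for TTD, or augmented inputs for TTA) and use the per-voxel variance as the uncertainty map. The function name and array layout are assumptions for illustration.

```python
import numpy as np

def voxelwise_uncertainty(mc_probs):
    """mc_probs: (n_passes, D, H, W) foreground probabilities from
    repeated stochastic forward passes. With dropout active at test
    time this estimates epistemic uncertainty (TTD); with augmented
    inputs it estimates aleatoric uncertainty (TTA). Returns the mean
    prediction and the per-voxel variance used as the uncertainty map."""
    mean = mc_probs.mean(axis=0)
    var = mc_probs.var(axis=0)
    return mean, var
```

Voxels where the stochastic passes disagree get high variance, flagging regions where the segmentation should be treated with caution.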
Uncertainty-driven refinement of tumor-core segmentation using 3D-to-2D networks with label uncertainty
The BraTS dataset contains a mixture of high-grade and low-grade gliomas,
which have a rather different appearance: previous studies have shown that
performance can be improved by separated training on low-grade gliomas (LGGs)
and high-grade gliomas (HGGs), but in practice this information is not
available at test time to decide which model to use. By contrast with HGGs,
LGGs often present no sharp boundary between the tumor core and the surrounding
edema, but rather a gradual reduction of tumor-cell density.
Utilizing our 3D-to-2D fully convolutional architecture, DeepSCAN, which
ranked highly in the 2019 BraTS challenge and was trained using an
uncertainty-aware loss, we separate cases into those with a confidently
segmented core, and those with a vaguely segmented or missing core. Since by
assumption every tumor has a core, we reduce the threshold for classification
of core tissue in those cases where the core, as segmented by the classifier,
is vaguely defined or missing.
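The core-refinement step above can be sketched as a simple adaptive-threshold rule. This is a hedged illustration, not the authors' code: the threshold values and the minimum core fraction are hypothetical parameters.

```python
import numpy as np

def refine_core(core_prob, tumor_mask, default_thr=0.5,
                relaxed_thr=0.2, min_core_fraction=0.01):
    """Since every tumor is assumed to contain a core, relax the
    classification threshold when the core segmented at the default
    threshold is missing or vanishingly small relative to the tumor."""
    core = core_prob >= default_thr
    tumor_voxels = max(tumor_mask.sum(), 1)
    if core.sum() / tumor_voxels < min_core_fraction:
        # Core is vaguely defined or missing: lower the threshold,
        # restricted to voxels already labeled as tumor.
        core = (core_prob >= relaxed_thr) & tumor_mask
    return core
```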
We then predict survival of high-grade glioma patients using a fusion of
linear regression and random forest classification, based on age, number of
distinct tumor components, and number of distinct tumor cores.
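The fusion of the regressor's continuous estimate with the random forest's class prediction can be sketched as follows. The survival cut-offs, bin midpoints, and the disagreement rule are all assumptions for illustration; the regressor and forest themselves are standard models and are omitted here.

```python
import numpy as np

BINS = np.array([300.0, 450.0])            # assumed short/mid/long cut-offs (days)
MIDPOINTS = np.array([150.0, 375.0, 600.0])  # assumed per-class fallback values

def fuse_survival(reg_days, forest_bins):
    """Combine a linear regressor's survival estimate (in days) with a
    random forest's survival-class prediction: keep the regressed value
    when both agree on the class, otherwise fall back to the midpoint
    of the class chosen by the forest."""
    reg_bins = np.digitize(reg_days, BINS)
    return np.where(reg_bins == forest_bins, reg_days, MIDPOINTS[forest_bins])
```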
We present results on the validation dataset of the Multimodal Brain Tumor
Segmentation Challenge 2020 (segmentation and uncertainty challenge), and on
the testing set, where the method achieved 4th place in Segmentation, 1st place
in uncertainty estimation, and 1st place in survival prediction.
Comment: Presented (virtually) at the MICCAI BrainLes workshop 2020. Accepted
for publication in the BrainLes proceedings.