147 research outputs found
Automated Brain Tumour Segmentation Using Deep Fully Residual Convolutional Neural Networks
Automated brain tumour segmentation has the potential to bring massive
improvements in disease diagnosis, surgery, monitoring and surveillance.
However, this task is extremely challenging. Here, we describe our automated
segmentation method using 2D CNNs that are based on U-Net. To deal with class
imbalance effectively, we have used a weighted Dice loss function. We found
that increasing the depth of the 'U' shape beyond a certain level results in a
decrease in performance, so it is essential to choose an optimum depth. We also
found that 3D contextual information cannot be captured by a single 2D network
that is trained with patches extracted from multiple views whereas an ensemble
of three 2D networks trained in multiple views can effectively capture the
information and deliver much better performance. We obtained Dice scores of
0.79 for enhancing tumour, 0.90 for whole tumour, and 0.82 for tumour core on
the BraTS 2018 validation set. Our method, which uses 2D networks, consumes far
less time and memory and is much simpler and easier to implement than the
state-of-the-art methods that use 3D networks, yet it achieves performance
comparable to those methods.
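The class-weighted Dice loss mentioned above can be sketched as follows. This is a minimal NumPy sketch, not the authors' implementation: the smoothing term `eps` and the exact weighting scheme are assumptions, since the abstract does not give the formulation.

```python
import numpy as np

def weighted_dice_loss(probs, targets, weights, eps=1e-6):
    """Class-weighted soft Dice loss.

    probs:   (C, ...) predicted class probabilities
    targets: (C, ...) one-hot ground truth
    weights: (C,)     per-class weights, e.g. inverse class frequency,
                      so that rare tumour classes are not dominated
                      by background voxels
    """
    axes = tuple(range(1, probs.ndim))            # sum over spatial dims only
    intersect = np.sum(probs * targets, axis=axes)
    denom = np.sum(probs, axis=axes) + np.sum(targets, axis=axes)
    dice_per_class = (2.0 * intersect + eps) / (denom + eps)
    weights = np.asarray(weights, dtype=float)
    # weighted mean of per-class Dice, turned into a loss
    return 1.0 - np.sum(weights * dice_per_class) / np.sum(weights)
```

Up-weighting the minority classes is one common way to counter the severe class imbalance between tumour and healthy tissue.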
3D MRI brain tumor segmentation using autoencoder regularization
Automated segmentation of brain tumors from 3D magnetic resonance images
(MRIs) is necessary for the diagnosis, monitoring, and treatment planning of
the disease. Manual delineation practices require anatomical knowledge, are
expensive, time consuming and can be inaccurate due to human error. Here, we
describe a semantic segmentation network for tumor subregion segmentation from
3D MRIs based on encoder-decoder architecture. Due to a limited training
dataset size, a variational auto-encoder branch is added to reconstruct the
input image itself in order to regularize the shared decoder and impose
additional constraints on its layers. The current approach won 1st place in the
BraTS 2018 challenge.
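As a rough illustration of how a variational auto-encoder branch can regularize the training objective, the sketch below combines a segmentation Dice loss with an L2 reconstruction term and a KL term against a standard normal prior. The weights `w_l2` and `w_kl` and the per-voxel normalisation are assumptions for this sketch, not values taken from the paper.

```python
import numpy as np

def vae_regularized_loss(dice_loss, x, x_recon, mu, logvar,
                         w_l2=0.1, w_kl=0.1):
    """Total training loss: segmentation loss plus VAE branch terms.

    dice_loss:  scalar soft-Dice loss from the segmentation decoder
    x, x_recon: input image and its VAE reconstruction
    mu, logvar: latent Gaussian parameters from the VAE encoder head
    """
    n = x.size
    # reconstruction term: mean squared error over the input volume
    l2 = np.sum((x - x_recon) ** 2) / n
    # KL divergence of N(mu, exp(logvar)) from N(0, I), per voxel
    kl = np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar) / (2 * n)
    return dice_loss + w_l2 * l2 + w_kl * kl
```

The auxiliary terms act only as a regularizer; at inference time the VAE branch is simply dropped.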
Improving Patch-Based Convolutional Neural Networks for MRI Brain Tumor Segmentation by Leveraging Location Information.
The manual brain tumor annotation process is time-consuming and resource-intensive; therefore, an automated and accurate brain tumor segmentation tool is in great demand. In this paper, we introduce a novel method to integrate location information with state-of-the-art patch-based neural networks for brain tumor segmentation. This is motivated by the observation that lesions are not uniformly distributed across brain parcellation regions and that a locality-sensitive segmentation is likely to achieve better accuracy. To this end, we use an existing brain parcellation atlas in the Montreal Neurological Institute (MNI) space and map this atlas to the individual subject data. The mapped atlas in the subject data space is integrated with structural Magnetic Resonance (MR) imaging data, and patch-based neural networks, including 3D U-Net and DeepMedic, are trained to classify the different brain lesions. Multiple state-of-the-art neural networks are trained and integrated with XGBoost fusion in the proposed two-level ensemble method: the first level reduces the uncertainty of models of the same type with different seed initializations, and the second level leverages the complementary strengths of different types of neural network models. The proposed location-information fusion method improves the segmentation performance of state-of-the-art networks, including 3D U-Net and DeepMedic. Our proposed ensemble also achieves better segmentation performance than the state-of-the-art networks on BraTS 2017 and rivals state-of-the-art networks on BraTS 2018. Detailed results are provided on the public multimodal brain tumor segmentation (BraTS) benchmarks.
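The two-level ensemble described above can be sketched as follows. Note the simplification: the paper learns the second-level fusion with XGBoost, whereas this minimal NumPy sketch substitutes plain averaging across model types as a stand-in.

```python
import numpy as np

def two_level_ensemble(prob_maps_by_type):
    """Two-level fusion of segmentation probability maps.

    prob_maps_by_type: dict mapping a model type (e.g. '3d_unet',
    'deepmedic') to a list of probability maps of shape (C, ...) from
    different seed initializations of that type.

    Level 1 averages seeds within a type, reducing seed-to-seed
    uncertainty; level 2 combines across types (the paper uses a
    learned XGBoost fusion here instead of a plain mean).
    """
    level1 = [np.mean(np.stack(maps), axis=0)
              for maps in prob_maps_by_type.values()]
    fused = np.mean(np.stack(level1), axis=0)
    return np.argmax(fused, axis=0)   # final integer label map
```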
Multi-region segmentation of bladder cancer structures in MRI with progressive dilated convolutional networks
Precise segmentation of bladder walls and tumor regions is an essential step
towards non-invasive identification of tumor stage and grade, which is critical
for treatment decision and prognosis of patients with bladder cancer (BC).
However, the automatic delineation of bladder walls and tumor in magnetic
resonance images (MRI) is a challenging task, due to important bladder shape
variations, strong intensity inhomogeneity in urine and very high variability
across population, particularly on tumors appearance. To tackle these issues,
we propose to use a deep fully convolutional neural network. The proposed
network includes dilated convolutions to increase the receptive field without
incurring extra cost nor degrading its performance. Furthermore, we introduce
progressive dilations in each convolutional block, thereby enabling extensive
receptive fields without the need for large dilation rates. The proposed
network is evaluated on 3.0T T2-weighted MRI scans from 60 pathologically
confirmed patients with BC. Experiments show that the proposed model achieves
high accuracy, with a mean Dice similarity coefficient of 0.98, 0.84 and 0.69
for inner wall, outer wall and tumor region, respectively. These results
represent a very good agreement with reference contours and an increase in
performance compared to existing methods. In addition, inference times are less
than a second for a whole 3D volume, which is between 2-3 orders of magnitude
faster than related state-of-the-art methods for this application. We showed
that a CNN can yield precise segmentation of bladder walls and tumors in
bladder cancer patients on MRI. The whole segmentation process is
fully-automatic and yields results in very good agreement with the reference
standard, demonstrating the viability of deep learning models for the automatic
multi-region segmentation of bladder cancer MRI images. (Published in the journal Medical Physics.)
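The benefit of progressive dilations can be seen from the receptive field of stacked stride-1 convolutions: a layer with kernel size k and dilation d adds (k - 1) * d to the receptive field. A minimal sketch, where the rate schedules are illustrative rather than taken from the paper:

```python
def receptive_field(dilation_rates, kernel=3):
    """Receptive field of a stack of stride-1 convolutions.

    Each layer with kernel size `kernel` and dilation d enlarges the
    receptive field by (kernel - 1) * d.
    """
    rf = 1
    for d in dilation_rates:
        rf += (kernel - 1) * d
    return rf

# Progressively doubling the dilation rate within a block widens the
# receptive field far faster than undilated convolutions of equal depth,
# at no extra parameter cost.
progressive = receptive_field([1, 2, 4, 8])   # four dilated 3x3 layers
plain = receptive_field([1, 1, 1, 1])         # four ordinary 3x3 layers
```

With these illustrative rates, four layers reach a 31-voxel receptive field instead of 9, without resorting to a single very large dilation rate (which tends to produce gridding artifacts).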
Magnetic resonance image-based brain tumour segmentation methods : a systematic review
Background:
Image segmentation is an essential step in the analysis and subsequent characterisation of brain tumours through magnetic resonance imaging. In the literature, segmentation methods are empowered by open-access magnetic resonance imaging datasets, such as the brain tumour segmentation dataset. Moreover, with the increased use of artificial intelligence methods in medical imaging, access to larger data repositories has become vital in method development.
Purpose:
To determine which automated brain tumour segmentation techniques medical imaging specialists and clinicians can use to identify tumour components, compared to manual segmentation.
Methods:
We conducted a systematic review of 572 brain tumour segmentation studies during 2015–2020. We reviewed segmentation techniques using T1-weighted, T2-weighted, gadolinium-enhanced T1-weighted, fluid-attenuated inversion recovery, diffusion-weighted and perfusion-weighted magnetic resonance imaging sequences. Moreover, we assessed physics or mathematics-based methods, deep learning methods, and software-based or semi-automatic methods, as applied to magnetic resonance imaging techniques. Particularly, we synthesised each method as per the utilised magnetic resonance imaging sequences, study population, technical approach (such as deep learning) and performance score measures (such as Dice score).
Statistical tests:
We compared median Dice score in segmenting the whole tumour, tumour core and enhanced tumour.
Results:
We found that T1-weighted, gadolinium-enhanced T1-weighted, T2-weighted and fluid-attenuated inversion recovery magnetic resonance imaging are used the most in various segmentation algorithms. However, there is limited use of perfusion-weighted and diffusion-weighted magnetic resonance imaging. Moreover, we found that the U-Net deep learning technology is cited the most, and has high accuracy (Dice score 0.9) for magnetic resonance imaging-based brain tumour segmentation.
Conclusion:
U-Net is a promising deep learning technology for magnetic resonance imaging-based brain tumour segmentation. The community should be encouraged to contribute open-access datasets so training, testing and validation of deep learning algorithms can be improved, particularly for diffusion- and perfusion-weighted magnetic resonance imaging, where there are limited datasets available.
Vox2Vox: 3D-GAN for Brain Tumour Segmentation
Gliomas are the most common primary brain malignancies, with different
degrees of aggressiveness, variable prognosis and various heterogeneous
histological sub-regions, i.e., peritumoral edema, necrotic core, enhancing and
non-enhancing tumour core. Although brain tumours can easily be detected using
multi-modal MRI, accurate tumor segmentation is a challenging task. Hence,
using the data provided by the BraTS Challenge 2020, we propose a 3D
volume-to-volume Generative Adversarial Network for segmentation of brain
tumours. The model, called Vox2Vox, generates realistic segmentation outputs
from multi-channel 3D MR images, segmenting the whole, core and enhancing
tumour with mean Dice scores of 87.20%, 81.14% and 78.67%, and 95th-percentile
Hausdorff distances of 6.44 mm, 24.36 mm and 18.95 mm, respectively, on the
BraTS testing set after ensembling 10 Vox2Vox models obtained with a 10-fold
cross-validation.
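The 95th-percentile Hausdorff distance reported above can be computed as in the brute-force NumPy sketch below. It assumes small surface point sets and unit voxel spacing; production pipelines typically use distance transforms instead of pairwise distances.

```python
import numpy as np

def hausdorff95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between point sets.

    points_a, points_b: (N, 3) arrays of surface voxel coordinates.
    The 95th percentile (rather than the maximum) makes the metric
    robust to a few outlier voxels on the predicted surface.
    """
    # all pairwise Euclidean distances: O(N * M), fine for small sets
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)   # each point of A to its nearest point of B
    b_to_a = d.min(axis=0)   # each point of B to its nearest point of A
    return max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95))
```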
UNETR: Transformers for 3D Medical Image Segmentation
Fully Convolutional Neural Networks (FCNNs) with contracting and expanding
paths have been prominent in the majority of medical image segmentation
applications over the past decade. In FCNNs, the encoder plays an integral
role by learning both global and local features and contextual representations
which can be utilized for semantic output prediction by the decoder. Despite
their success, the locality of convolutional layers in FCNNs limits the
capability of learning long-range spatial dependencies. Inspired by the recent
success of transformers for Natural Language Processing (NLP) in long-range
sequence learning, we reformulate the task of volumetric (3D) medical image
segmentation as a sequence-to-sequence prediction problem. We introduce a novel
architecture, dubbed as UNEt TRansformers (UNETR), that utilizes a transformer
as the encoder to learn sequence representations of the input volume and
effectively capture the global multi-scale information, while also following
the successful "U-shaped" network design for the encoder and decoder. The
transformer encoder is directly connected to a decoder via skip connections at
different resolutions to compute the final semantic segmentation output. We
have validated the performance of our method on the Multi Atlas Labeling Beyond
The Cranial Vault (BTCV) dataset for multi-organ segmentation and the Medical
Segmentation Decathlon (MSD) dataset for brain tumor and spleen segmentation
tasks. Our benchmarks demonstrate new state-of-the-art performance on the BTCV
leaderboard. Code: https://monai.io/research/unetr (11 pages, 4 figures.)
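The sequence-to-sequence reformulation in UNETR starts by splitting the input volume into non-overlapping 3D patches, each flattened into a token for the transformer encoder. A minimal NumPy sketch of that tokenization step; the patch size of 16 is an illustrative assumption here:

```python
import numpy as np

def volume_to_tokens(volume, patch=16):
    """Split a 3D volume into non-overlapping patches and flatten each
    into a token, as in the ViT-style input of a transformer encoder.

    volume: (C, D, H, W); all spatial dims must be divisible by `patch`.
    Returns an array of shape (num_tokens, C * patch**3).
    """
    c, d, h, w = volume.shape
    assert d % patch == 0 and h % patch == 0 and w % patch == 0
    # carve the grid of patches out of each spatial axis
    v = volume.reshape(c, d // patch, patch,
                       h // patch, patch,
                       w // patch, patch)
    # bring the patch-grid axes to the front: (nd, nh, nw, C, p, p, p)
    v = v.transpose(1, 3, 5, 0, 2, 4, 6)
    return v.reshape(-1, c * patch ** 3)
```

In the full model, each token is then linearly projected to the embedding dimension and summed with a positional embedding before entering the transformer layers.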
Automatic Brain Tumour Segmentation and Biophysics-Guided Survival Prediction
Gliomas are the most common malignant brain tumours, with intrinsic
heterogeneity. Accurate segmentation of gliomas and their sub-regions on
multi-parametric magnetic resonance images (mpMRI) is of great clinical
importance: it defines tumour size, shape and appearance and provides
abundant information for preoperative diagnosis, treatment planning and
survival prediction. Recent developments in deep learning have significantly
improved the performance of automated medical image segmentation. In this
paper, we compare several state-of-the-art convolutional neural network models
for brain tumour image segmentation. Based on the ensembled segmentation, we
present a biophysics-guided prognostic model for patient overall survival
prediction which outperforms a data-driven radiomics approach. Our method won
second place in the MICCAI 2019 BraTS Challenge for overall survival
prediction.
Brain tumor segmentation with self-ensembled, deeply-supervised 3D U-net neural networks: a BraTS 2020 challenge solution
Brain tumor segmentation is a critical task for a patient's disease management.
In order to automate and standardize this task, we trained multiple U-net like
neural networks, mainly with deep supervision and stochastic weight averaging,
on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 training
dataset. Two independent ensembles of models from two different training
pipelines were trained, and each produced a brain tumor segmentation map. These
two labelmaps per patient were then merged, taking into account the performance
of each ensemble for specific tumor subregions. Our performance on the online
validation dataset with test-time augmentation was as follows: Dice of 0.81,
0.91 and 0.85, and Hausdorff (95%) of 20.6, 4.3 and 5.7 mm for the enhancing
tumor, whole tumor and tumor core, respectively. Similarly, our solution
achieved a Dice of 0.79, 0.89 and 0.84, as well as Hausdorff (95%) of 20.4,
6.7 and 19.5 mm on the final test dataset, ranking us among the top ten teams.
More complicated
training schemes and neural network architectures were investigated without
significant performance gain at the cost of greatly increased training time.
Overall, our approach yielded good and balanced performance for each tumor
subregion. Our solution is open sourced at
https://github.com/lescientifik/open_brats2020. (BraTS 2020 proceedings (LNCS) paper.)
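The per-subregion merging of the two ensembles' label maps can be sketched as follows. The dict-driven selection and the overwrite order are assumptions for this NumPy sketch; the abstract does not detail the exact merge rule.

```python
import numpy as np

def merge_labelmaps(labels_a, labels_b, best_ensemble):
    """Merge two ensemble label maps region by region.

    labels_a, labels_b: integer label maps (0 = background,
                        1..K = tumor subregions)
    best_ensemble: dict mapping each subregion label to 'a' or 'b',
                   i.e. which ensemble scored better for that subregion
                   on the validation set.

    Regions are written in dict order, so later entries overwrite
    earlier ones where subregions overlap (as nested tumor subregions do).
    """
    merged = np.zeros_like(labels_a)
    for region, which in best_ensemble.items():
        src = labels_a if which == 'a' else labels_b
        merged[src == region] = region
    return merged
```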