Brain Tumor Synthetic Segmentation in 3D Multimodal MRI Scans
The magnetic resonance (MR) analysis of brain tumors is widely used for
diagnosis and examination of tumor subregions. The overlap among the intensity
distributions of healthy, enhancing, non-enhancing, and edema regions makes
automatic segmentation a challenging task. Here, we show that a convolutional
neural network trained on high-contrast images can transform the intensity
distribution of brain lesions within their internal subregions.
Specifically, a generative adversarial network (GAN) is extended to synthesize
high-contrast images. Comparing these synthetic images with real images of
brain tumor tissue in MR scans showed a significant segmentation improvement
and reduced the number of real channels required for segmentation. The
synthetic images serve as substitutes for real channels and can bypass real
modalities in the multimodal brain tumor segmentation framework. Segmentation
results on the BraTS 2019 dataset demonstrate that our proposed approach can
efficiently segment the tumor areas. Finally, we predict patient survival time
from volumetric features of the tumor subregions, together with the age of each
case, through several regression models.
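The survival-prediction step regresses survival time on volumetric and age features. A minimal sketch of one such regressor, assuming a single illustrative volumetric feature and ordinary least squares (the paper combines several sub-region volumes plus age across several regression models):

```python
def fit_line(xs, ys):
    """Toy univariate least-squares fit: regress survival time (ys) on
    one volumetric feature (xs). Closed-form slope/intercept; an
    illustrative stand-in for the paper's regression models."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    intercept = ybar - slope * xbar
    return slope, intercept
```

In practice each tumor subregion volume and the patient's age would be separate features of a multivariate model; the univariate closed form above only illustrates the fitting step.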
Deep Neural Network with l2-norm Unit for Brain Lesions Detection
Automated brain lesions detection is an important and very challenging
clinical diagnostic task because the lesions have different sizes, shapes,
contrasts, and locations. Deep learning has recently shown promising progress
in many application fields, which motivates us to apply this technology to such
an important problem. In this paper, we propose a novel, end-to-end trainable
approach for brain lesion classification and detection using a deep
Convolutional Neural Network (CNN). To investigate its applicability, we
applied our approach to several brain diseases, including high- and low-grade
glioma, ischemic stroke, and Alzheimer's disease, using brain Magnetic
Resonance Images (MRI) as the input for analysis. We propose a new operating
unit that receives features from several projections of a subset of units in
the bottom layer and computes a normalized l2-norm for the next layer. We
evaluated the proposed approach on two different CNN architectures and a number
of popular benchmark datasets. The experimental results demonstrate the
superior ability of the proposed approach.
Comment: Accepted for presentation in ICONIP-201
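The l2-norm operating unit described above can be sketched as follows, assuming a simple linear-projection form; the weights and the normalization by the number of projections are illustrative, not the paper's exact formulation:

```python
import math

def l2_norm_unit(features, projections):
    """Sketch of an l2-norm operating unit: each projection is a weight
    vector applied to a subset of bottom-layer features; the unit emits
    the l2-norm of the projected values, normalized here by the square
    root of the number of projections (illustrative choice)."""
    projected = [sum(w * x for w, x in zip(weights, features))
                 for weights in projections]
    return math.sqrt(sum(p * p for p in projected)) / math.sqrt(len(projected))

# toy example: two projections over a 3-feature subset
feats = [1.0, 2.0, 2.0]
projs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
print(l2_norm_unit(feats, projs))
```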
TuNet: End-to-end Hierarchical Brain Tumor Segmentation using Cascaded Networks
Glioma is one of the most common types of brain tumors; it arises in the
glial cells in the human brain and in the spinal cord. In addition to having a
high mortality rate, glioma treatment is also very expensive. Hence, automatic
and accurate segmentation and measurement from the early stages are critical in
order to prolong the survival rates of the patients and to reduce the costs of
the treatment. In the present work, we propose a novel end-to-end cascaded
network for semantic segmentation that utilizes the hierarchical structure of
the tumor sub-regions with ResNet-like blocks and Squeeze-and-Excitation
modules after each convolution and concatenation block. By utilizing
cross-validation, an average ensemble technique, and a simple post-processing
technique, we obtained Dice scores of 88.06, 80.84, and 80.29, and Hausdorff
distances (95th percentile) of 6.10, 5.17, and 2.21 for the whole tumor, tumor
core, and enhancing tumor, respectively, on the online test set.
Comment: Accepted at MICCAI BrainLes 201
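A Squeeze-and-Excitation module of the kind applied after each convolution and concatenation block can be sketched as follows; the tiny fully connected weights are illustrative stand-ins for learned parameters:

```python
import math

def squeeze_excite(feature_maps, w1, w2):
    """Minimal Squeeze-and-Excitation sketch over a list of channels,
    each a flat list of activations. w1/w2 are illustrative FC weights,
    not trained parameters."""
    # squeeze: global average pooling per channel
    z = [sum(ch) / len(ch) for ch in feature_maps]
    # excitation: FC -> ReLU -> FC -> sigmoid gives one gate per channel
    h = [max(0.0, sum(w * zj for w, zj in zip(row, z))) for row in w1]
    s = [1.0 / (1.0 + math.exp(-sum(w * hj for w, hj in zip(row, h))))
         for row in w2]
    # rescale: multiply every activation in a channel by its gate
    return [[g * v for v in ch] for g, ch in zip(s, feature_maps)]
```

The gating lets the network re-weight channels cheaply, which is why SE modules slot in after existing blocks without changing their shapes.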
A Dynamic Programming Solution to Bounded Dejittering Problems
We propose a dynamic programming solution to image dejittering problems with
bounded displacements and obtain efficient algorithms for the removal of line
jitter, line pixel jitter, and pixel jitter.
Comment: The final publication is available at link.springer.co
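A minimal sketch of the line-jitter case, assuming a simple squared-difference energy between consecutive rows and clamped border handling (the paper's exact energy and boundary treatment may differ):

```python
def dejitter_rows(img, max_shift):
    """DP sketch for bounded line-jitter removal: choose one horizontal
    shift per row, within +/- max_shift, minimizing the sum of squared
    differences between consecutive shifted rows."""
    shifts = range(-max_shift, max_shift + 1)

    def shifted(row, d):
        # shift right by d, clamping at the borders
        w = len(row)
        return [row[min(max(j - d, 0), w - 1)] for j in range(w)]

    def cost(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # dp[i][d] = best energy for rows 0..i with row i shifted by d
    dp = [{d: 0.0 for d in shifts}]
    back = []
    for i in range(1, len(img)):
        cur, bk = {}, {}
        for d in shifts:
            r = shifted(img[i], d)
            prev = {p: dp[-1][p] + cost(shifted(img[i - 1], p), r)
                    for p in shifts}
            best = min(prev, key=prev.get)
            cur[d], bk[d] = prev[best], best
        dp.append(cur)
        back.append(bk)
    # backtrack the optimal shift sequence
    d = min(dp[-1], key=dp[-1].get)
    seq = [d]
    for bk in reversed(back):
        d = bk[d]
        seq.append(d)
    seq.reverse()
    return [shifted(row, d) for row, d in zip(img, seq)]
```

With n rows of width W and bound D, this runs in O(n (2D+1)^2 W), which is what makes the bounded-displacement assumption pay off.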
Learning Data Augmentation for Brain Tumor Segmentation with Coarse-to-Fine Generative Adversarial Networks
There is a common belief that the successful training of deep neural networks
requires many annotated training samples, which are often expensive and
difficult to obtain especially in the biomedical imaging field. While it is
often easy for researchers to use data augmentation to expand the size of
training sets, constructing and generating generic augmented data that is able
to teach the network the desired invariance and robustness properties using
traditional data augmentation techniques is challenging in practice. In this
paper, we propose a novel automatic data augmentation method that uses
generative adversarial networks to learn augmentations that enable
machine-learning-based methods to learn from the available annotated samples
more efficiently. The architecture consists of a coarse-to-fine generator that
captures the manifold of the training sets and generates generic augmented
data. In our experiments, we show the efficacy of our approach on Magnetic
Resonance Imaging (MRI) images, achieving a 3.5% improvement in Dice
coefficient on the BRATS15 Challenge dataset compared to traditional
augmentation approaches. Our proposed method also successfully boosts a common
segmentation network to reach state-of-the-art performance on the BRATS15
Challenge.
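The reported 3.5% gain is measured with the Dice coefficient, 2|A∩B| / (|A| + |B|) over binary masks; a minimal sketch on flat 0/1 lists:

```python
def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks given as flat 0/1 lists:
    2 * |intersection| / (|pred| + |truth|); 1.0 when both are empty."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0
```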
Automatic Brain Tumor Segmentation using Convolutional Neural Networks with Test-Time Augmentation
Automatic brain tumor segmentation plays an important role for diagnosis,
surgical planning and treatment assessment of brain tumors. Deep convolutional
neural networks (CNNs) have been widely used for this task. Due to the
relatively small data set for training, data augmentation at training time has
been commonly used for better performance of CNNs. Recent works also
demonstrated the usefulness of using augmentation at test time, in addition to
training time, for achieving more robust predictions. We investigate how
test-time augmentation can improve CNNs' performance for brain tumor
segmentation. We used different underpinning network structures and augmented
the image by 3D rotation, flipping, scaling and adding random noise at both
training and test time. Experiments with BraTS 2018 training and validation set
show that test-time augmentation helps to improve the brain tumor segmentation
accuracy and to obtain uncertainty estimates of the segmentation results.
Comment: 12 pages, 3 figures, MICCAI BrainLes 201
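Test-time augmentation of this kind can be sketched as follows, using a 2D horizontal flip as a stand-in for the paper's 3D rotations, flips, scaling, and noise; each prediction is inverse-transformed before averaging, and the spread across predictions could serve as an uncertainty estimate:

```python
def tta_predict(model, image, flips=(False, True)):
    """Test-time augmentation sketch: run the model on flipped copies of
    a 2D image (list of rows), undo each flip on the prediction, and
    average voxel-wise. `model` maps an image to a same-shaped map."""
    preds = []
    for flip in flips:
        x = [row[::-1] for row in image] if flip else image
        p = model(x)
        if flip:
            p = [row[::-1] for row in p]  # map prediction back
        preds.append(p)
    # voxel-wise mean over the augmented predictions
    return [[sum(vals) / len(vals) for vals in zip(*rows)]
            for rows in zip(*preds)]
```

With an equivariant model the averaged output matches a single forward pass; disagreement between the augmented predictions is what yields the uncertainty signal.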
Hetero-Modal Variational Encoder-Decoder for Joint Modality Completion and Segmentation
We propose a new deep learning method for tumour segmentation when dealing
with missing imaging modalities. Instead of producing one network for each
possible subset of observed modalities or using arithmetic operations to
combine feature maps, our hetero-modal variational 3D encoder-decoder
independently embeds all observed modalities into a shared latent
representation. Missing data and the tumour segmentation can then be generated from
this embedding. In our scenario, the input is a random subset of modalities. We
demonstrate that the optimisation problem can be seen as a mixture sampling. In
addition to this, we introduce a new network architecture building upon both
the 3D U-Net and the Multi-Modal Variational Auto-Encoder (MVAE). Finally, we
evaluate our method on BraTS2018 using subsets of the imaging modalities as
input. Our model outperforms the current state-of-the-art method for dealing
with missing modalities and achieves similar performance to the subset-specific
equivalent networks.
Comment: Accepted at MICCAI 201
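The hetero-modal idea, each observed modality embedded independently into one shared latent code, can be sketched with simple mean fusion over the observed subset (the paper's variational formulation is richer; this stand-in only illustrates handling an arbitrary subset of modalities):

```python
def fuse_modalities(embeddings):
    """Sketch of hetero-modal fusion: `embeddings` holds one latent
    vector per modality, with None for missing modalities; observed
    embeddings are averaged into a single shared latent code."""
    obs = [e for e in embeddings if e is not None]
    dim = len(obs[0])
    return [sum(e[i] for e in obs) / len(obs) for i in range(dim)]
```

Because the fusion is symmetric in its inputs, one network handles every subset of modalities, avoiding a separate network per subset.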
Predicting the Location of Glioma Recurrence After a Resection Surgery
We propose a method for estimating the location of glioma recurrence after surgical resection. This method consists of a pipeline including the registration of images at different time points, the estimation of the tumor infiltration map, and the prediction of tumor regrowth using a reaction-diffusion model. A data set acquired on a patient with a low-grade glioma and post-surgery MRIs is used to evaluate the accuracy of the recurrence locations estimated by our method. We observed good agreement in tumor volume prediction and qualitative matching of regrowth locations. The proposed method therefore seems adequate for modeling low-grade glioma recurrence. This tool could help clinicians anticipate tumor regrowth and better characterize the radiologically non-visible infiltrative extent of the tumor. Such information could pave the way for model-based personalization of treatment planning in the near future.
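Reaction-diffusion models of glioma growth are commonly of Fisher-KPP type, du/dt = D Δu + ρ u(1 − u), with diffusion D and proliferation rate ρ. A 1-D explicit-Euler sketch (the parameter values and fixed-boundary handling are illustrative; the actual model is 3-D with tissue-dependent diffusion):

```python
def fisher_kpp_step(u, D, rho, dx, dt):
    """One explicit Euler step of the 1-D Fisher-KPP equation
    du/dt = D * d2u/dx2 + rho * u * (1 - u), where u is the
    normalized tumor cell density on a uniform grid of spacing dx.
    Boundary cells are held fixed for simplicity."""
    n = len(u)
    new = list(u)
    for i in range(1, n - 1):
        lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / (dx * dx)
        new[i] = u[i] + dt * (D * lap + rho * u[i] * (1 - u[i]))
    return new
```

Iterating such steps from the estimated infiltration map is what produces the predicted regrowth region; the logistic term caps the density at u = 1 while the diffusion term spreads the front.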
Convolutional 3D to 2D Patch Conversion for Pixel-wise Glioma Segmentation in MRI Scans
Structural magnetic resonance imaging (MRI) has been widely utilized for
analysis and diagnosis of brain diseases. Automatic segmentation of brain
tumors is a challenging task for computer-aided diagnosis due to the low
tissue contrast in the tumor subregions. To overcome this, we devise a novel
pixel-wise segmentation framework through a convolutional 3D to 2D MR patch
conversion model to predict class labels of the central pixel in the input
sliding patches. Specifically, we first extract 3D patches from each modality to
calibrate slices through the squeeze and excitation (SE) block. Then, the
output of the SE block is fed directly into subsequent bottleneck layers to
reduce the number of channels. Finally, the calibrated 2D slices are
concatenated to obtain multimodal features through a 2D convolutional neural
network (CNN) for prediction of the central pixel. In our architecture, both
local inter-slice and global intra-slice features are jointly exploited to
predict the class label of the central voxel in a given patch through the 2D
CNN classifier. We implicitly apply all modalities through trainable parameters
that assign weights to the contribution of each sequence to the segmentation.
Experimental results on brain tumor segmentation in multimodal MRI scans
(BraTS'19) demonstrate that our proposed method can efficiently segment the
tumor regions.
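The first step, extracting a 3D patch around the central voxel whose label is to be predicted, can be sketched as follows (the patch indexing is illustrative, and the patch must lie inside the volume; the SE calibration and 2D CNN that follow are not shown):

```python
def patch_slices(volume, cz, cy, cx, half):
    """Gather the axial slices of a (2*half+1)^3 patch centred on voxel
    (cz, cy, cx) of a 3D volume (nested lists indexed [z][y][x]).
    Downstream, an SE block would calibrate these slices into 2D
    features before a 2D CNN predicts the centre label."""
    return [[[volume[z][y][x]
              for x in range(cx - half, cx + half + 1)]
             for y in range(cy - half, cy + half + 1)]
            for z in range(cz - half, cz + half + 1)]
```

One such patch per modality is extracted, so the 2D classifier sees calibrated slices from every MR sequence when labeling a single central voxel.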