1,199 research outputs found
Which Contrast Does Matter? Towards a Deep Understanding of MR Contrast using Collaborative GAN
Thanks to the recent success of generative adversarial network (GAN) for
image synthesis, there are many exciting GAN approaches that successfully
synthesize MR image contrast from other images with different contrasts. These
approaches are potentially important for image imputation problems, where a
complete set of data is often difficult to obtain and image synthesis is one of
the key solutions to the missing-data problem. Unfortunately, the lack of
scalability of existing GAN-based image translation approaches poses a
fundamental challenge to understanding the nature of the MR contrast imputation
problem: which contrast does matter? Here, we present a systematic approach
using Collaborative Generative Adversarial Networks (CollaGAN), which enables
learning of the joint image manifold of multiple MR contrasts to investigate
which contrasts are essential. Our experimental results showed that the
exogenous contrast from contrast agents is not replaceable, but endogenous
contrasts such as T1 and T2 can be synthesized from the remaining contrasts.
These findings may give important guidance to acquisition protocol design for
MR in real clinical environments.
Comment: 32 pages, 6 figures
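The many-to-one input that this style of collaborative imputation requires can be sketched as follows. The stacking layout, the one-hot target-contrast code, and the function name are illustrative assumptions, not the paper's exact interface:

```python
import numpy as np

def build_collagan_input(contrasts, missing_idx):
    """Assemble a many-to-one imputation input: zero out the missing
    contrast and append a one-hot channel block identifying the target.
    `contrasts` is a list of N single-channel (H, W) images."""
    n = len(contrasts)
    h, w = contrasts[0].shape
    stack = np.stack(contrasts, axis=0).astype(float)
    stack[missing_idx] = 0.0                 # drop the contrast to be synthesized
    target = np.zeros((n, h, w))
    target[missing_idx] = 1.0                # target-contrast code, broadcast spatially
    return np.concatenate([stack, target], axis=0)   # (2N, H, W) generator input

x = build_collagan_input([np.ones((4, 4)) * k for k in range(4)], missing_idx=2)
# x[:4] are the (partially zeroed) contrasts, x[4:] the target code
```

Encoding the target this way lets a single generator serve every "which contrast is missing?" scenario, which is what makes the systematic per-contrast ablation feasible.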
Conditional Generative Refinement Adversarial Networks for Unbalanced Medical Image Semantic Segmentation
We propose a new generative adversarial architecture to mitigate the
imbalanced-data problem in medical image semantic segmentation, where the
majority of pixels belong to a healthy region and few belong to the lesion or
non-healthy region. A model trained with imbalanced data tends to bias toward
the healthy data, which is not desired in clinical applications, and the
outputs predicted by such networks have high precision but low sensitivity. We
propose a new conditional generative refinement network with three components:
a generative, a discriminative, and a refinement network, which together
mitigate the unbalanced-data problem through ensemble learning. The generative
network learns to segment at the pixel level by getting feedback from the
discriminative network according to the true-positive and true-negative maps.
The refinement network, in turn, learns to predict the false-positive and
false-negative masks produced by the generative network, which have significant
value, especially in medical applications. The final semantic segmentation
masks are then composed from the outputs of the three networks. The proposed
architecture shows state-of-the-art results on LiTS-2017 for liver lesion
segmentation and on two microscopic cell segmentation datasets, MDA231 and
PhC-HeLa. We have also achieved competitive results on BraTS-2017 for brain
tumour segmentation.
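One plausible way to compose the final mask from the three networks' outputs, since the abstract does not spell out the composition rule, is to subtract the predicted false positives from the generator's mask and add back the predicted false negatives (an assumption, shown here on binary arrays):

```python
import numpy as np

def compose_final_mask(gen_mask, fp_mask, fn_mask):
    """Hypothetical ensemble composition: remove predicted false positives
    from the generator's mask and restore predicted false negatives."""
    return np.clip(gen_mask - fp_mask + fn_mask, 0, 1)

gen = np.array([[1, 1, 0, 0]])
fp  = np.array([[0, 1, 0, 0]])   # pixel the generator wrongly labeled lesion
fn  = np.array([[0, 0, 0, 1]])   # lesion pixel the generator missed
final = compose_final_mask(gen, fp, fn)
# final -> [[1, 0, 0, 1]]
```

The point of such a composition is that the refinement network only has to learn the generator's systematic errors, which is exactly where sensitivity is lost on imbalanced data.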
Missing MRI Pulse Sequence Synthesis using Multi-Modal Generative Adversarial Network
Magnetic resonance imaging (MRI) is being increasingly utilized to assess,
diagnose, and plan treatment for a variety of diseases. The ability to
visualize tissue in varied contrasts in the form of MR pulse sequences in a
single scan provides valuable insights to physicians and enables automated
systems that perform downstream analysis. However, many issues like
prohibitive scan time, image corruption, different acquisition protocols, or
allergies to certain contrast materials may hinder the process of acquiring
multiple sequences for a patient. This poses challenges to both physicians and
automated systems since complementary information provided by the missing
sequences is lost. In this paper, we propose a variant of generative
adversarial network (GAN) capable of leveraging redundant information contained
within multiple available sequences in order to generate one or more missing
sequences for a patient scan. The proposed network is designed as a
multi-input, multi-output network which combines information from all the
available pulse sequences, implicitly infers which sequences are missing, and
synthesizes the missing ones in a single forward pass. We demonstrate and
validate our method on two brain MRI datasets each with four sequences, and
show the applicability of the proposed method in simultaneously synthesizing
all missing sequences in any possible scenario where either one, two, or three
of the four sequences may be missing. We compare our approach with competing
unimodal and multi-modal methods, and show that we outperform both
quantitatively and qualitatively.
Comment: Accepted for publication in IEEE Transactions on Medical Imaging
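A multi-input network that "implicitly infers which sequences are missing" is commonly fed a fixed channel layout with absent sequences zero-filled; the following is a sketch under that assumption (the sequence names and ordering are illustrative, not taken from the paper):

```python
import numpy as np

def assemble_mm_input(sequences, available):
    """Build the multi-input tensor: missing pulse sequences are zero-filled,
    so the network can infer which inputs are absent from the data itself.
    `sequences` maps name -> (H, W) array; `available` lists acquired ones."""
    order = ["T1", "T2", "FLAIR", "T1CE"]     # assumed four-sequence ordering
    h, w = next(iter(sequences.values())).shape
    chans = [sequences[s] if s in available else np.zeros((h, w)) for s in order]
    return np.stack(chans, axis=0)            # (4, H, W); one forward pass yields all outputs

seqs = {"T1": np.ones((8, 8)), "T2": 2 * np.ones((8, 8))}
x = assemble_mm_input(seqs, available=["T1", "T2"])
# channels 2 and 3 are zero, signalling the two missing sequences
```

With this layout, any of the one-, two-, or three-missing scenarios the authors evaluate reduces to a different zero pattern on the same input tensor.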
Red-GAN: Attacking class imbalance via conditioned generation. Yet another perspective on medical image synthesis for skin lesion dermoscopy and brain tumor MRI
Exploiting learning algorithms under scarce data regimes is a limitation and
a reality of the medical imaging field. In an attempt to mitigate the problem,
we propose a data augmentation protocol based on generative adversarial
networks. We condition the networks at the pixel level (segmentation mask) and
at the global level (acquisition environment or lesion type). Such conditioning
provides immediate access to image-label pairs while controlling the global,
class-specific appearance of the synthesized images. To stimulate synthesis of
the features relevant for the segmentation task, an additional passive player
in the form of a segmentor is introduced into the adversarial game. We validate
the approach on two medical datasets, BraTS and ISIC. By controlling the class
distribution through injection of synthetic images into the training set, we
achieve control over the accuracy levels of the datasets' classes.
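The dual conditioning can be sketched as extra generator input channels: the segmentation mask carries the pixel-level condition and a constant plane carries the global label. The concatenation layout and names below are illustrative assumptions:

```python
import numpy as np

def condition_input(noise, seg_mask, global_label, n_classes):
    """Sketch of dual conditioning: pixel-level via the segmentation mask,
    global-level via a label value broadcast over the image plane."""
    h, w = seg_mask.shape
    label_plane = np.full((1, h, w), global_label / max(n_classes - 1, 1))
    return np.concatenate([noise, seg_mask[None], label_plane], axis=0)

z = np.random.randn(1, 16, 16)
mask = np.zeros((16, 16)); mask[4:8, 4:8] = 1.0   # lesion region to synthesize
x = condition_input(z, mask, global_label=2, n_classes=3)
# x is (3, 16, 16): noise, pixel-level condition, global condition
```

Because the mask is an input rather than a prediction target, every synthesized image comes with its label for free, which is what makes the class-balancing injection straightforward.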
SegAN: Adversarial Network with Multi-scale Loss for Medical Image Segmentation
Inspired by classic generative adversarial networks (GAN), we propose a novel
end-to-end adversarial neural network, called SegAN, for the task of medical
image segmentation. Since image segmentation requires dense, pixel-level
labeling, the single scalar real/fake output of a classic GAN's discriminator
may be ineffective in producing stable and sufficient gradient feedback to the
networks. Instead, we use a fully convolutional neural network as the segmentor
to generate segmentation label maps, and propose a novel adversarial critic
network with a multi-scale loss function to force the critic and
segmentor to learn both global and local features that capture long- and
short-range spatial relationships between pixels. In our SegAN framework, the
segmentor and critic networks are trained in an alternating fashion in a
min-max game: the critic takes as input a pair of masked images, (original_image
∗ predicted_label_map, original_image ∗ ground_truth_label_map), and is then
trained by maximizing a multi-scale loss function; the segmentor is trained
with only gradients passed along by the critic, with the aim to minimize the
multi-scale loss function. We show that such a SegAN framework is more
effective and stable for the segmentation task, and it leads to better
performance than the state-of-the-art U-net segmentation method. We tested our
SegAN method using datasets from the MICCAI BRATS brain tumor segmentation
challenge. Extensive experimental results demonstrate the effectiveness of the
proposed SegAN with multi-scale loss: on BRATS 2013, SegAN gives performance
comparable to the state of the art for whole-tumor and tumor-core segmentation
while achieving better precision and sensitivity for Gd-enhanced tumor-core
segmentation; on BRATS 2015, SegAN achieves better performance than the
state of the art in both Dice score and precision.
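The multi-scale idea can be illustrated with a toy version of the loss: mask the image with each label map, compare at several resolutions, and sum the L1 differences. Average pooling stands in for the critic's learned feature layers, so this is a sketch of the principle, not the paper's network:

```python
import numpy as np

def multiscale_l1(image, pred_mask, gt_mask, n_scales=3):
    """Toy multi-scale L1 critic loss: compare image*pred_mask against
    image*gt_mask at several resolutions (average pooling stands in for
    critic feature maps) and sum the mean absolute differences."""
    def pool2(x):                          # 2x2 average pooling
        h, w = x.shape
        return x[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))
    a, b = image * pred_mask, image * gt_mask
    loss = 0.0
    for _ in range(n_scales):
        loss += np.abs(a - b).mean()       # local (fine) to global (coarse) terms
        a, b = pool2(a), pool2(b)
    return loss

img = np.arange(256, dtype=float).reshape(16, 16) / 255.0
gt = (img > 0.5).astype(float)
loss_same = multiscale_l1(img, gt, gt)             # identical masks -> zero loss
loss_diff = multiscale_l1(img, np.zeros_like(gt), gt)
```

Because every scale contributes a term, the segmentor receives gradient signal for both short-range (pixel-level) and long-range (region-level) disagreements, which is the claimed advantage over a single scalar real/fake output.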
An Adversarial Learning Approach to Medical Image Synthesis for Lesion Detection
The identification of lesions within medical image data is necessary for
diagnosis, treatment and prognosis. Segmentation and classification approaches
are mainly based on supervised learning with well-paired image-level or
voxel-level labels. However, labeling lesions in medical images is laborious,
requiring highly specialized knowledge. We propose a medical image synthesis
model named abnormal-to-normal translation generative adversarial network
(ANT-GAN) to generate a normal-looking medical image based on its
abnormal-looking counterpart without the need for paired training data. Unlike
typical GANs, whose aim is to generate realistic samples with variations, our
more restrictive model aims at producing a normal-looking image corresponding
to one containing lesions, and thus requires a special design. Being able to
provide a "normal" counterpart to a medical image can supply useful side
information for medical imaging tasks like lesion segmentation or
classification, as validated by our experiments. Conversely, the ANT-GAN model
is also capable of producing highly realistic lesion-containing images
corresponding to healthy ones, which shows its potential for data augmentation,
as verified in our experiments.
Comment: 10 pages, 13 figures
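One way such a synthesized "normal" counterpart can serve as side information for segmentation (an illustration of the idea, not the paper's pipeline) is to threshold the residual between the abnormal image and its normal-looking translation to obtain a coarse lesion prior:

```python
import numpy as np

def lesion_prior(abnormal, normal_synth, thresh=0.2):
    """Coarse lesion prior from an abnormal-to-normal translation:
    wherever the translation changed the image strongly, suspect a lesion."""
    residual = np.abs(abnormal - normal_synth)
    return (residual > thresh).astype(np.uint8)

abnormal = np.zeros((8, 8)); abnormal[2:4, 2:4] = 0.9   # bright 2x2 lesion
normal_synth = np.zeros((8, 8))                         # ideal translation removes it
prior = lesion_prior(abnormal, normal_synth)
# prior is 1 exactly on the 2x2 lesion patch
```

The appeal is that the prior needs no voxel-level labels at training time, which is precisely the labeling burden the unpaired translation is meant to avoid.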
MRI Cross-Modality NeuroImage-to-NeuroImage Translation
We present a cross-modality generation framework that learns to generate
translated modalities from given modalities in MR images without real
acquisition. Our proposed method performs NeuroImage-to-NeuroImage translation
(abbreviated as N2N) by means of a deep learning model that leverages
conditional generative adversarial networks (cGANs). Our framework jointly
exploits the low-level features (pixel-wise information) and high-level
representations (e.g. brain tumors, brain structure like gray matter, etc.)
between cross modalities which are important for resolving the challenging
complexity in brain structures. Our framework can serve as an auxiliary method
in clinical diagnosis and has great application potential. Based on our
proposed framework, we first propose a method for cross-modality registration
by fusing the deformation fields to adopt the cross-modality information from
translated modalities. Second, we propose an approach for MRI segmentation,
translated multichannel segmentation (TMS), where given modalities, along with
translated modalities, are segmented by fully convolutional networks (FCN) in a
multichannel manner. Both methods successfully exploit the
cross-modality information to improve the performance without adding any extra
data. Experiments demonstrate that our proposed framework advances the
state-of-the-art on five brain MRI datasets. We also observe encouraging
results in cross-modality registration and segmentation on some widely adopted
brain datasets. Overall, our work can serve as an auxiliary method in clinical
diagnosis and be applied to various tasks in medical fields.
Keywords: image-to-image, cross-modality, registration, segmentation, brain MRI
Comment: 46 pages, 16 figures
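The TMS input described above is simply a channel-wise stack of the acquired modalities and their translated counterparts; a minimal sketch (function name assumed):

```python
import numpy as np

def tms_input(given, translated):
    """Translated multichannel segmentation (TMS) input: acquired modalities
    and their translated counterparts are stacked along the channel axis
    and fed to the FCN together, so no extra real data is needed."""
    return np.stack(list(given) + list(translated), axis=0)

t1 = np.ones((8, 8))
t2_synth = 0.5 * np.ones((8, 8))     # modality produced by the N2N translator
x = tms_input([t1], [t2_synth])      # (2, 8, 8) multichannel FCN input
```

The design choice here is that the segmentation network sees the synthetic modality as just another channel, so cross-modality information is injected without changing the FCN architecture.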
Conditional Adversarial Network for Semantic Segmentation of Brain Tumor
Automated medical image analysis has a significant value in diagnosis and
treatment of lesions. Brain tumor segmentation has particular importance and
difficulty due to the difference in appearances and shapes of the different
tumor regions in magnetic resonance images. Additionally, the data sets are
heterogeneous and usually limited in size in comparison with typical computer
vision problems. The recently proposed adversarial training has shown promising
results in generative image modeling. In this paper, we propose a novel
end-to-end trainable architecture for brain tumor semantic segmentation through
conditional adversarial training. We exploit conditional Generative Adversarial
Network (cGAN) and train a semantic segmentation Convolution Neural Network
(CNN) along with an adversarial network that discriminates segmentation maps
coming from the ground truth or from the segmentation network for the BraTS
2017 segmentation task [15, 4, 2, 3]. We also propose an end-to-end trainable
CNN for
survival day prediction based on deep learning techniques for BraTS 2017
prediction task [15, 4, 2, 3]. The experimental results demonstrate the
superior ability of the proposed approach for both tasks. On validation data,
the proposed model achieves a Dice score, sensitivity, and specificity of 0.68,
0.99, and 0.98, respectively, for the whole tumor, according to the online
judging system.
Comment: Submitted to the BraTS challenge, which is part of MICCAI 2017
Generative Adversarial Network in Medical Imaging: A Review
Generative adversarial networks have gained a lot of attention in the
computer vision community due to their capability of data generation without
explicitly modelling the probability density function. The adversarial loss
brought by the discriminator provides a clever way of incorporating unlabeled
samples into training and imposing higher order consistency. This has proven to
be useful in many cases, such as domain adaptation, data augmentation, and
image-to-image translation. These properties have attracted researchers in the
medical imaging community, and we have seen rapid adoption in many traditional
and novel applications, such as image reconstruction, segmentation, detection,
classification, and cross-modality synthesis. Based on our observations, this
trend will continue and we therefore conducted a review of recent advances in
medical imaging using the adversarial training scheme with the hope of
benefiting researchers interested in this technique.
Comment: 24 pages; v4; added missing references from before Jan 1st, 2019;
accepted to MedIA
Medical Image Generation using Generative Adversarial Networks
Generative adversarial networks (GANs) are an unsupervised deep learning
approach that has gained significant attention in the computer vision community
over the last few years for identifying the internal structure of multimodal
medical imaging data. The adversarial network simultaneously generates
realistic medical images and corresponding annotations, which has proven to be
useful in many cases such as image augmentation, image registration, medical
image generation, image reconstruction, and image-to-image translation. These
properties have attracted the attention of researchers in the field of medical
image analysis, and we are witnessing rapid adoption in many novel and
traditional applications. This chapter presents state-of-the-art progress in
GAN-based clinical applications in medical image generation and cross-modality
synthesis. The various GAN frameworks that have gained popularity in the
interpretation of medical images, such as Deep Convolutional GAN (DCGAN),
Laplacian GAN (LAPGAN), pix2pix, CycleGAN, and the unsupervised image-to-image
translation model (UNIT), and that continue to improve their performance by
incorporating additional hybrid architectures, are discussed. Further, some
recent applications of these frameworks to image reconstruction and synthesis,
and future research directions in the area, are covered.
Comment: 19 pages, 3 figures, 5 tables