Generative Adversarial Network in Medical Imaging: A Review
Generative adversarial networks have gained a lot of attention in the
computer vision community due to their capability of data generation without
explicitly modelling the probability density function. The adversarial loss
brought by the discriminator provides a clever way of incorporating unlabeled
samples into training and imposing higher order consistency. This has proven to
be useful in many cases, such as domain adaptation, data augmentation, and
image-to-image translation. These properties have attracted researchers in the
medical imaging community, and we have seen rapid adoption in many traditional
and novel applications, such as image reconstruction, segmentation, detection,
classification, and cross-modality synthesis. Based on our observations, this
trend will continue and we therefore conducted a review of recent advances in
medical imaging using the adversarial training scheme with the hope of
benefiting researchers interested in this technique.
Comment: 24 pages; v4; added missing references from before Jan 1st 2019; accepted to MedI
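The adversarial loss this review highlights can be written down in a few lines. Below is a minimal sketch of the standard non-saturating GAN objective in NumPy, with made-up discriminator outputs for illustration; it shows how unlabeled samples enter training purely through the real/fake decision, with no annotations required:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # Binary cross-entropy: push D(real) toward 1 and D(fake) toward 0.
    eps = 1e-8
    return -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))

def generator_loss(d_fake):
    # Non-saturating form: the generator maximizes log D(G(z)).
    eps = 1e-8
    return -np.mean(np.log(d_fake + eps))

# Unlabeled images contribute only via the discriminator's real/fake
# decision -- no ground-truth annotation is needed (illustrative values).
d_real = np.array([0.9, 0.8])   # D outputs on real (unlabeled) images
d_fake = np.array([0.2, 0.1])   # D outputs on generated images
print(round(discriminator_loss(d_real, d_fake), 3))  # 0.329
print(round(generator_loss(d_fake), 3))              # 1.956
```

In practice both terms are minimized alternately by gradient descent on the discriminator's and generator's parameters.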
Medical Image Generation using Generative Adversarial Networks
Generative adversarial networks (GANs) are an unsupervised deep learning
approach in the computer vision community that has gained significant
attention over the last few years for identifying the internal structure of
multimodal medical imaging data. The adversarial network simultaneously
generates realistic medical images and corresponding annotations, which has
proven useful in many cases such as image augmentation, image registration,
medical image generation, image reconstruction, and image-to-image translation.
These properties have attracted researchers in the field of medical
image analysis, and we are witnessing rapid adoption in many novel and
traditional applications. This chapter surveys state-of-the-art progress in
GAN-based clinical applications in medical image generation and cross-modality
synthesis. The GAN frameworks that have gained popularity in the
interpretation of medical images, such as Deep Convolutional GAN (DCGAN),
Laplacian GAN (LAPGAN), pix2pix, CycleGAN, and the unsupervised image-to-image
translation model (UNIT), and that continue to improve their performance by
incorporating additional hybrid architectures, are discussed. Further, some
recent applications of these frameworks for image reconstruction and
synthesis, and future research directions in the area, are covered.
Comment: 19 pages, 3 figures, 5 tables
An Adversarial Learning Approach to Medical Image Synthesis for Lesion Detection
The identification of lesions within medical image data is necessary for
diagnosis, treatment, and prognosis. Segmentation and classification approaches
are mainly based on supervised learning with well-paired image-level or
voxel-level labels. However, labeling lesions in medical images is laborious,
requiring highly specialized knowledge. We propose a medical image synthesis
model named abnormal-to-normal translation generative adversarial network
(ANT-GAN) to generate a normal-looking medical image based on its
abnormal-looking counterpart without the need for paired training data. Unlike
typical GANs, whose aim is to generate realistic samples with variations, our
more restrictive model aims at producing a normal-looking image corresponding
to one containing lesions, and thus requires a special design. Being able to
provide a "normal" counterpart to a medical image can provide useful side
information for medical imaging tasks like lesion segmentation or
classification, as validated by our experiments. From another perspective, the ANT-GAN
model is also capable of producing highly realistic lesion-containing images
corresponding to healthy ones, which shows its potential for data
augmentation, as verified in our experiments.
Comment: 10 pages, 13 figures
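ANT-GAN's exact "special design" is not spelled out in this abstract; a common ingredient of unpaired translation models of this kind is a cycle-consistency term, sketched below with toy identity generators (the term and the generators are illustrative assumptions, not the paper's stated objective):

```python
import numpy as np

def cycle_consistency_loss(x, g_ab, g_ba):
    """L1 cycle loss: mapping abnormal -> normal -> abnormal should
    recover the original image when no paired data is available."""
    return float(np.mean(np.abs(g_ba(g_ab(x)) - x)))

# Toy generators (identity stand-ins, for illustration only).
g_ab = lambda img: img   # abnormal -> normal-looking
g_ba = lambda img: img   # normal-looking -> abnormal
x = np.random.rand(4, 4)
print(cycle_consistency_loss(x, g_ab, g_ba))  # 0.0 for identity maps
```

Any deviation introduced by the round trip shows up directly as an L1 penalty, which is what lets such models train without paired abnormal/normal images.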
Multiple Sclerosis Lesion Synthesis in MRI using an encoder-decoder U-NET
In this paper, we propose generating synthetic multiple sclerosis (MS)
lesions on MRI images with the final aim to improve the performance of
supervised machine learning algorithms, therefore avoiding the problem of the
lack of available ground truth. We propose a two-input two-output fully
convolutional neural network model for MS lesion synthesis in MRI images. The
lesion information is encoded as discrete binary intensity level masks passed
to the model and stacked with the input images. The model is trained end-to-end
without the need for manually annotating the lesions in the training set. We
then perform the generation of synthetic lesions on healthy images via
registration of patient images, which are subsequently used for data
augmentation to increase the performance for supervised MS lesion detection
algorithms. Our pipeline is evaluated on MS patient data from an in-house
clinical dataset and the public ISBI2015 challenge dataset. The evaluation is
based on measuring the similarities between the real and the synthetic images
as well as in terms of lesion detection performance by segmenting both the
original and synthetic images individually using a state-of-the-art
segmentation framework. We also demonstrate the usage of synthetic MS lesions
generated on healthy images as data augmentation. We analyze a scenario of
limited training data (one-image training) to demonstrate the effect of the
data augmentation on both datasets. Our results clearly show the
effectiveness of using synthetic MS lesion images. For the ISBI2015
challenge, our one-image model trained using only a single image plus the
synthetic data augmentation strategy showed a performance similar to that of
other CNN methods that were fully trained using the entire training set,
yielding a performance comparable to that of a human expert rater.
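The two-input design described above (image stacked with a discrete lesion mask) can be sketched with hypothetical shapes; the slice size and mask threshold below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative shapes only: one MRI slice plus a binary lesion mask,
# stacked channel-wise to form the network's input tensor.
rng = np.random.default_rng(0)
image = rng.random((1, 256, 256), dtype=np.float32)          # intensity slice
lesion_mask = (rng.random((1, 256, 256)) > 0.99).astype(np.float32)

# The model receives the image and the discrete intensity-level mask
# as separate channels of a single stacked input.
model_input = np.concatenate([image, lesion_mask], axis=0)
print(model_input.shape)  # (2, 256, 256)
```

Because the mask is just another input channel, new lesions can be "painted" onto healthy images at inference time simply by editing the mask channel.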
Which Contrast Does Matter? Towards a Deep Understanding of MR Contrast using Collaborative GAN
Thanks to the recent success of generative adversarial network (GAN) for
image synthesis, there are many exciting GAN approaches that successfully
synthesize MR image contrast from other images with different contrasts. These
approaches are potentially important for image imputation problems, where a
complete set of data is often difficult to obtain and image synthesis is one of
the key solutions for handling the missing data problem. Unfortunately, the
lack of scalability of existing GAN-based image translation approaches
poses a fundamental challenge to understanding the nature of the MR contrast
imputation problem: which contrast does matter? Here, we present a systematic
approach using Collaborative Generative Adversarial Networks (CollaGAN), which
enables learning the joint image manifold of multiple MR contrasts to
investigate which contrasts are essential. Our experimental results showed that
the exogenous contrast from contrast agents is not replaceable, but other
endogenous contrasts such as T1, T2, etc. can be synthesized from the other contrasts.
These findings may give important guidance to acquisition protocol design
for MR in real clinical environments.
Comment: 32 pages, 6 figures
Deep Learning for Medical Image Analysis
This report describes my research activities at the Hasso Plattner Institute
and summarizes my Ph.D. plan and several novel, end-to-end trainable
approaches for analyzing medical images using deep learning algorithms. In this
report, as an example, we explore different novel methods based on deep
learning for brain abnormality detection, recognition, and segmentation. This
report was prepared for the doctoral consortium at the AIME-2017 conference.
Comment: Presented in the doctoral consortium at the AIME-2017 conference
Red-GAN: Attacking class imbalance via conditioned generation. Yet another perspective on medical image synthesis for skin lesion dermoscopy and brain tumor MRI
Exploiting learning algorithms under scarce data regimes is a limitation, and
a reality, of the medical imaging field. In an attempt to mitigate the problem,
we propose a data augmentation protocol based on generative adversarial
networks. We condition the networks at the pixel level (segmentation mask) and
at the global level (acquisition environment or lesion type). Such
conditioning provides immediate access to the image-label pairs while
controlling the global class-specific appearance of the synthesized images. To
stimulate synthesis of the features relevant for the segmentation task, an
additional passive player in the form of a segmentor is introduced into the
adversarial game. We validate the approach on two medical datasets: BraTS and
ISIC. By controlling the class distribution through injection of synthetic
images into the training set, we achieve control over the accuracy levels of
the datasets' classes.
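The dual conditioning above (a pixel-level segmentation mask plus a global label) is commonly implemented by stacking both conditions as extra input channels; the function name and shapes below are hypothetical, sketching that idea:

```python
import numpy as np

def build_conditioned_input(image, seg_mask, class_id, num_classes):
    """Stack pixel-level and global-level conditions as input channels."""
    h, w = image.shape[-2:]
    # Global condition (e.g. lesion type) broadcast to constant maps.
    onehot = np.zeros((num_classes, h, w), dtype=np.float32)
    onehot[class_id] = 1.0
    # Pixel-level condition: the segmentation mask rides along as-is.
    return np.concatenate([image, seg_mask, onehot], axis=0)

img = np.zeros((1, 64, 64), dtype=np.float32)
mask = np.zeros((1, 64, 64), dtype=np.float32)
x = build_conditioned_input(img, mask, class_id=2, num_classes=3)
print(x.shape)  # (5, 64, 64)
```

Since the mask channel defines the label for free, every synthesized image arrives with its segmentation ground truth attached, which is what makes the generated pairs usable for augmenting a segmentation training set.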
Generative Adversarial Training for MRA Image Synthesis Using Multi-Contrast MRI
Magnetic Resonance Angiography (MRA) has become an essential MR contrast for
imaging and evaluation of vascular anatomy and related diseases. MRA
acquisitions are typically ordered for vascular interventions, whereas in
routine scenarios, MRA sequences can be absent from patient scans. This
motivates the need for a technique that generates nonexistent MRA from existing
multi-contrast MR images, which could be a valuable tool in retrospective
subject evaluations and imaging studies. In this paper, we present a generative
adversarial network (GAN) based technique to generate MRA from T1-weighted and
T2-weighted MRI images, for the first time to our knowledge. To better model
the representation of vessels, which MRA inherently highlights, we design a
loss term dedicated to a faithful reproduction of vascularities. To that end,
we incorporate steerable filter responses of the generated and reference images
inside a Huber function loss term. Extending the well-established
generator-discriminator architecture based on the recent PatchGAN model with
the addition of the steerable filter loss, the proposed steerable GAN (sGAN)
method is evaluated on the large public IXI database. Experimental results show
that the sGAN outperforms the baseline GAN method in terms of an overlap score
with similar PSNR values, while leading to improved visual perceptual quality.
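The idea of a steerable-filter-plus-Huber loss can be sketched compactly. Below, two plain derivative filters stand in for a full steerable filter bank (an assumption for brevity; the paper's actual filters differ), and the Huber penalty compares the oriented responses of the generated and reference images:

```python
import numpy as np

def huber(x, delta=1.0):
    """Huber penalty: quadratic near zero, linear in the tails."""
    a = np.abs(x)
    return np.where(a <= delta, 0.5 * x ** 2, delta * (a - 0.5 * delta))

def oriented_responses(img):
    # Two simple oriented derivative filters stand in for a full
    # steerable filter bank (illustrative, not the paper's exact filters).
    gx = img[:, 1:] - img[:, :-1]   # horizontal derivative
    gy = img[1:, :] - img[:-1, :]   # vertical derivative
    return gx, gy

def steerable_loss(generated, reference, delta=1.0):
    """Penalize mismatched oriented-filter responses with a Huber term."""
    total = 0.0
    for rg, rr in zip(oriented_responses(generated),
                      oriented_responses(reference)):
        total += np.mean(huber(rg - rr, delta))
    return float(total)

# A vertical "vessel" missing from the generated image incurs a penalty.
ref = np.zeros((8, 8)); ref[:, 4] = 1.0
print(steerable_loss(np.zeros((8, 8)), ref) > 0.0)  # True
print(steerable_loss(ref, ref))                     # 0.0
```

Penalizing filter responses rather than raw pixels concentrates the loss on oriented, elongated structures, which is why it rewards faithful vessel reproduction more than a plain L1/L2 image loss would.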
Generative Image Translation for Data Augmentation of Bone Lesion Pathology
Insufficient training data and severe class imbalance are often limiting
factors when developing machine learning models for the classification of rare
diseases. In this work, we address the problem of classifying bone lesions from
X-ray images by increasing the small number of positive samples in the training
set. We propose a generative data augmentation approach based on a
cycle-consistent generative adversarial network that synthesizes bone lesions
on images without pathology. We pose the generative task as an image-patch
translation problem that we optimize specifically for distinct bones (humerus,
tibia, femur). In experimental results, we confirm that the described method
mitigates the class imbalance problem in the binary classification task of bone
lesion detection. We show that the augmented training sets enable the training
of superior classifiers achieving better performance on a held-out test set.
Additionally, we demonstrate the feasibility of transfer learning and apply a
generative model that was trained on one body part to another.
Missing MRI Pulse Sequence Synthesis using Multi-Modal Generative Adversarial Network
Magnetic resonance imaging (MRI) is being increasingly utilized to assess,
diagnose, and plan treatment for a variety of diseases. The ability to
visualize tissue in varied contrasts in the form of MR pulse sequences in a
single scan provides valuable insights to physicians, as well as enabling
automated systems performing downstream analysis. However, many issues, like
prohibitive scan time, image corruption, different acquisition protocols, or
allergies to certain contrast materials, may hinder the process of acquiring
multiple sequences for a patient. This poses challenges to both physicians and
automated systems since complementary information provided by the missing
sequences is lost. In this paper, we propose a variant of generative
adversarial network (GAN) capable of leveraging redundant information contained
within multiple available sequences in order to generate one or more missing
sequences for a patient scan. The proposed network is designed as a
multi-input, multi-output network which combines information from all the
available pulse sequences, implicitly infers which sequences are missing, and
synthesizes the missing ones in a single forward pass. We demonstrate and
validate our method on two brain MRI datasets each with four sequences, and
show the applicability of the proposed method in simultaneously synthesizing
all missing sequences in any possible scenario where either one, two, or three
of the four sequences may be missing. We compare our approach with competing
unimodal and multi-modal methods, and show that we outperform both
quantitatively and qualitatively.
Comment: Accepted for publication in IEEE Transactions on Medical Imaging
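The multi-input, multi-output idea above (combine all available sequences, implicitly infer which are missing) is often realized by zero-filling absent channels alongside an availability pattern; the sequence names and shapes below are assumptions for illustration:

```python
import numpy as np

SEQUENCES = ["T1", "T2", "FLAIR", "T1ce"]   # assumed four-sequence protocol

def assemble_input(available, h=32, w=32):
    """Zero-impute missing pulse sequences and record availability.

    A multi-input multi-output network can take all channels at once,
    with missing sequences zero-filled; the availability pattern lets
    it implicitly infer which outputs to synthesize in one pass.
    """
    channels, present = [], []
    for name in SEQUENCES:
        if name in available:
            channels.append(available[name])
            present.append(1.0)
        else:
            channels.append(np.zeros((h, w), dtype=np.float32))
            present.append(0.0)
    return np.stack(channels), np.array(present)

scan = {"T1": np.ones((32, 32), np.float32),
        "FLAIR": np.ones((32, 32), np.float32)}
x, flags = assemble_input(scan)
print(x.shape, flags.tolist())  # (4, 32, 32) [1.0, 0.0, 1.0, 0.0]
```

One fixed-width input tensor means a single forward pass covers every missing-sequence scenario, rather than training a separate model per input/output combination.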