ScarGAN: Chained Generative Adversarial Networks to Simulate Pathological Tissue on Cardiovascular MR Scans
Medical images with specific pathologies are scarce, but a large amount of
data is usually required for a deep convolutional neural network (DCNN) to
achieve good accuracy. We consider the problem of segmenting the left
ventricular (LV) myocardium on late gadolinium enhancement (LGE) cardiovascular
magnetic resonance (CMR) scans of which only some of the scans have scar
tissue. We propose ScarGAN to simulate scar tissue on healthy myocardium using
chained generative adversarial networks (GANs). Our novel approach factorizes
the simulation process into three steps: 1) a mask generator to simulate the shape
of the scar tissue; 2) a domain-specific heuristic to produce the initial
simulated scar tissue from the simulated shape; 3) a refining generator to add
details to the simulated scar tissue. Unlike other approaches that generate
samples from scratch, we simulate scar tissue on normal scans resulting in
highly realistic samples. We show that experienced radiologists are unable to
distinguish between real and simulated scar tissue. Training a U-Net with
additional scans with scar tissue simulated by ScarGAN increases the percentage
of scar pixels correctly included in the LV myocardium prediction from 75.9% to
80.5%.
Comment: 12 pages, 5 figures. To appear in MICCAI DLMIA 201
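The three-step factorization described above can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the mask generator and refining generator are GANs in ScarGAN, whereas here random sampling and additive noise stand in for them, and all function names, intensities, and the 8x8 "scan" are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_scar_mask(myocardium_mask, rng):
    # Step 1 (sketch): sample a binary scar-shape mask restricted to the
    # myocardium. In ScarGAN this is a learned mask generator; here we
    # simply threshold random noise as a stand-in.
    noise = rng.random(myocardium_mask.shape)
    return (noise > 0.8) & myocardium_mask

def heuristic_scar_fill(image, scar_mask, scar_intensity=0.9):
    # Step 2 (sketch): domain-specific heuristic -- scar tissue appears
    # bright on LGE, so paste a high intensity into the masked region.
    out = image.copy()
    out[scar_mask] = scar_intensity
    return out

def refine(image, scar_mask, rng, strength=0.05):
    # Step 3 (sketch): a refining generator would add realistic texture;
    # mild noise inside the scar region stands in for it here.
    out = image.copy()
    out[scar_mask] += strength * rng.standard_normal(scar_mask.sum())
    return np.clip(out, 0.0, 1.0)

# Toy 8x8 "scan": mid-gray myocardium block on a dark background.
image = np.full((8, 8), 0.2)
myo = np.zeros((8, 8), dtype=bool)
myo[2:6, 2:6] = True
image[myo] = 0.5

mask = generate_scar_mask(myo, rng)
simulated = refine(heuristic_scar_fill(image, mask), mask, rng)
```

The key design point the abstract emphasizes is preserved even in this sketch: everything outside the simulated scar mask is the original (healthy) scan, which is why the samples stay realistic.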
'A net for everyone': fully personalized and unsupervised neural networks trained with longitudinal data from a single patient
With the rise in importance of personalized medicine, we trained personalized
neural networks to detect tumor progression in longitudinal datasets. The model
was evaluated on two datasets with a total of 64 scans from 32 patients
diagnosed with glioblastoma multiforme (GBM). Contrast-enhanced T1w sequences
of brain magnetic resonance imaging (MRI) images were used in this study. For
each patient, we trained a dedicated neural network using just two images from
different timepoints. Our approach uses a Wasserstein-GAN (generative
adversarial network), an unsupervised network architecture, to map the
differences between the two images. Using this map, the change in tumor volume
can be evaluated. Due to the combination of data augmentation and the network
architecture, co-registration of the two images is not needed. Furthermore, we
do not rely on any additional training data, (manual) annotations, or
pre-trained neural networks. The model achieved an AUC of 0.87 for detecting
tumor change. We also introduced a modified RANO criterion, for which an
accuracy of 66% was achieved. We show that data from just one patient suffices
to train deep neural networks to monitor tumor change.
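Two ingredients of the approach above can be illustrated concretely: the Wasserstein critic objective that a WGAN minimizes, and how a per-voxel change map can be turned into a tumor volume change. This is a hedged sketch, assuming the network outputs per-voxel maps that can be thresholded; the threshold, voxel volume, and function names are assumptions, not the paper's method.

```python
import numpy as np

def wgan_critic_loss(critic_real, critic_fake):
    # Wasserstein critic objective (minimized): E[D(fake)] - E[D(real)],
    # driving the critic to score real scans higher than generated ones.
    return float(np.mean(critic_fake) - np.mean(critic_real))

def tumor_volume_change(map_t0, map_t1, voxel_volume_mm3=1.0, threshold=0.5):
    # Sketch (threshold and map semantics are assumptions): binarize the
    # network's per-voxel tumor maps at two timepoints and compare volumes.
    v0 = float((np.asarray(map_t0) > threshold).sum()) * voxel_volume_mm3
    v1 = float((np.asarray(map_t1) > threshold).sum()) * voxel_volume_mm3
    return v1 - v0

# Toy example: critic scores for two real/fake batches, and two 2x2 maps.
loss = wgan_critic_loss([0.8, 0.9], [0.1, 0.2])
m0 = np.array([[0.9, 0.1], [0.8, 0.2]])
m1 = np.array([[0.9, 0.9], [0.8, 0.2]])
delta = tumor_volume_change(m0, m1, voxel_volume_mm3=2.0)
```

In the toy example, one additional voxel crosses the threshold at the second timepoint, so the estimated volume change equals one voxel's volume.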
Enhanced Magnetic Resonance Image Synthesis with Contrast-Aware Generative Adversarial Networks
A Magnetic Resonance Imaging (MRI) exam typically consists of the acquisition
of multiple MR pulse sequences, which are required for a reliable diagnosis.
Each sequence can be parameterized through multiple acquisition parameters
affecting MR image contrast, signal-to-noise ratio, resolution, or scan time.
With the rise of generative deep learning models, approaches for the synthesis
of MR images have been developed to synthesize additional MR contrasts,
generate synthetic data, or augment existing data for AI training. However,
current generative approaches for the synthesis of MR images are only trained
on images with a specific set of acquisition parameter values, limiting the
clinical value of these methods as various sets of acquisition parameter
settings are used in clinical practice. Therefore, we trained a generative
adversarial network (GAN) to generate synthetic MR knee images conditioned on
various acquisition parameters (repetition time, echo time, image orientation).
This approach enables us to synthesize MR images with adjustable image
contrast. In a visual Turing test, two experts mislabeled 40.5% of real and
synthetic MR images, close to the 50% expected from chance, indicating that
the quality of the synthetic images is comparable to that of real MR
images. This work can support radiologists
and technologists during the parameterization of MR sequences by previewing the
yielded MR contrast, can serve as a valuable tool for radiology training, and
can be used for customized data generation to support AI training.
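Conditioning a GAN on acquisition parameters typically means encoding them as a vector that is fed to the generator alongside the latent noise. The following is a minimal sketch of such an encoding, assuming min-max normalization of repetition time (TR) and echo time (TE) and one-hot encoding of orientation; the value ranges, function name, and latent size are assumptions, not taken from the paper.

```python
import numpy as np

def acquisition_condition(tr_ms, te_ms, orientation,
                          tr_range=(300.0, 6000.0), te_range=(5.0, 120.0),
                          orientations=("axial", "coronal", "sagittal")):
    # Sketch (ranges are assumptions): scale the continuous acquisition
    # parameters to [0, 1] and one-hot encode the image orientation,
    # yielding the condition vector for a conditional GAN.
    tr = (tr_ms - tr_range[0]) / (tr_range[1] - tr_range[0])
    te = (te_ms - te_range[0]) / (te_range[1] - te_range[0])
    onehot = [1.0 if o == orientation else 0.0 for o in orientations]
    return np.array([tr, te, *onehot], dtype=np.float32)

cond = acquisition_condition(2500.0, 30.0, "coronal")

# The condition vector is concatenated with the latent noise before it
# enters the generator (a common conditioning scheme; latent size assumed).
z = np.random.default_rng(1).standard_normal(64).astype(np.float32)
gen_input = np.concatenate([z, cond])
```

Sampling the same latent vector with different TR/TE values in `cond` is what would let such a model preview how the MR contrast changes as the sequence is re-parameterized.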