Informative sample generation using class aware generative adversarial networks for classification of chest Xrays
Training robust deep learning (DL) systems for disease detection from medical
images is challenging due to the limited number of images covering different
disease types and severities. The problem is especially acute where there is
severe class imbalance. We propose an active learning (AL) framework that uses
a Bayesian neural network to select the most informative samples for training
our model. The informative samples are then used within a novel class-aware
generative adversarial network (CAGAN) to generate realistic chest X-ray images
for data augmentation by transferring characteristics from one class label to
another.
Experiments show our proposed AL framework achieves state-of-the-art
performance while using only a fraction of the full dataset, thus saving
significant time and effort over conventional methods.
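As a rough illustration of the sample-selection step described above, the sketch below scores an unlabeled pool by predictive entropy using Monte Carlo dropout, a common approximation to a Bayesian neural network's uncertainty. The PyTorch code, the function names, and the assumption that the pool loader yields (image, index) pairs are illustrative and are not taken from the paper.

```python
import torch

def mc_dropout_uncertainty(model, images, n_samples=20):
    """Estimate predictive entropy by keeping dropout active at inference."""
    model.train()  # keep dropout layers stochastic
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(images), dim=1) for _ in range(n_samples)]
        )                                 # (n_samples, batch, classes)
    mean_probs = probs.mean(dim=0)        # Monte Carlo average over passes
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=1)
    return entropy                        # higher = more informative

def select_informative(model, pool_loader, budget=100):
    """Rank the unlabeled pool by uncertainty and return the top indices."""
    scores, indices = [], []
    for images, idx in pool_loader:       # assumed to yield (images, indices)
        scores.append(mc_dropout_uncertainty(model, images))
        indices.append(idx)
    scores = torch.cat(scores)
    indices = torch.cat(indices)
    top = scores.argsort(descending=True)[:budget]
    return indices[top]
```

The selected samples would then be labeled and passed to the augmentation stage; any acquisition function other than entropy (e.g., mutual information) could be substituted in the same loop.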
Synthesis of Positron Emission Tomography (PET) Images via Multi-channel Generative Adversarial Networks (GANs)
Positron emission tomography (PET) image synthesis plays an important role, as
it can be used to boost the training data for computer-aided diagnosis systems.
However, existing image synthesis methods have problems synthesizing the
low-resolution PET images. To address these limitations, we propose a
multi-channel generative adversarial network (M-GAN) based PET image synthesis
method. Unlike the existing methods, which rely on low-level features, the
proposed M-GAN is able to represent features at a high semantic level based on
the adversarial learning concept. In addition, M-GAN can take input from the
annotation (label) to synthesize the high-uptake regions, e.g., tumors, and
from the computed tomography (CT) images to constrain the appearance
consistency, and output the synthetic PET images directly. Our results on 50
lung cancer PET-CT studies indicate that the images produced by our method are
much closer to the real PET images than those of the existing methods.
Comment: 9 pages, 2 figures
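The multi-channel idea described above can be pictured as a generator conditioned on both a label map and a CT slice. The toy PyTorch generator below simply stacks the two inputs as channels; the layer sizes, normalizations, and the class name are assumptions for illustration and do not reproduce the authors' M-GAN architecture.

```python
import torch
import torch.nn as nn

class MultiChannelGenerator(nn.Module):
    """Toy generator: label map + CT slice in, synthetic PET slice out."""
    def __init__(self, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, base, 4, stride=2, padding=1),    # 2 input channels
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.InstanceNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
            nn.InstanceNorm2d(base),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, 1, 4, stride=2, padding=1),
            nn.Tanh(),                                     # PET intensity in [-1, 1]
        )

    def forward(self, label_map, ct_slice):
        # Stack the two conditioning inputs along the channel dimension.
        x = torch.cat([label_map, ct_slice], dim=1)
        return self.net(x)

# usage sketch: fake_pet = MultiChannelGenerator()(label_map, ct_slice)
# with both inputs shaped (batch, 1, H, W) and H, W divisible by 4.
```

A conditional discriminator would then judge (label, CT, PET) triples, so the adversarial signal enforces both realism and consistency with the conditioning inputs.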
MedGAN: Medical Image Translation using GANs
Image-to-image translation is considered a new frontier in the field of
medical image analysis, with numerous potential applications. However, a large
portion of recent approaches offers individualized solutions based on
specialized, task-specific architectures or requires refinement through
non-end-to-end training. In this paper, we propose a new framework, named
MedGAN, for medical image-to-image translation which operates on the image
level in an end-to-end manner. MedGAN builds upon recent advances in the field
of generative adversarial networks (GANs) by merging the adversarial framework
with a new combination of non-adversarial losses. We utilize a discriminator
network as a trainable feature extractor which penalizes the discrepancy
between the translated medical images and the desired modalities. Moreover,
style-transfer losses are utilized to match the textures and fine-structures of
the desired target images to the translated images. Additionally, we present a
new generator architecture, titled CasNet, which enhances the sharpness of the
translated medical outputs through progressive refinement via encoder-decoder
pairs. Without any application-specific modifications, we apply MedGAN to three
different tasks: PET-CT translation, correction of MR motion artefacts, and PET
image denoising. Perceptual analysis by radiologists and quantitative
evaluations illustrate that MedGAN outperforms other existing translation
approaches.
Comment: 16 pages, 8 figures
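To make the loss combination concrete, the sketch below mixes a non-saturating adversarial term with a perceptual (feature-matching) loss computed on discriminator activations and a Gram-matrix style loss, in the spirit of the description above. The PyTorch code, weighting factors, and exact loss variants are illustrative assumptions rather than MedGAN's published formulation.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    """Channel-wise Gram matrix, as used by style-transfer losses."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def generator_loss(logits_fake, disc_feats_fake, disc_feats_real,
                   lambda_perc=10.0, lambda_style=10.0):
    """Adversarial + perceptual + style terms for the translated images.

    disc_feats_* are lists of intermediate discriminator feature maps,
    i.e., the discriminator is reused as a trainable feature extractor.
    """
    # Non-saturating adversarial term: push translated images toward "real".
    adv = F.binary_cross_entropy_with_logits(
        logits_fake, torch.ones_like(logits_fake))

    perc, style = 0.0, 0.0
    for f_fake, f_real in zip(disc_feats_fake, disc_feats_real):
        perc += F.l1_loss(f_fake, f_real)                             # content
        style += F.l1_loss(gram_matrix(f_fake), gram_matrix(f_real))  # textures

    return adv + lambda_perc * perc + lambda_style * style
```

The generator itself could be any encoder-decoder; chaining several such pairs and refining the output progressively is the idea behind the CasNet architecture mentioned above.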