Adversarial Inpainting of Medical Image Modalities
Numerous factors could lead to partial deteriorations of medical images. For
example, metallic implants will lead to localized perturbations in MRI scans.
This will affect further post-processing tasks such as attenuation correction
in PET/MRI or radiation therapy planning. In this work, we propose the
inpainting of medical images via Generative Adversarial Networks (GANs). The
proposed framework incorporates two patch-based discriminator networks with
additional style and perceptual losses for the inpainting of missing
information in a realistically detailed and contextually consistent manner. The
proposed framework outperformed other natural image inpainting techniques both
qualitatively and quantitatively on two different medical modalities.
Comment: To be submitted to ICASSP 201
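As a rough illustration of the patch-based discriminators mentioned above, the sketch below scores overlapping image patches rather than whole images; the channel sizes, depth, and two-channel input are illustrative assumptions, not the authors' configuration.

```python
# Minimal PatchGAN-style discriminator sketch; all sizes here are assumptions.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_channels: int = 2):  # e.g. corrupted image stacked with candidate
        super().__init__()
        layers = []
        channels = [in_channels, 64, 128, 256]
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            layers += [
                nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            ]
        # Final 1-channel map holds a real/fake score per receptive-field patch.
        layers.append(nn.Conv2d(channels[-1], 1, kernel_size=4, stride=1, padding=1))
        self.model = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.model(x)

# A 256x256 input yields a grid of patch scores rather than a single scalar.
scores = PatchDiscriminator()(torch.randn(1, 2, 256, 256))
print(scores.shape)  # torch.Size([1, 1, 31, 31])
```

Penalizing each patch separately is what pushes a generator toward locally realistic detail, which is the property the abstract emphasizes.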
Retrospective correction of Rigid and Non-Rigid MR motion artifacts using GANs
Motion artifacts are a primary source of magnetic resonance (MR) image
quality deterioration with strong repercussions on diagnostic performance.
Currently, MR motion correction is carried out either prospectively, with the
help of motion tracking systems, or retrospectively by mainly utilizing
computationally expensive iterative algorithms. In this paper, we utilize a new
adversarial framework, titled MedGAN, for the joint retrospective correction of
rigid and non-rigid motion artifacts in different body regions and without the
need for a reference image. MedGAN utilizes a unique combination of
non-adversarial losses and a new generator architecture to capture the textures
and fine-detailed structures of the desired artifact-free MR images.
Quantitative and qualitative comparisons with other adversarial techniques have
illustrated the proposed model's performance.
Comment: 5 pages, 2 figures, under review for the IEEE International Symposium
for Biomedical Image
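One common form of the non-adversarial losses referred to above is a style loss that matches feature correlations (Gram matrices) between the corrected image and the artifact-free target, which encourages matching textures. The sketch below is a generic version of that idea, not MedGAN's exact loss; the stand-in feature extractor is an assumption.

```python
# Generic Gram-matrix style loss sketch; the extractor below is a placeholder.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    # (B, C, H, W) -> (B, C, C) matrix of channel correlations, i.e. texture statistics.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(output: torch.Tensor, target: torch.Tensor, extractor: nn.Module) -> torch.Tensor:
    # Penalize mismatched feature correlations instead of raw pixel differences.
    return F.l1_loss(gram_matrix(extractor(output)), gram_matrix(extractor(target)))

# Stand-in extractor; in practice pretrained features (e.g. VGG) or the
# discriminator's intermediate activations would be used instead.
extractor = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)
loss = style_loss(torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128), extractor)
```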
MedGAN: Medical Image Translation using GANs
Image-to-image translation is considered a new frontier in the field of
medical image analysis, with numerous potential applications. However, a large
portion of recent approaches offers individualized solutions based on
specialized task-specific architectures or requires refinement through
non-end-to-end training. In this paper, we propose a new framework, named
MedGAN, for medical image-to-image translation which operates on the image
level in an end-to-end manner. MedGAN builds upon recent advances in the field
of generative adversarial networks (GANs) by merging the adversarial framework
with a new combination of non-adversarial losses. We utilize a discriminator
network as a trainable feature extractor which penalizes the discrepancy
between the translated medical images and the desired modalities. Moreover,
style-transfer losses are utilized to match the textures and fine-structures of
the desired target images to the translated images. Additionally, we present a
new generator architecture, titled CasNet, which enhances the sharpness of the
translated medical outputs through progressive refinement via encoder-decoder
pairs. Without any application-specific modifications, we apply MedGAN on three
different tasks: PET-CT translation, correction of MR motion artifacts and PET
image denoising. Perceptual analysis by radiologists and quantitative
evaluations illustrate that MedGAN outperforms other existing translation
approaches.
Comment: 16 pages, 8 figures
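Since CasNet is only described at a high level here, the following sketch shows the progressive-refinement idea as a chain of encoder-decoder blocks, where each block re-translates the previous block's output. The block depth, widths, and the absence of skip connections are simplifying assumptions rather than the published architecture.

```python
# Sketch of a CasNet-like cascade of encoder-decoder blocks; sizes are assumptions.
import torch
import torch.nn as nn

class EncoderDecoderBlock(nn.Module):
    def __init__(self, channels: int = 1, width: int = 64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(channels, width, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(width * 2, width, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(width, channels, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decode(self.encode(x))

class CascadedGenerator(nn.Module):
    def __init__(self, n_blocks: int = 3):
        super().__init__()
        self.blocks = nn.ModuleList(EncoderDecoderBlock() for _ in range(n_blocks))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            x = block(x)  # each stage refines the output of the previous one
        return x

out = CascadedGenerator()(torch.randn(1, 1, 256, 256))
print(out.shape)  # torch.Size([1, 1, 256, 256])
```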
ipA-MedGAN: Inpainting of Arbitrary Regions in Medical Imaging
Local deformations in medical modalities are common phenomena due to a
multitude of factors such as metallic implants or limited fields of view in
magnetic resonance imaging (MRI). Completion of the missing or distorted
regions is of special interest for automatic image analysis frameworks to
enhance post-processing tasks such as segmentation or classification. In this
work, we propose a new generative framework for medical image inpainting,
titled ipA-MedGAN. It bypasses the limitations of previous frameworks by
enabling inpainting of arbitrarily shaped regions without prior localization of
the regions of interest. Thorough qualitative and quantitative comparisons with
other inpainting and translational approaches have illustrated the superior
performance of the proposed framework for the task of brain MR inpainting.
Comment: Submitted to IEEE ICIP 202
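To make inpainting of arbitrary regions concrete, the sketch below shows a common way to present such a task to a generator: concatenate the corrupted image with a binary mask of the unknown region, then composite the prediction back only inside that mask. This is a generic setup under assumed conventions, not necessarily ipA-MedGAN's exact pipeline.

```python
# Generic arbitrary-region inpainting setup; mask conventions are assumptions.
import torch

def make_inpainting_input(image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """image: (B, 1, H, W); mask: (B, 1, H, W) with 1 marking missing pixels."""
    corrupted = image * (1.0 - mask)            # zero out the unknown region
    return torch.cat([corrupted, mask], dim=1)  # 2-channel generator input

def composite(prediction: torch.Tensor, image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # Keep known pixels and take the generator's prediction only inside the mask.
    return image * (1.0 - mask) + prediction * mask

image = torch.rand(1, 1, 256, 256)
mask = (torch.rand(1, 1, 256, 256) > 0.9).float()  # hypothetical arbitrarily shaped mask
net_in = make_inpainting_input(image, mask)         # shape (1, 2, 256, 256)
```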