Retrospective correction of Rigid and Non-Rigid MR motion artifacts using GANs
Motion artifacts are a primary source of magnetic resonance (MR) image
quality deterioration with strong repercussions on diagnostic performance.
Currently, MR motion correction is carried out either prospectively, with the
help of motion tracking systems, or retrospectively by mainly utilizing
computationally expensive iterative algorithms. In this paper, we utilize a new
adversarial framework, titled MedGAN, for the joint retrospective correction of
rigid and non-rigid motion artifacts in different body regions and without the
need for a reference image. MedGAN utilizes a unique combination of
non-adversarial losses and a new generator architecture to capture the textures
and fine-detailed structures of the desired artifact-free MR images.
Quantitative and qualitative comparisons with other adversarial techniques
demonstrate the performance of the proposed model.
Comment: 5 pages, 2 figures, under review for the IEEE International Symposium on Biomedical Imaging (ISBI)
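The combination of non-adversarial losses with the adversarial objective can be sketched as a weighted generator loss. The sketch below is illustrative, not the exact MedGAN formulation: the Gram-matrix style term, the feature-space L1 perceptual term, and the weights `lam_p` and `lam_s` are all assumptions.

```python
import numpy as np

def gram_matrix(feats):
    # feats: (channels, height*width) feature map from some feature extractor
    return feats @ feats.T / feats.shape[1]

def style_loss(f_gen, f_target):
    # Texture mismatch via Gram-matrix difference (illustrative style term)
    g1, g2 = gram_matrix(f_gen), gram_matrix(f_target)
    return float(np.mean((g1 - g2) ** 2))

def perceptual_loss(f_gen, f_target):
    # Feature-space L1 distance (illustrative perceptual term)
    return float(np.mean(np.abs(f_gen - f_target)))

def generator_loss(adv_term, f_gen, f_target, lam_p=1.0, lam_s=10.0):
    # Combined objective: adversarial + weighted non-adversarial terms
    # (the weighting factors here are assumed, not taken from the paper)
    return adv_term + lam_p * perceptual_loss(f_gen, f_target) \
                    + lam_s * style_loss(f_gen, f_target)
```

When generated and target features coincide, both non-adversarial terms vanish and only the adversarial term remains, which is the sanity check one would expect of such a composite loss.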
An Adversarial Super-Resolution Remedy for Radar Design Trade-offs
Radar is of vital importance in many fields, such as autonomous driving,
safety and surveillance applications. However, it suffers from stringent
constraints on its design parametrization leading to multiple trade-offs. For
example, the bandwidth in FMCW radars is inversely proportional to both the
maximum unambiguous range and range resolution. In this work, we introduce a
new method for circumventing radar design trade-offs. We propose the use of
recent advances in computer vision, more specifically generative adversarial
networks (GANs), to enhance low-resolution radar acquisitions into higher
resolution counterparts while maintaining the advantages of the low-resolution
parametrization. The capability of the proposed method was evaluated on the
velocity resolution and range-azimuth trade-offs in micro-Doppler signatures
and FMCW uniform linear array (ULA) radars, respectively.
Comment: Accepted in EUSIPCO 2019, 5 pages
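The bandwidth/range-resolution side of the trade-off follows directly from the standard FMCW relation ΔR = c / (2B): a minimal numeric sketch.

```python
def range_resolution_m(bandwidth_hz, c=3e8):
    # FMCW range resolution: delta_R = c / (2 * B)
    # Larger bandwidth -> finer range cells, at the cost of other
    # design parameters such as the maximum unambiguous range.
    return c / (2.0 * bandwidth_hz)

# Doubling the bandwidth halves the resolvable range cell:
# 150 MHz -> 1.0 m, 300 MHz -> 0.5 m
```

This is why super-resolving an acquisition made with a low-bandwidth parametrization, as the paper proposes, effectively recovers resolution without paying the associated design cost.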
ipA-MedGAN: Inpainting of Arbitrary Regions in Medical Imaging
Local deformations in medical modalities are common phenomena due to a
multitude of factors such as metallic implants or limited field of views in
magnetic resonance imaging (MRI). Completion of the missing or distorted
regions is of special interest for automatic image analysis frameworks to
enhance post-processing tasks such as segmentation or classification. In this
work, we propose a new generative framework for medical image inpainting,
titled ipA-MedGAN. It bypasses the limitations of previous frameworks by
enabling inpainting of arbitrary shaped regions without a prior localization of
the regions of interest. Thorough qualitative and quantitative comparisons with
other inpainting and translational approaches have illustrated the superior
performance of the proposed framework for the task of brain MR inpainting.
Comment: Submitted to IEEE ICIP 2020
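Training for arbitrary-region inpainting typically corrupts the input with randomly placed masks so the network never relies on a fixed region location. The sketch below uses rectangular masks for simplicity; ipA-MedGAN's actual masking strategy may differ.

```python
import numpy as np

def random_mask(shape, rng, max_frac=0.5):
    # Binary mask with a randomly placed rectangle of ones
    # (the region the generator must complete)
    h, w = shape
    mh = rng.integers(1, int(h * max_frac) + 1)
    mw = rng.integers(1, int(w * max_frac) + 1)
    top = rng.integers(0, h - mh + 1)
    left = rng.integers(0, w - mw + 1)
    mask = np.zeros(shape, dtype=np.float32)
    mask[top:top + mh, left:left + mw] = 1.0
    return mask

def masked_input(image, mask):
    # Zero out the masked region; the network sees only the context
    return image * (1.0 - mask)
```

Because the mask location varies per sample, no prior localization of the region of interest is needed at inference time.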
Unsupervised Medical Image Translation Using Cycle-MedGAN
Image-to-image translation is a new field in computer vision with multiple
potential applications in the medical domain. However, for supervised image
translation frameworks, co-registered datasets, paired in a pixel-wise sense,
are required. This is often difficult to acquire in realistic medical
scenarios. On the other hand, unsupervised translation frameworks often result
in blurred translated images with unrealistic details. In this work, we propose
a new unsupervised translation framework which is titled Cycle-MedGAN. The
proposed framework utilizes new non-adversarial cycle losses which direct the
framework to minimize the textural and perceptual discrepancies in the
translated images. Qualitative and quantitative comparisons against other
unsupervised translation approaches demonstrate the performance of the proposed
framework for PET-CT translation and MR motion correction.
Comment: Submitted to EUSIPCO 2019, 5 pages
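The cycle-consistency idea underlying such frameworks can be sketched as reconstructing each input after a round trip through both mappings. The L1 pixel-space cycle term below is a simplified stand-in: Cycle-MedGAN's contribution is to apply textural and perceptual cycle losses on extracted features rather than raw pixels.

```python
import numpy as np

def cycle_loss(x, y, G, F):
    # G maps domain X -> Y, F maps Y -> X.
    # L1 reconstruction error after the round trips
    # x -> G(x) -> F(G(x)) and y -> F(y) -> G(F(y)).
    loss_x = np.mean(np.abs(F(G(x)) - x))
    loss_y = np.mean(np.abs(G(F(y)) - y))
    return float(loss_x + loss_y)
```

If the two mappings are exact inverses, the cycle loss is zero, so no pixel-wise paired data is ever required to supervise training.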
FA-GAN: fused attentive generative adversarial networks for MRI image super-resolution
High-resolution magnetic resonance images provide fine-grained anatomical information, but acquiring such data requires long scanning times. In this paper, a framework called Fused Attentive Generative Adversarial Networks (FA-GAN) is proposed to generate super-resolution MR images from low-resolution magnetic resonance images, effectively reducing the scanning time while retaining high resolution. In the FA-GAN framework, a local fusion feature block, consisting of three-pass networks with different convolution kernels, is proposed to extract image features at different scales. A global feature fusion module, comprising a channel attention module, a self-attention module, and a fusion operation, is designed to enhance the important features of the MR image. Moreover, spectral normalization is introduced to stabilize the discriminator network. 40 sets of 3D magnetic resonance images (each set containing 256 slices) are used to train the network, and 10 sets are used to test the proposed method. The experimental results show that the PSNR and SSIM values of the super-resolution magnetic resonance images generated by the proposed FA-GAN method are higher than those of state-of-the-art reconstruction methods.
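Spectral normalization stabilizes GAN discriminators by rescaling each weight matrix to unit spectral norm, making each linear layer 1-Lipschitz. A minimal sketch using a full SVD is shown below; practical implementations approximate the leading singular value with power iteration instead.

```python
import numpy as np

def spectral_normalize(W):
    # Divide by the largest singular value so the resulting
    # matrix has spectral norm exactly 1 (a 1-Lipschitz map).
    sigma = np.linalg.svd(W, compute_uv=False)[0]
    return W / sigma
```

Constraining every discriminator layer this way bounds the discriminator's Lipschitz constant, which is the property that makes adversarial training less prone to divergence.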