DAGAN: deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction
Compressed Sensing Magnetic Resonance Imaging (CS-MRI) enables fast acquisition, which is highly desirable for numerous clinical applications. This can not only reduce the scanning cost and ease patient burden, but also potentially reduce motion artefacts and the effect of contrast washout, thus yielding better image quality. Unlike parallel imaging based fast MRI, which utilises multiple coils to simultaneously receive MR signals, CS-MRI breaks the Nyquist-Shannon sampling barrier to reconstruct MRI images from far less raw data. This paper provides a deep learning based strategy for reconstruction of CS-MRI, and bridges a substantial gap between conventional non-learning methods, which work only on data from a single image, and prior knowledge from large training datasets. In particular, a novel conditional Generative Adversarial Networks-based model (DAGAN) is proposed to reconstruct CS-MRI. In our DAGAN architecture, we have designed a refinement learning method to stabilise our U-Net based generator, which provides an end-to-end network to reduce aliasing artefacts. To better preserve texture and edges in the reconstruction, we have coupled the adversarial loss with an innovative content loss. In addition, we incorporate frequency domain information to enforce similarity in both the image and frequency domains. We have performed comprehensive comparison studies with both conventional CS-MRI reconstruction methods and newly investigated deep learning approaches. Compared to these methods, our DAGAN method provides superior reconstruction with preserved perceptual image details. Furthermore, each image is reconstructed in about 5 ms, which is suitable for real-time processing.
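The dual-domain idea in this abstract, enforcing similarity in both the image and frequency domains, can be illustrated with a minimal sketch. The function name and the weighting factors below are illustrative assumptions, not the paper's actual loss or hyperparameters:

```python
import numpy as np

def combined_content_loss(reconstruction, target, alpha=15.0, beta=0.1):
    """Hypothetical sketch of a DAGAN-style content loss: pixel-wise MSE
    in the image domain plus MSE in the frequency (k-space) domain via
    the 2-D FFT. Weights alpha and beta are illustrative only."""
    # Image-domain similarity: mean squared error between pixels.
    image_loss = np.mean((reconstruction - target) ** 2)
    # Frequency-domain similarity: MSE between the 2-D Fourier transforms.
    freq_recon = np.fft.fft2(reconstruction)
    freq_target = np.fft.fft2(target)
    freq_loss = np.mean(np.abs(freq_recon - freq_target) ** 2)
    return alpha * image_loss + beta * freq_loss
```

In a full training setup this term would be added to the adversarial loss; the frequency term penalises errors that are spread across k-space even when they are locally small in the image domain.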
MedGAN: Medical Image Translation using GANs
Image-to-image translation is considered a new frontier in the field of medical image analysis, with numerous potential applications. However, a large portion of recent approaches offers individualized solutions based on specialized task-specific architectures or requires refinement through non-end-to-end training. In this paper, we propose a new framework, named MedGAN, for medical image-to-image translation which operates on the image level in an end-to-end manner. MedGAN builds upon recent advances in the field of generative adversarial networks (GANs) by merging the adversarial framework with a new combination of non-adversarial losses. We utilize a discriminator network as a trainable feature extractor which penalizes the discrepancy between the translated medical images and the desired modalities. Moreover, style-transfer losses are utilized to match the textures and fine structures of the desired target images to the translated images. Additionally, we present a new generator architecture, titled CasNet, which enhances the sharpness of the translated medical outputs through progressive refinement via encoder-decoder pairs. Without any application-specific modifications, we apply MedGAN on three different tasks: PET-CT translation, correction of MR motion artefacts, and PET image denoising. Perceptual analysis by radiologists and quantitative evaluations illustrate that MedGAN outperforms other existing translation approaches.
Comment: 16 pages, 8 figures
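The style-transfer losses mentioned in this abstract typically match second-order texture statistics of feature maps via Gram matrices. The following is a minimal sketch of that idea under stated assumptions; the function names and the feature-map shape convention are hypothetical, not MedGAN's implementation:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map.
    Captures channel-to-channel correlations, i.e. texture statistics."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_loss(feat_translated, feat_target):
    """Hypothetical sketch of a style-transfer loss: match the Gram
    matrices of feature maps (e.g. from a discriminator used as a
    trainable feature extractor) so that the textures and fine
    structures of the target modality carry over to the translation."""
    g_translated = gram_matrix(feat_translated)
    g_target = gram_matrix(feat_target)
    return np.mean((g_translated - g_target) ** 2)
```

Because the Gram matrix discards spatial layout, this loss pushes the translated image toward the target's texture distribution without requiring pixel-level alignment.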
Deep De-Aliasing for Fast Compressive Sensing MRI
Fast Magnetic Resonance Imaging (MRI) is highly in demand for many clinical applications in order to reduce the scanning cost and improve the patient experience. This can also potentially increase the image quality by reducing motion artefacts and contrast washout. However, once an image field of view and the desired resolution are chosen, the minimum scanning time is normally determined by the requirement of acquiring sufficient raw data to meet the Nyquist-Shannon sampling criterion. Compressive Sensing (CS) theory is naturally suited to MRI scanning sequence design, allowing image reconstruction from far less raw data. Inspired by recent advances in deep learning for solving various inverse problems, we propose a conditional Generative Adversarial Networks-based deep learning framework for de-aliasing and reconstructing MRI images from highly undersampled data, with great promise to accelerate the data acquisition process. By coupling an innovative content loss with the adversarial loss, our de-aliasing results are more realistic. Furthermore, we propose a refinement learning procedure for training the generator network, which can stabilise the training with fast convergence and less parameter tuning. We demonstrate that the proposed framework outperforms state-of-the-art CS-MRI methods in terms of reconstruction error and perceptual image quality. In addition, our method can reconstruct each image in 0.22 ms to 0.37 ms, which is promising for real-time applications.
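The undersampling setting these abstracts describe can be simulated retrospectively: transform a fully sampled image to k-space, keep only the locations selected by a binary mask, and inverse-transform to obtain the aliased, zero-filled input a de-aliasing network would receive. The sketch below assumes a single-coil Cartesian setup; the function name and mask convention are illustrative, not the papers' sampling schemes:

```python
import numpy as np

def undersample_kspace(image, mask):
    """Simulate CS-MRI acquisition on a fully sampled image.
    mask is a binary array the same shape as the image: 1 where a
    k-space sample is acquired, 0 where it is skipped."""
    # Forward 2-D FFT takes the image into k-space.
    kspace = np.fft.fft2(image)
    # Zero out the unsampled locations, then invert: the result is the
    # zero-filled reconstruction, which exhibits aliasing artefacts.
    zero_filled = np.fft.ifft2(kspace * mask)
    return np.abs(zero_filled)
```

With a mask of all ones (full sampling) the original image is recovered; the sparser the mask, the stronger the aliasing the network must remove.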
Hybrid Parallel Imaging and Compressed Sensing MRI Reconstruction with GRAPPA Integrated Multi-loss Supervised GAN
Objective: Parallel imaging accelerates the acquisition of magnetic resonance imaging (MRI) data by acquiring additional sensitivity information with an array of receiver coils, resulting in fewer phase-encoding steps. Compressed sensing magnetic resonance imaging (CS-MRI) has achieved popularity in the field of medical imaging because it requires even less data than parallel imaging. Parallel imaging and compressed sensing (CS) both speed up traditional MRI acquisition by reducing the amount of data captured in k-space. As acquisition time is proportional to the number of samples, reconstructing an image from reduced k-space samples leads to faster acquisition but introduces aliasing artifacts. This paper proposes a novel Generative Adversarial Network (GAN), namely RECGAN-GR, supervised with multi-modal losses for de-aliasing the reconstructed image. Methods: In contrast to existing GAN networks, our proposed method introduces a novel generator network, namely RemU-Net, integrated with dual-domain loss functions including weighted magnitude and phase loss functions, along with a parallel imaging-based loss, i.e., GRAPPA consistency loss. A k-space correction block is proposed as refinement learning to make the GAN network self-resistant to generating unnecessary data, which drives faster convergence of the reconstruction process. Results: Comprehensive results show that the proposed RECGAN-GR achieves a 4 dB improvement in PSNR over GAN-based methods and a 2 dB improvement over conventional state-of-the-art CNN methods available in the literature. Conclusion and significance: The proposed work contributes to significant improvement in image quality for low retained data, leading to 5x or 10x faster acquisition.
Comment: 12 pages, 11 figures
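The k-space correction block mentioned in this abstract follows the general data-consistency idea used in undersampled MRI reconstruction: wherever k-space was actually measured, the acquired values override the generator's prediction. The sketch below is a generic hard data-consistency step under that assumption; the function name and details are hypothetical, not RECGAN-GR's exact block:

```python
import numpy as np

def kspace_correction(generated_image, acquired_kspace, mask):
    """Hypothetical sketch of a k-space correction (data-consistency)
    step. mask is 1 at sampled k-space locations and 0 elsewhere:
    sampled entries are replaced by the acquired data, and the
    generator's prediction is kept only where nothing was measured."""
    gen_kspace = np.fft.fft2(generated_image)
    corrected = mask * acquired_kspace + (1 - mask) * gen_kspace
    return np.abs(np.fft.ifft2(corrected))
```

Constraining the output to agree with the measured samples prevents the generator from "inventing" data at acquired locations, which is one way such a block can speed up convergence.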