Edge-enhanced dual discriminator generative adversarial network for fast MRI with parallel imaging using multi-view information
In clinical medicine, magnetic resonance imaging (MRI) is one of the most important tools for diagnosis, triage, prognosis, and treatment planning. However, MRI suffers from an inherently slow data acquisition process because data are collected sequentially in k-space. In recent years, most MRI reconstruction methods proposed in the literature have focused on holistic image reconstruction rather than on enhancing edge information. This work departs from that general trend by concentrating on the enhancement of edge information. Specifically, we introduce a novel parallel imaging coupled dual discriminator generative adversarial network (PIDD-GAN) for fast multi-channel MRI reconstruction that incorporates multi-view information. The dual discriminator design aims to improve the edge information in MRI reconstruction: one discriminator is used for holistic image reconstruction, whereas the other is responsible for enhancing edge information. An improved U-Net with local and global residual learning is proposed for the generator, and frequency channel attention blocks (FCA blocks) are embedded in the generator to incorporate attention mechanisms. A content loss is introduced to train the generator for better reconstruction quality. We performed comprehensive experiments on the Calgary-Campinas public brain MR dataset and compared our method with state-of-the-art MRI reconstruction methods. Ablation studies of residual learning were conducted on the MICCAI13 dataset to validate the proposed modules. Results show that PIDD-GAN provides high-quality reconstructed MR images with well-preserved edge information. The time for single-image reconstruction is below 5 ms, which meets the demand for fast processing.
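The dual-discriminator objective is straightforward to express in code. Below is a minimal PyTorch sketch, not the authors' implementation: one discriminator scores whole images while a second scores edge maps extracted with a fixed Sobel operator, and the generator is trained against both alongside an L1 content term. The module names, the Sobel edge extractor, and the loss weights are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SobelEdges(nn.Module):
    """Fixed Sobel filter exposing edge maps to the edge discriminator
    (an assumption; the paper's exact edge extractor may differ)."""
    def __init__(self):
        super().__init__()
        kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        self.register_buffer("kx", kx.view(1, 1, 3, 3))
        self.register_buffer("ky", kx.t().contiguous().view(1, 1, 3, 3))

    def forward(self, x):                      # x: (B, 1, H, W) magnitude image
        gx = F.conv2d(x, self.kx, padding=1)
        gy = F.conv2d(x, self.ky, padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def generator_loss(g_out, target, d_image, d_edge, edges,
                   w_adv=0.01, w_content=1.0):
    """Adversarial terms from both discriminators plus an L1 content term
    (weights are hypothetical)."""
    logits_img = d_image(g_out)                # holistic discriminator
    logits_edge = d_edge(edges(g_out))         # edge discriminator
    adv = (F.binary_cross_entropy_with_logits(logits_img,
                                              torch.ones_like(logits_img))
           + F.binary_cross_entropy_with_logits(logits_edge,
                                                torch.ones_like(logits_edge)))
    content = F.l1_loss(g_out, target)
    return w_adv * adv + w_content * content
```

Each discriminator would be trained with the usual real/fake objective, on images and on edge maps respectively; keeping the edge branch separate is what lets the second discriminator penalise blurred boundaries specifically.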
SFT-KD-Recon: Learning a Student-friendly Teacher for Knowledge Distillation in Magnetic Resonance Image Reconstruction
Deep cascaded architectures for magnetic resonance imaging (MRI) acceleration
have shown remarkable success in providing high-quality reconstruction.
However, as the number of cascades increases, the improvements in
reconstruction tend to become marginal, indicating possible excess model
capacity. Knowledge distillation (KD) is an emerging technique to compress
these models, in which a trained deep teacher network is used to distill
knowledge to a smaller student network such that the student learns to mimic
the behavior of the teacher. Most KD methods focus on effectively training the
student with a pre-trained teacher unaware of the student model. We propose
SFT-KD-Recon, a student-friendly teacher training approach in which the
teacher is trained along with the student as a step prior to KD, making the
teacher aware of the structure and capacity of the student and aligning the
teacher's representations with the student's. In SFT, the teacher is jointly
trained with the unfolded branch configurations of the student blocks using
three loss terms:
teacher-reconstruction loss, student-reconstruction loss, and teacher-student
imitation loss, followed by KD of the student. We perform extensive experiments
for MRI acceleration at 4x and 5x under-sampling on brain and cardiac
datasets with five KD methods, using the proposed approach as a prior step. We
consider the DC-CNN architecture and set up the teacher as D5C5 (141,765
parameters) and the student as D3C5 (49,285 parameters), a compression ratio
of 2.87:1.
Results show that (i) our approach consistently improves the KD methods with
improved reconstruction performance and image quality, and (ii) the student
distilled using our approach is competitive with the teacher, with the
performance gap reduced from 0.53 dB to 0.03 dB.
Comment: 18 pages, 8 figures. Accepted for publication at MIDL 2023. Code for
our proposed method is available at
https://github.com/GayathriMatcha/SFT-KD-Reco
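The SFT training objective described above can be sketched compactly. The PyTorch snippet below is a hedged illustration, not the released SFT-KD-Recon code: it combines the teacher-reconstruction, student-reconstruction, and teacher-student imitation losses for one step of the joint teacher-training phase. The function name, the choice of L1/MSE losses, and the weights alpha and beta are assumptions.

```python
import torch.nn.functional as F

def sft_joint_loss(teacher_out, student_out, target, alpha=1.0, beta=0.5):
    """Three-term SFT objective (weights alpha/beta are illustrative):
    the teacher is optimised not only to reconstruct well but also to
    remain imitable by the smaller student."""
    l_teacher = F.l1_loss(teacher_out, target)        # teacher-reconstruction
    l_student = F.l1_loss(student_out, target)        # student-reconstruction
    l_imitate = F.mse_loss(teacher_out, student_out)  # teacher-student imitation
    return l_teacher + alpha * l_student + beta * l_imitate
```

After this joint phase, standard KD would proceed with the now student-aware teacher as the distillation source.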
Swin transformer for fast MRI
Magnetic resonance imaging (MRI) is an important non-invasive clinical tool that can produce high-resolution and reproducible images. However, a long scanning time is required for high-quality MR images, which leads to exhaustion and discomfort of patients and induces more artefacts from the patients' voluntary movements and involuntary physiological movements. To accelerate the scanning process, methods based on k-space undersampling and deep learning based reconstruction have been popularised. This work introduced SwinMR, a novel Swin transformer based method for fast MRI reconstruction. The whole network consisted of an input module (IM), a feature extraction module (FEM) and an output module (OM). The IM and OM were 2D convolutional layers, and the FEM was composed of a cascade of residual Swin transformer blocks (RSTBs) and 2D convolutional layers. Each RSTB consisted of a series of Swin transformer layers (STLs). Unlike the multi-head self-attention (MSA) of the original transformer, which operates over the whole image space, the (shifted) window multi-head self-attention (W-MSA/SW-MSA) of the STL was computed within shifted local windows. A novel multi-channel loss using the sensitivity maps was proposed, which was shown to preserve more texture and detail. We performed a series of comparative and ablation studies on the Calgary-Campinas public brain MR dataset and conducted a downstream segmentation experiment on the Multi-modal Brain Tumour Segmentation Challenge 2017 dataset. The results demonstrate that SwinMR achieved high-quality reconstruction compared with other benchmark methods and showed great robustness with different undersampling masks, under noise interference, and on different datasets. The code is publicly available at https://github.com/ayanglab/SwinMR.

This work was supported in part by the UK Research and Innovation Future Leaders Fellowship [MR/V023799/1], in part by the Medical Research Council [MC/PC/21013], in part by the European Research Council Innovative Medicines Initiative [DRAGON, H2020-JTI-IMI2 101005122], in part by the AI for Health Imaging Award [CHAIMELEON, H2020-SC1-FA-DTS-2019-1 952172], in part by the British Heart Foundation [Project Number: TG/18/5/34111, PG/16/78/32402], in part by the NVIDIA Academic Hardware Grant Program, in part by the Project of Shenzhen International Cooperation Foundation [GJHZ20180926165402083], in part by the Basque Government through the ELKARTEK funding program [KK-2020/00049], and in part by the consolidated research group MATHMODE [IT1294-19].
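The core of the RSTB, window-based self-attention, can be illustrated in a few lines. The sketch below is a minimal PyTorch example under stated assumptions, not the SwinMR code: it partitions a feature map into non-overlapping windows, obtains the shifted variant with a cyclic roll, and applies self-attention independently per window. The window size and tensor shapes are toy values.

```python
import torch

def window_partition(x, ws):
    """Split a (B, H, W, C) feature map into (num_windows*B, ws*ws, C) tokens."""
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)

def shifted_window_partition(x, ws):
    """Cyclically shift by ws//2 before partitioning, so the next layer's
    windows straddle the previous layer's window borders (SW-MSA)."""
    return window_partition(
        torch.roll(x, shifts=(-ws // 2, -ws // 2), dims=(1, 2)), ws)

# Attention runs inside each window, so its cost grows linearly with image
# size rather than quadratically as in the original transformer's global MSA.
x = torch.randn(1, 8, 8, 32)                  # toy (B, H, W, C) feature map
attn = torch.nn.MultiheadAttention(32, num_heads=4, batch_first=True)
windows = window_partition(x, ws=4)           # (4, 16, 32): 4 windows, 16 tokens
out, _ = attn(windows, windows, windows)      # W-MSA: per-window self-attention
```

This windowing is what makes attention over large MR images tractable, while the shifted variant restores information flow across window borders between consecutive STLs.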
- …