Neural Architecture Search for Compressed Sensing Magnetic Resonance Image Reconstruction
Recent works have demonstrated that deep learning (DL) based compressed
sensing (CS) can accelerate Magnetic Resonance (MR) imaging by reconstructing
MR images from sub-sampled k-space data. However, the network architectures
adopted in previous methods were all handcrafted. Neural Architecture Search
(NAS) algorithms can automatically build neural network architectures that
have outperformed human-designed ones in several vision tasks. Inspired by
this, we propose a novel and efficient network for the MR image reconstruction
problem, obtained via NAS rather than manual design.
Specifically, a cell structure, integrated into a model-driven MR
reconstruction pipeline, is automatically searched from a flexible,
pre-defined operation search space in a differentiable manner.
Experimental results show that the searched network produces better
reconstruction results than previous state-of-the-art methods in terms of
PSNR and SSIM while using 4-6 times fewer computational resources. Extensive
experiments were conducted to analyze how hyper-parameters affect
reconstruction performance and the searched structures. The generalizability of
the searched architecture was also evaluated on different organ MR datasets.
Our proposed method achieves a better trade-off between computation cost and
reconstruction performance for the MR reconstruction problem, generalizes
well, and offers insights for designing neural networks for other medical
imaging applications. The evaluation code will be available at
https://github.com/yjump/NAS-for-CSMRI.
Comment: To appear in Computerized Medical Imaging and Graphics
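The differentiable search described in this abstract typically follows a DARTS-style relaxation: each edge of the searched cell computes a softmax-weighted mixture of candidate operations, and the mixture weights are learned jointly with the network weights. The toy sketch below (not the authors' code; the operation set and shapes are purely illustrative) shows the core idea in NumPy:

```python
# Illustrative DARTS-style mixed operation for a differentiable cell search.
# The candidate operations and alpha values here are hypothetical stand-ins.
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def mixed_op(x, alpha, ops):
    """Softmax-weighted sum of candidate operations applied to feature map x."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, ops))

# Toy operation search space (placeholders for the paper's conv variants).
ops = [
    lambda x: x,                  # identity / skip connection
    lambda x: 0.5 * x,            # stand-in for a 3x3 convolution
    lambda x: np.roll(x, 1, -1),  # stand-in for a dilated convolution
]

x = np.ones((2, 4))
alpha = np.array([2.0, 0.0, -2.0])  # learnable architecture parameters
y = mixed_op(x, alpha, ops)
print(y.shape)  # (2, 4)
```

During search, gradients flow through the softmax over alpha; after search, the highest-weighted operation on each edge is kept to form the discrete cell.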
SDLFormer: A Sparse and Dense Locality-enhanced Transformer for Accelerated MR Image Reconstruction
Transformers have emerged as viable alternatives to convolutional neural
networks owing to their ability to learn non-local region relationships in the
spatial domain. The self-attention mechanism of the transformer enables
transformers to capture long-range dependencies in the images, which might be
desirable for accelerated MRI image reconstruction as the effect of
undersampling is non-local in the image domain. Despite their computational
efficiency, window-based transformers suffer from restricted receptive
fields, as dependencies are limited to the scope of the image windows. We
propose a window-based transformer network that integrates a dilated
attention mechanism and convolution for accelerated MRI reconstruction.
The proposed network consists of dilated and dense neighborhood attention
transformers that enhance distant-neighborhood pixel relationships, and
introduces depth-wise convolutions within the transformer module to learn
low-level translation-invariant features for accelerated MRI reconstruction.
The proposed model is trained in a self-supervised manner. We
perform extensive experiments for multi-coil MRI acceleration for coronal PD,
coronal PDFS and axial T2 contrasts with 4x and 5x under-sampling in
self-supervised learning based on k-space splitting. We compare our method
against other reconstruction architectures and the parallel domain
self-supervised learning baseline. Results show that the proposed model
improves, on average, by (i) around 1.40 dB in PSNR and around 0.028 in SSIM
over other architectures, and (ii) around 1.44 dB in PSNR and around 0.029 in
SSIM over parallel-domain self-supervised learning. The code is available at
https://github.com/rahul-gs-16/sdlformer.git
Comment: Accepted at the MICCAI 2023 workshop MILLanD (Medical Image Learning
with Limited and Noisy Data)
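The self-supervised training by k-space splitting mentioned above partitions the acquired k-space locations into two disjoint sets: one fed to the network as input, and one held out to supervise the loss. A minimal sketch of such a split (our illustration, assuming a binary 2D sampling mask; the loss fraction is a hypothetical hyper-parameter, not a value from the paper):

```python
# Illustrative k-space splitting for self-supervised MRI reconstruction:
# acquired locations are partitioned into an input mask and a loss mask.
import numpy as np

rng = np.random.default_rng(0)

def split_kspace(mask, loss_frac=0.4, rng=rng):
    """Partition sampled k-space locations into disjoint input/loss masks."""
    idx = np.flatnonzero(mask)          # indices of acquired samples
    rng.shuffle(idx)
    n_loss = int(loss_frac * idx.size)  # fraction held out for the loss
    loss_mask = np.zeros_like(mask)
    loss_mask.flat[idx[:n_loss]] = 1
    input_mask = mask - loss_mask       # remaining samples feed the network
    return input_mask, loss_mask

mask = (rng.random((8, 8)) < 0.25).astype(int)  # toy undersampling mask
inp, loss = split_kspace(mask)
assert ((inp + loss) == mask).all()  # disjoint partition of acquired samples
```

The network sees only `input_mask` samples, and the reconstruction is compared against the held-out `loss_mask` samples, so no fully sampled reference is needed.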
Dual-Octave Convolution for Accelerated Parallel MR Image Reconstruction
Magnetic resonance (MR) image acquisition is an inherently slow process;
accelerating it by acquiring multiple undersampled images simultaneously
through parallel imaging has long been a subject of research. In this
paper, we propose the Dual-Octave Convolution (Dual-OctConv), which is capable
of learning multi-scale spatial-frequency features from both real and imaginary
components, for fast parallel MR image reconstruction. By reformulating the
complex operations using octave convolutions, our model shows a strong ability
to capture richer representations of MR images, while at the same time greatly
reducing the spatial redundancy. More specifically, the input feature maps and
convolutional kernels are first split into two components (i.e., real and
imaginary), which are then divided into four groups according to their spatial
frequencies. Then, our Dual-OctConv conducts intra-group information updating
and inter-group information exchange to aggregate the contextual information
across different groups. Our framework provides two appealing benefits: (i) it
encourages interactions between real and imaginary components at various
spatial frequencies to achieve richer representational capacity, and (ii) it
enlarges the receptive field by learning multiple spatial-frequency features of
both the real and imaginary components. We evaluate the performance of the
proposed model on the acceleration of multi-coil MR image reconstruction.
Extensive experiments are conducted on an in vivo knee dataset under
different undersampling patterns and acceleration factors. The experimental
results demonstrate the superiority of our model in accelerated parallel MR
image reconstruction. Our code is available at:
github.com/chunmeifeng/Dual-OctConv.
Comment: Proceedings of the 35th AAAI Conference on Artificial Intelligence
(AAAI) 2021
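As a rough illustration of the grouping the Dual-OctConv abstract describes, a complex-valued feature map can be split into real and imaginary parts, each further divided into a high-frequency group at full resolution and a low-frequency group at reduced resolution. The group names and the pooling choice below are our assumptions for the sketch, not the paper's exact implementation:

```python
# Illustrative octave-style grouping of a complex feature map into four
# groups: {real, imaginary} x {high-frequency, low-frequency}.
import numpy as np

def avg_pool2(x):
    """2x2 average pooling (halves spatial resolution for the low-freq group)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def split_groups(z):
    """Split a complex feature map into four octave groups."""
    groups = {}
    for name, part in (("real", z.real), ("imag", z.imag)):
        groups[f"{name}_high"] = part            # full resolution
        groups[f"{name}_low"] = avg_pool2(part)  # half resolution
    return groups

z = np.ones((8, 8)) + 1j * np.ones((8, 8))
g = split_groups(z)
print(sorted(g))  # ['imag_high', 'imag_low', 'real_high', 'real_low']
```

Convolutions then update features within each group and exchange information across groups, which is what gives the model its multi-scale spatial-frequency interactions.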
- …