2,389 research outputs found
Convolutional Recurrent Neural Networks for Dynamic MR Image Reconstruction
Accelerating the data acquisition of dynamic magnetic resonance imaging (MRI)
leads to a challenging ill-posed inverse problem, which has received great
interest from both the signal processing and machine learning communities over
the last decades. The key to the problem is exploiting the temporal
correlation of the MR sequence to resolve the aliasing artefacts.
Traditionally, this observation led to the formulation of a non-convex
optimisation problem, which was solved using iterative algorithms. Recently,
however, deep learning-based approaches have gained significant popularity due
to their ability to solve general inverse problems. In this work, we propose a
novel convolutional recurrent neural network (CRNN) architecture which
reconstructs high quality cardiac MR images from highly undersampled k-space
data by jointly exploiting the dependencies of the temporal sequences as well
as the iterative nature of the traditional optimisation algorithms. In
particular, the proposed architecture embeds the structure of the traditional
iterative algorithms, efficiently modelling the recurrence of the iterative
reconstruction stages by using recurrent hidden connections over such
iterations. In addition, spatiotemporal dependencies are simultaneously learnt
by exploiting bidirectional recurrent hidden connections across time sequences.
The proposed algorithm is able to learn both the temporal dependency and the
iterative reconstruction process effectively with only a very small number of
parameters, while outperforming current MR reconstruction methods in terms of
computational complexity, reconstruction accuracy and speed. Comment: Published in IEEE Transactions on Medical Imaging
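The idea of sharing a recurrent hidden state across unrolled reconstruction iterations, interleaved with a data-consistency step, can be sketched in a toy 1D setting (illustrative only: dense random weights stand in for the paper's learned convolutions, and the update rule is a drastic simplification of the actual CRNN cell):

```python
import numpy as np

rng = np.random.default_rng(0)

def data_consistency(x, y, mask):
    """Restore the measured k-space samples in the current estimate."""
    k = np.fft.fft(x)
    k[mask] = y[mask]
    return np.fft.ifft(k)

def recurrent_step(x, h, W_xh, W_hh):
    """Toy recurrent update: the hidden state h carries information
    across reconstruction iterations (stand-in for learned convolutions)."""
    h = np.tanh(W_xh @ x.real + W_hh @ h)
    return x + 0.1 * h, h

n = 32
x_true = np.sin(2 * np.pi * np.arange(n) / n)
mask = rng.random(n) < 0.5                 # random undersampling pattern
y = np.fft.fft(x_true) * mask              # measured (undersampled) k-space

x = np.fft.ifft(y)                         # zero-filled starting point
h = np.zeros(n)
W_xh = 0.01 * rng.standard_normal((n, n))
W_hh = 0.01 * rng.standard_normal((n, n))

for _ in range(5):                         # unrolled iterations share h
    x, h = recurrent_step(x, h, W_xh, W_hh)
    x = data_consistency(x, y, mask)

print(round(float(np.abs(x - x_true).mean()), 4))
```

The data-consistency step guarantees that the measured frequencies are exact after every iteration, while the recurrent correction fills in the unsampled ones.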
Synthesizing dynamic MRI using long-term recurrent convolutional networks
A method is proposed for converting raw ultrasound signals of respiratory
organ motion into high frame rate dynamic MRI using a long-term recurrent
convolutional neural network. Ultrasound signals were acquired using a
single-element transducer, referred to here as `organ-configuration motion'
(OCM) sensor, while sagittal MR images were simultaneously acquired. Both
streams of data were used for training a cascade of convolutional layers, to
extract relevant features from raw ultrasound, followed by a recurrent neural
network, to learn its temporal dynamics. The network was trained with MR images
as the target output, and was employed to predict MR images at a temporal
resolution of 100 frames per second from ultrasound input alone, without any
further MR scanner input. The method was validated on 7 subjects. Comment: 8 pages, 3 figures
CRDN: Cascaded Residual Dense Networks for Dynamic MR Imaging with Edge-enhanced Loss Constraint
Dynamic magnetic resonance (MR) imaging has generated great research
interest, as it can provide both spatial and temporal information for clinical
diagnosis. However, slow imaging speed or long scanning time is still one of
the challenges for dynamic MR imaging. Most existing methods reconstruct
dynamic MR images from incomplete k-space data under the guidance of compressed
sensing (CS) or low-rank theory, which suffer from long iterative
reconstruction times. Recently, deep learning has shown great potential in
accelerating dynamic MR imaging. Our previous work proposed a dynamic MR imaging
method with both k-space and spatial prior knowledge integrated via
multi-supervised network training. Nevertheless, a certain degree of smoothing
remained in the reconstructed images at high acceleration factors. In this work,
we propose cascaded residual dense networks for dynamic MR imaging with an
edge-enhanced loss constraint, dubbed CRDN. Specifically, the cascaded residual dense networks
fully exploit the hierarchical features from all the convolutional layers with
both local and global feature fusion. We further utilize the total variation
(TV) loss function, which has edge-enhancing properties, for training the
networks.
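As a point of reference, a TV loss penalises absolute differences between neighbouring pixels, which suppresses noise while tolerating sharp edges; a minimal anisotropic version (a generic sketch, not the paper's exact loss term) looks like:

```python
import numpy as np

def tv_loss(img):
    """Anisotropic total variation: sum of absolute differences between
    neighbouring pixels along each axis."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

flat = np.ones((4, 4))        # constant image: zero variation
edge = np.zeros((4, 4))
edge[:, 2:] = 1.0             # a single sharp vertical edge

print(tv_loss(flat), tv_loss(edge))  # 0.0 4.0
```

The edge image pays a fixed, bounded cost for its one jump, which is why TV terms preserve edges that a squared-difference penalty would blur.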
Analysis of Deep Complex-Valued Convolutional Neural Networks for MRI Reconstruction
Many real-world signal sources are complex-valued, having real and imaginary
components. However, the vast majority of existing deep learning platforms and
network architectures do not support the use of complex-valued data. MRI data
is inherently complex-valued, so existing approaches discard the richer
algebraic structure of the complex data. In this work, we investigate
end-to-end complex-valued convolutional neural networks for image
reconstruction, in lieu of two-channel real-valued networks. We apply these
to magnetic resonance imaging reconstruction to accelerate
scan times, and evaluate the performance of various promising complex-valued
activation functions. We find that complex-valued CNNs with complex-valued
convolutions provide superior reconstructions compared to real-valued
convolutions with the same number of trainable parameters, across a variety of
network architectures and datasets.
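The advantage claimed here comes from the cross-coupling of real and imaginary parts that a true complex convolution enforces, which two independent real channels lack. A small 1D sketch (illustrative; real reconstruction networks use learned 2D kernels) shows how a complex convolution decomposes into four real ones:

```python
import numpy as np

def real_corr(x, w):
    """Valid-mode 1D correlation on real arrays."""
    return np.correlate(x, w, mode="valid")

def complex_conv(x, w):
    """Complex convolution from four real ones:
    (a + ib)(c + id) = (ac - bd) + i(ad + bc)."""
    real = real_corr(x.real, w.real) - real_corr(x.imag, w.imag)
    imag = real_corr(x.real, w.imag) + real_corr(x.imag, w.real)
    return real + 1j * imag

rng = np.random.default_rng(1)
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)

out = complex_conv(x, w)
# np.correlate conjugates its second argument, so conjugate w to compare
ref = np.correlate(x, np.conj(w), mode="valid")
print(np.allclose(out, ref))  # True
```

A two-channel real network with independent kernels can realise each of the four real correlations separately, but is not constrained to the paired weight sharing above, which is the structural prior the complex-valued approach exploits.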
DIMENSION: Dynamic MR Imaging with Both K-space and Spatial Prior Knowledge Obtained via Multi-Supervised Network Training
Dynamic MR image reconstruction from incomplete k-space data has generated
great research interest due to its capability in reducing scan time.
Nevertheless, the reconstruction problem is still challenging due to its
ill-posed nature. Most existing methods either suffer from long iterative
reconstruction time or explore limited prior knowledge. This paper proposes a
dynamic MR imaging method with both k-space and spatial prior knowledge
integrated via multi-supervised network training, dubbed DIMENSION.
Specifically, the DIMENSION architecture consists of a frequential prior
network for updating the k-space with its network prediction and a spatial
prior network for capturing image structures and details. Furthermore, a
multi-supervised network training technique is developed to constrain the
frequency domain information and reconstruction results at different levels.
Comparisons with the classical k-t FOCUSS, k-t SLR and L+S methods and with a
state-of-the-art CNN-based method on in vivo datasets show that our method
achieves improved reconstruction results in a shorter time. Comment: 11 pages, 12 figures
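One building block implied by the frequential prior network is a k-space data-consistency step: keep the network's prediction at unsampled locations and restore the measured samples elsewhere. A minimal sketch (hard data consistency on a toy 8x8 image; all sizes and values are illustrative):

```python
import numpy as np

def k_space_dc(x_pred, y, mask):
    """Hard data consistency: keep the prediction's k-space at unsampled
    locations, restore the measured samples where mask is True."""
    k = np.where(mask, y, np.fft.fft2(x_pred))
    return np.fft.ifft2(k)

rng = np.random.default_rng(2)
x_true = rng.standard_normal((8, 8))
mask = rng.random((8, 8)) < 0.4            # sampled k-space locations
y = np.fft.fft2(x_true) * mask             # undersampled measurements

x_net = np.zeros((8, 8))                   # stand-in for a network output
x_dc = k_space_dc(x_net, y, mask)

print(np.allclose(np.fft.fft2(x_dc)[mask], y[mask]))  # True
```

A learned frequential prior replaces the hard `np.where` with a network update of the unsampled k-space, but the consistency guarantee on measured samples is the same.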
Recurrent Generative Adversarial Networks for Proximal Learning and Automated Compressive Image Recovery
Recovering images from undersampled linear measurements typically leads to an
ill-posed linear inverse problem that calls for proper statistical priors.
Building effective priors is, however, challenged by the low training and
testing overhead dictated by real-time tasks, and by the need to retrieve visually
"plausible" and physically "feasible" images with minimal hallucination. To
cope with these challenges, we design a cascaded network architecture that
unrolls the proximal gradient iterations, leveraging generative
residual networks (ResNets) to model the proximal operator. A mixture of
pixel-wise and perceptual costs is then deployed to train the proximals. The
overall architecture resembles back-and-forth projection onto the intersection
of feasible and plausible images. Extensive computational experiments are
carried out on a global task, reconstructing MR images of pediatric patients,
and a more local task, super-resolving CelebA faces, providing insight into
the design of efficient architectures. Our observations indicate that for MRI
reconstruction, a recurrent ResNet with a single residual block effectively
learns the proximal. This simple architecture appears to significantly
outperform the alternative deep ResNet architecture by 2 dB SNR, and
conventional compressed-sensing MRI by 4 dB SNR with 100x faster inference. For
image super-resolution, our preliminary results indicate that modeling the
denoising proximal demands deep ResNets. Comment: 11 pages, 11 figures
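The unrolled proximal gradient iteration at the heart of this approach is x <- prox(x - eta * A^T(Ax - y)). A minimal sketch with a hand-crafted soft-thresholding proximal standing in for the learned ResNet proximal (operator sizes, seed, and sparsity level are all illustrative):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (stand-in for a learned proximal)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_pgd(A, y, steps=3000, tau=0.01):
    """Unrolled proximal gradient: x <- prox(x - eta * A^T (A x - y))."""
    eta = 1.0 / np.linalg.norm(A, 2) ** 2      # step size below 1/L
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = soft_threshold(x - eta * A.T @ (A @ x - y), eta * tau)
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 40))              # undersampled linear operator
x_true = np.zeros(40)
x_true[[3, 17, 30]] = [1.0, -2.0, 0.5]         # sparse ground truth
y = A @ x_true

x_hat = unrolled_pgd(A, y)
print(round(float(np.abs(x_hat - x_true).max()), 3))
```

In the paper, a trained recurrent ResNet replaces `soft_threshold` and a few unrolled stages replace the many hand-tuned iterations, which is where the reported speedup comes from.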
Visual Language Modeling on CNN Image Representations
Measuring the naturalness of images is important to generate realistic images
or to detect unnatural regions in images. Additionally, a method to measure
naturalness can be complementary to Convolutional Neural Network (CNN) based
features, which are known to be insensitive to the naturalness of images.
However, most probabilistic image models are built directly on raw image
pixels and therefore lack the capacity to model the complex, abstract
naturalness that we perceive. In this work, we assume that naturalness
can be measured by the predictability on high-level features during eye
movement. Based on this assumption, we propose a novel method to evaluate the
naturalness by building a variant of Recurrent Neural Network Language Models
on pre-trained CNN representations. Our method is applied to two tasks,
demonstrating that 1) using our method as a regularizer enables us to generate
more understandable images from image features than existing approaches, and 2)
unnaturalness maps produced by our method achieve state-of-the-art eye fixation
prediction performance on two well-studied datasets.
Accelerating MR Imaging via Deep Chambolle-Pock Network
Compressed sensing (CS) has been introduced to accelerate data acquisition in
MR Imaging. However, CS-MRI methods suffer from detail loss with large
acceleration and complicated parameter selection. To address the limitations of
existing CS-MRI methods, a model-driven MR reconstruction method is proposed that
trains a deep network, named CP-net, which is derived from the Chambolle-Pock
algorithm to reconstruct the in vivo MR images of human brains from highly
undersampled complex k-space data acquired on different types of MR scanners.
The proposed deep network can learn the proximal operator and parameters of
the Chambolle-Pock algorithm. All of the experiments show that the proposed
CP-net achieves more accurate MR reconstruction results, outperforming
state-of-the-art methods across various quantitative metrics. Comment: 4 pages, 5 figures, 1 table. Accepted at the 2019 IEEE 41st Engineering
in Medicine and Biology Conference (EMBC 2019)
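For reference, the classical Chambolle-Pock algorithm that CP-net unrolls alternates a dual ascent step with projection and a primal proximal step. A minimal hand-tuned instance for 1D TV denoising (CP-net instead learns the proximal and parameters; the problem, step sizes, and signal here are illustrative):

```python
import numpy as np

def tv_denoise_cp(y, lam, n_iter=300, sigma=0.25, tau=0.25):
    """Chambolle-Pock for 1D TV denoising:
    min_x 0.5 * ||x - y||^2 + lam * ||D x||_1.
    sigma * tau * ||D||^2 <= 1 guarantees convergence (||D||^2 <= 4)."""
    def D(x):                       # forward differences, zero at the end
        return np.diff(x, append=x[-1])
    def Dt(p):                      # adjoint of D
        return np.concatenate(([-p[0]], p[:-2] - p[1:-1], [p[-2]]))
    x = y.copy(); x_bar = y.copy(); p = np.zeros_like(y)
    for _ in range(n_iter):
        p = np.clip(p + sigma * D(x_bar), -lam, lam)      # dual step + projection
        x_new = (x - tau * Dt(p) + tau * y) / (1 + tau)   # prox of 0.5||. - y||^2
        x_bar = 2 * x_new - x                             # over-relaxation
        x = x_new
    return x

rng = np.random.default_rng(4)
clean = np.repeat([0.0, 1.0, 0.0], 20)         # piecewise-constant signal
noisy = clean + 0.1 * rng.standard_normal(60)
denoised = tv_denoise_cp(noisy, lam=0.3)
print(round(float(np.abs(denoised - clean).mean()), 3))
```

The hand-chosen `sigma`, `tau`, and clip-based proximal are exactly the pieces a CP-net-style unrolled network would replace with learned counterparts.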
Spatio-Temporal Deep Learning-Based Undersampling Artefact Reduction for 2D Radial Cine MRI with Limited Data
In this work we reduce undersampling artefacts in two-dimensional (2D)
golden-angle radial cine cardiac MRI by applying a modified version of the
U-net. We train the network on spatio-temporal slices which are previously
extracted from the image sequences. We compare our approach to two deep
learning-based post-processing methods and to three iterative
reconstruction methods for dynamic cardiac MRI. Our method outperforms the
spatially trained U-net; compared to the spatio-temporal U-net, it delivers
comparable results, but with shorter training times and less training data.
Compared to the compressed sensing-based methods kt-FOCUSS and a total
variation regularised reconstruction approach, our method improves image
quality with respect to all
reported metrics. Further, it achieves competitive results when compared to an
iterative reconstruction method based on adaptive regularization with
Dictionary Learning and total variation, while only requiring a small fraction
of the computational time. A persistent homology analysis demonstrates that the
data manifold of the spatio-temporal domain has a lower complexity than the
spatial domain and therefore, the learning of a projection-like mapping is
facilitated. Even when trained on only one single subject without
data-augmentation, our approach yields results which are similar to the ones
obtained on a large training dataset. This makes the method particularly
suitable for training a network on limited training data. Finally, in contrast
to the spatial U-net, our proposed method is shown to be naturally robust
with respect to image rotation in image space and almost achieves
rotation equivariance, without requiring data augmentation or a particular
network design. Comment: To be published in IEEE Transactions on Medical Imaging
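Extracting the spatio-temporal training slices described above amounts to slicing the cine array along time and one spatial axis; a minimal sketch (array sizes are illustrative):

```python
import numpy as np

# A cine MR sequence as (time, height, width); the method trains on
# spatio-temporal xt- and yt-slices instead of full spatial frames.
cine = np.zeros((25, 64, 64))      # 25 frames of 64x64 (sizes illustrative)

xt_slices = [cine[:, row, :] for row in range(cine.shape[1])]   # each (time, width)
yt_slices = [cine[:, :, col] for col in range(cine.shape[2])]   # each (time, height)

print(len(xt_slices), xt_slices[0].shape)  # 64 (25, 64)
```

Each frame thus contributes to many spatio-temporal slices, which is consistent with the abstract's claim that the network can be trained on very little data, even a single subject.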
RARE: Image Reconstruction using Deep Priors Learned without Ground Truth
Regularization by denoising (RED) is an image reconstruction framework that
uses an image denoiser as a prior. Recent work has shown the state-of-the-art
performance of RED with learned denoisers corresponding to pre-trained
convolutional neural nets (CNNs). In this work, we propose to broaden the
current denoiser-centric view of RED by considering priors corresponding to
networks trained for more general artifact-removal. The key benefit of the
proposed family of algorithms, called regularization by artifact-removal
(RARE), is that it can leverage priors learned on datasets containing only
undersampled measurements. This makes RARE applicable to problems where it is
practically impossible to acquire fully sampled ground-truth data for training.
We validate RARE on both simulated and experimentally collected data by
reconstructing free-breathing whole-body 3D MRIs into ten respiratory phases
from heavily undersampled k-space measurements. Our results corroborate the
potential of learning regularizers for iterative inversion directly on
undersampled and noisy measurements. Comment: In press at IEEE Journal of Selected Topics in Signal Processing
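The RED-style iteration behind this family of methods replaces the denoiser with a general artifact-removal operator R: x <- x - gamma * (A^T(Ax - y) + tau * (x - R(x))). A toy sketch with a moving-average smoother standing in for the learned network (all sizes and parameters illustrative):

```python
import numpy as np

def artifact_remover(x):
    """Stand-in for the prior network R(x): a simple moving-average
    smoother (the real method uses a CNN trained on undersampled data)."""
    return np.convolve(x, np.ones(3) / 3, mode="same")

def rare_like(A, y, tau=0.1, n_iter=2000):
    """RED-style fixed point with an artifact-removal prior:
    x <- x - gamma * (A^T (A x - y) + tau * (x - R(x)))."""
    gamma = 1.0 / (np.linalg.norm(A, 2) ** 2 + tau)   # conservative step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - gamma * (A.T @ (A @ x - y) + tau * (x - artifact_remover(x)))
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((60, 50))                # forward measurement operator
t = np.arange(50)
x_true = np.sin(2 * np.pi * t / 50)              # smooth ground truth
y = A @ x_true

x_hat = rare_like(A, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(round(float(rel_err), 3))
```

The fixed point balances data fidelity against the residual x - R(x); when R is trained only on undersampled measurements, as RARE proposes, no fully sampled ground truth enters the pipeline at any stage.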