ADMM-Net: A Deep Learning Approach for Compressive Sensing MRI
Compressive sensing (CS) is an effective approach for fast Magnetic Resonance
Imaging (MRI). It aims to reconstruct MR images from a small amount of
under-sampled k-space data, thereby accelerating data acquisition in MRI. To
improve the current MRI system in reconstruction accuracy and speed, in this
paper, we propose two novel deep architectures, dubbed ADMM-Nets in basic and
generalized versions. ADMM-Nets are defined over data flow graphs, which are
derived from the iterative procedures of the Alternating Direction Method of
Multipliers (ADMM) algorithm for optimizing a general CS-based MRI model. They
take the sampled k-space data as inputs and output reconstructed MR images.
Moreover, we extend our network to cope with complex-valued MR images. In the
training phase, all parameters of the nets, e.g., the transforms and shrinkage
functions, are discriminatively trained end-to-end. In the testing phase, the
nets have computational overhead similar to the ADMM algorithm but use
optimized parameters learned from the data for the CS-based reconstruction task. We
investigate different configurations in network structures and conduct
extensive experiments on MR image reconstruction under different sampling
rates. By combining the advantages of model-based and deep learning
approaches, the ADMM-Nets achieve state-of-the-art reconstruction accuracy at
fast computational speed.
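The unrolling that defines ADMM-Net can be sketched as one ADMM stage acting on k-space data. This is a minimal illustration, assuming an identity sparsifying transform and fixed scalar parameters `rho` and `lam`; in ADMM-Net these quantities, together with the shrinkage function, are the learnable parameters trained end-to-end.

```python
import numpy as np

def soft_threshold(v, t):
    # classic shrinkage; ADMM-Net replaces this with a learned function
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_stage(z, beta, y, mask, rho, lam):
    """One unrolled ADMM stage, i.e. one 'layer' of the data-flow graph.
    z: auxiliary variable, beta: scaled multiplier, y: sampled k-space data,
    mask: binary sampling mask, rho/lam: penalty and shrinkage parameters
    (fixed scalars here; learned in ADMM-Net). An identity sparsifying
    transform is assumed for simplicity."""
    # x-update: closed form in k-space, since the forward model is mask * FFT
    rhs = np.fft.fft2(z - beta / rho)
    k = (mask * y + rho * rhs) / (mask + rho)
    x = np.real(np.fft.ifft2(k))
    # z-update: shrinkage step (the part ADMM-Net makes learnable)
    z = soft_threshold(x + beta / rho, lam / rho)
    # multiplier update
    beta = beta + rho * (x - z)
    return x, z, beta
```

With full sampling and the shrinkage disabled, repeating this stage converges geometrically to the fully sampled image, which is a quick sanity check on the data-consistency step.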
Deep Learning with Domain Adaptation for Accelerated Projection-Reconstruction MR
Purpose: The radial k-space trajectory is a well-established sampling
trajectory used in conjunction with magnetic resonance imaging. However, the
radial k-space trajectory requires a large number of radial lines for
high-resolution reconstruction. Increasing the number of radial lines causes
longer acquisition time, making it more difficult for routine clinical use. On
the other hand, if we reduce the number of radial lines, streaking artifact
patterns are unavoidable. To solve this problem, we propose a novel deep
learning approach with domain adaptation to restore high-resolution MR images
from under-sampled k-space data.
Methods: The proposed deep network removes the streaking artifacts from the
artifact-corrupted images. To address the limited availability of data, we
propose a domain adaptation scheme that employs a network pre-trained on a
large number of x-ray computed tomography (CT) or synthesized radial MR
datasets, which is then fine-tuned with only a few radial MR datasets.
Results: The proposed method outperforms existing compressed sensing
algorithms, such as the total variation and PR-FOCUSS methods. In addition, the
calculation time is several orders of magnitude faster than the total variation
and PR-FOCUSS methods. Moreover, we found that pre-training using CT or MR
data from a similar organ is more important than pre-training using data from
the same modality but a different organ.
Conclusion: We demonstrate the possibility of a domain-adaptation when only a
limited amount of MR data is available. The proposed method surpasses the
existing compressed sensing algorithms in terms of the image quality and
computation time. Comment: This paper has been accepted and will soon appear
in Magnetic Resonance in Medicine.
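The pre-train-then-fine-tune scheme described above can be sketched with a toy least-squares "network". This is purely illustrative: the data, the linear model, and the learning rates are stand-ins, not the paper's CNN or its CT/MR datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(w, X, Y, lr, steps):
    # plain gradient descent on a least-squares toy model
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - Y) / len(X)
    return w

# "pre-training": abundant source-domain data
# (stand-in for the large CT / synthesized radial MR datasets)
w_true = rng.normal(size=8)
Xs = rng.normal(size=(500, 8))
Ys = Xs @ w_true + 0.01 * rng.normal(size=500)
w_pre = train(np.zeros(8), Xs, Ys, lr=0.1, steps=300)

# "fine-tuning": only a few target-domain samples
# (stand-in for the few available radial MR datasets)
Xt = rng.normal(size=(10, 8))
Yt = Xt @ w_true + 0.01 * rng.normal(size=10)
w_ft = train(w_pre, Xt, Yt, lr=0.05, steps=50)

# baseline: training from scratch on the same few target samples
w_scratch = train(np.zeros(8), Xt, Yt, lr=0.05, steps=50)
```

In this toy setting the fine-tuned model sits much closer to the target mapping than the model trained from scratch on the same ten samples, which is the effect the domain adaptation scheme relies on.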
Super-resolution MRI through Deep Learning
Magnetic resonance imaging (MRI) is extensively used for diagnosis and
image-guided therapeutics. Due to hardware, physical, and physiological
limitations, acquisition of high-resolution MRI data requires long scan times
at high system cost, and can be limited to low spatial coverage and subject
to motion artifacts. Super-resolution MRI can be achieved with deep learning,
which is a promising approach and has a great potential for preclinical and
clinical imaging. Compared with polynomial interpolation or sparse-coding
algorithms, deep learning extracts prior knowledge from big data and produces
superior MRI images from a low-resolution counterpart. In this paper, we adapt
two state-of-the-art neural network models developed for CT denoising and
deblurring, transfer them to super-resolution MRI, and demonstrate encouraging
super-resolution MRI results toward two-fold resolution enhancement.
Deep Embedding Convolutional Neural Network for Synthesizing CT Image from T1-Weighted MR Image
Recently, increasing attention has been drawn to the field of medical image
synthesis across modalities. Among these tasks, the synthesis of computed
tomography (CT) images from T1-weighted magnetic resonance (MR) images is of
great importance, although the mapping between them is highly complex due to
the large appearance gap between the two modalities. In this work, we aim to tackle this
MR-to-CT synthesis by a novel deep embedding convolutional neural network
(DECNN). Specifically, we generate the feature maps from MR images, and then
transform these feature maps forward through convolutional layers in the
network. We can further compute a tentative CT synthesis from the midway of the
flow of feature maps, and then embed this tentative CT synthesis back to the
feature maps. This embedding operation results in better feature maps, which
are further transformed forward in the DECNN. After repeating this embedding
procedure several times in the network, we can eventually synthesize a final
CT image at the end of the DECNN. We have validated our proposed method on
both brain and prostate datasets, comparing it with the state-of-the-art
methods. Experimental results suggest that our DECNN (with repeated embedding
operations) achieves superior performance in terms of both the perceptual
quality of the synthesized CT image and the run-time cost of synthesizing a
CT image.
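The embedding operation described above can be sketched schematically: compute a tentative CT estimate midway through the network, attach it back to the feature maps as an extra channel, and transform forward again. Here `synth_head` and `transform` are hypothetical placeholders, not the paper's actual layers.

```python
import numpy as np

def embedding_block(feats, synth_head, transform):
    """One DECNN-style embedding step (schematic). `synth_head` maps feature
    maps of shape (C, H, W) to a tentative CT image of shape (H, W); the
    tentative synthesis is concatenated back onto the feature maps as an
    extra channel and the enriched stack is transformed forward."""
    tentative = synth_head(feats)                        # midway CT estimate
    enriched = np.concatenate([feats, tentative[None]])  # embed it back
    return transform(enriched), tentative
```

Stacking several such blocks reproduces the repeated-embedding structure of the DECNN, with each block's tentative synthesis refining the feature maps for the next.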
Deep artifact learning for compressed sensing and parallel MRI
Purpose: Compressed sensing MRI (CS-MRI) from single and parallel coils is
one of the most powerful ways to reduce the scan time of MR imaging with a
performance guarantee. However, the computational costs are usually expensive. This paper
aims to propose a computationally fast and accurate deep learning algorithm for
the reconstruction of MR images from highly down-sampled k-space data.
Theory: Based on a topological analysis, we show that the data manifold of
the aliasing artifact is easier to learn from a uniform subsampling pattern
with additional low-frequency k-space data. Thus, we develop deep aliasing
artifact learning networks for the magnitude and phase images to estimate and
remove the aliasing artifacts from highly accelerated MR acquisition.
Methods: The aliasing artifacts are directly estimated from the distorted
magnitude and phase images reconstructed from subsampled k-space data, so that
aliasing-free images can be obtained by subtracting the estimated aliasing
artifact from corrupted inputs. Moreover, to deal with the globally distributed
aliasing artifact, we develop a multi-scale deep neural network with a large
receptive field.
Results: The experimental results confirm that the proposed deep artifact
learning network effectively estimates and removes the aliasing artifacts.
Compared to existing CS methods for single- and multi-coil data, the proposed
network shows minimal errors by removing the coherent aliasing artifacts.
Furthermore, the computational time is an order of magnitude faster.
Conclusion: As the proposed deep artifact learning network immediately
generates accurate reconstructions, it has great potential for clinical
applications.
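The residual formulation at the heart of the Methods section is simply subtraction of the estimated artifact from the corrupted input. A minimal sketch, with an oracle standing in for the trained artifact-estimation network:

```python
import numpy as np

def restore(corrupted, artifact_net):
    # residual strategy from the abstract: the network estimates the
    # aliasing artifact, which is subtracted from the corrupted input
    return corrupted - artifact_net(corrupted)

# illustration with an oracle "network" that knows the artifact exactly
clean = np.ones((4, 4))
artifact = 0.3 * np.sin(np.arange(16.0)).reshape(4, 4)
corrupted = clean + artifact
restored = restore(corrupted, lambda img: artifact)
```

The design rationale is that a globally distributed artifact can be easier to regress than the full clean image, which is why the paper pairs this residual strategy with a large-receptive-field multi-scale network.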
Overview of image-to-image translation by use of deep neural networks: denoising, super-resolution, modality conversion, and reconstruction in medical imaging
Since the advent of deep neural networks (DNNs), computer
vision has seen extremely rapid progress that has led to huge advances in
medical imaging. This article does not aim to cover all aspects of the field
but focuses on a particular topic, image-to-image translation. Although the
topic may not sound familiar, it turns out that many seemingly irrelevant
applications can be understood as instances of image-to-image translation. Such
applications include (1) noise reduction, (2) super-resolution, (3) image
synthesis, and (4) reconstruction. The same underlying principles and
algorithms work for various tasks. Our aim is to introduce some of the key
ideas on this topic from a uniform point of view. We introduce core ideas and
jargon that are specific to image processing with DNNs. An intuitive grasp of
the core ideas and a knowledge of the technical terms will be of great help
to the reader in understanding existing and future
applications. Most recent applications that build on image-to-image
translation are based on one of two fundamental architectures, called pix2pix
and CycleGAN, depending on whether the available training data are paired or
unpaired. We provide computer codes which implement these two architectures
with various enhancements. Our codes are available online with use of the very
permissive MIT license. We provide a hands-on tutorial for training a model for
denoising based on our codes. We hope that this article, together with the
codes, will provide both an overview and the details of the key algorithms, and
that it will serve as a basis for the development of new applications.
Comment: Many typos have been fixed; to appear in Radiological Physics and
Technology.
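For the unpaired case handled by CycleGAN, the key constraint beyond the adversarial losses is cycle consistency: translating to the other domain and back should recover the input. A minimal sketch, where the translators `G` and `F` are placeholders for the two generator networks:

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    # CycleGAN's core constraint for unpaired training data:
    # translating forward and back should recover the input, F(G(x)) ~ x
    return float(np.mean(np.abs(F(G(x)) - x)))
```

When the two translators invert each other the loss vanishes; any mismatch between the round trip and the input is penalized, which is what lets CycleGAN train without paired examples.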
200x Low-dose PET Reconstruction using Deep Learning
Positron emission tomography (PET) is widely used in various clinical
applications, including the diagnosis of cancer, heart disease, and
neurological disorders.
The use of radioactive tracers in PET imaging raises concerns due to the risk
of radiation exposure. To minimize this potential risk, efforts have been
made to reduce the amount of radio-tracer used. However, lowering the dose
results in a low signal-to-noise ratio (SNR) and loss of information, both of
which heavily affect clinical diagnosis. Besides, the ill-conditioning of
low-dose PET image reconstruction makes it a difficult problem for iterative
reconstruction algorithms. Previously proposed methods are typically
complicated and slow, yet still cannot yield satisfactory results at
significantly low doses. Here, we propose a deep learning method to resolve this issue with an
encoder-decoder residual deep network with concatenated skip connections.
Experiments show that the proposed method can reconstruct low-dose PET images
at standard-dose quality using only one two-hundredth of the dose. Different
cost functions for training the model are explored. A multi-slice input
strategy is introduced to
provide the network with more structural information and make it more robust to
noise. Evaluation on ultra-low-dose clinical data shows that the proposed
method achieves better results than state-of-the-art methods and
reconstructs images of comparable quality using only 0.5% of the original
regular dose.
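The multi-slice input strategy mentioned above can be sketched as stacking the neighboring slices of a volume into input channels. Edge replication at the volume boundaries is an assumption made here for illustration, not necessarily the paper's choice.

```python
import numpy as np

def multi_slice_input(volume, idx, k=1):
    """Stack the 2k+1 slices around slice `idx` as input channels so the
    network sees more structural context, clamping indices at the volume
    boundaries by edge replication (an assumed convention)."""
    n = len(volume)
    picks = [min(max(i, 0), n - 1) for i in range(idx - k, idx + k + 1)]
    return np.stack([volume[i] for i in picks], axis=0)
```

Feeding the network redundant neighboring slices is what gives it the extra structural information and noise robustness the abstract describes.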
Learning to Decode 7T-like MR Image Reconstruction from 3T MR Images
Increasing demand for high-field magnetic resonance (MR) scanners indicates
the need for high-quality MR images for accurate medical diagnosis. However,
cost constraints instead motivate a need for algorithms to enhance images
from low-field scanners. We propose an approach to process given low-field
(3T) MR image slices and reconstruct the corresponding high-field (7T-like)
slices. Our framework involves a novel architecture of a merged convolutional
autoencoder with a single encoder and multiple decoders. Specifically, we
employ three decoders with random initializations, and the proposed training
approach involves selection of a particular decoder in each weight-update
iteration for back propagation. We demonstrate that the proposed algorithm
outperforms some related contemporary methods in terms of performance and
reconstruction time.
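The training rule of selecting one decoder at random for each weight update can be sketched with a linear toy model, a stand-in for the convolutional encoder and the three decoders in the paper; the sizes, learning rate, and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy linear "merged autoencoder": one shared encoder, three decoders
enc = 0.1 * rng.normal(size=(8, 4))
decs = [0.1 * rng.normal(size=(4, 8)) for _ in range(3)]

def train_step(x, lr=0.1):
    """One update: pick one decoder at random and backpropagate through it
    only, as the abstract describes; the shared encoder is always updated."""
    global enc
    i = int(rng.integers(len(decs)))
    h = x @ enc                          # shared encoding
    err = h @ decs[i] - x                # reconstruction error of decoder i
    g_dec = h.T @ err / len(x)
    g_enc = x.T @ (err @ decs[i].T) / len(x)
    decs[i] = decs[i] - lr * g_dec       # update only the selected decoder
    enc = enc - lr * g_enc               # backprop into the shared encoder
    return float((err ** 2).mean())
```

Because every update flows through the shared encoder while only one randomly chosen decoder moves, the decoders stay diverse while jointly shaping a common representation.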
Denoising of 3-D Magnetic Resonance Images Using a Residual Encoder-Decoder Wasserstein Generative Adversarial Network
Structure-preserving denoising of 3D magnetic resonance (MR) images
is a critical step in medical image analysis. Over the past few years, many
algorithms with impressive performances have been proposed. In this paper,
inspired by the idea of deep learning, we introduce an MRI denoising method
based on the residual encoder-decoder Wasserstein generative adversarial
network (RED-WGAN). Specifically, to explore the structure similarity between
neighboring slices, a 3D configuration is utilized as the basic processing
unit. Residual autoencoders combined with deconvolution operations are
introduced into the generator network. Furthermore, to alleviate the
oversmoothing shortcoming of the traditional mean squared error (MSE) loss
function, the perceptual similarity, which is implemented by calculating the
distances in the feature space extracted by a pretrained VGG-19 network, is
incorporated with the MSE and adversarial losses to form the new loss function.
Extensive experiments are implemented to assess the performance of the proposed
method. The experimental results show that the proposed RED-WGAN achieves
performance superior to several state-of-the-art methods on both simulated
and real clinical data. In particular, our method demonstrates powerful
abilities in both noise suppression and structure preservation. Comment: To
appear in Medical Image Analysis. 29 pages, 15 figures, 7 tables.
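The composite objective described above mixes three terms. A schematic sketch, in which `feat` and `critic` are placeholders for the pretrained VGG-19 feature extractor and the WGAN critic, and the weights are illustrative rather than the paper's values:

```python
import numpy as np

def red_wgan_style_loss(denoised, clean, feat, critic,
                        w_mse=1.0, w_perc=0.1, w_adv=1e-3):
    """Mixed generator objective in the spirit of the abstract: pixel MSE,
    a perceptual (feature-space) distance, and a WGAN adversarial term."""
    mse = np.mean((denoised - clean) ** 2)
    perc = np.mean((feat(denoised) - feat(clean)) ** 2)
    adv = -np.mean(critic(denoised))      # generator side of the WGAN loss
    return w_mse * mse + w_perc * perc + w_adv * adv
```

The perceptual term penalizes differences in feature space rather than pixel space, which is what counteracts the over-smoothing that a pure MSE loss produces.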
Channel Splitting Network for Single MR Image Super-Resolution
High resolution magnetic resonance (MR) imaging is desirable in many clinical
applications due to its contribution to more accurate subsequent analyses and
early clinical diagnoses. Single image super-resolution (SISR) is an effective
and cost-efficient alternative technique to improve the spatial resolution of
MR images. In the past few years, SISR methods based on deep learning
techniques, especially convolutional neural networks (CNNs), have achieved
state-of-the-art performance on natural images. However, the information is
gradually weakened and training becomes increasingly difficult as the network
deepens. The problem is more serious for medical images because the lack of
high-quality and effective training samples makes deep models prone to
underfitting or overfitting. Moreover, many current models treat the
hierarchical features in different channels equally, which does not help the
models deal with the hierarchical features discriminatively and in a targeted
manner.
To this end, we present a novel channel splitting network (CSN) to ease the
representational burden of deep models. The proposed CSN model divides the
hierarchical features into two branches, i.e., residual branch and dense
branch, with different information transmissions. The residual branch is able
to promote feature reuse, while the dense branch is beneficial to the
exploration of new features. Besides, we also adopt the merge-and-run mapping
to facilitate information integration between different branches. Extensive
experiments on various MR images, including proton density (PD), T1 and T2
images, show that the proposed CSN model achieves superior performance over
other state-of-the-art SISR methods. Comment: 13 pages, 11 figures and 4 tables.
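The merge-and-run mapping mentioned above can be sketched as follows; the branch transforms `f_res` and `f_dense` are placeholders for the residual and dense branches of the CSN.

```python
import numpy as np

def merge_and_run(x_res, x_dense, f_res, f_dense):
    """Merge-and-run mapping (schematic): average the inputs of the two
    parallel branches and add that average to each branch's output, so
    information flows between the residual and dense branches."""
    avg = 0.5 * (x_res + x_dense)
    return f_res(x_res) + avg, f_dense(x_dense) + avg
```

The shared shortcut acts like a residual connection that both branches see, so feature reuse in the residual branch and new-feature exploration in the dense branch are integrated at every stage.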