Data Consistent CT Reconstruction from Insufficient Data with Learned Prior Images
Image reconstruction from insufficient data is common in computed tomography
(CT), e.g., image reconstruction from truncated data, limited-angle data and
sparse-view data. Deep learning has achieved impressive results in this field.
However, the robustness of deep learning methods is still a concern for
clinical applications due to the following two challenges: a) With limited
access to sufficient training data, a learned deep learning model may not
generalize well to unseen data; b) Deep learning models are sensitive to noise.
Therefore, the quality of images processed by neural networks alone may be
inadequate. In this work, we investigate the robustness of deep learning in CT
image reconstruction by showing false negative and false positive lesion cases.
Since learning-based images with incorrect structures are likely not consistent
with measured projection data, we propose a data consistent reconstruction
(DCR) method to improve their image quality, which combines the advantages of
compressed sensing and deep learning: First, a prior image is generated by deep
learning. Afterwards, unmeasured projection data are inpainted by forward
projection of the prior image. Finally, iterative reconstruction with
reweighted total variation regularization is applied, integrating data
consistency for measured data and learned prior information for missing data.
The efficacy of the proposed method is demonstrated in cone-beam CT with
truncated data, limited-angle data and sparse-view data, respectively. For
example, for truncated data, DCR achieves a mean root-mean-square error of 24
HU and a mean structure similarity index of 0.999 inside the field-of-view for
different patients in the noisy case, while the state-of-the-art U-Net method
achieves 55 HU and 0.995, respectively, for these two metrics.
Comment: 10 pages, 9 figures
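As a rough illustration of the DCR pipeline described above, the sketch below runs the three steps on a toy linear problem: a stand-in prior image, inpainting of unmeasured projections by forward projection of that prior, and iterative reconstruction with reweighted TV. The random forward operator, sizes, and hyperparameters are illustrative assumptions, not the paper's cone-beam setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_meas, n_miss = 64, 48, 16
A_meas = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_pix)  # measured rays
A_miss = rng.standard_normal((n_miss, n_pix)) / np.sqrt(n_pix)  # unmeasured rays

x_true = rng.standard_normal(n_pix)
y_meas = A_meas @ x_true                             # measured projection data
x_prior = x_true + 0.1 * rng.standard_normal(n_pix)  # stand-in for the learned prior image

# Step 1: inpaint unmeasured projections by forward projection of the prior.
y_miss = A_miss @ x_prior

# Step 2: iterative reconstruction with reweighted TV, using measured data
# where available and the inpainted data elsewhere.
A_full = np.vstack([A_meas, A_miss])
y_full = np.concatenate([y_meas, y_miss])

def tv_grad(x, w, eps=1e-6):
    """Gradient of the weighted (smoothed) TV term sum_i w_i |x[i+1] - x[i]|."""
    d = np.diff(x)
    g = w * d / np.sqrt(d ** 2 + eps)
    out = np.zeros_like(x)
    out[:-1] -= g
    out[1:] += g
    return out

x, w = x_prior.copy(), np.ones(n_pix - 1)
lam, step = 1e-3, 1e-2
for it in range(200):
    x -= step * (A_full.T @ (A_full @ x - y_full) + lam * tv_grad(x, w))
    if it % 50 == 49:              # reweighting: flat regions get larger weights
        w = 1.0 / (np.abs(np.diff(x)) + 1e-3)

print("RMSE:", np.sqrt(np.mean((x - x_true) ** 2)))
```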
Computationally Efficient Deep Neural Network for Computed Tomography Image Reconstruction
Deep-neural-network-based image reconstruction has demonstrated promising
performance in medical imaging for under-sampled and low-dose scenarios.
However, it requires a large amount of memory and extensive time to train. It
is especially challenging to train reconstruction networks for
three-dimensional computed tomography (CT) because of the high resolution of CT
images. The purpose of this work is to reduce the memory and time consumption
of the training of the reconstruction networks for CT to make it practical for
current hardware, while maintaining the quality of the reconstructed images.
We unrolled the proximal gradient descent algorithm for iterative image
reconstruction to finite iterations and replaced the terms related to the
penalty function with trainable convolutional neural networks (CNN). The
network was trained greedily, iteration by iteration, in the image domain on
patches, which requires a reasonable amount of memory and time on a mainstream
graphics processing unit (GPU). To overcome the local-minimum problem caused by
greedy learning, we used a deep U-Net as the CNN and incorporated a separable
quadratic surrogate with ordered subsets for the data fidelity term, so that
the solution could escape from poor local minima and achieve better image
quality.
The proposed method achieved image quality comparable to state-of-the-art
neural networks for CT image reconstruction on 2D sparse-view and limited-angle
problems on the low-dose CT challenge dataset.
Comment: 33 pages, 14 figures, accepted by Medical Physics
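A minimal sketch of the unrolling-plus-greedy-training idea on a toy 1-D problem follows; the small residual CNN, random forward operator, and training schedule are illustrative assumptions, and the paper's separable quadratic surrogate and ordered subsets are not reproduced here.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n, m, K = 32, 24, 3                  # signal size, measurements, unrolled iterations
A = torch.randn(m, n) / n ** 0.5     # toy forward operator

def grad_step(x, y, alpha=0.5):
    # Gradient step on the data-fidelity term ||Ax - y||^2 / 2.
    return x - alpha * (x @ A.t() - y) @ A

cnns = nn.ModuleList(
    nn.Sequential(nn.Conv1d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv1d(8, 1, 3, padding=1))
    for _ in range(K)
)

x_true = torch.randn(64, n)
y = x_true @ A.t()

x = torch.zeros_like(x_true)
for k in range(K):                   # greedy training: one unrolled stage at a time
    opt = torch.optim.Adam(cnns[k].parameters(), lr=1e-3)
    for _ in range(200):
        z = grad_step(x.detach(), y)
        out = z + cnns[k](z.unsqueeze(1)).squeeze(1)  # residual CNN replaces the prox
        loss = ((out - x_true) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():            # freeze this stage's output for the next one
        z = grad_step(x, y)
        x = z + cnns[k](z.unsqueeze(1)).squeeze(1)
    print(f"stage {k}: training loss {loss.item():.4f}")
```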
Deriving Neural Network Architectures using Precision Learning: Parallel-to-fan beam Conversion
In this paper, we derive a neural network architecture based on an analytical
formulation of the parallel-to-fan beam conversion problem following the
concept of precision learning. The network allows the unknown operators in this
conversion to be learned in a data-driven manner, avoiding interpolation and a
potential loss of resolution. Integration of known operators results in a small
number of trainable parameters that can be estimated from synthetic data only.
The concept is evaluated in the context of Hybrid MRI/X-ray imaging where
transformation of the parallel-beam MRI projections to fan-beam X-ray
projections is required. The proposed method is compared to a traditional
rebinning method. The results demonstrate that the proposed method is superior
to ray-by-ray interpolation and is able to deliver sharper images using the
same number of parallel-beam input projections, which is crucial for
interventional applications. We believe that this approach forms a basis for
further work uniting deep learning, signal processing, physics, and traditional
pattern recognition.
Comment: In Proceedings of GCPR 2018
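The following sketch illustrates the precision-learning principle in isolation: operators that are known analytically stay fixed, and only the unknown operator between them is trained, leaving few enough parameters that synthetic data suffices. The matrices and sizes are toy assumptions, not the parallel-to-fan geometry.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 32
K_in = torch.randn(n, n) / n ** 0.5   # stand-in for a known operator, kept fixed
K_out = torch.randn(n, n) / n ** 0.5  # stand-in for a known operator, kept fixed

class PrecisionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # The only trainable part: the unknown operator, n*n parameters.
        self.W = nn.Parameter(torch.eye(n))

    def forward(self, x):             # x: (batch, n) row vectors
        return x @ K_in.t() @ self.W.t() @ K_out.t()

# Because so few parameters are trainable, synthetic data suffices.
W_true = torch.eye(n) + 0.3 * torch.randn(n, n) / n ** 0.5
x = torch.randn(512, n)
y = x @ K_in.t() @ W_true.t() @ K_out.t()

net = PrecisionNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(1000):
    loss = ((net(x) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print("recovery error:", (net.W - W_true).abs().max().item())
```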
Generative Adversarial Network in Medical Imaging: A Review
Generative adversarial networks have gained a lot of attention in the
computer vision community due to their capability of data generation without
explicitly modelling the probability density function. The adversarial loss
brought by the discriminator provides a clever way of incorporating unlabeled
samples into training and imposing higher-order consistency. This has proven to
be useful in many cases, such as domain adaptation, data augmentation, and
image-to-image translation. These properties have attracted researchers in the
medical imaging community, and we have seen rapid adoption in many traditional
and novel applications, such as image reconstruction, segmentation, detection,
classification, and cross-modality synthesis. Based on our observations, this
trend will continue and we therefore conducted a review of recent advances in
medical imaging using the adversarial training scheme with the hope of
benefiting researchers interested in this technique.
Comment: 24 pages; v4; added missing references from before Jan 1st, 2019;
accepted to Medical Image Analysis (MedIA)
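For readers new to the adversarial training scheme the review covers, here is a minimal GAN loop on toy 2-D data: the discriminator learns to separate real from generated samples, and the generator learns to fool it. Architectures and data are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def real_batch(n=64):                 # toy "real" distribution, no labels needed
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(1000):
    # Discriminator: push D(real) -> 1, D(fake) -> 0.
    x_real, z = real_batch(), torch.randn(64, 8)
    x_fake = G(z).detach()
    d_loss = bce(D(x_real), torch.ones(64, 1)) + bce(D(x_fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: push D(G(z)) -> 1, i.e. produce samples the discriminator accepts.
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(f"d_loss {d_loss.item():.3f}, g_loss {g_loss.item():.3f}")
```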
Deep artifact learning for compressed sensing and parallel MRI
Purpose: Compressed sensing MRI (CS-MRI) from single and parallel coils is
one of the powerful ways to reduce the scan time of MR imaging with a
performance guarantee. However, its computational cost is usually high. This
paper
aims to propose a computationally fast and accurate deep learning algorithm for
the reconstruction of MR images from highly down-sampled k-space data.
Theory: Based on the topological analysis, we show that the data manifold of
the aliasing artifact is easier to learn from a uniform subsampling pattern
with additional low-frequency k-space data. Thus, we develop deep aliasing
artifact learning networks for the magnitude and phase images to estimate and
remove the aliasing artifacts from highly accelerated MR acquisition.
Methods: The aliasing artifacts are directly estimated from the distorted
magnitude and phase images reconstructed from subsampled k-space data so that
we can obtain an aliasing-free image by subtracting the estimated aliasing
artifact from corrupted inputs. Moreover, to deal with the globally distributed
aliasing artifact, we develop a multi-scale deep neural network with a large
receptive field.
Results: The experimental results confirm that the proposed deep artifact
learning network effectively estimates and removes the aliasing artifacts.
Compared to existing CS methods for single- and multi-coil data, the proposed
network shows minimal errors by removing the coherent aliasing artifacts.
Furthermore, the computation is an order of magnitude faster.
Conclusion: As the proposed deep artifact learning network immediately
generates accurate reconstructions, it has great potential for clinical
applications.
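The core trick of this abstract, learning the artifact rather than the image and recovering the image by subtraction, can be sketched on synthetic 1-D signals as follows; the toy "aliasing" and the small single-scale network are assumptions that replace the paper's multi-scale magnitude/phase networks.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 128
net = nn.Sequential(nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(),
                    nn.Conv1d(16, 1, 9, padding=4))

def make_batch(b=32):
    clean = torch.randn(b, 1, n).cumsum(-1) / n ** 0.5   # smooth toy signals
    artifact = 0.3 * torch.roll(clean, n // 2, dims=-1)  # coherent "aliasing" copy
    return clean, clean + artifact, artifact

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):
    clean, corrupted, artifact = make_batch()
    loss = ((net(corrupted) - artifact) ** 2).mean()     # learn the artifact, not the image
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    clean, corrupted, _ = make_batch(4)
    recon = corrupted - net(corrupted)                   # subtract the estimated artifact
print("residual MSE:", ((recon - clean) ** 2).mean().item())
```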
Deep Convolutional Framelet Denoising for Low-Dose CT via Wavelet Residual Network
Model based iterative reconstruction (MBIR) algorithms for low-dose X-ray CT
are computationally expensive. To address this problem, we recently proposed a
deep convolutional neural network (CNN) for low-dose X-ray CT and won the
second place in the 2016 AAPM Low-Dose CT Grand Challenge. However, some of the
texture was not fully recovered. To address this problem, here we propose a
novel framelet-based denoising algorithm using wavelet residual network which
synergistically combines the expressive power of deep learning and the
performance guarantee of framelet-based denoising algorithms. The new
algorithm was inspired by the recent interpretation of the deep convolutional
neural network (CNN) as a cascaded convolution framelet signal representation.
Extensive experimental results confirm that the proposed networks
significantly improve performance and preserve the detailed texture of the
original images.
Comment: This will appear in IEEE Transactions on Medical Imaging, special
issue on Machine Learning for Image Reconstruction
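A minimal sketch of wavelet-domain residual denoising follows: a one-level Haar transform, a small network trained to predict the noise in the detail band, and an inverse transform. It illustrates only the combination of a framelet representation with residual learning; the paper's convolutional framelet analysis is far richer, and the toy signals below are assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def haar_fwd(x):                          # x: (batch, n) with n even
    lo = (x[:, ::2] + x[:, 1::2]) / 2 ** 0.5
    hi = (x[:, ::2] - x[:, 1::2]) / 2 ** 0.5
    return lo, hi

def haar_inv(lo, hi):
    out = torch.empty(lo.shape[0], lo.shape[1] * 2)
    out[:, ::2] = (lo + hi) / 2 ** 0.5
    out[:, 1::2] = (lo - hi) / 2 ** 0.5
    return out

net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(500):
    clean = torch.randn(32, 128).cumsum(-1) / 128 ** 0.5  # smooth toy signals
    noisy = clean + 0.1 * torch.randn_like(clean)
    hi = haar_fwd(noisy)[1]
    hi_clean = haar_fwd(clean)[1]
    # Residual learning in the detail band: the network predicts the noise.
    loss = ((hi - net(hi) - hi_clean) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    lo, hi = haar_fwd(noisy)
    denoised = haar_inv(lo, hi - net(hi))
print("denoised MSE:", ((denoised - clean) ** 2).mean().item())
```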
Deep Component Analysis via Alternating Direction Neural Networks
Despite a lack of theoretical understanding, deep neural networks have
achieved unparalleled performance in a wide range of applications. On the other
hand, shallow representation learning with component analysis is associated
with rich intuition and theory, but smaller capacity often limits its
usefulness. To bridge this gap, we introduce Deep Component Analysis (DeepCA),
an expressive multilayer model formulation that enforces hierarchical structure
through constraints on latent variables in each layer. For inference, we
propose a differentiable optimization algorithm implemented using recurrent
Alternating Direction Neural Networks (ADNNs) that enable parameter learning
using standard backpropagation. By interpreting feed-forward networks as
single-iteration approximations of inference in our model, we provide both a
novel theoretical perspective for understanding them and a practical technique
for constraining predictions with prior knowledge. Experimentally, we
demonstrate performance improvements on a variety of tasks, including
single-image depth prediction with sparse output constraints.
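The single-iteration-approximation viewpoint can be made concrete with nonnegative sparse coding: proximal gradient inference whose prox is a shifted ReLU, so that one iteration from zero reduces to an ordinary feed-forward ReLU layer. The dictionary and sizes below are illustrative assumptions, and the multilayer ADMM machinery of ADNNs is not reproduced.

```python
import torch

torch.manual_seed(0)
n, k = 16, 32
D = torch.randn(n, k) / n ** 0.5       # dictionary (would be learned in DeepCA)
lam, step = 0.05, 0.5

def infer(x, n_iter):
    z = torch.zeros(x.shape[0], k)
    for _ in range(n_iter):            # proximal gradient: gradient step + prox
        z = z - step * (z @ D.t() - x) @ D
        z = torch.relu(z - step * lam) # prox of l1 plus nonnegativity constraint
    return z

x = torch.relu(torch.randn(8, k)) @ D.t()
for n_iter in (1, 10, 50):             # n_iter=1 is the feed-forward approximation
    z = infer(x, n_iter)
    print(n_iter, ((z @ D.t() - x) ** 2).mean().item())
```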
AdaDepth: Unsupervised Content Congruent Adaptation for Depth Estimation
Supervised deep learning methods have shown promising results for the task of
monocular depth estimation, but acquiring ground truth is costly and prone to
noise as well as inaccuracies. While synthetic datasets have been used to
circumvent these problems, the resultant models do not generalize well to
natural scenes due to the inherent domain shift. Recent adversarial approaches
for domain adaptation have performed well in mitigating the differences between
the source and target domains. But these methods are mostly limited to a
classification setup and do not scale well for fully-convolutional
architectures. In this work, we propose AdaDepth - an unsupervised domain
adaptation strategy for the pixel-wise regression task of monocular depth
estimation. The proposed approach avoids the above limitations through a)
adversarial learning and b) explicit imposition of content consistency on the
adapted target representation. Our unsupervised approach performs competitively
with other established approaches on depth estimation tasks and achieves
state-of-the-art results in a semi-supervised setting.
Comment: CVPR 2018
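A toy sketch of the two stated ingredients, adversarial feature alignment and content consistency against a frozen copy of the source-initialized encoder, is given below; all modules, sizes, and loss weights are assumptions for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
enc_src = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
enc_tgt = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
enc_tgt.load_state_dict(enc_src.state_dict())  # initialize target from source
enc_ref = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
enc_ref.load_state_dict(enc_src.state_dict())  # frozen copy for content consistency
for p in list(enc_src.parameters()) + list(enc_ref.parameters()):
    p.requires_grad_(False)

disc = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
bce = nn.BCEWithLogitsLoss()
opt_t = torch.optim.Adam(enc_tgt.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

for step in range(500):
    x_s, x_t = torch.randn(64, 32), torch.randn(64, 32) + 0.5  # shifted target domain
    # Discriminator separates source features from adapted target features.
    d_loss = bce(disc(enc_src(x_s)), torch.ones(64, 1)) + \
             bce(disc(enc_tgt(x_t).detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Target encoder fools the discriminator but stays close to its initialization.
    f_t = enc_tgt(x_t)
    g_loss = bce(disc(f_t), torch.ones(64, 1)) + 0.1 * ((f_t - enc_ref(x_t)) ** 2).mean()
    opt_t.zero_grad(); g_loss.backward(); opt_t.step()
```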
CT-To-MR Conditional Generative Adversarial Networks for Ischemic Stroke Lesion Segmentation
Infarcted brain tissue resulting from acute stroke readily shows up as
hyperintense regions within diffusion-weighted magnetic resonance imaging
(DWI). It has also been proposed that computed tomography perfusion (CTP) could
alternatively be used to triage stroke patients, given improvements in speed
and availability, as well as reduced cost. However, CTP has a lower
signal-to-noise ratio than MR. In this work, we investigate whether a conditional
mapping can be learned by a generative adversarial network to map CTP inputs to
generated MR DWI that more clearly delineates hyperintense regions due to
ischemic stroke. We detail the architectures of the generator and discriminator
and describe the training process used to perform image-to-image translation
from multi-modal CT perfusion maps to diffusion weighted MR outputs. We
evaluate the results both qualitatively by visual comparison of generated MR to
ground truth, as well as quantitatively by training fully convolutional neural
networks that make use of generated MR data inputs to perform ischemic stroke
lesion segmentation. Segmentation networks trained using generated CT-to-MR
inputs result in at least some improvement on all metrics used for evaluation,
compared with networks that use only CT perfusion input.
Comment: Seventh IEEE International Conference on Healthcare Informatics (ICHI 2019)
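The training process described above follows the usual conditional-GAN recipe, which the sketch below reduces to vectors: the discriminator judges (input, output) pairs and the generator combines the adversarial term with an L1 term, pix2pix-style. The paired toy mapping and all sizes are assumptions, not the paper's CT/MR data.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 32))
D = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))  # sees concat(x, y)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def paired_batch(b=64):                  # toy paired data: target is a fixed mapping
    x = torch.randn(b, 32)
    return x, torch.tanh(x * 2.0)

for step in range(1000):
    x, y = paired_batch()
    # Discriminator judges real pairs (x, y) against fake pairs (x, G(x)).
    d_loss = bce(D(torch.cat([x, y], 1)), torch.ones(64, 1)) + \
             bce(D(torch.cat([x, G(x).detach()], 1)), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: adversarial term plus a weighted L1 reconstruction term.
    y_hat = G(x)
    g_loss = bce(D(torch.cat([x, y_hat], 1)), torch.ones(64, 1)) + 10.0 * l1(y_hat, y)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```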
Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition
We introduce a probabilistic approach to unify open set recognition with the
prevention of catastrophic forgetting in deep continual learning, based on
variational Bayesian inference. Our single model combines a joint probabilistic
encoder with a generative model and a linear classifier that get shared across
sequentially arriving tasks. In order to successfully distinguish unseen
unknown data from known, previously trained tasks, we propose to bound the class-specific
approximate posterior by fitting regions of high density on the basis of
correctly classified data points. These bounds are further used to
significantly alleviate catastrophic forgetting by avoiding samples from low
density areas in generative replay. Our approach requires neither storing old
data nor upfront knowledge of future data, and is empirically validated on
visual and audio tasks in class-incremental as well as cross-dataset scenarios
across modalities.
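The posterior-bounding step can be sketched independently of the full model: fit per-class latent statistics on correctly classified points, take a low percentile of their densities as the bound, and reject anything below it as unknown (replay samples would be filtered the same way). The fixed random encoder and diagonal Gaussian density below are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
enc = torch.nn.Linear(16, 4)           # stand-in for a trained probabilistic encoder

def latents(x):
    with torch.no_grad():
        return enc(x)

# "Known" training data for one class (assume all correctly classified here).
z_train = latents(torch.randn(500, 16) + 1.0)
mu, std = z_train.mean(0), z_train.std(0)

def log_density(z):                    # diagonal Gaussian log-density per sample
    return (-0.5 * (((z - mu) / std) ** 2 + torch.log(2 * torch.pi * std ** 2))).sum(-1)

# Bound: the 5th percentile of densities over correctly classified points.
tau = torch.quantile(log_density(z_train), 0.05)

z_known = latents(torch.randn(100, 16) + 1.0)
z_unknown = latents(torch.randn(100, 16) - 3.0)
print("known accepted:  ", (log_density(z_known) >= tau).float().mean().item())
print("unknown rejected:", (log_density(z_unknown) < tau).float().mean().item())

# In generative replay, samples whose density falls below tau would likewise be
# discarded before being replayed to the classifier.
```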