Monochromatic CT Image Reconstruction from Current-Integrating Data via Deep Learning
In clinical CT, the x-ray source emits polychromatic x-rays, which are
detected in the current-integrating mode. This physical process is accurately
described by an energy-dependent non-linear integral model on the basis of the
Beer-Lambert law. However, the non-linear model is too complicated to be
directly solved for the image reconstruction, and is often approximated to a
linear integral model in the form of the Radon transform, basically ignoring
energy-dependent information. This model approximation would generate
inaccurate quantification of the attenuation image and significant beam-hardening
artifacts. In this paper, we develop a deep-learning-based CT image
reconstruction method to address the mismatch between the computing model and
the physical model. Our method learns a nonlinear transformation from big data
to correct the measured projection data so that it accurately matches the
linear integral model, realizing monochromatic imaging and effectively
overcoming beam hardening. The deep-learning network is trained and tested
using a clinical dual-energy dataset
to demonstrate the feasibility of the proposed methodology. Results show that
the proposed method achieves highly accurate projection correction, with a
relative error of less than 0.2%.
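The polychromatic model described in this abstract can be illustrated with a toy numerical sketch (all spectrum and attenuation values below are made up for illustration, not taken from the paper): the current-integrating measurement grows sub-linearly with path length, which is exactly the beam-hardening mismatch with the linear Radon model.

```python
import numpy as np

# Toy two-bin spectrum and water-like attenuation coefficients
# (illustrative values, not physical constants).
energies = np.array([40.0, 80.0])   # keV bins
spectrum = np.array([0.6, 0.4])     # normalized source weights
mu = np.array([0.27, 0.18])         # attenuation per cm at each energy

def poly_projection(length_cm):
    """Current-integrating measurement: -log of the spectrally weighted intensity."""
    intensity = np.sum(spectrum * np.exp(-mu * length_cm))
    return -np.log(intensity)

lengths = np.array([1.0, 5.0, 10.0, 20.0])
poly = np.array([poly_projection(L) for L in lengths])
mono = mu[1] * lengths              # ideal monochromatic (linear) projection

# Beam hardening: the effective attenuation poly / lengths decreases with
# path length, while the monochromatic ratio stays constant.
print(poly / lengths)
print(mono / lengths)
```

The learned correction in the paper maps measured `poly`-type data toward `mono`-type data so the linear reconstruction model applies.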
Framing U-Net via Deep Convolutional Framelets: Application to Sparse-view CT
X-ray computed tomography (CT) using sparse projection views is a recent
approach to reduce the radiation dose. However, due to the insufficient
projection views, an analytic reconstruction approach using the filtered back
projection (FBP) produces severe streaking artifacts. Recently, deep learning
approaches using large receptive field neural networks such as U-Net have
demonstrated impressive performance for sparse-view CT reconstruction.
However, theoretical justification is still lacking. Inspired by the recent
theory of deep convolutional framelets, the main goal of this paper is,
therefore, to reveal the limitation of U-Net and propose new multi-resolution
deep learning schemes. In particular, we show that alternative U-Net
variants such as the dual-frame and tight-frame U-Nets satisfy the so-called
frame condition, which makes them better suited for effective recovery of
high-frequency edges in sparse-view CT. Using extensive experiments with a real patient data
set, we demonstrate that the new network architectures provide better
reconstruction performance.
Comment: This will appear in IEEE Transactions on Medical Imaging, special issue on Machine Learning for Image Reconstruction.
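The frame condition discussed in this abstract has a compact numerical illustration. The sketch below (a minimal Haar example, not the paper's networks) shows that a pooling branch augmented with a high-pass branch admits perfect reconstruction, while average pooling alone does not:

```python
import numpy as np

x = np.array([4.0, 2.0, 7.0, 1.0, 3.0, 5.0, 8.0, 6.0])

# Haar analysis: a low-pass "pooling" branch plus the high-pass branch
# that plain U-Net pooling discards.
low  = (x[0::2] + x[1::2]) / np.sqrt(2)
high = (x[0::2] - x[1::2]) / np.sqrt(2)

# Average pooling alone cannot recover x from the low band:
naive = np.repeat(low / np.sqrt(2), 2)

# With the high-pass branch, Haar synthesis reconstructs x exactly,
# i.e., the two-channel filter bank satisfies the frame condition.
recon = np.empty_like(x)
recon[0::2] = (low + high) / np.sqrt(2)
recon[1::2] = (low - high) / np.sqrt(2)

print(np.allclose(recon, x))   # True
print(np.allclose(naive, x))   # False
```

This is the mechanism behind the paper's high-pass-augmented pooling/unpooling layers for recovering high-frequency edges.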
Dual-energy CT imaging from single-energy CT data with material decomposition convolutional neural network
Dual-energy computed tomography (DECT) is of great significance for clinical
practice due to its huge potential to provide material-specific information.
However, DECT scanners are usually more expensive than standard single-energy
CT (SECT) scanners and thus are less accessible to undeveloped regions. In this
paper, we show that the energy-domain correlation and anatomical consistency
between standard DECT images can be harnessed by a deep learning model to
provide high-performance DECT imaging from fully-sampled low-energy data
together with single-view high-energy data, which can be obtained by using a
scout-view high-energy image. We demonstrate the feasibility of the approach
with contrast-enhanced DECT scans from 5,753 slices of images of twenty-two
patients and show its superior performance on DECT applications. The deep
learning-based approach could be useful to further significantly reduce the
radiation dose of current premium DECT scanners and has the potential to
simplify the hardware of DECT imaging systems and to enable DECT imaging using
standard SECT scanners.
Comment: 10 pages, 10 figures, 5 tables. Submitted.
DIRECT-Net: a unified mutual-domain material decomposition network for quantitative dual-energy CT imaging
By acquiring two sets of tomographic measurements at distinct X-ray spectra,
dual-energy CT (DECT) enables quantitative material-specific imaging.
However, the conventionally decomposed material basis images may encounter
severe image noise amplification and artifacts, resulting in degraded image
quality and decreased quantitative accuracy. Iterative DECT image
reconstruction algorithms incorporating either the sinogram or the CT image
prior information have shown potential advantages in noise and artifact
suppression, but at the expense of large computational resources, prolonged
reconstruction time, and tedious manual selection of algorithm parameters. To
partially overcome these limitations, we develop a domain-transformation
enabled end-to-end deep convolutional neural network (DIRECT-Net) to perform
high quality DECT material decomposition. Specifically, the proposed DIRECT-Net
has immediate access to mutual-domain data, and utilizes stacked convolutional
neural network (CNN) layers for noise reduction and material decomposition. The
training data are numerically simulated based on the underlying physics of DECT
imaging. The XCAT digital phantom, an iodine solution phantom, and a biological
specimen are used to validate the performance of DIRECT-Net. The qualitative
and quantitative results demonstrate that this newly developed DIRECT-Net is
promising in suppressing noise, improving image accuracy, and reducing
computation time for future DECT imaging.
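The noise amplification that DIRECT-Net is designed to suppress can already be seen in conventional direct inversion. A minimal sketch with an illustrative (not physical) two-material attenuation matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy attenuation matrix for two basis materials at two energies;
# rows: low/high energy, cols: materials. Illustrative values only.
A = np.array([[0.20, 4.9],
              [0.17, 2.1]])

true_density = np.array([1.0, 0.01])          # mostly "water", a little "iodine"
clean = A @ true_density
noisy = clean + rng.normal(0, 1e-3, size=2)   # small measurement noise

decomp = np.linalg.solve(A, noisy)

# The condition number of A bounds how much relative noise is amplified
# by direct inversion; an ill-conditioned A is the root of the noisy
# basis images that learned decomposition methods try to avoid.
print(np.linalg.cond(A))
print(decomp - true_density)
```

With a well-posed inverse the clean data recover the densities exactly; the large condition number is what blows up the small perturbation.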
A material decomposition method for dual-energy CT via dual interactive Wasserstein generative adversarial networks
Dual-energy computed tomography has great potential in material
characterization and identification, whereas the reconstructed
material-specific images always suffer from magnified noise and beam hardening
artifacts. In this study, a data-driven approach using dual interactive
Wasserstein generative adversarial networks is proposed to improve the material
decomposition accuracy. Specifically, two interactive generators are used to
synthesize the corresponding material images and different loss functions for
training the decomposition model are incorporated to preserve texture and edges
in the generated images. In addition, a selector is employed to ensure the
modeling ability of the two generators. The results from both the simulation
phantoms and real data demonstrate the advantages of this method in suppressing
the noise and beam hardening artifacts.
Comment: 40 pages, 10 figures, research article.
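For intuition, the quantity a WGAN critic approximates, the Wasserstein-1 distance between the real and generated distributions, has a closed form in one dimension: the mean absolute difference of sorted, equal-size samples. A small sketch, unrelated to the paper's actual network:

```python
import numpy as np

def wasserstein_1d(a, b):
    """Exact Wasserstein-1 distance between two equal-size 1D samples."""
    return float(np.abs(np.sort(a) - np.sort(b)).mean())

real = np.array([0.0, 1.0, 2.0, 3.0])
fake = np.array([0.5, 1.5, 2.5, 3.5])

# Every sorted sample is shifted by 0.5, so the optimal transport cost is 0.5.
print(wasserstein_1d(real, fake))  # 0.5
```

In the adversarial setting this distance is not computed in closed form; the critic network is trained (under a Lipschitz constraint) to estimate it from samples.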
Deep Convolutional Framelet Denoising for Low-Dose CT via Wavelet Residual Network
Model based iterative reconstruction (MBIR) algorithms for low-dose X-ray CT
are computationally expensive. To address this problem, we recently proposed a
deep convolutional neural network (CNN) for low-dose X-ray CT and won
second place in the 2016 AAPM Low-Dose CT Grand Challenge. However, some of the
texture was not fully recovered. To address this problem, here we propose a
novel framelet-based denoising algorithm using wavelet residual network which
synergistically combines the expressive power of deep learning and the
performance guarantee from the framelet-based denoising algorithms. The new
algorithms were inspired by the recent interpretation of the deep convolutional
neural network (CNN) as a cascaded convolution framelet signal representation.
Extensive experimental results confirm that the proposed networks significantly
improve performance and preserve the detailed texture of the original images.
Comment: This will appear in IEEE Transactions on Medical Imaging, special issue on Machine Learning for Image Reconstruction.
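The classical framelet denoising that the wavelet residual network builds on can be sketched in a few lines: decompose with Haar filters, soft-threshold the high band, and synthesize. This is a generic illustration, not the paper's trained network.

```python
import numpy as np

def haar_analysis(x):
    low = (x[0::2] + x[1::2]) / np.sqrt(2)
    high = (x[0::2] - x[1::2]) / np.sqrt(2)
    return low, high

def haar_synthesis(low, high):
    x = np.empty(low.size * 2)
    x[0::2] = (low + high) / np.sqrt(2)
    x[1::2] = (low - high) / np.sqrt(2)
    return x

def soft_threshold(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

rng = np.random.default_rng(1)
signal = np.repeat([10.0, 30.0, 20.0, 5.0], 16)   # piecewise-constant "CT profile"
noisy = signal + rng.normal(0, 1.0, signal.size)

low, high = haar_analysis(noisy)
# Shrinking the high band suppresses noise while the low band keeps structure;
# the wavelet residual network learns a data-driven version of this shrinkage.
denoised = haar_synthesis(low, soft_threshold(high, 1.0))

print(np.mean((denoised - signal) ** 2) < np.mean((noisy - signal) ** 2))
```

The "performance guarantee" mentioned in the abstract comes from this framelet side: shrinkage in a tight frame has well-understood denoising behavior.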
Dual-energy CT imaging using a single-energy CT data is feasible via deep learning
In a standard computed tomography (CT) image, pixels having the same
Hounsfield Units (HU) can correspond to different materials and it is,
therefore, challenging to differentiate and quantify materials. Dual-energy CT
(DECT) is desirable to differentiate multiple materials, but DECT scanners are
not as widely available as single-energy CT (SECT) scanners. Here we develop a
deep learning approach to perform DECT imaging by using standard SECT data. A
deep learning model that maps a low-energy image to a high-energy image using a
two-stage convolutional neural network (CNN) is developed. The model was
evaluated using patients who received contrast-enhanced abdomen DECT scan with
a popular DE application: virtual non-contrast (VNC) imaging and contrast
quantification. The HU differences between the predicted and original
high-energy CT images are 3.47, 2.95, 2.38 and 2.40 HU for ROIs on the spine,
aorta, liver, and stomach, respectively. The HU differences between VNC images
obtained from original DECT and deep learning DECT are 4.10, 3.75, 2.33 and
2.92 HU for the 4 ROIs, respectively. The aorta iodine quantification
difference between iodine maps obtained from original DECT and deep learning
DECT images is 0.9%, suggesting high consistency between the predicted and the
original high-energy CT images. This study demonstrates that highly accurate
DECT imaging with single low-energy data is achievable by using a deep learning
approach. The proposed method can significantly simplify the DECT system
design, reducing the scanning dose and imaging cost.
Comment: 7 pages, 3 figures.
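The ROI-wise HU comparison reported above can be expressed as a small evaluation helper (`roi_hu_difference` is a hypothetical name; the toy patch and 3 HU bias below are illustrative, not the paper's data):

```python
import numpy as np

def roi_hu_difference(pred, truth, mask):
    """Mean absolute HU difference between two images inside an ROI mask."""
    return float(np.abs(pred[mask] - truth[mask]).mean())

truth = np.full((8, 8), 50.0)   # toy "high-energy" HU patch
pred = truth + 3.0              # predicted image with a constant 3 HU bias
mask = np.zeros((8, 8), bool)
mask[2:6, 2:6] = True           # hypothetical organ ROI

print(roi_hu_difference(pred, truth, mask))  # 3.0
```

Per-organ numbers like the spine/aorta/liver/stomach differences quoted above would come from evaluating one such mask per labeled structure.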
Machine-learning-based nonlinear decomposition of CT images for metal artifact reduction
Computed tomography (CT) images containing metallic objects commonly show
severe streaking and shadow artifacts. Metal artifacts are caused by nonlinear
beam-hardening effects combined with other factors such as scatter and Poisson
noise. In this paper, we propose an implant-specific method that extracts
beam-hardening artifacts from CT images without affecting the background image.
We found that in cases where metal is inserted into water (or tissue), the
generated beam-hardening artifacts can be approximately extracted by
subtracting artifacts generated exclusively by metals. We used a deep learning
technique to train nonlinear representations of beam-hardening artifacts
arising from metals, which appear as shadows and streaking artifacts. The
proposed network is not designed to identify ground-truth CT images (i.e., the
CT image before its corruption by metal artifacts). Consequently, these images
are not required for training. The proposed method was tested on a dataset
consisting of real CT scans of pelvises containing simulated hip prostheses.
The results demonstrate that the proposed deep learning method successfully
extracts both shadowing and streaking artifacts.
Deep Convolutional Framelets: A General Deep Learning Framework for Inverse Problems
Recently, deep learning approaches with various network architectures have
achieved significant performance improvement over existing iterative
reconstruction methods in various imaging problems. However, it is still
unclear why these deep learning architectures work for specific inverse
problems. To address these issues, here we show that the long-searched-for
missing link is the convolution framelets for representing a signal by
convolving local and non-local bases. Convolution framelets were originally
developed to generalize the theory of low-rank Hankel matrix approaches for
inverse problems, and this paper further extends the idea so that we can obtain
a deep neural network using multilayer convolution framelets with perfect
reconstruction (PR) under the rectified linear unit (ReLU) nonlinearity. Our
analysis also shows that the popular deep network components such as residual
block, redundant filter channels, and concatenated ReLU (CReLU) do indeed help
to achieve the PR, while the pooling and unpooling layers should be augmented
with high-pass branches to meet the PR condition. Moreover, by changing the
number of filter channels and bias, we can control the shrinkage behaviors of
the neural network. This discovery leads us to propose a novel theory for deep
convolutional framelets neural network. Using numerical experiments with
various inverse problems, we demonstrated that our deep convolution framelets
network shows consistent improvement over existing deep architectures. This
discovery suggests that the success of deep learning is not from a magical
power of a black-box, but rather comes from the power of a novel signal
representation using a non-local basis combined with a data-driven local basis,
which is indeed a natural extension of classical signal processing theory.
Comment: This will appear in SIAM Journal on Imaging Sciences.
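One of the paper's observations, that concatenated ReLU (CReLU) helps meet the perfect-reconstruction condition, has a one-line numerical core: keeping both ReLU(x) and ReLU(-x) channels makes the nonlinearity invertible. A minimal sketch:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# CReLU keeps two redundant channels; no information is destroyed,
# because x = ReLU(x) - ReLU(-x) recovers the input exactly.
x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
crelu = np.stack([relu(x), relu(-x)])
recovered = crelu[0] - crelu[1]

print(np.allclose(recovered, x))  # True
```

This is why redundant filter channels appear in the paper's list of components that help achieve PR: the redundancy compensates for the information the one-sided ReLU would otherwise discard.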
Pseudo Dual Energy CT Imaging using Deep Learning Based Framework: Initial Study
Dual energy computed tomography (DECT) has attracted particular interest in
the clinic in recent years. A DECT scan comprises two images, corresponding to
two photon attenuation coefficient maps of the object. Meanwhile, DECT images
are sometimes less accessible than conventional single energy CT (SECT)
images. This motivates us to simulate pseudo DECT (pDECT) images from the SECT
images. Inspired by recent advances in deep learning, we present a deep
learning based framework to yield pDECT images from SECT images, utilizing the
intrinsic characteristics underlying DECT images, i.e., global correlation and
high similarity. To demonstrate the performance of the deep learning based
framework, a cascade deep ConvNet (CD-ConvNet) approach is specifically
presented in the deep learning framework. In the training step, the CD-ConvNet
is designed to learn the non-linear mapping from the measured energy-specific
(i.e., low-energy) CT images to the desired energy-specific (i.e., high-energy)
CT images. In the testing step, the trained CD-ConvNet can be used to yield
desired high-energy CT images from the low-energy CT images, and then produce
accurate basic material maps. Clinical patient data were employed to validate
and evaluate the performance of the presented CD-ConvNet approach. Both visual
and quantitative results demonstrate that the presented CD-ConvNet approach can yield
high quality pDECT images and basic material maps.
Comment: 5 pages, 6 figures.