5,966 research outputs found
Computationally Efficient Deep Neural Network for Computed Tomography Image Reconstruction
Deep-neural-network-based image reconstruction has demonstrated promising
performance in medical imaging for under-sampled and low-dose scenarios.
However, it requires a large amount of memory and extensive training time.
Training reconstruction networks for three-dimensional computed tomography
(CT) is especially challenging because of the high resolution of CT images.
The purpose of this work is to reduce the memory and time consumption of
training reconstruction networks for CT, making it practical on current
hardware while maintaining the quality of the reconstructed images.
We unrolled the proximal gradient descent algorithm for iterative image
reconstruction to finite iterations and replaced the terms related to the
penalty function with trainable convolutional neural networks (CNN). The
network was trained greedily, iteration by iteration, in the image domain
on patches, which requires a reasonable amount of memory and time on a
mainstream graphics processing unit (GPU). To overcome the local-minimum
problem caused by greedy learning, we used a deep U-Net as the CNN and
incorporated separable quadratic surrogates with ordered subsets for data
fidelity, so that the solution could escape shallow local minima and
achieve better image quality.
The proposed method achieved image quality comparable to state-of-the-art
neural networks for CT image reconstruction on 2D sparse-view and
limited-angle problems on the low-dose CT challenge dataset.

Comment: 33 pages, 14 figures, accepted by Medical Physics
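As a rough, self-contained illustration of the unrolled scheme this abstract describes, the sketch below alternates a data-fidelity gradient step with a "proximal" step on a toy 1-D linear inverse problem. The paper trains a deep U-Net for the proximal step; here a fixed 3-tap smoother stands in for it, and the random matrix `A` is only a stand-in for the CT forward projector.

```python
import numpy as np

# Hedged sketch: unrolled proximal gradient descent with the proximal step
# replaced by a denoiser. All sizes, the operator A, and the denoiser are
# illustrative assumptions, not the paper's implementation.

rng = np.random.default_rng(0)
n, m = 64, 32
A = rng.standard_normal((m, n)) / np.sqrt(m)      # toy forward operator
x_true = np.zeros(n); x_true[10:20] = 1.0         # piecewise-constant target
y = A @ x_true + 0.01 * rng.standard_normal(m)    # noisy measurements

def denoiser(v):
    """Stand-in for the trained CNN: a 3-tap moving average."""
    return np.convolve(v, np.ones(3) / 3.0, mode="same")

step = 1.0 / np.linalg.norm(A, 2) ** 2            # safe gradient step size
x = np.zeros(n)
for _ in range(20):                               # finite unrolled iterations
    x = x - step * A.T @ (A @ x - y)              # data-fidelity gradient step
    x = denoiser(x)                               # proximal step (stand-in)

err = np.linalg.norm(x - x_true)
```

The point is only the alternation structure: in the paper each denoiser slot is a separate network trained greedily, iteration by iteration.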
Non-local Low-rank Cube-based Tensor Factorization for Spectral CT Reconstruction
Spectral computed tomography (CT) reconstructs material-dependent
attenuation images from the projections of multiple narrow energy windows,
which is valuable for material identification and decomposition.
Unfortunately, the multi-energy projection dataset always contains strong,
complicated noise, resulting in projections with a low signal-to-noise
ratio (SNR). Very recently, the spatial-spectral cube matching frame
(SSCMF) was proposed to exploit non-local spatial-spectral similarities for
spectral CT. The method constructs a group by clustering a series of
non-local spatial-spectral cubes. The small spatial patch size of such
groups prevents SSCMF from fully encoding sparsity and low-rank properties.
In addition, the hard-thresholding and collaborative filtering operations
in SSCMF are too coarse to recover image features and spatial edges.
Moreover, because all steps operate on 4-D groups, the computational and
memory load may be unaffordable in practice.
To avoid these limitations and further improve image quality, we first
formulate a non-local cube-based tensor, instead of the group, to encode
sparsity and low-rank properties. Then, Kronecker-Basis-Representation
(KBR) tensor factorization is employed as a new regularizer in a basic
spectral CT reconstruction model to enhance the extraction of image
features and the preservation of spatial edges, yielding the non-local
low-rank cube-based tensor factorization (NLCTF) method. Finally, the
split-Bregman strategy is adopted to solve the NLCTF model. Both numerical
simulations and realistic preclinical mouse studies are performed to
validate and assess the NLCTF algorithm. The results show that the NLCTF
method outperforms the other competitors.
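The matrix analogue of the low-rank-promoting step inside regularizers like the KBR factorization is singular-value thresholding, the proximal operator of the nuclear norm. The sketch below applies it to a toy noisy rank-2 matrix; the sizes and threshold are illustrative assumptions, not the paper's tensor formulation.

```python
import numpy as np

# Hedged sketch: singular-value thresholding (SVT) as the basic low-rank
# proximal step. NLCTF generalizes this idea to non-local 4-D tensors.

rng = np.random.default_rng(1)
low_rank = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
noisy = low_rank + 0.1 * rng.standard_normal((30, 30))

def svt(M, tau):
    """Soft-threshold the singular values of M by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

denoised = svt(noisy, tau=1.0)
err_before = np.linalg.norm(noisy - low_rank)
err_after = np.linalg.norm(denoised - low_rank)
```

Because the noise spreads its energy across many small singular values while the signal concentrates in a few large ones, thresholding suppresses noise with little signal loss.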
Super-resolution MRI through Deep Learning
Magnetic resonance imaging (MRI) is extensively used for diagnosis and
image-guided therapeutics. Due to hardware, physical, and physiological
limitations, acquiring high-resolution MRI data requires long scan times at
high system cost, and may be limited to low spatial coverage and subject to
motion artifacts. Super-resolution MRI can be achieved with deep learning,
which is a promising approach with great potential for preclinical and
clinical imaging. Compared with polynomial interpolation or sparse-coding
algorithms, deep learning extracts prior knowledge from big data and
produces superior MRI images from a low-resolution counterpart. In this
paper, we adapt two state-of-the-art neural network models for CT denoising
and deblurring, transfer them to super-resolution MRI, and demonstrate
encouraging super-resolution MRI results toward two-fold resolution
enhancement.
Generative Adversarial Network in Medical Imaging: A Review
Generative adversarial networks have gained a lot of attention in the
computer vision community due to their capability of data generation without
explicitly modelling the probability density function. The adversarial loss
brought by the discriminator provides a clever way of incorporating unlabeled
samples into training and imposing higher order consistency. This has proven to
be useful in many cases, such as domain adaptation, data augmentation, and
image-to-image translation. These properties have attracted researchers in the
medical imaging community, and we have seen rapid adoption in many traditional
and novel applications, such as image reconstruction, segmentation, detection,
classification, and cross-modality synthesis. Based on our observations, this
trend will continue and we therefore conducted a review of recent advances in
medical imaging using the adversarial training scheme with the hope of
benefiting researchers interested in this technique.

Comment: 24 pages; v4; added missing references from before Jan 1st 2019;
accepted to MedIA
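As a concrete illustration of the adversarial loss this review discusses, the sketch below computes the standard binary cross-entropy discriminator loss and the non-saturating generator loss from discriminator logits. The logits are synthetic placeholders, not outputs of a real network.

```python
import numpy as np

# Hedged sketch of the adversarial training losses: real samples are
# labeled 1, generated samples 0, and the generator uses the
# non-saturating objective. Logit values below are illustrative.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminator_loss(real_logits, fake_logits):
    """Binary cross-entropy: real labeled 1, generated labeled 0."""
    return -np.mean(np.log(sigmoid(real_logits))
                    + np.log(1.0 - sigmoid(fake_logits)))

def generator_loss(fake_logits):
    """Non-saturating form: the generator pushes D(G(z)) toward 1."""
    return -np.mean(np.log(sigmoid(fake_logits)))

rng = np.random.default_rng(2)
real_logits = rng.normal(2.0, 1.0, 8)    # D is confident on real samples
fake_logits = rng.normal(-2.0, 1.0, 8)   # and correctly rejects fakes
d_loss = discriminator_loss(real_logits, fake_logits)
g_loss = generator_loss(fake_logits)
```

The discriminator's gradient through `fake_logits` is what lets unlabeled samples shape the generator, which is the property the review highlights.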
Comparison of projection domain, image domain, and comprehensive deep learning for sparse-view X-ray CT image reconstruction
X-ray computed tomography (CT) imaging has been widely used in clinical
diagnosis, non-destructive examination, and public safety inspection.
Sparse-view CT has great potential for radiation dose reduction and scan
acceleration. However, sparse-view CT data are insufficient, and
traditional reconstruction yields severe streaking artifacts. In this work,
based on deep learning, we compared image reconstruction performance for
sparse-view CT with a projection-domain network, an image-domain network,
and a comprehensive network combining the projection and image domains. Our
study was executed with numerically simulated projections of CT images from
real scans. The results demonstrate that deep learning networks can
effectively reconstruct rich high-frequency structural information without
the streaking artifacts commonly seen in sparse-view CT. A comprehensive
network combining deep learning in both the projection domain and the image
domain achieves the best results.
Convolutional Sparse Coding for Compressed Sensing CT Reconstruction
Over the past few years, dictionary learning (DL)-based methods have been
successfully used in various image reconstruction problems. However,
traditional DL-based computed tomography (CT) reconstruction methods are
patch-based and ignore the consistency of pixels in overlapped patches. In
addition, the features learned by these methods always contain shifted versions
of the same features. In recent years, convolutional sparse coding (CSC) has
been developed to address these problems. In this paper, inspired by several
successful applications of CSC in the field of signal processing, we explore
the potential of CSC in sparse-view CT reconstruction. By working directly
on the whole image, without dividing it into overlapping patches as
DL-based methods do, the proposed methods can preserve more details and
avoid artifacts caused by patch aggregation. With predetermined filters, an
alternating scheme is developed to optimize the objective function. Extensive
experiments with simulated and real CT data were performed to validate the
effectiveness of the proposed methods. Qualitative and quantitative results
demonstrate that the proposed methods achieve better performance than several
existing state-of-the-art methods.

Comment: Accepted by IEEE TMI
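A minimal 1-D sketch of convolutional sparse coding with predetermined filters follows: it minimizes 0.5*||x - sum_k d_k * z_k||^2 + lam*||z||_1 over the whole signal by ISTA, with no patching. The two small filters, the synthetic signal, and all parameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Hedged sketch of CSC on a whole signal: sparse feature maps z_k are
# convolved with fixed filters d_k and fit to the observation by ISTA.

filters = np.array([[1.0, 2.0, 1.0], [1.0, -1.0, 0.0]]) / 3.0

def reconstruct(z):
    """Sum of per-filter convolutions over the whole signal (no patches)."""
    return sum(np.convolve(z[k], filters[k], mode="same")
               for k in range(len(filters)))

n = 50
z_true = np.zeros((2, n)); z_true[0, 12] = 3.0; z_true[1, 30] = -2.0
x = reconstruct(z_true)                       # synthetic observed signal

lam, step = 0.05, 0.4                         # step < 1 / Lipschitz constant
z = np.zeros((2, n))
for _ in range(200):                          # ISTA iterations
    r = reconstruct(z) - x                    # residual
    grad = np.array([np.convolve(r, filters[k][::-1], mode="same")
                     for k in range(len(filters))])
    z = z - step * grad                       # gradient step on the data term
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

err = np.linalg.norm(reconstruct(z) - x)
```

Because the code operates on the full signal, overlapping-patch aggregation and its attendant inconsistencies never arise, which is the advantage the abstract emphasizes.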
Iterative PET Image Reconstruction Using Convolutional Neural Network Representation
PET image reconstruction is challenging due to the ill-posedness of the
inverse problem and the limited number of detected photons. Recently, deep
neural networks have been widely and successfully used in computer vision
tasks and have attracted growing interest in medical imaging. In this work,
we trained a deep
residual convolutional neural network to improve PET image quality by using the
existing inter-patient information. An innovative feature of the proposed
method is that we embed the neural network in the iterative reconstruction
framework for image representation, rather than using it as a post-processing
tool. We formulate the objective function as a constrained optimization problem
and solve it using the alternating direction method of multipliers (ADMM)
algorithm. Both simulation data and hybrid real data are used to evaluate the
proposed method. Quantification results show that our proposed iterative neural
network method can outperform the neural network denoising and conventional
penalized maximum likelihood methods.
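The ADMM splitting this abstract describes can be sketched on a toy linear problem: the image is split into a data-fidelity variable and a representation variable, which alternate with a dual update. The "network" representation step is replaced here by a simple smoother so the example stays self-contained, and the random matrix `A` is only a stand-in for the PET system model.

```python
import numpy as np

# Hedged sketch of ADMM with a learned-representation constraint x = f(.).
# The 3-tap smoother below is an illustrative stand-in for the trained CNN.

rng = np.random.default_rng(4)
n, m = 40, 25
A = rng.standard_normal((m, n)) / np.sqrt(m)     # toy forward model
x_true = np.zeros(n); x_true[15:25] = 1.0
y = A @ x_true + 0.02 * rng.standard_normal(m)

def represent(v):
    """Stand-in for the CNN image-representation step."""
    return np.convolve(v, np.ones(3) / 3.0, mode="same")

rho = 1.0
x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)    # primal, split, scaled dual
H = A.T @ A + rho * np.eye(n)
for _ in range(30):
    x = np.linalg.solve(H, A.T @ y + rho * (z - u))  # data-fidelity subproblem
    z = represent(x + u)                             # representation subproblem
    u = u + x - z                                    # scaled dual ascent

err = np.linalg.norm(x - x_true)
```

Embedding the representation inside the iterations, rather than applying it once afterward, is exactly the distinction from post-processing that the abstract draws.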
PWLS-ULTRA: An Efficient Clustering and Learning-Based Approach for Low-Dose 3D CT Image Reconstruction
The development of computed tomography (CT) image reconstruction methods that
significantly reduce patient radiation exposure while maintaining high image
quality is an important area of research in low-dose CT (LDCT) imaging. We
propose a new penalized weighted least squares (PWLS) reconstruction method
that exploits regularization based on an efficient Union of Learned TRAnsforms
(PWLS-ULTRA). The union of square transforms is pre-learned from numerous image
patches extracted from a dataset of CT images or volumes. The proposed
PWLS-based cost function is optimized by alternating between a CT image
reconstruction step, and a sparse coding and clustering step. The CT image
reconstruction step is accelerated by a relaxed linearized augmented Lagrangian
method with ordered-subsets that reduces the number of forward and back
projections. Simulations with 2-D and 3-D axial CT scans of the extended
cardiac-torso phantom and 3D helical chest and abdomen scans show that for both
normal-dose and low-dose levels, the proposed method significantly improves the
quality of reconstructed images compared to PWLS reconstruction with a
nonadaptive edge-preserving regularizer (PWLS-EP). PWLS with regularization
based on a union of learned transforms leads to better image reconstructions
than using a single learned square transform. We also incorporate patch-based
weights in PWLS-ULTRA that enhance image quality and help improve image
resolution uniformity. The proposed approach achieves comparable or better
image quality than learned overcomplete synthesis dictionaries but,
importantly, is much more computationally efficient.

Comment: Accepted to IEEE Transactions on Medical Imaging
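The clustering-plus-sparse-coding step of a union-of-transforms scheme can be sketched simply: each patch is assigned to whichever square transform sparsifies it best. The two transforms below (identity and an orthonormal DCT) are toy stand-ins for the pre-learned transforms, and 1-D patches are used for brevity.

```python
import numpy as np

# Hedged sketch of the ULTRA-style clustering step: assign each patch to
# the transform with the smallest error after hard-thresholding its
# transform coefficients. Transforms and patch size are illustrative.

p = 8                                            # 1-D patch length
kk = np.arange(p)[:, None]
nn = np.arange(p)[None, :]
dct = np.sqrt(2.0 / p) * np.cos(np.pi * (nn + 0.5) * kk / p)
dct[0, :] /= np.sqrt(2.0)                        # orthonormal DCT-II matrix
transforms = [np.eye(p), dct]

def hard_threshold(a, k=2):
    """Keep the k largest-magnitude coefficients, zero the rest."""
    out = np.zeros_like(a)
    keep = np.argsort(np.abs(a))[-k:]
    out[keep] = a[keep]
    return out

def assign(patch):
    """Index of the transform with the smallest sparsification error."""
    errors = [np.linalg.norm(W @ patch - hard_threshold(W @ patch))
              for W in transforms]
    return int(np.argmin(errors))

spiky = np.zeros(p); spiky[3] = 1.0              # best sparsified by identity
smooth = np.cos(np.pi * np.arange(p) / p)        # best sparsified by the DCT
```

Grouping patches by which transform models them best is what lets a union of simple square transforms outperform any single one, as the abstract reports.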
X2CT-GAN: Reconstructing CT from Biplanar X-Rays with Generative Adversarial Networks
Computed tomography (CT) can provide a 3D view of the patient's internal
organs, facilitating disease diagnosis, but it delivers a higher radiation
dose to the patient, and a CT scanner is far more expensive than an X-ray
machine. Traditional CT reconstruction methods require hundreds of X-ray
projections through a full rotational scan of the body, which cannot be
performed on a typical X-ray machine. In this work, we propose to
reconstruct CT from two orthogonal X-rays using the generative adversarial
network (GAN) framework. A specially designed generator network is
exploited to increase the data dimension from 2D (X-rays) to 3D (CT), which
has not been addressed in previous GAN research. A novel feature fusion
method is proposed to combine information from the two X-rays. The mean
squared error (MSE) loss and adversarial loss are combined to train the
generator, resulting in a high-quality CT volume both visually and
quantitatively. Extensive experiments on a publicly available chest CT
dataset demonstrate the effectiveness of the proposed method. It could be a
valuable enhancement to a low-cost X-ray machine, providing physicians a
CT-like 3D volume in several niche applications.
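The combined objective the abstract mentions is a weighted sum of an MSE reconstruction term and an adversarial term. The sketch below shows that combination; the weight, the tiny stand-in "volumes", and the single discriminator logit are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch of a combined MSE + adversarial generator objective.
# lam_mse and all data are illustrative, not the paper's settings.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def generator_objective(pred_vol, true_vol, fake_logit, lam_mse=10.0):
    mse = np.mean((pred_vol - true_vol) ** 2)   # voxel-wise reconstruction term
    adv = -np.log(sigmoid(fake_logit))          # non-saturating adversarial term
    return lam_mse * mse + adv

rng = np.random.default_rng(6)
true_vol = rng.random((4, 4, 4))                              # stand-in volume
close_pred = true_vol + 0.01 * rng.standard_normal((4, 4, 4))
far_pred = rng.random((4, 4, 4))
loss_close = generator_objective(close_pred, true_vol, fake_logit=0.0)
loss_far = generator_objective(far_pred, true_vol, fake_logit=0.0)
```

The MSE term anchors the volume to the ground truth voxel-wise, while the adversarial term rewards realism the MSE alone cannot measure.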
Real-Time 2D-3D Deformable Registration with Deep Learning and Application to Lung Radiotherapy Targeting
Radiation therapy presents a need for dynamic tracking of a target tumor
volume. Fiducial markers such as implanted gold seeds have been used to
gate radiation delivery, but the markers are invasive and gating
significantly increases treatment time. Pretreatment acquisition of a
respiratory-correlated 4DCT enables accurate motion tracking, which is
useful in treatment planning. We design a patient-specific motion subspace
and a deep convolutional neural network to recover anatomical positions
from a single fluoroscopic projection in real time. We use this deep
network to approximate the nonlinear inverse of a diffeomorphic deformation
composed with radiographic projection. The network recovers subspace
coordinates that define the patient-specific deformation of the lungs from
a baseline anatomic position. The geometric accuracy of the subspace
deformations on real patient data is similar to that attained by original
image registration between individual respiratory-phase image volumes.
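A patient-specific motion subspace of the kind this abstract describes can be sketched with the SVD: displacement fields from the respiratory phases are stacked, a low-dimensional basis is extracted, and any anatomical position is then described by a few subspace coordinates. The sizes and the synthetic two-mode breathing model below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: build a low-rank motion subspace from per-phase
# displacement fields and represent a deformation by its coordinates.

rng = np.random.default_rng(7)
n_vox, n_phases = 200, 10
modes = rng.standard_normal((n_vox, 2))           # latent breathing modes
weights = rng.standard_normal((2, n_phases))      # per-phase mode weights
fields = modes @ weights + 0.01 * rng.standard_normal((n_vox, n_phases))

mean = fields.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(fields - mean, full_matrices=False)
basis = U[:, :2]                                  # 2-D motion subspace

# A deformation is then represented by its coordinates in that subspace.
field = fields[:, 0]
coords = basis.T @ (field - mean[:, 0])           # subspace coordinates
approx = mean[:, 0] + basis @ coords
rel_err = np.linalg.norm(approx - field) / np.linalg.norm(field)
```

In the paper's setting the network predicts such coordinates from a single fluoroscopic projection; here the sketch only shows why a few coordinates suffice to describe the full field.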