Compressive hyperspectral imaging via adaptive sampling and dictionary learning
In this paper, we propose a new sampling strategy for hyperspectral signals
that is based on dictionary learning and singular value decomposition (SVD).
Specifically, we first learn a sparsifying dictionary from training spectral
data using dictionary learning. We then perform an SVD on the dictionary and
use the first few left singular vectors as the rows of the measurement matrix
to obtain the compressive measurements for reconstruction. The proposed method
provides a significant improvement over conventional compressive sensing
approaches. The reconstruction performance is further improved by
reconditioning the sensing matrix using matrix balancing. We also demonstrate
that the combination of dictionary learning and SVD is robust by applying it
to different datasets.
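The sampling strategy above can be sketched in a few lines, assuming a dictionary D is already available (here a random stand-in rather than one actually learned from spectral training data): take the SVD of D and use the first few left singular vectors as the rows of the measurement matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a learned sparsifying dictionary (n_atoms atoms of length d);
# in the paper this would come from dictionary learning on training spectra.
d, n_atoms, k = 64, 128, 8
D = rng.standard_normal((d, n_atoms))

# SVD of the dictionary; the left singular vectors span the column space of D.
U, s, Vt = np.linalg.svd(D, full_matrices=False)

# Use the first k left singular vectors as rows of the measurement matrix.
Phi = U[:, :k].T                       # shape (k, d), rows are orthonormal

# Compressive measurement of a test spectrum x.
x = D @ rng.standard_normal(n_atoms)   # a signal living in the span of D
y = Phi @ x                            # k measurements instead of d samples
print(Phi.shape, y.shape)
```

Because the rows of Phi are orthonormal, the sensing matrix is already well conditioned before any matrix balancing is applied.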
Data-Driven Learning of a Union of Sparsifying Transforms Model for Blind Compressed Sensing
Compressed sensing is a powerful tool in applications such as magnetic
resonance imaging (MRI). It enables accurate recovery of images from highly
undersampled measurements by exploiting the sparsity of the images or image
patches in a transform domain or dictionary. In this work, we focus on blind
compressed sensing (BCS), where the underlying sparse signal model is a priori
unknown, and propose a framework to simultaneously reconstruct the underlying
image as well as the unknown model from highly undersampled measurements.
Specifically, our model is that the patches of the underlying image(s) are
approximately sparse in a transform domain. We also extend this model to a
union of transforms model that better captures the diversity of features in
natural images. The proposed block coordinate descent type algorithms for blind
compressed sensing are highly efficient, and are guaranteed to converge to at
least the partial global and partial local minimizers of the highly non-convex
BCS problems. Our numerical experiments show that the proposed framework
usually leads to better quality of image reconstructions in MRI compared to
several recent image reconstruction methods. Importantly, the learning of a
union of sparsifying transforms leads to better image reconstructions than a
single adaptive transform. Comment: Appears in IEEE Transactions on Computational Imaging, 201
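One block coordinate descent iteration of the union-of-transforms idea can be sketched as follows; the transforms, patches, and threshold below are illustrative stand-ins, not the paper's trained model. Each patch is assigned to the transform that sparsifies it best, then sparse-coded by hard thresholding in that transform's domain.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: K square transforms, image patches stored as columns.
p, K, N, tau = 16, 2, 50, 1.0          # patch size, #transforms, #patches, threshold
W = [np.linalg.qr(rng.standard_normal((p, p)))[0] for _ in range(K)]  # orthonormal inits
X = rng.standard_normal((p, N))        # vectorized image patches

def hard_threshold(z, tau):
    return z * (np.abs(z) >= tau)

# Clustering step: assign each patch to the transform with the smallest
# transform-domain sparsification residual.
errs = np.stack([
    np.sum((Wk @ X - hard_threshold(Wk @ X, tau)) ** 2, axis=0) for Wk in W
])
labels = np.argmin(errs, axis=0)

# Sparse coding step: threshold each patch in its assigned transform domain.
codes = [hard_threshold(W[k] @ X[:, labels == k], tau) for k in range(K)]
print(labels.shape, [c.shape for c in codes])
```

In the full BCS algorithm these two steps alternate with a transform update and an image update from the undersampled k-space data.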
Sparse-View X-Ray CT Reconstruction Using ℓ1 Prior with Learned Transform
A major challenge in X-ray computed tomography (CT) is reducing radiation
dose while maintaining high quality of reconstructed images. To reduce the
radiation dose, one can reduce the number of projection views (sparse-view CT);
however, it becomes difficult to achieve high-quality image reconstruction as
the number of projection views decreases. Researchers have applied the concept
of learning sparse representations from (high-quality) CT image datasets to
sparse-view CT reconstruction. We propose a new statistical CT reconstruction
model that combines penalized weighted-least squares (PWLS) and an ℓ1 prior
with learned sparsifying transform (PWLS-ST-ℓ1), and a corresponding
efficient algorithm based on the Alternating Direction Method of Multipliers
(ADMM). To moderate the difficulty of tuning ADMM parameters, we propose a new
ADMM parameter selection scheme based on approximated condition numbers. We
interpret the proposed model by analyzing the minimum mean square error of its
(ℓ2-norm relaxed) image update estimator. Our results with the extended
cardiac-torso (XCAT) phantom data and clinical chest data show that, for
sparse-view 2D fan-beam CT and 3D axial cone-beam CT, PWLS-ST-ℓ1 improves
the quality of reconstructed images compared to the CT reconstruction methods
using an edge-preserving regularizer and an ℓ2 prior with learned ST. These
results also show that, for sparse-view 2D fan-beam CT, PWLS-ST-ℓ1
achieves comparable or better image quality and requires much shorter runtime
than PWLS-DL using a learned overcomplete dictionary. Our results with clinical
chest data show that methods using the unsupervised learned prior generalize
better than a state-of-the-art deep "denoising" neural network that does not
use a physical imaging model. Comment: The first two authors contributed equally to this work
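As a sketch, a PWLS cost with a learned sparsifying transform prior commonly takes the following form; the exact weighting, constraint, and parameterization details follow the paper, and the ℓ1 penalty on the sparse codes is assumed here.

```latex
% Sketch of a PWLS objective with a learned sparsifying transform (ST) prior.
% Symbols: x = image, y = sinogram, A = system matrix, W = statistical weights,
% \Omega = learned transform, P_j = patch extraction, z_j = sparse codes,
% \beta, \gamma = regularization parameters.
\hat{x} = \arg\min_{x \ge 0} \; \frac{1}{2}\,\| y - A x \|_W^2
  + \beta \min_{\{z_j\}} \sum_j \Big( \| \Omega P_j x - z_j \|_2^2
  + \gamma \, \| z_j \|_1 \Big)
```

The ADMM algorithm alternates between an image update (a weighted least-squares problem) and a sparse-code update (soft thresholding of the transformed patches).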
Robust X-ray Sparse-view Phase Tomography via Hierarchical Synthesis Convolutional Neural Networks
Convolutional Neural Network (CNN)-based image reconstruction methods have
been widely used for X-ray computed tomography (CT) reconstruction
applications. Despite great success, the good performance of this data-driven
approach relies critically on a representative, large training data set and a
dense, deep convolutional network. The indiscriminate convolution connections
over all dense layers can be prone to over-fitting, where sampling biases are
wrongly integrated as features for the reconstruction. In this paper, we report
a robust hierarchical synthesis reconstruction approach, where training data is
pre-processed to separate the information on the domains where sampling biases
are suspected. These split bands are then trained separately and combined
successively through a hierarchical synthesis network. We apply the
hierarchical synthesis reconstruction for two important and classical
tomography reconstruction scenarios: the sparse-view reconstruction and the
phase reconstruction. Our simulated and experimental results show that
comparable or improved performances are achieved with a dramatic reduction of
network complexity and computational cost. This method can be generalized to a
wide range of applications including material characterization, in-vivo
monitoring and dynamic 4D imaging. Comment: 9 pages, 6 figures, 2 tables
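The band-separation pre-processing can be illustrated with a toy decomposition; the box low-pass filter below is an assumption for illustration, not the paper's actual band split. The key property is that the split is lossless, so the sub-networks trained on the separate bands jointly see all of the information.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical pre-processing step: split a training image into a smooth band
# and a residual band, so each band can be handled by its own sub-network
# before the hierarchical synthesis network recombines them.
img = rng.standard_normal((8, 8))

k = np.ones((3, 3)) / 9.0               # simple 3x3 box low-pass (an assumption)
pad = np.pad(img, 1, mode='edge')
low = sum(pad[i:i + 8, j:j + 8] * k[i, j] for i in range(3) for j in range(3))
high = img - low                        # residual band where biases are suspected

# Synthesis: the bands recombine exactly, so no information is lost by the split.
assert np.allclose(low + high, img)
print(low.shape, high.shape)
```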
Framing U-Net via Deep Convolutional Framelets: Application to Sparse-view CT
X-ray computed tomography (CT) using sparse projection views is a recent
approach to reduce the radiation dose. However, due to the insufficient
projection views, an analytic reconstruction approach using the filtered back
projection (FBP) produces severe streaking artifacts. Recently, deep learning
approaches using large receptive field neural networks such as U-Net have
demonstrated impressive performance for sparse-view CT reconstruction.
However, theoretical justification is still lacking. Inspired by the recent
theory of deep convolutional framelets, the main goal of this paper is,
therefore, to reveal the limitation of U-Net and propose new multi-resolution
deep learning schemes. In particular, we show that the alternative U-Net
variants such as the dual frame and the tight frame U-Nets satisfy the
so-called frame condition, which makes them better suited for effective
recovery of high-frequency edges in sparse-view CT. Using extensive experiments with real patient data
set, we demonstrate that the new network architectures provide better
reconstruction performance. Comment: This will appear in IEEE Transactions on Medical Imaging, a special
issue on Machine Learning for Image Reconstruction
Convolutional Sparse Coding for Compressed Sensing CT Reconstruction
Over the past few years, dictionary learning (DL)-based methods have been
successfully used in various image reconstruction problems. However,
traditional DL-based computed tomography (CT) reconstruction methods are
patch-based and ignore the consistency of pixels in overlapped patches. In
addition, the features learned by these methods always contain shifted versions
of the same features. In recent years, convolutional sparse coding (CSC) has
been developed to address these problems. In this paper, inspired by several
successful applications of CSC in the field of signal processing, we explore
the potential of CSC in sparse-view CT reconstruction. By directly working on
the whole image, without the necessity of dividing the image into overlapped
patches in DL-based methods, the proposed methods can maintain more details and
avoid artifacts caused by patch aggregation. With predetermined filters, an
alternating scheme is developed to optimize the objective function. Extensive
experiments with simulated and real CT data were performed to validate the
effectiveness of the proposed methods. Qualitative and quantitative results
demonstrate that the proposed methods achieve better performance than several
existing state-of-the-art methods. Comment: Accepted by IEEE TM
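A 1-D toy of convolutional sparse coding with predetermined filters, approximating the whole signal as a sum of convolutions with sparse coefficient maps; ISTA is used here as a simple solver in place of the paper's alternating scheme, and the two filters are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)

# Approximate a whole signal x as sum_k (d_k * z_k) with sparse maps z_k,
# working directly on the signal rather than on overlapping patches.
n, lam, eta = 64, 0.05, 0.1
filters = [np.array([1., 2., 1.]) / np.sqrt(6),    # smoothing filter (assumed)
           np.array([1., 0., -1.]) / np.sqrt(2)]   # derivative filter (assumed)
x = rng.standard_normal(n)

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
Z = [np.zeros(n) for _ in filters]
for _ in range(100):
    recon = sum(np.convolve(z, d, mode='same') for z, d in zip(Z, filters))
    r = recon - x
    # Gradient of 0.5*||sum_k d_k*z_k - x||^2 w.r.t. z_k is the correlation
    # of the residual with d_k, i.e. convolution with the flipped filter.
    Z = [soft(z - eta * np.convolve(r, d[::-1], mode='same'), eta * lam)
         for z, d in zip(Z, filters)]

final = sum(np.convolve(z, d, mode='same') for z, d in zip(Z, filters))
print(round(float(np.linalg.norm(final - x) / np.linalg.norm(x)), 3))
```

Working on the full signal avoids the block artifacts that arise when independently coded overlapping patches are averaged back together.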
A Compressed Sensing Algorithm for Sparse-View Pinhole Single Photon Emission Computed Tomography
Single Photon Emission Computed Tomography (SPECT) systems are being developed with multiple cameras and without gantry rotation to provide rapid dynamic acquisitions. However, the resulting data are angularly undersampled, due to the limited number of views. We propose a novel reconstruction algorithm for sparse-view SPECT based on Compressed Sensing (CS) theory. The algorithm models Poisson noise by modifying the Iterative Hard Thresholding algorithm to minimize the Kullback-Leibler (KL) distance by gradient descent. Because the underlying objects in SPECT images are expected to be smooth, a discrete wavelet transform (DWT) using an orthogonal spline wavelet kernel is used as the sparsifying transform. Preliminary feasibility of the algorithm was tested on simulated data of a phantom consisting of two Gaussian distributions. Single-pinhole projection data with Poisson noise were simulated at 128, 60, 15, 10, and 5 views over 360 degrees. Image quality was assessed using the coefficient of variation and the relative contrast between the two objects in the phantom. Overall, the results demonstrate preliminary feasibility of the proposed CS algorithm for sparse-view SPECT imaging.
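The modified IHT iteration can be sketched as below; this toy uses pixel-domain sparsity and a random nonnegative system matrix as stand-ins for the paper's spline-wavelet DWT and pinhole projection model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy projection model: A maps an n-pixel image to m sparse-view measurements.
n, m, s = 100, 40, 5
A = np.abs(rng.standard_normal((m, n)))        # nonnegative system matrix (illustrative)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.uniform(5, 10, s)
y = rng.poisson(A @ x_true).astype(float)      # Poisson-noisy counts

def keep_s_largest(v, s):
    # Hard-thresholding operator: keep the s largest-magnitude coefficients.
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

x = np.ones(n)
mu = 1e-3
for _ in range(200):
    Ax = np.maximum(A @ x, 1e-9)
    grad = A.T @ (1.0 - y / Ax)                # gradient of the KL (Poisson) objective
    x = np.maximum(keep_s_largest(x - mu * grad, s), 0.0)  # IHT step + nonnegativity
print(np.count_nonzero(x))
```

The only change from standard IHT is the gradient: the KL objective for Poisson counts replaces the usual least-squares residual.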
Depth Reconstruction from Sparse Samples: Representation, Algorithm, and Sampling
The rapid development of 3D technology and computer vision applications has
motivated a surge of methodologies for depth acquisition and estimation.
However, most existing hardware and software methods have limited performance
due to poor depth precision, low resolution and high computational cost. In
this paper, we present a computationally efficient method to recover dense
depth maps from sparse measurements. We make three contributions. First, we
provide empirical evidence that depth maps can be encoded much more sparsely
than natural images by using common dictionaries such as wavelets and
contourlets. We also show that a combined wavelet-contourlet dictionary
achieves better performance than using either dictionary alone. Second, we
propose an alternating direction method of multipliers (ADMM) to achieve fast
reconstruction. A multi-scale warm start procedure is proposed to speed up the
convergence. Third, we propose a two-stage randomized sampling scheme to
optimally choose the sampling locations, thus maximizing the reconstruction
performance for any given sampling budget. Experimental results show that the
proposed method produces high quality dense depth estimates, and is robust to
noisy measurements. Applications to real data in stereo matching are
demonstrated.
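The ADMM reconstruction step can be sketched on a toy problem; here the signal is assumed sparse directly rather than in a wavelet-contourlet dictionary, and the multi-scale warm start is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy version of recovery from sparse samples: observe b = S x at the sampled
# locations, then reconstruct x under a sparsity prior via ADMM.
n, m, lam, rho = 60, 30, 0.1, 1.0
S = np.zeros((m, n))
S[np.arange(m), rng.choice(n, m, replace=False)] = 1.0   # sampling operator
x_true = np.zeros(n)
x_true[rng.choice(n, 6, replace=False)] = rng.standard_normal(6)
b = S @ x_true

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
M = np.linalg.inv(S.T @ S + rho * np.eye(n))   # cached factor for the x-update
for _ in range(100):
    x = M @ (S.T @ b + rho * (z - u))          # quadratic x-update
    z = soft(x + u, lam / rho)                 # l1 proximal (shrinkage) step
    u = u + x - z                              # dual update on the constraint x = z
print(round(float(np.linalg.norm(x - z)), 4))
```

A warm start would initialize x, z, and u from a coarser-scale solve instead of zeros, which is what speeds up convergence in the paper.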
Accelerating MR Imaging via Deep Chambolle-Pock Network
Compressed sensing (CS) has been introduced to accelerate data acquisition in
MR Imaging. However, CS-MRI methods suffer from detail loss at large
acceleration factors and from complicated parameter selection. To address the limitations of
existing CS-MRI methods, a model-driven MR reconstruction is proposed that
trains a deep network, named CP-net, which is derived from the Chambolle-Pock
algorithm to reconstruct the in vivo MR images of human brains from highly
undersampled complex k-space data acquired on different types of MR scanners.
The proposed deep network can learn the proximal operators and parameters
within the Chambolle-Pock algorithm. All of the experiments show that the proposed
CP-net achieves more accurate MR reconstruction results, outperforming
state-of-the-art methods across various quantitative metrics. Comment: 4 pages, 5 figures, 1 table, Accepted at 2019 IEEE 41st Engineering
in Medicine and Biology Conference (EMBC 2019)
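For reference, the plain (untrained) Chambolle-Pock primal-dual iteration that such a network unrolls can be sketched on a toy sparse-recovery problem; the quadratic data term and ℓ1 regularizer here are illustrative stand-ins for the MRI formulation, where the fixed proximal steps below become learned operators.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy problem: y = A x + noise; recover sparse x by minimizing
# 0.5*||A x - y||^2 + lam*||x||_1 with the Chambolle-Pock algorithm.
m, n, lam = 40, 80, 0.05
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(m)

L = np.linalg.norm(A, 2)               # operator norm of A
tau = sigma = 0.9 / L                  # step sizes with tau*sigma*L^2 < 1
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n); xbar = x.copy(); p = np.zeros(m)
for _ in range(300):
    p = (p + sigma * (A @ xbar - y)) / (1.0 + sigma)   # dual prox of 0.5*||.-y||^2
    x_new = soft(x - tau * (A.T @ p), tau * lam)       # primal prox of lam*||.||_1
    xbar = 2 * x_new - x                               # extrapolation step
    x = x_new
print(round(float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true)), 3))
```

CP-net replaces the two proximal steps and the step sizes with learned counterparts, trained end-to-end on undersampled k-space data.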
Deep Convolutional Framelets: A General Deep Learning Framework for Inverse Problems
Recently, deep learning approaches with various network architectures have
achieved significant performance improvement over existing iterative
reconstruction methods in various imaging problems. However, it is still
unclear why these deep learning architectures work for specific inverse
problems. To address these issues, here we show that the long-searched-for
missing link is the convolution framelets for representing a signal by
convolving local and non-local bases. Convolution framelets were originally
developed to generalize the theory of low-rank Hankel matrix approaches for
inverse problems, and this paper further extends the idea so that we can obtain
a deep neural network using multilayer convolution framelets with perfect
reconstruction (PR) under the rectified linear unit (ReLU) nonlinearity. Our
analysis also shows that the popular deep network components such as residual
block, redundant filter channels, and concatenated ReLU (CReLU) do indeed help
to achieve the PR, while the pooling and unpooling layers should be augmented
with high-pass branches to meet the PR condition. Moreover, by changing the
number of filter channels and bias, we can control the shrinkage behaviors of
the neural network. This discovery leads us to propose a novel theory for deep
convolutional framelets neural network. Using numerical experiments with
various inverse problems, we demonstrated that our deep convolution framelets
network shows consistent improvement over existing deep architectures. This
discovery suggests that the success of deep learning is not from a magical
power of a black-box, but rather comes from the power of a novel signal
representation using a non-local basis combined with a data-driven local basis,
which is indeed a natural extension of classical signal processing theory. Comment: This will appear in SIAM Journal on Imaging Sciences
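The claim that pooling and unpooling need a high-pass branch to meet the perfect-reconstruction condition can be checked numerically with a one-level Haar filter bank, used here as a minimal example:

```python
import numpy as np

rng = np.random.default_rng(5)

# The low-pass branch alone (plain pooling) loses information; adding the
# high-pass branch restores perfect reconstruction (PR).
x = rng.standard_normal(16)

lo = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass / "pooling" branch
hi = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass branch

# Unpooling augmented with the high-pass branch reconstructs x exactly.
x_rec = np.empty_like(x)
x_rec[0::2] = (lo + hi) / np.sqrt(2)
x_rec[1::2] = (lo - hi) / np.sqrt(2)
print(bool(np.allclose(x_rec, x)))
```

Discarding `hi` and unpooling from `lo` alone cannot recover `x`, which mirrors the paper's argument that pooling layers must be augmented with high-pass branches to satisfy the frame condition.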