Multilevel Approach For Signal Restoration Problems With Toeplitz Matrices
We present a multilevel method for discrete ill-posed problems arising from the discretization of Fredholm integral equations of the first kind. In this method, we use the Haar wavelet transform to define restriction and prolongation operators within a multigrid-type iteration. The choice of the Haar wavelet operator has the advantage of preserving matrix structure, such as Toeplitz, between grids, which can be exploited to obtain faster solvers on each level where an edge-preserving Tikhonov regularization is applied. Finally, we present results that indicate the promise of this approach for restoration of signals and images with edges.
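The structure-preservation claim can be checked directly: with a Haar low-pass restriction operator R, the coarse-grid operator R A R^T of a Toeplitz matrix is again Toeplitz. A minimal numerical sketch, with an illustrative size and kernel (not taken from the paper):

```python
import numpy as np
from scipy.linalg import toeplitz

# One level of Haar-wavelet restriction applied to a Toeplitz matrix A.
# The Haar low-pass restriction R averages neighboring pairs of fine-grid
# points, so the coarse-grid operator R A R^T is again Toeplitz and the
# structure can be exploited on every level. Size/kernel are illustrative.
n = 8
A = toeplitz(2.0 ** -np.arange(n))     # symmetric Toeplitz test matrix

R = np.zeros((n // 2, n))
for i in range(n // 2):
    R[i, 2 * i] = R[i, 2 * i + 1] = 1.0 / np.sqrt(2.0)

A_coarse = R @ A @ R.T                 # coarse-grid operator

# Verify A_coarse has constant diagonals, i.e., it is Toeplitz.
for k in range(-(n // 2 - 1), n // 2):
    d = np.diag(A_coarse, k)
    assert np.allclose(d, d[0])
print(A_coarse.shape)
```

The same check fails for a generic (non-Haar) restriction operator, which is why the Haar choice is what keeps the fast Toeplitz solvers available on every level.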
A Tensor-Based Dictionary Learning Approach to Tomographic Image Reconstruction
We consider tomographic reconstruction using priors in the form of a
dictionary learned from training images. The reconstruction has two stages:
first we construct a tensor dictionary prior from our training data, and then
we pose the reconstruction problem in terms of recovering the expansion
coefficients in that dictionary. Our approach differs from past approaches in
that a) we use a third-order tensor representation for our images and b) we
recast the reconstruction problem using the tensor formulation. The dictionary
learning problem is presented as a non-negative tensor factorization problem
with sparsity constraints. The reconstruction problem is formulated in a convex
optimization framework by looking for a solution with a sparse representation
in the tensor dictionary. Numerical results show that our tensor formulation
leads to very sparse representations of both the training images and the
reconstructions due to the ability of representing repeated features compactly
in the dictionary.
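The third-order tensor formulation rests on the t-product of Kilmer and Martin: tensors are multiplied slice-by-slice in the Fourier domain along the third mode. A minimal sketch, with hypothetical dictionary and coefficient sizes:

```python
import numpy as np

# The t-product of two third-order tensors (Kilmer-Martin): take the FFT
# along the third mode, multiply the frontal slices pairwise, and invert
# the FFT. Dictionary and coefficient sizes below are hypothetical.
def t_product(A, B):
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)   # slice-wise matrix products
    return np.real(np.fft.ifft(Ch, axis=2))

rng = np.random.default_rng(0)
D = rng.random((16, 5, 4))    # tensor dictionary (5 atoms, illustrative)
X = rng.random((5, 3, 4))     # expansion coefficients for 3 signals
Y = t_product(D, X)           # synthesized data, shape (16, 3, 4)
print(Y.shape)
```

Posing the reconstruction as recovery of a sparse X in this product is what lets repeated features be represented compactly by a few tensor atoms.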
"Plug-and-Play" Edge-Preserving Regularization
In many inverse problems it is essential to use regularization methods that
preserve edges in the reconstructions, and many reconstruction models have been
developed for this task, such as the Total Variation (TV) approach. The
associated algorithms are complex and require a good knowledge of large-scale
optimization algorithms, and they involve certain tolerances that the user must
choose. We present a simpler approach that relies only on standard
computational building blocks in matrix computations, such as orthogonal
transformations, preconditioned iterative solvers, Kronecker products, and the
discrete cosine transform -- hence the term "plug-and-play." We do not attempt
to improve on TV reconstructions, but rather provide an easy-to-use approach to
computing reconstructions with similar properties.
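As one illustration of the plug-and-play philosophy, consider an operator with Kronecker structure A = kron(A1, A2): the Tikhonov-regularized solution reduces to an elementwise spectral filter built from the SVDs of the small factors, using nothing beyond standard matrix computations. This is a sketch under an assumed separable model, not the paper's full method:

```python
import numpy as np

# Plug-and-play sketch for a separable operator A = kron(A1, A2): the
# Tikhonov solution of  min ||A x - b||^2 + lam ||x||^2  is an elementwise
# spectral filter in the SVD bases of the small factors A1 and A2.
# The factors and image below are illustrative assumptions.
def kron_tikhonov(A1, A2, B, lam):
    U1, s1, V1t = np.linalg.svd(A1)
    U2, s2, V2t = np.linalg.svd(A2)
    Bt = U2.T @ B @ U1                     # transform the data
    S = np.outer(s2, s1)                   # singular values of kron(A1, A2)
    Phi = S / (S ** 2 + lam)               # Tikhonov filter factors
    return V2t.T @ (Phi * Bt) @ V1t        # filter and back-transform

rng = np.random.default_rng(1)
A1 = rng.random((6, 6)) + 6 * np.eye(6)    # well-conditioned test factors
A2 = rng.random((6, 6)) + 6 * np.eye(6)
X_true = rng.random((6, 6))
B = A2 @ X_true @ A1.T                     # b = kron(A1, A2) @ vec(X_true)
X = kron_tikhonov(A1, A2, B, lam=1e-10)    # tiny lam: near-exact inverse
print(np.allclose(X, X_true, atol=1e-6))
```

Swapping the SVD bases for the discrete cosine transform, or the direct filter for a preconditioned iterative solver, stays within the same building-block vocabulary.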
Cauchy-like Preconditioners for 2-Dimensional Ill-Posed Problems
Ill-conditioned matrices with block Toeplitz, Toeplitz block (BTTB)
structure arise from the discretization of certain ill-posed problems in
signal and image processing. We use a preconditioned conjugate gradient
algorithm to compute a regularized solution to this linear system given
noisy data. Our preconditioner is a Cauchy-like block diagonal
approximation to an orthogonal transformation of the BTTB matrix.
We show the preconditioner has desirable properties when the kernel of the
ill-posed problem is smooth: the largest singular values of the
preconditioned matrix are clustered around one, the smallest singular
values remain small, and the subspaces corresponding to the largest and
smallest singular values, respectively, remain unmixed. For a system
involving n^2 variables, the preconditioned algorithm costs only
O(n^2 log n) operations per iteration. We demonstrate the
effectiveness of the preconditioner on three examples.
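The low per-iteration cost stems from the fact that a BTTB matrix never needs to be formed: its action on an image is a two-dimensional convolution with the generating kernel, computable by the FFT. A sketch with illustrative sizes (the Cauchy-like preconditioner itself is omitted):

```python
import numpy as np
from scipy.signal import fftconvolve

# A BTTB matrix-vector product as an FFT-based 2-D convolution: for an
# m-by-m image this costs O(m^2 log m) operations, so an iterative solver
# never assembles the (m^2 x m^2) matrix. Sizes are illustrative.
def bttb_matvec(K, x):
    # K[(p-s)+m-1, (q-t)+m-1] holds the BTTB entry t_{p-s, q-t}.
    return fftconvolve(K, x, mode='valid')

m = 5
rng = np.random.default_rng(0)
K = rng.random((2 * m - 1, 2 * m - 1))   # generating kernel
x = rng.random((m, m))                   # image, viewed as the vector vec(x)
y = bttb_matvec(K, x)

# Check against the explicitly assembled BTTB matrix.
A = np.zeros((m * m, m * m))
for p in range(m):
    for q in range(m):
        for s in range(m):
            for t in range(m):
                A[p * m + q, s * m + t] = K[p - s + m - 1, q - t + m - 1]
print(np.allclose(A @ x.ravel(), y.ravel()))
```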
Symmetric Cauchy-like Preconditioners for the Regularized Solution of 1-D Ill-Posed Problems
The discretization of integral equations can lead to systems involving
symmetric Toeplitz matrices.
We describe a preconditioning technique for the regularized solution of
the related discrete ill-posed problem. We use discrete sine transforms
to transform the system to one involving a Cauchy-like matrix. Based on
the approach of Kilmer and O'Leary, the
preconditioner is a symmetric, low-rank approximation to the
Cauchy-like matrix augmented by the identity.
We show that if the kernel of the integral equation is smooth,
then the preconditioned matrix has two desirable properties: the
largest-magnitude eigenvalues are clustered around and bounded
below by one, and the small-magnitude eigenvalues remain small. We also
show that the initialization cost is lower than that of the
preconditioner introduced by Kilmer and O'Leary.
Further, we describe a method for applying the preconditioner in
O(n log n) operations when n is a power of 2, and a variant of the
MINRES algorithm for solving the symmetrically preconditioned
problem. The preconditioned method is tested on two examples.