A comparison of parameter choice rules for ℓp-ℓq minimization
Images that have been contaminated by various kinds of blur and noise can be restored by the minimization of an ℓp-ℓq functional. The quality of the reconstruction depends on the choice of a regularization parameter. Several approaches to determine this parameter have been described in the literature. This work presents a numerical comparison of known approaches as well as of a new one.
Fractional graph Laplacian for image reconstruction
Image reconstruction problems, like image deblurring and computed tomography, are usually ill-posed and require regularization. A popular approach to regularization is to substitute the original problem with an optimization problem that minimizes the sum of two terms, an ℓ2 term and an ℓq term with 0 < q ≤ 1. The first penalizes the distance between the measured data and the reconstructed one, the latter imposes sparsity on some features of the computed solution.
In this work, we propose to use the fractional Laplacian of a properly constructed graph in the ℓq term to compute extremely accurate reconstructions of the desired images. A simple model with a fully automatic method, i.e., one that does not require the tuning of any parameter, is used to construct the graph, and enhanced diffusion on the graph is achieved with the use of a fractional exponent in the Laplacian operator. Since the fractional Laplacian is a global operator, i.e., its matrix representation is completely full, it cannot be formed and stored. We propose to replace it with an approximation in an appropriate Krylov subspace. We show that the algorithm is a regularization method under some reasonable assumptions. Some selected numerical examples in image deblurring and computed tomography show the performance of our proposal.
On the choice of regularization matrix for an ℓ2-ℓq minimization method for image restoration
Ill-posed problems arise in many areas of science and engineering. Their solutions, if they exist, are very sensitive to perturbations in the data. To reduce this sensitivity, the original problem may be replaced by a minimization problem with a fidelity term and a regularization term. We consider minimization problems of this kind, in which the fidelity term is the square of the ℓ2-norm of a discrepancy and the regularization term is the qth power of the ℓq-norm of the size of the computed solution measured in some manner. We are interested in the situation when
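One standard way to handle the nonsmooth ℓq term with 0 < q < 1, shown here as a generic illustration rather than the paper's specific method, is iteratively reweighted least squares: each step replaces |x_i|^q by a quadratic majorizer and solves a Tikhonov-like system. All data and the parameters q, mu, eps below are illustrative:

```python
import numpy as np

# Synthetic sparse recovery problem; data and parameters are illustrative.
rng = np.random.default_rng(4)
m, n = 60, 40
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[[5, 20]] = (2.0, -1.0)
b = A @ x_true + 1e-3 * rng.standard_normal(m)

# min_x ||A x - b||^2 + mu*||x||_q^q, 0 < q < 1, via IRLS.
q, mu, eps = 0.5, 1e-2, 1e-8
x = np.linalg.lstsq(A, b, rcond=None)[0]   # least-squares initial guess
for _ in range(50):
    # Quadratic majorizer of |x_i|^q at the current iterate:
    # |t|^q <= w_i * t^2 + const with w_i evaluated at the old x_i.
    w = (q / 2) * (x**2 + eps) ** (q / 2 - 1)
    # Each IRLS step is a weighted Tikhonov (l2-l2) solve.
    x = np.linalg.solve(A.T @ A + mu * np.diag(w), A.T @ b)
```

The weights blow up on near-zero entries, which is what drives them to zero and produces the sparsity the ℓq term is meant to enforce.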
Krylov subspace split Bregman methods
Split Bregman methods are popular iterative methods for the solution of large-scale minimization problems that arise in image restoration and basis pursuit. This paper investigates the possibility of projecting large-scale problems into a Krylov subspace of fairly small dimension and solving the minimization problem in the latter subspace by a split Bregman algorithm. We are concerned with the restoration of images that have been contaminated by blur and Gaussian or impulse noise. Computed examples illustrate that the projected split Bregman methods described are fast and give computed solutions of high quality.
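The inner split Bregman iteration can be sketched on a dense ℓ1-regularized least-squares problem, min_x 0.5*||Ax-b||^2 + lam*||x||_1 with the splitting d = x. The data and the parameters lam, gamma are illustrative, and the Krylov projection the paper adds for large problems is omitted at this toy size:

```python
import numpy as np

# Synthetic sparse problem; data and parameters are illustrative.
rng = np.random.default_rng(1)
m, n = 40, 80
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[[3, 17, 41]] = (1.0, -2.0, 1.5)
b = A @ x_true + 1e-3 * rng.standard_normal(m)

lam, gamma = 0.05, 1.0
shrink = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# x-update solves (A^T A + gamma I) x = A^T b + gamma*(d - c),
# so the matrix can be formed (or factored) once outside the loop.
M = A.T @ A + gamma * np.eye(n)
Atb = A.T @ b
d = np.zeros(n)
c = np.zeros(n)   # Bregman variable
for _ in range(300):
    x = np.linalg.solve(M, Atb + gamma * (d - c))   # quadratic subproblem
    d = shrink(x + c, lam / gamma)                  # l1 subproblem, closed form
    c = c + x - d                                   # Bregman update
```

The split moves the nonsmooth ℓ1 term into a subproblem with a closed-form shrinkage solution, leaving only linear solves, which is exactly the part a Krylov projection then makes cheap for large-scale images.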
Accelerated Sparse Recovery via Gradient Descent with Nonlinear Conjugate Gradient Momentum
This paper applies an idea of adaptive momentum for the nonlinear conjugate gradient to accelerate optimization problems in sparse recovery. Specifically, we consider two types of minimization problems: a (single) differentiable function and the sum of a non-smooth function and a differentiable function. In the first case, we adopt a fixed step size to avoid the traditional line search and establish the convergence analysis of the proposed algorithm for a quadratic problem. This acceleration is further incorporated with an operator splitting technique to deal with the non-smooth function in the second case. We use the convex ℓ1 and the nonconvex ℓ1-ℓ2 functionals as two case studies to demonstrate the efficiency of the proposed approaches over traditional methods.
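The quadratic case can be sketched with Fletcher-Reeves-style conjugate gradient momentum on f(x) = 0.5*x^T Q x - b^T x. The paper's point is a fixed step size; this self-contained sketch instead uses the closed-form exact step available for quadratics (a deliberate substitution so the toy stays provably convergent), and the momentum formula is the textbook one, not necessarily the paper's adaptive scheme:

```python
import numpy as np

# Synthetic strongly convex quadratic; data are illustrative.
rng = np.random.default_rng(2)
n = 30
B = rng.standard_normal((n, n))
Q = B.T @ B + np.eye(n)          # symmetric positive definite
b = rng.standard_normal(n)
x_star = np.linalg.solve(Q, b)   # exact minimizer, for checking only

x = np.zeros(n)
g = Q @ x - b                    # gradient of f at x
p = -g                           # initial search direction
for _ in range(150):
    gTg = g @ g
    if gTg < 1e-24:              # converged to machine precision
        break
    Qp = Q @ p
    alpha = gTg / (p @ Qp)       # exact minimizing step along p (quadratic)
    x = x + alpha * p
    g = g + alpha * Qp
    beta = (g @ g) / gTg         # Fletcher-Reeves momentum weight
    p = -g + beta * p            # gradient step plus CG momentum
```

The momentum term beta*p is what distinguishes this from plain gradient descent; replacing the exact alpha with a fixed step is the modification the paper analyzes.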
Regularization matrices for discrete ill-posed problems in several space-dimensions
Many applications in science and engineering require the solution of large linear discrete ill-posed problems that are obtained by the discretization of a Fredholm integral equation of the first kind in several space dimensions. The matrix that defines these problems is very ill-conditioned and generally numerically singular, and the right-hand side, which represents measured data, is typically contaminated by measurement error. Straightforward solution of these problems is generally not meaningful due to severe error propagation. Tikhonov regularization seeks to alleviate this difficulty by replacing the given linear discrete ill-posed problem by a penalized least-squares problem, whose solution is less sensitive to the error in the right-hand side and to roundoff errors introduced during the computations. This paper discusses the construction of penalty terms that are determined by solving a matrix nearness problem. These penalty terms allow partial transformation to standard form of Tikhonov regularization problems that stem from the discretization of integral equations on a cube in several space dimensions.