On the filtering effect of iterative regularization algorithms for linear least-squares problems
Many real-world applications are addressed through a linear least-squares
problem formulation, whose solution is calculated by means of an iterative
approach. A large number of studies has been carried out in the optimization
field to provide the fastest methods for the reconstruction of the solution,
involving choices of adaptive parameters and scaling matrices. However, in the
presence of an ill-conditioned model and real data, the need for a regularized
solution instead of the least-squares one changed the point of view in favour
of iterative algorithms able to combine fast execution with stable behaviour
with respect to the restoration error. In this paper we analyze some classical
and recent gradient approaches for the linear least-squares problem by looking
at the way they filter the singular values, showing in particular the effects
of scaling matrices and non-negative constraints in recovering the correct
filters of the solution.
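As a rough illustration of the filtering viewpoint discussed in this abstract, the sketch below (not taken from the paper) computes the classical filter factors of the plain gradient (Landweber) iteration on a synthetic ill-conditioned least-squares problem; the matrix, the imposed spectrum, and the steplength are arbitrary choices made here for the example.

```python
import numpy as np

# Filter factors of the plain gradient (Landweber) iteration
#   x_{k+1} = x_k + tau * A^T (b - A x_k)
# for min ||A x - b||^2: after k iterations the i-th SVD component of the
# solution is damped by phi_i(k) = 1 - (1 - tau * s_i^2)^k, so truncating
# the iteration acts as a spectral (regularizing) filter.

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
U, s, Vt = np.linalg.svd(A)
s = np.logspace(0, -6, s.size)           # impose an ill-conditioned spectrum
A = U @ np.diag(s) @ Vt

tau = 1.0 / s.max() ** 2                 # steplength ensuring convergence
for k in (5, 50, 500):
    phi = 1.0 - (1.0 - tau * s ** 2) ** k
    print(f"k={k:3d}: filter on largest / smallest singular value = "
          f"{phi[0]:.3f} / {phi[-1]:.2e}")
```

For small k only the components associated with the largest singular values are reconstructed, which is the regularizing behaviour the paper analyzes for more sophisticated (scaled, constrained) gradient schemes.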
A new steplength selection for scaled gradient methods with application to image deblurring
Gradient methods are frequently used in large scale image deblurring problems
since they avoid the onerous computation of the Hessian matrix of the objective
function. Second order information is typically sought by a clever choice of
the steplength parameter defining the descent direction, as in the case of the
well-known Barzilai and Borwein rules. In a recent paper, a strategy for the
steplength selection approximating the inverse of some eigenvalues of the
Hessian matrix has been proposed for gradient methods applied to unconstrained
minimization problems. In the quadratic case, this approach is based on a
Lanczos process applied every m iterations to the matrix of the m most recent
gradients, but the idea can be extended to a general objective function. In
this paper we extend this rule to the case of scaled gradient projection
methods applied to non-negatively constrained minimization problems, and we
test the effectiveness of the proposed strategy in image deblurring problems in
both the presence and the absence of an explicit edge-preserving regularization
term.
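For context, the sketch below shows the classical Barzilai and Borwein steplengths mentioned in the abstract on a strictly convex quadratic. It is only a reference implementation of the standard rules, not the Ritz-value (Lanczos-based) selection proposed in the paper, and the test problem is invented for the example.

```python
import numpy as np

# Barzilai-Borwein steplengths for the quadratic f(x) = 0.5 x^T H x - b^T x.
# BB1 uses alpha = (s^T s)/(s^T y); BB2 would use (s^T y)/(y^T y),
# with s = x_{k+1} - x_k and y = g_{k+1} - g_k.

def bb_gradient(H, b, x0, iters=200):
    x, g = x0.copy(), H @ x0 - b
    alpha = 1.0 / np.linalg.norm(H, 2)     # safe first steplength (1 / lambda_max)
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = H @ x_new - b
        s, y = x_new - x, g_new - g
        if not s @ y > 0:                  # stagnation / exact convergence guard
            break
        alpha = float(s @ s) / float(s @ y)   # BB1 rule
        x, g = x_new, g_new
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((30, 30))
H = M.T @ M + np.eye(30)                   # symmetric positive definite Hessian
b = rng.standard_normal(30)
x = bb_gradient(H, b, np.zeros(30))
print("residual norm:", np.linalg.norm(H @ x - b))
```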
New convergence results for the scaled gradient projection method
The aim of this paper is to deepen the convergence analysis of the scaled
gradient projection (SGP) method, proposed by Bonettini et al. in a recent
paper for constrained smooth optimization. The main feature of SGP is the
presence of a variable scaling matrix multiplying the gradient, which may
change at each iteration. In the last few years, an extensive numerical
experimentation showed that SGP equipped with a suitable choice of the scaling
matrix is a very effective tool for solving large scale variational problems
arising in image and signal processing. In spite of the very reliable numerical
results observed, only a weak, though very general, convergence theorem has
been provided, establishing that any limit point of the sequence generated by
SGP is stationary. Here, under the sole assumptions that the objective function
is convex and that a solution exists, we prove that the sequence generated by
SGP converges to a minimum point if the sequence of scaling matrices satisfies a
simple and implementable condition. Moreover, assuming that the gradient of the
objective function is Lipschitz continuous, we are also able to prove the
O(1/k) convergence rate with respect to the objective function values. Finally,
we present the results of numerical experiments on some relevant image
restoration problems, showing that the proposed scaling matrix selection rule
also performs well from the computational point of view.
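The following sketch illustrates the general shape of a scaled gradient projection iteration for a non-negatively constrained least-squares problem. The diagonal scaling, the Armijo backtracking, and the test problem are simplified placeholders chosen for this illustration, not the specific rules analysed in the paper.

```python
import numpy as np

# Simplified scaled gradient projection (SGP) step for
#   min 0.5 * ||A x - b||^2   subject to   x >= 0.
# Each iteration moves along a scaled negative gradient, projects onto the
# non-negative orthant, and backtracks until a sufficient-decrease test holds.

def sgp(A, b, x0, iters=200, step0=1.0):
    x = x0.copy()
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        d = np.clip(x, 1e-10, 1e10)               # illustrative diagonal scaling D_k
        f = 0.5 * np.linalg.norm(A @ x - b) ** 2
        alpha = step0
        while True:                               # Armijo backtracking
            y = np.maximum(x - alpha * d * grad, 0.0)   # projection onto x >= 0
            f_new = 0.5 * np.linalg.norm(A @ y - b) ** 2
            if f_new <= f + 1e-4 * grad @ (y - x) or alpha < 1e-12:
                break
            alpha *= 0.5
        x = y
    return x

rng = np.random.default_rng(2)
A = rng.random((40, 20))
x_true = np.abs(rng.standard_normal(20))
b = A @ x_true
x = sgp(A, b, np.ones(20))
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```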
First order algorithms in variational image processing
Variational methods in imaging are nowadays developing towards a quite
universal and flexible tool, allowing for highly successful approaches on tasks
like denoising, deblurring, inpainting, segmentation, super-resolution,
disparity, and optical flow estimation. The overall structure of such
approaches is of the form D(Ku) + αR(u) → min_u, where the functional D is a
data fidelity term also depending on some input data f and measuring the
deviation of Ku from such, and R is a regularization functional. Moreover K is
a (often linear) forward operator modeling the dependence of data on an
underlying image, and α is a positive regularization parameter. While D is
often smooth and (strictly) convex, the current practice almost exclusively
uses nonsmooth regularization functionals. The majority of successful
techniques is using nonsmooth and convex functionals like the total variation
and generalizations thereof or ℓ1-norms of coefficients arising from scalar
products with some frame system. The efficient solution of such variational
problems in imaging demands for appropriate algorithms. Taking into account the
specific structure as a sum of two very different terms to be minimized,
splitting algorithms are a quite canonical choice. Consequently this field has
revived the interest in techniques like operator splittings or augmented
Lagrangians. Here we shall provide an overview of methods currently developed
and recent results as well as some computational studies providing a comparison
of different methods and also illustrating their success in applications.
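As a concrete instance of the splitting algorithms surveyed here, the sketch below applies forward-backward splitting (ISTA) to the model D(Ku) + αR(u) with a quadratic data term and an ℓ1 regularizer, which keeps the proximal step in closed form. The operator, the sparse ground truth, and the parameter values are illustrative choices, not taken from the survey.

```python
import numpy as np

# Forward-backward splitting (ISTA) for
#   min_u  0.5 * ||K u - f||^2  +  alpha * ||u||_1,
# i.e. D(Ku) + alpha * R(u) with a smooth data term and a nonsmooth,
# convex regularizer whose proximal operator is soft thresholding.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(K, f, alpha, iters=300):
    L = np.linalg.norm(K, 2) ** 2                  # Lipschitz constant of the smooth part
    u = np.zeros(K.shape[1])
    for _ in range(iters):
        grad = K.T @ (K @ u - f)                   # forward (explicit gradient) step
        u = soft_threshold(u - grad / L, alpha / L)  # backward (proximal) step
    return u

rng = np.random.default_rng(3)
K = rng.standard_normal((60, 100))
u_true = np.zeros(100); u_true[::10] = 1.0         # sparse ground truth
f = K @ u_true + 0.01 * rng.standard_normal(60)
u = ista(K, f, alpha=0.1)
print("relative error:", np.linalg.norm(u - u_true) / np.linalg.norm(u_true))
```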
Accelerated gradient methods for the X-ray imaging of solar flares
In this paper we present new optimization strategies for the reconstruction
of X-ray images of solar flares by means of the data collected by the Reuven
Ramaty High Energy Solar Spectroscopic Imager (RHESSI). The imaging concept of
the satellite is based on rotating modulation collimator instruments, which
allow the use of both Fourier imaging approaches and reconstruction techniques
based on the straightforward inversion of the modulated count profiles.
Although in the last decade greater attention has been devoted to the former
strategies due to their very limited computational cost, here we consider the
latter model and investigate the effectiveness of different accelerated
gradient methods for the solution of the corresponding constrained minimization
problem. Moreover, regularization is introduced through either an early
stopping of the iterative procedure, or a Tikhonov term added to the
discrepancy function, by means of a discrepancy principle accounting for the
Poisson nature of the noise affecting the data.
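A minimal sketch of the kind of scheme described above follows: an accelerated (FISTA-like) projected gradient iteration for a non-negatively constrained least-squares problem, stopped early by a Morozov-type discrepancy test. A plain Gaussian residual criterion is used here as a stand-in for the Poisson-aware discrepancy principle of the paper, and the data are synthetic.

```python
import numpy as np

# Accelerated projected gradient for min 0.5 * ||A x - b||^2 with x >= 0,
# with early stopping once the residual norm drops to the noise level
# (a Morozov-type discrepancy principle).

def accelerated_pg(A, b, noise_level, max_iter=1000):
    L = np.linalg.norm(A, 2) ** 2
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for k in range(max_iter):
        x_new = np.maximum(y - A.T @ (A @ y - b) / L, 0.0)   # projected gradient step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + (t - 1.0) / t_new * (x_new - x)          # extrapolation (acceleration)
        x, t = x_new, t_new
        if np.linalg.norm(A @ x - b) <= noise_level:         # early stopping
            break
    return x, k + 1

rng = np.random.default_rng(4)
A = rng.standard_normal((80, 40))
x_true = np.abs(rng.standard_normal(40))
noise = 0.05 * rng.standard_normal(80)
b = A @ x_true + noise
x, its = accelerated_pg(A, b, np.linalg.norm(noise))
print(f"stopped after {its} iterations")
```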
Wavelet and FFT Based Image Denoising Using Non-linear Filters
We propose a stationary and discrete wavelet based image denoising scheme and an FFT-based image denoising scheme to remove Gaussian noise. In the first approach, the high subbands are added together and then soft thresholding is performed; the sum of the low subbands is filtered with either a piecewise linear (PWL), a Lagrange-interpolated PWL, or a spline-interpolated PWL filter. In the second approach, the FFT is applied to the noisy image and the low-frequency and high-frequency coefficients are separated at a specified cutoff frequency. Then the inverse transform of the low-frequency components is filtered with one of the PWL filters and the inverse transform of the high-frequency components is filtered with soft thresholding. The experimental results are compared with Liu and Liu's tensor-based diffusion model (TDM) approach.
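The sketch below mimics only the FFT branch of the scheme described above: the spectrum of a noisy image is split at a cutoff frequency and the high-frequency part is soft-thresholded. The PWL filtering of the low-frequency part is omitted, and the cutoff and threshold values are arbitrary choices for the example.

```python
import numpy as np

# FFT-based split of a noisy image into low- and high-frequency parts,
# followed by soft thresholding of the high-frequency part only.

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fft_denoise(img, cutoff=0.1, threshold=10.0):
    F = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = img.shape
    yy, xx = np.ogrid[:rows, :cols]
    r = np.hypot(yy - rows / 2, xx - cols / 2)
    low_mask = r <= cutoff * min(rows, cols)      # low-frequency disc around DC
    low = np.fft.ifft2(np.fft.ifftshift(F * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(F * ~low_mask)).real
    return low + soft(high, threshold)            # threshold only the high band

rng = np.random.default_rng(5)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 100.0
noisy = clean + rng.normal(0.0, 20.0, clean.shape)
denoised = fft_denoise(noisy)
print("MSE noisy:   ", np.mean((noisy - clean) ** 2))
print("MSE denoised:", np.mean((denoised - clean) ** 2))
```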
IR Tools: a MATLAB package of iterative regularization methods and large-scale test problems
This paper describes a new MATLAB software package of iterative regularization methods and test problems for large-scale linear inverse problems. The software package, called IR TOOLS, serves two related purposes: we provide implementations of a range of iterative solvers, including several recently proposed methods that are not available elsewhere, and we provide a set of large-scale test problems in the form of discretizations of 2D linear inverse problems. The solvers include iterative regularization methods where the regularization is due to the semi-convergence of the iterations, Tikhonov-type formulations where the regularization is explicitly formulated in the form of a regularization term, and methods that can impose bound constraints on the computed solutions. All the iterative methods are implemented in a very flexible fashion that allows the problem's coefficient matrix to be available as a (sparse) matrix, a function handle, or an object. The most basic call to all of the various iterative methods requires only this matrix and the right-hand side vector; if the method uses any special stopping criteria, regularization parameters, etc., then default values are set automatically by the code. Moreover, through the use of an optional input structure, the user can also have full control of any of the algorithm parameters. The test problems represent realistic large-scale problems found in image reconstruction and several other applications. Numerical examples illustrate the various algorithms and test problems available in this package.
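IR Tools itself is a MATLAB package; the Python sketch below is only a conceptual analog (not the IR Tools API) of the flexible interface described above, in which the same iterative solver accepts the coefficient matrix either explicitly or as a function-handle-style operator that exposes only matrix-vector products.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

# The same iterative least-squares solver can receive the operator either
# as an explicit matrix or through mat-vec / adjoint mat-vec routines only,
# which is what makes large-scale problems feasible.

rng = np.random.default_rng(6)
A_dense = rng.standard_normal((200, 100))
b = A_dense @ rng.standard_normal(100)

# 1) operator given as an explicit matrix
x1 = lsqr(A_dense, b, iter_lim=50)[0]

# 2) operator given only through matrix-vector products
A_op = LinearOperator(
    shape=A_dense.shape,
    matvec=lambda v: A_dense @ v,
    rmatvec=lambda v: A_dense.T @ v,
)
x2 = lsqr(A_op, b, iter_lim=50)[0]

print("solutions agree:", np.allclose(x1, x2, atol=1e-6))
```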
On the convergence of a linesearch based proximal-gradient method for nonconvex optimization
We consider a variable metric linesearch based proximal gradient method for
the minimization of the sum of a smooth, possibly nonconvex function plus a
convex, possibly nonsmooth term. We prove convergence of this iterative
algorithm to a critical point if the objective function satisfies the
Kurdyka-Lojasiewicz property at each point of its domain, under the assumption
that a limit point exists. The proposed method is applied to a wide collection
of image processing problems, and our numerical tests show that the algorithm
proves to be flexible, robust, and competitive when compared with recently
proposed approaches able to address the optimization problems arising in the
considered applications.
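The sketch below shows a simplified linesearch-based proximal gradient step for a smooth term plus an ℓ1 term. The variable metric of the paper is replaced by the identity, only the steplength is backtracked, and the smooth term in the toy example is even convex (a least-squares term), whereas the paper allows nonconvex smooth terms; it illustrates the flavour of the method rather than the algorithm analysed in the paper.

```python
import numpy as np

# Linesearch-based proximal gradient for min f(x) + lam * ||x||_1:
# take a gradient step on f, apply the l1 proximal operator (soft
# thresholding), and backtrack the steplength until a sufficient-decrease
# test on the smooth part is satisfied.

def prox_l1(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linesearch_prox_grad(f, grad_f, lam, x0, iters=100, alpha0=1.0):
    x = x0.copy()
    for _ in range(iters):
        grad, fx, alpha = grad_f(x), f(x), alpha0
        while True:
            z = prox_l1(x - alpha * grad, alpha * lam)
            d = z - x
            # accept when f is below its quadratic model at z
            if f(z) <= fx + grad @ d + 0.5 / alpha * d @ d or alpha < 1e-12:
                break
            alpha *= 0.5
        x = z
    return x

rng = np.random.default_rng(7)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80); x_true[::8] = 1.0           # sparse ground truth
b = A @ x_true
f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
grad_f = lambda x: A.T @ (A @ x - b)
x = linesearch_prox_grad(f, grad_f, lam=0.1, x0=np.zeros(80))
print("objective value:", f(x) + 0.1 * np.abs(x).sum())
```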