Steplength selection in gradient projection methods for box-constrained quadratic programs
The role of steplength selection strategies in gradient methods has been widely investigated in recent decades. Starting from the work of Barzilai and Borwein (1988), many efficient steplength rules have been designed, which have helped make gradient approaches an effective tool for the large-scale optimization problems arising in important real-world applications. Most of these rules were conceived for unconstrained optimization, with the aim of exploiting second-order information to achieve a fast annihilation of the gradient of the objective function. However, these rules are also used successfully within gradient projection methods for constrained optimization, although, to our knowledge, a detailed analysis of the effects of the constraints on the steplength selection is still not available. In this work we investigate how the presence of box constraints affects the spectral properties of the Barzilai–Borwein rules in quadratic programming problems. The proposed analysis suggests the introduction of new steplength selection strategies specifically designed to take into account the active constraints at each iteration. The results of a set of numerical experiments show the effectiveness of the new rules with respect to other state-of-the-art steplength selections, as well as their potential usefulness for box-constrained non-quadratic optimization problems.
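For reference, the classical Barzilai–Borwein rules discussed above are alpha_BB1 = s's / s'y and alpha_BB2 = s'y / y'y, with s = x_k - x_{k-1} and y = g_k - g_{k-1}. Below is a minimal sketch of a gradient projection loop with the BB1 rule on a box-constrained quadratic program; the problem data, safeguards, and the absence of a line search are illustrative simplifications, and the paper's constraint-aware rules are not reproduced here.

```python
import numpy as np

def gp_bb1(A, b, lo, hi, x0, max_iter=500, tol=1e-8):
    """Gradient projection with the Barzilai-Borwein BB1 steplength
    for min 0.5*x'Ax - b'x subject to lo <= x <= hi.
    Illustrative sketch: no line search or constraint-aware rule."""
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    g = A @ x - b                                 # gradient of the quadratic
    alpha = 1.0 / max(np.linalg.norm(g), 1e-12)   # initial steplength guess
    for _ in range(max_iter):
        x_new = np.clip(x - alpha * g, lo, hi)    # projected gradient step
        if np.linalg.norm(x_new - x) < tol:
            break
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        sy = s @ y
        # BB1 rule alpha = s's/s'y, safeguarded against nonpositive curvature
        alpha = (s @ s) / sy if sy > 1e-12 else 1.0
        alpha = min(max(alpha, 1e-10), 1e10)      # keep the steplength bounded
        x, g = x_new, g_new
    return x
```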
On the filtering effect of iterative regularization algorithms for linear least-squares problems
Many real-world applications are addressed through a linear least-squares problem formulation, whose solution is computed by means of an iterative approach. A large number of studies have been carried out in the optimization field to provide the fastest methods for the reconstruction of the solution, involving choices of adaptive parameters and scaling matrices. However, in the presence of an ill-conditioned model and real data, the need for a regularized solution instead of the least-squares one has shifted the point of view in favour of iterative algorithms able to combine fast execution with stable behaviour with respect to the restoration error. In this paper we analyze some classical and recent gradient approaches for the linear least-squares problem by looking at the way they filter the singular values, showing in particular the effects of scaling matrices and non-negativity constraints in recovering the correct filters of the solution.
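As a concrete instance of this filtering point of view, the classical Landweber iteration x <- x + tau*A'(b - Ax), started from zero, acts in the SVD basis of A as a spectral filter with factors phi_i = 1 - (1 - tau*sigma_i^2)^k after k iterations, a standard result for this method. A minimal sketch computing these factors follows; the test matrix and parameters are illustrative.

```python
import numpy as np

def landweber_filters(A, tau, k):
    """Filter factors of k Landweber iterations x <- x + tau*A'(b - Ax),
    started from x0 = 0: phi_i = 1 - (1 - tau*sigma_i^2)**k for each
    singular value sigma_i of A."""
    sigma = np.linalg.svd(A, compute_uv=False)
    return 1.0 - (1.0 - tau * sigma**2) ** k

# Illustrative use: for moderate k the factors attached to small singular
# values stay near zero, which is the regularizing effect discussed above.
A = np.vander(np.linspace(0, 1, 20), 10, increasing=True)   # ill-conditioned
tau = 1.0 / np.linalg.norm(A, 2) ** 2    # ensures 0 < tau*sigma_max^2 <= 1
print(landweber_filters(A, tau, k=50))
```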
A new steplength selection for scaled gradient methods with application to image deblurring
Gradient methods are frequently used in large-scale image deblurring problems since they avoid the onerous computation of the Hessian matrix of the objective function. Second-order information is typically sought through a clever choice of the steplength parameter defining the descent direction, as in the case of the well-known Barzilai and Borwein rules. In a recent paper, a strategy for the steplength selection approximating the inverse of some eigenvalues of the Hessian matrix was proposed for gradient methods applied to unconstrained minimization problems. In the quadratic case, this approach is based on a Lanczos process applied every m iterations to the matrix of the m most recent back gradients, but the idea can be extended to a general objective function. In this paper we extend this rule to the case of scaled gradient projection methods applied to non-negatively constrained minimization problems, and we test the effectiveness of the proposed strategy on image deblurring problems both in the presence and in the absence of an explicit edge-preserving regularization term.
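A minimal sketch of the Ritz-value idea in the quadratic case: the eigenvalues of the Hessian projected onto the span of the m most recent gradients (Ritz values) are computed, and their reciprocals are used as the next group of steplengths. For readability the sketch forms Q'AQ explicitly, whereas limited-memory schemes in the literature recover the same small matrix from quantities already available in the iteration without extra products with A; all names and parameters here are illustrative.

```python
import numpy as np

def ritz_steplengths(A, G):
    """Given the Hessian A of a quadratic and the matrix G whose columns
    are the m most recent gradients, return steplengths as reciprocals
    of the Ritz values of A on span(G).
    Illustrative: Q'AQ is formed explicitly for clarity."""
    Q, _ = np.linalg.qr(G)             # orthonormal basis of the gradient span
    T = Q.T @ A @ Q                    # m x m projected Hessian
    ritz = np.linalg.eigvalsh(T)       # Ritz values approximate eigenvalues of A
    ritz = ritz[ritz > 1e-12]          # guard against rank deficiency
    return np.sort(1.0 / ritz)[::-1]   # reciprocals, largest steplength first
```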
A two-phase gradient method for quadratic programming problems with a single linear constraint and bounds on the variables
We propose a gradient-based method for quadratic programming problems with a single linear constraint and bounds on the variables. Inspired by the GPCG algorithm for bound-constrained convex quadratic programming [J.J. Moré and G. Toraldo, SIAM J. Optim. 1, 1991], our approach alternates between two phases until convergence: an identification phase, which performs gradient projection iterations until either a candidate active set is identified or no reasonable progress is made, and an unconstrained minimization phase, which reduces the objective function in a suitable space defined by the identification phase by applying either the conjugate gradient method or a recently proposed spectral gradient method. However, the algorithm differs from GPCG not only because it deals with a more general class of problems, but mainly in the way it stops the minimization phase. This criterion is based on a comparison between a measure of optimality in the reduced space and a measure of bindingness of the variables that are at their bounds, obtained by extending the concept of proportioning, which was proposed by some authors for box-constrained problems. If the objective function is bounded, the algorithm converges to a stationary point thanks to a suitable application of the gradient projection method in the identification phase. For strictly convex problems, the algorithm converges to the optimal solution in a finite number of steps, even in the case of degeneracy. Extensive numerical experiments show the effectiveness of the proposed approach.
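A minimal sketch of the two-phase alternation described above, specialized for readability to the bound-constrained case (no linear constraint) and with simplified stopping tests; the proportioning-based switching criterion of the paper is replaced here by a plain active-set check and a direct reduced solve stands in for a few CG iterations, so this illustrates the structure rather than the actual algorithm.

```python
import numpy as np

def two_phase_qp(A, b, lo, hi, x0, outer=50, gp_steps=10, tol=1e-8):
    """Skeleton of a two-phase method for min 0.5*x'Ax - b'x, lo <= x <= hi.
    Phase 1: gradient projection to identify a candidate active set.
    Phase 2: minimization over the free variables (reduced space).
    Simplified sketch; switching rules in the literature are more refined."""
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    L = np.linalg.norm(A, 2)                     # Lipschitz constant of grad
    for _ in range(outer):
        # --- identification phase: projected gradient steps ---
        for _ in range(gp_steps):
            g = A @ x - b
            x = np.clip(x - g / L, lo, hi)
        active = (x <= lo) | (x >= hi)           # candidate active set
        free = ~active
        # --- minimization phase: reduced-space solve on free variables ---
        if free.any():
            Af = A[np.ix_(free, free)]
            rhs = b[free] - A[np.ix_(free, active)] @ x[active]
            xf = np.linalg.solve(Af, rhs)        # stand-in for a few CG steps
            x[free] = np.clip(xf, lo[free], hi[free])
        g = A @ x - b
        # projected-gradient optimality measure
        if np.linalg.norm(x - np.clip(x - g, lo, hi)) < tol:
            break
    return x
```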
New convergence results for the scaled gradient projection method
The aim of this paper is to deepen the convergence analysis of the scaled gradient projection (SGP) method, proposed by Bonettini et al. in a recent paper for constrained smooth optimization. The main feature of SGP is the presence of a variable scaling matrix multiplying the gradient, which may change at each iteration. In the last few years, extensive numerical experimentation has shown that SGP equipped with a suitable choice of the scaling matrix is a very effective tool for solving large-scale variational problems arising in image and signal processing. In spite of the very reliable numerical results observed, only a weak, though very general, convergence theorem was available, establishing that any limit point of the sequence generated by SGP is stationary. Here, under the sole assumptions that the objective function is convex and that a solution exists, we prove that the sequence generated by SGP converges to a minimum point, provided that the sequence of scaling matrices satisfies a simple and implementable condition. Moreover, assuming that the gradient of the objective function is Lipschitz continuous, we are also able to prove an O(1/k) convergence rate with respect to the objective function values. Finally, we present the results of numerical experiments on some relevant image restoration problems, showing that the proposed scaling matrix selection rule also performs well from the computational point of view.
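A minimal sketch of a scaled gradient projection step of the kind analyzed above: the iterate moves along y - x, where y is the projection of x - alpha*D*grad f(x) onto the feasible set, and the eigenvalues of the diagonal scaling D are clipped to an interval [1/mu_k, mu_k] with mu_k shrinking toward 1, in the spirit of the implementable bounded-scaling condition mentioned in the abstract. The non-negativity constraint set, the scaling recipe, and the Armijo parameters are illustrative; for a diagonal scaling and separable constraints the scaled projection reduces to componentwise clipping.

```python
import numpy as np

def sgp_nonneg(f, grad_f, x0, iters=200, mu0=1e5, decay=0.99):
    """Scaled gradient projection for min f(x) s.t. x >= 0 (sketch).
    Diagonal scaling with entries clipped to [1/mu_k, mu_k], mu_k -> 1,
    a simple way to satisfy a bounded-scaling condition."""
    x = np.maximum(np.asarray(x0, dtype=float), 0.0)
    mu = mu0
    for _ in range(iters):
        g = grad_f(x)
        # illustrative diagonal scaling, clipped so that the scaling
        # matrices are forced toward the identity as iterations proceed
        d = np.clip(x + 1e-6, 1.0 / mu, mu)
        y = np.maximum(x - 0.1 * d * g, 0.0)   # scaled projected step
        direction = y - x
        lam = 1.0
        # Armijo backtracking along the feasible direction
        while f(x + lam * direction) > f(x) + 1e-4 * lam * (g @ direction):
            lam *= 0.5
            if lam < 1e-12:
                break
        x = x + lam * direction
        mu = 1.0 + (mu - 1.0) * decay          # shrink the scaling bound
    return x
```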
An Inexact Successive Quadratic Approximation Method for Convex L-1 Regularized Optimization
We study a Newton-like method for the minimization of an objective function that is the sum of a smooth convex function and an l-1 regularization term. This method, which is sometimes referred to in the literature as a proximal Newton method, computes a step by minimizing a piecewise quadratic model of the objective function. In order to make this approach efficient in practice, it is imperative to perform this inner minimization inexactly. In this paper, we give inexactness conditions that guarantee global convergence and that can be used to control the local rate of convergence of the iteration. Our inexactness conditions are based on a semi-smooth function that represents a (continuous) measure of the optimality conditions of the problem, and that embodies the soft-thresholding iteration. We give careful consideration to the algorithm employed for the inner minimization, and report numerical results on two test sets originating in machine learning.
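A minimal sketch of one step of such a method for min f(x) + lam*||x||_1: the piecewise quadratic model is minimized inexactly by a few ISTA (soft-thresholding) iterations, and the semi-smooth residual x - S_lam(x - grad f(x)) serves as the continuous optimality measure mentioned in the abstract. The model Hessian handling, inner iteration count, and tolerances are illustrative, not the paper's inexactness conditions.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator S_t(z), the prox of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_newton_step(grad_f, H, x, lam, inner_iters=50):
    """One inexact proximal Newton step for min f(x) + lam*||x||_1.
    The model g'd + 0.5*d'Hd + lam*||x + d||_1 is minimized approximately
    by ISTA iterations (illustrative inner solver)."""
    g = grad_f(x)
    L = np.linalg.norm(H, 2)            # Lipschitz constant of the model grad
    z = x.copy()
    for _ in range(inner_iters):        # inexact inner minimization
        model_grad = g + H @ (z - x)
        z = soft(z - model_grad / L, lam / L)
    return z

def optimality_measure(grad_f, x, lam):
    """Semi-smooth residual ||x - S_lam(x - grad f(x))||: zero iff optimal."""
    return np.linalg.norm(x - soft(x - grad_f(x), lam))
```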