A new steplength selection for scaled gradient methods with application to image deblurring
Gradient methods are frequently used in large scale image deblurring problems
since they avoid the onerous computation of the Hessian matrix of the objective
function. Second order information is typically sought by a clever choice of
the steplength parameter defining the descent direction, as in the case of the
well-known Barzilai and Borwein rules. In a recent paper, a strategy for the
steplength selection approximating the inverse of some eigenvalues of the
Hessian matrix has been proposed for gradient methods applied to unconstrained
minimization problems. In the quadratic case, this approach is based on a
Lanczos process applied every m iterations to the matrix of the most recent m
back gradients, but the idea can be extended to a general objective function. In
this paper we extend this rule to the case of scaled gradient projection
methods applied to non-negatively constrained minimization problems, and we
test the effectiveness of the proposed strategy in image deblurring problems in
both the presence and the absence of an explicit edge-preserving regularization
term.
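The Barzilai-Borwein rules mentioned above admit a compact implementation. The sketch below is a generic unconstrained BB1 gradient method on a small quadratic; it illustrates the classical rule only, not the scaled projected variant proposed in the paper, and all names are mine:

```python
import numpy as np

def bb_gradient_descent(grad, x0, alpha0=1e-3, iters=200):
    """Unconstrained gradient descent with the Barzilai-Borwein (BB1) steplength."""
    x = x0.astype(float)
    g = grad(x)
    alpha = alpha0
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s = x_new - x            # iterate difference s_k
        y = g_new - g            # gradient difference y_k
        sy = s @ y
        # BB1 steplength s^T s / s^T y approximates the inverse of a
        # Hessian eigenvalue (a Rayleigh quotient in the quadratic case)
        alpha = (s @ s) / sy if sy > 1e-12 else alpha0
        x, g = x_new, g_new
    return x

# minimize f(x) = 0.5 x^T A x - b^T x for an ill-conditioned diagonal A
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
x_star = bb_gradient_descent(lambda x: A @ x - b, np.zeros(3))
```

For a quadratic, s_k^T y_k = s_k^T A s_k, so the BB1 steplength is a Rayleigh-quotient approximation of an inverse Hessian eigenvalue; this is the second-order information the abstract refers to.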
Nested Distributed Gradient Methods with Adaptive Quantized Communication
In this paper, we consider minimizing a sum of local convex objective
functions in a distributed setting, where communication can be costly. We
propose and analyze a class of nested distributed gradient methods with
adaptive quantized communication (NEAR-DGD+Q). We show the effect of performing
multiple quantized communication steps on the rate of convergence and on the
size of the neighborhood of convergence, and prove R-Linear convergence to the
exact solution with increasing number of consensus steps and adaptive
quantization. We test the performance of the method, as well as some practical
variants, on quadratic functions, and show the effects of multiple quantized
communication steps in terms of iterations/gradient evaluations, communication, and cost.

Comment: 9 pages, 2 figures. arXiv admin note: text overlap with arXiv:1709.0299
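The interplay of consensus steps, local gradient steps, and a shrinking quantization level can be illustrated with a toy sketch. This is my own simplification, assuming a uniform quantizer and a complete-graph mixing matrix; it is only in the spirit of NEAR-DGD+Q, not the paper's exact method:

```python
import numpy as np

def quantize(x, delta):
    """Uniform quantizer with resolution delta."""
    return delta * np.round(x / delta)

def near_dgd_sketch(grads, W, x0, alpha=0.1, iters=300):
    """Toy nested consensus + gradient iteration with quantized,
    adaptively refined communication (a simplification, not NEAR-DGD+Q)."""
    x = x0.copy()
    for k in range(iters):
        delta = max(0.5 ** k, 1e-9)      # adaptive (shrinking) quantization level
        x = W @ quantize(x, delta)       # one quantized consensus round
        x = x - alpha * np.array([g(xi) for g, xi in zip(grads, x)])  # local step
    return x

# three nodes with local objectives f_i(x) = 0.5 * (x - c_i)^2;
# the minimizer of the sum is the mean of the c_i
centers = [1.0, 2.0, 3.0]
grads = [lambda x, c=c: x - c for c in centers]
W = np.full((3, 3), 1.0 / 3.0)           # complete-graph averaging matrix
x_final = near_dgd_sketch(grads, W, np.zeros((3, 1)))
```

With exact (unquantized) communication this is plain distributed gradient descent; the shrinking delta is what lets the quantized version approach the exact solution rather than a fixed neighborhood, mirroring the adaptive-quantization result stated above.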
A two-phase gradient method for quadratic programming problems with a single linear constraint and bounds on the variables
We propose a gradient-based method for quadratic programming problems with a
single linear constraint and bounds on the variables. Inspired by the GPCG
algorithm for bound-constrained convex quadratic programming [J.J. Mor\'e and
G. Toraldo, SIAM J. Optim. 1, 1991], our approach alternates between two phases
until convergence: an identification phase, which performs gradient projection
iterations until either a candidate active set is identified or no reasonable
progress is made, and an unconstrained minimization phase, which reduces the
objective function in a suitable space defined by the identification phase, by
applying either the conjugate gradient method or a recently proposed spectral
gradient method. However, the algorithm differs from GPCG not only because it
deals with a more general class of problems, but mainly in the way it stops
the minimization phase. This is based on a comparison between a measure of
optimality in the reduced space and a measure of bindingness of the variables
that are on the bounds, defined by extending the concept of proportioning,
which was proposed by some authors for box-constrained problems. If the
objective function is bounded, the algorithm converges to a stationary point
thanks to a suitable application of the gradient projection method in the
identification phase. For strictly convex problems, the algorithm converges to
the optimal solution in a finite number of steps even in case of degeneracy.
Extensive numerical experiments show the effectiveness of the proposed
approach.

Comment: 30 pages, 17 figures
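A bare-bones version of the identification-phase iteration, gradient projection onto a box with a fixed 1/L steplength, can be sketched as follows. The paper's method adds the subspace-minimization phase, the proportioning-based stopping rule, and the single linear constraint, none of which appear in this sketch:

```python
import numpy as np

def projected_gradient_qp(A, b, lo, hi, x0, iters=500):
    """Gradient projection for min 0.5 x^T A x - b^T x  s.t.  lo <= x <= hi,
    with a fixed 1/L steplength (L = largest eigenvalue of A)."""
    L = np.linalg.norm(A, 2)
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(iters):
        g = A @ x - b                     # gradient of the quadratic
        x = np.clip(x - g / L, lo, hi)    # step, then project onto the box
    return x

# small strictly convex example on the unit box [0, 1]^2
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([10.0, -10.0])
x_box = projected_gradient_qp(A, b, 0.0, 1.0, np.zeros(2))
```

In this example both variables end up on bounds with KKT-consistent gradient signs, which is exactly the kind of candidate active set an identification phase is meant to discover before switching to subspace minimization.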
Global and Quadratic Convergence of Newton Hard-Thresholding Pursuit
Algorithms based on the hard-thresholding principle have been well studied,
with sound theoretical guarantees, in compressed sensing and, more generally,
in sparsity-constrained optimization. It is widely observed in existing
empirical studies that when a restricted Newton step is used (as the debiasing
step), hard-thresholding algorithms tend to meet their halting conditions in a
significantly smaller number of iterations and are very efficient. Hence, the
resulting Newton hard-thresholding algorithms call for stronger theoretical
guarantees than their simple hard-thresholding counterparts. This paper
provides a theoretical justification for the use of the restricted Newton step.
We build our theory and algorithm, Newton Hard-Thresholding Pursuit (NHTP), for
sparsity-constrained optimization. Our main result shows that NHTP is
quadratically convergent under the standard assumption of restricted strong
convexity and smoothness. We also establish its global convergence to a
stationary point under a weaker assumption. In the special case of compressed
sensing, NHTP effectively reduces to some of the existing hard-thresholding
algorithms with a Newton step. Consequently, our fast convergence result
explains why those algorithms perform better than their counterparts without
the Newton step. The efficiency of NHTP is demonstrated on both synthetic and
real data in compressed sensing and sparse logistic regression.
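A simplified relative of NHTP for the compressed sensing case can be sketched by combining iterative hard thresholding with an exact least-squares fit on the current support; for the quadratic objective 0.5 ||Ax - b||^2 this restricted fit is precisely a restricted Newton (debiasing) step. This is an illustrative sketch with names of my own choosing, not the authors' algorithm:

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero out the rest."""
    out = x.copy()
    out[np.argsort(np.abs(x))[:-s]] = 0.0
    return out

def htp_newton(A, b, s, iters=50):
    """Hard-thresholding pursuit with an exact least-squares debiasing
    step on the current support, i.e. a restricted Newton step for
    min 0.5 ||Ax - b||^2  s.t.  ||x||_0 <= s."""
    n = A.shape[1]
    x = np.zeros(n)
    mu = 1.0 / np.linalg.norm(A, 2) ** 2   # safe gradient steplength
    for _ in range(iters):
        g = A.T @ (A @ x - b)              # gradient of the least-squares loss
        x = hard_threshold(x - mu * g, s)  # gradient step + hard thresholding
        S = np.flatnonzero(x)              # candidate support
        x[S] = np.linalg.lstsq(A[:, S], b, rcond=None)[0]  # restricted Newton
    return x

# exact recovery of a 3-sparse signal from 40 Gaussian measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 20, 70]] = [3.0, -2.0, 1.5]
x_rec = htp_newton(A, A @ x_true, 3)
```

Once the correct support is identified, the restricted Newton step lands on the exact minimizer in one shot, which is the fast-termination behavior the abstract describes for Newton-augmented hard-thresholding methods.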