    Nonmonotone spectral projected gradient methods on convex sets

    Nonmonotone projected gradient techniques are considered for the minimization of differentiable functions on closed convex sets. The classical projected gradient schemes are extended to include a nonmonotone steplength strategy that is based on the Grippo-Lampariello-Lucidi nonmonotone line search. In particular, the nonmonotone strategy is combined with the spectral gradient choice of steplength to accelerate the convergence process. In addition to the classical projected gradient nonlinear path, the feasible spectral projected gradient is used as a search direction to avoid additional trial projections during the one-dimensional search process. Convergence properties and extensive numerical results are presented.
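
    As a concrete illustration, here is a minimal Python sketch of one such spectral projected gradient loop, combining the Grippo-Lampariello-Lucidi nonmonotone acceptance test with the spectral (Barzilai-Borwein) steplength; the objective `f`, gradient `grad`, and projector `proj` are caller-supplied placeholders, and the halving backtracking rule is a deliberate simplification of the one-dimensional search.

```python
# A minimal SPG sketch, assuming the caller supplies f, grad, and proj
# (the Euclidean projection onto the feasible convex set).
import numpy as np

def spg(f, grad, proj, x0, max_iter=1000, M=10, gamma=1e-4, tol=1e-8):
    x = proj(np.asarray(x0, dtype=float))
    g = grad(x)
    lam = 1.0                               # initial spectral steplength
    f_hist = [f(x)]                         # recent values for the GLL test
    for _ in range(max_iter):
        d = proj(x - lam * g) - x           # feasible spectral PG direction
        if np.linalg.norm(d, np.inf) < tol:
            break
        f_ref = max(f_hist[-M:])            # nonmonotone reference value
        alpha, gtd = 1.0, g @ d
        while f(x + alpha * d) > f_ref + gamma * alpha * gtd:
            alpha *= 0.5                    # simplified backtracking
        s = alpha * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sty = s @ y
        lam = (s @ s) / sty if sty > 0 else 1.0  # safeguarded BB steplength
        x, g = x_new, g_new
        f_hist.append(f(x))
    return x
```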

    Implementation of an Optimal First-Order Method for Strongly Convex Total Variation Regularization

    We present a practical implementation of an optimal first-order method, due to Nesterov, for large-scale total variation regularization in tomographic reconstruction, image deblurring, etc. The algorithm applies to μ-strongly convex objective functions with L-Lipschitz continuous gradient. In Nesterov's framework both μ and L are assumed known -- an assumption that is seldom satisfied in practice. We propose to incorporate mechanisms that estimate locally sufficient μ and L during the iterations. The mechanisms also allow for the application to non-strongly convex functions. We discuss the iteration complexity of several first-order methods, including the proposed algorithm, and we use a 3D tomography problem to compare the performance of these methods. The results show that for ill-conditioned problems solved to high accuracy, the proposed method significantly outperforms state-of-the-art first-order methods, as also suggested by theoretical results.
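
    To sketch the idea in Python: the accelerated iteration below uses a crude doubling test to grow the estimate of L whenever the standard sufficient-decrease inequality fails, and a fixed user-supplied μ; both are illustrative stand-ins for the paper's more refined local estimation mechanisms.

```python
# A minimal sketch of Nesterov's accelerated method for a mu-strongly
# convex f with L-Lipschitz gradient; f, grad, and mu are assumptions
# supplied by the caller, and the L estimate is a simple backtracking.
import numpy as np

def accelerated_gradient(f, grad, x0, mu, L0=1.0, max_iter=500, tol=1e-10):
    x = np.asarray(x0, dtype=float)
    y, L = x.copy(), L0
    for _ in range(max_iter):
        g = grad(y)
        if np.linalg.norm(g) < tol:
            break
        # Grow L until the descent lemma holds at y (local estimate of L).
        while f(y - g / L) > f(y) - (g @ g) / (2 * L):
            L *= 2.0
        x_new = y - g / L                    # gradient step from y
        q = np.sqrt(mu / L)
        beta = (1 - q) / (1 + q)             # momentum for strong convexity
        y = x_new + beta * (x_new - x)       # extrapolation step
        x = x_new
        L = max(0.9 * L, mu)                 # let the estimate shrink again
    return x
```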

    Nonmonotone Barzilai-Borwein Gradient Algorithm for ℓ₁-Regularized Nonsmooth Minimization in Compressive Sensing

    This paper is devoted to minimizing the sum of a smooth function and a nonsmooth ℓ₁-regularized term. This problem includes as special cases the ℓ₁-regularized convex minimization problems arising in signal processing, compressive sensing, machine learning, data mining, etc. However, the non-differentiability of the ℓ₁-norm makes these problems more challenging, especially at the large scales encountered in many practical applications. This paper proposes, analyzes, and tests a Barzilai-Borwein gradient algorithm. At each iteration, the generated search direction enjoys the descent property and can be easily derived by minimizing a local approximate quadratic model while simultaneously exploiting the favorable structure of the ℓ₁-norm. Moreover, a nonmonotone line search technique is incorporated to find a suitable stepsize along this direction. The algorithm is easy to implement, requiring only the value of the objective function and the gradient of the smooth term at each iteration. Under some conditions, the proposed algorithm is shown to be globally convergent. Limited experiments using some nonconvex unconstrained problems from the CUTEr library with an additive ℓ₁-regularization term illustrate that the proposed algorithm performs quite well. Extensive experiments on ℓ₁-regularized least squares problems in compressive sensing verify that our algorithm compares favorably with several state-of-the-art algorithms specifically designed for such problems in recent years.
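
    A minimal sketch of one such iteration follows; names are illustrative. The direction solves the local model min_d g·d + ||d||²/(2λ) + τ||x+d||₁ in closed form via soft-thresholding, exploiting the separable structure of the ℓ₁-norm, and a nonmonotone backtracking search then selects the stepsize.

```python
# A sketch of a nonmonotone BB gradient iteration for
# F(x) = f(x) + tau * ||x||_1; f, grad, and tau are caller-supplied.
import numpy as np

def soft(z, t):
    """Soft-thresholding: the proximal operator of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def bb_l1(f, grad, x0, tau, max_iter=500, M=5, gamma=1e-4, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    lam = 1.0                                  # BB scalar model parameter
    F = lambda z: f(z) + tau * np.abs(z).sum()
    F_hist = [F(x)]
    for _ in range(max_iter):
        d = soft(x - lam * g, lam * tau) - x   # closed-form model minimizer
        if np.linalg.norm(d) < tol:
            break
        # Descent measure of the nonsmooth objective along d.
        delta = g @ d + tau * (np.abs(x + d).sum() - np.abs(x).sum())
        alpha, F_ref = 1.0, max(F_hist[-M:])
        while F(x + alpha * d) > F_ref + gamma * alpha * delta:
            alpha *= 0.5                       # nonmonotone backtracking
        s = alpha * d
        g_new = grad(x + s)
        y = g_new - g
        sty = s @ y
        lam = (s @ s) / sty if sty > 1e-12 else 1.0  # BB steplength update
        x, g = x + s, g_new
        F_hist.append(F(x))
    return x
```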

    Quasi-Newton-Based Preconditioning and Damped Quasi-Newton Schemes for Nonlinear Conjugate Gradient Methods

    In this paper, we deal with matrix-free preconditioners for Nonlinear Conjugate Gradient (NCG) methods. In particular, we review proposals based on quasi-Newton updates that satisfy either the secant equation or a secant-like equation at some of the previous iterates. Conditions are given proving that, in some sense, the proposed preconditioners also approximate the inverse of the Hessian matrix. In particular, the structure of the preconditioners depends on both low-rank updates and some specific parameters. The low-rank updates are obtained as a by-product of NCG iterations. Moreover, we consider the possibility of embedding damped techniques within a class of preconditioners based on quasi-Newton updates. Damped methods have proved effective in enhancing the performance of quasi-Newton updates in those cases where the Wolfe linesearch conditions are hardly fulfilled. The purpose is to extend the idea behind damped methods to the improvement of NCG schemes as well, following a novel line of research in the literature. The results, which summarize extensive numerical experience with large-scale CUTEst problems, are reported, showing that these approaches can considerably improve the performance of NCG methods.
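
    To illustrate the flavor of such preconditioning, the sketch below runs a nonlinear CG iteration whose preconditioner is a single memoryless BFGS update built from the most recent step pair (s, y); this is a drastic simplification of the reviewed quasi-Newton-based and damped preconditioners, intended only to show where the preconditioner enters the NCG direction.

```python
# A simplified preconditioned NCG sketch; f and grad are caller-supplied,
# and the memoryless BFGS preconditioner is an illustrative stand-in.
import numpy as np

def apply_memoryless_bfgs(v, s, y):
    """Apply H = (I - r*s*y^T)(I - r*y*s^T) + r*s*s^T, r = 1/(s^T y), to v."""
    r = 1.0 / (s @ y)
    q = v - r * (s @ v) * y
    return q - r * (y @ q) * s + r * (s @ v) * s

def precond_ncg(f, grad, x0, max_iter=500, tol=1e-8, c1=1e-4):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                   # safeguard: restart if not descent
            d = -g
        alpha = 1.0
        while f(x + alpha * d) > f(x) + c1 * alpha * (g @ d):
            alpha *= 0.5                 # Armijo backtracking (no Wolfe)
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-12:                # curvature condition holds
            pg_new = apply_memoryless_bfgs(g_new, s, y)
            pg = apply_memoryless_bfgs(g, s, y)
            beta = max(0.0, (g_new @ (pg_new - pg)) / (g @ pg))  # PR+-type
            d = -pg_new + beta * d       # preconditioned NCG direction
        else:
            d = -g_new                   # steepest-descent restart
        x, g = x_new, g_new
    return x
```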

    Convergence properties of the Barzilai and Borwein gradient method

    In a recent paper, Barzilai and Borwein presented a new choice of steplength for the gradient method. Their choice does not guarantee descent in the objective function, yet it greatly speeds up the convergence of the method. We derive an interesting relationship between any gradient method and the shifted power method. This relationship allows us to establish the convergence of the Barzilai and Borwein method when applied to the problem of minimizing any strictly convex quadratic function (Barzilai and Borwein considered only 2-dimensional problems). Our point of view also allows us to explain the remarkable improvement obtained by using this new choice of steplength. For the two-eigenvalue case we present some very interesting convergence rate results. We show that our Q- and R-rate of convergence analysis is sharp, and we compare it with the Barzilai and Borwein analysis. We derive the preconditioned Barzilai and Borwein method and present preliminary numerical results indicating that it is an effective method, as compared to the preconditioned conjugate gradient method, for the numerical solution of some special symmetric positive definite linear systems that arise in the numerical solution of partial differential equations.
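
    For the quadratic setting analyzed here, the method is only a few lines; the sketch below applies the BB steplength s·s / s·y to q(x) = x·Ax/2 - b·x, where for a quadratic y = As, so no line search is needed.

```python
# A minimal BB gradient sketch on a strictly convex quadratic; A (SPD)
# and b are illustrative inputs.
import numpy as np

def barzilai_borwein(A, b, x0, max_iter=1000, tol=1e-10):
    x = np.asarray(x0, dtype=float)
    g = A @ x - b                          # gradient of 0.5 x'Ax - b'x
    alpha = 1.0 / np.linalg.norm(g)        # any positive starting step
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g              # plain gradient step, no descent test
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g        # here y = A s exactly
        alpha = (s @ s) / (s @ y)          # BB steplength for the next step
        x, g = x_new, g_new
    return x
```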

    Separable Cubic Modeling And A Trust-region Strategy For Unconstrained Minimization With Impact In Global Optimization

    A separable cubic model, for smooth unconstrained minimization, is proposed and evaluated. The cubic model uses some novel secant-type choices for the parameters in the cubic terms. A suitable hard-case-free trust-region strategy that takes advantage of the separable cubic modeling is also presented. For the convergence analysis of our specialized trust-region strategy we present, as a general framework, a model q-order trust-region algorithm with variable metric, and we prove its convergence to q-stationary points. Some preliminary numerical examples are also presented to illustrate the tendency of the specialized trust-region algorithm, when combined with our cubic modeling, to escape from local minimizers.
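
    The separable structure makes the key subproblem one-dimensional: after diagonalizing a Hessian approximation, each coordinate of the step minimizes a scalar function of the form g·w + (λ/2)w² + (σ/6)|w|³, which has closed-form stationary points on each sign branch. The Python sketch below implements this scalar solver and a full separable-cubic step; the fixed σ is an illustrative stand-in for the paper's secant-type choices, and the hard-case-free trust-region safeguards are omitted.

```python
# A sketch of a separable cubic step: minimize the model coordinatewise
# in the eigenbasis of a Hessian approximation B. sigma > 0 is assumed.
import numpy as np

def min_scalar_cubic(g, lam, sigma):
    """Global minimizer of phi(w) = g*w + 0.5*lam*w**2 + (sigma/6)*|w|**3."""
    cands = [0.0]
    disc = lam * lam - 2.0 * sigma * g           # branch w >= 0
    if disc >= 0.0:
        cands += [r for r in ((-lam + np.sqrt(disc)) / sigma,
                              (-lam - np.sqrt(disc)) / sigma) if r >= 0.0]
    disc = lam * lam + 2.0 * sigma * g           # branch w <= 0
    if disc >= 0.0:
        cands += [r for r in ((lam + np.sqrt(disc)) / sigma,
                              (lam - np.sqrt(disc)) / sigma) if r <= 0.0]
    phi = lambda w: g * w + 0.5 * lam * w * w + sigma * abs(w) ** 3 / 6.0
    return min(cands, key=phi)

def separable_cubic_step(g, B, sigma=1.0):
    lam, Q = np.linalg.eigh(B)                   # B = Q diag(lam) Q^T
    ghat = Q.T @ g                               # gradient in the eigenbasis
    w = np.array([min_scalar_cubic(gi, li, sigma)
                  for gi, li in zip(ghat, lam)])
    return Q @ w                                 # step in original coordinates
```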

    Large-scale unconstrained optimization using separable cubic modeling and matrix-free subspace minimization

    We present a new algorithm for solving large-scale unconstrained optimization problems that uses cubic models, matrix-free subspace minimization, and secant-type parameters for defining the cubic terms. We also propose and analyze a specialized trust-region strategy to minimize the cubic model on a properly chosen low-dimensional subspace, which is built at each iteration using the Lanczos process. For the convergence analysis we present, as a general framework, a model trust-region subspace algorithm with variable metric and we establish asymptotic as well as complexity convergence results. Preliminary numerical results, on some test functions and also on the well-known disk packing problem, are presented to illustrate the performance of the proposed scheme when solving large-scale problems.
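
    The matrix-free subspace construction can be sketched as follows: k steps of the Lanczos process, driven only by Hessian-vector products, produce an orthonormal basis Q and a small tridiagonal T = QᵀBQ, on which a separable cubic model is minimized cheaply (reusing the scalar cubic solver sketched after the previous abstract). The fixed σ and the absence of the trust-region safeguards are, again, simplifications.

```python
# A sketch of Lanczos-based subspace minimization; hvp(v) must return
# B @ v for the (approximate) Hessian B, and g is the current gradient.
import numpy as np

def lanczos(hvp, g, k):
    """k-step Lanczos: orthonormal Q (n x k) and tridiagonal T = Q^T B Q."""
    n = g.shape[0]
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k)
    q, q_prev = g / np.linalg.norm(g), np.zeros(n)
    for j in range(k):
        Q[:, j] = q
        v = hvp(q)
        alpha[j] = q @ v
        v = v - alpha[j] * q - (beta[j - 1] * q_prev if j > 0 else 0.0)
        beta[j] = np.linalg.norm(v)
        if beta[j] < 1e-12:                      # invariant subspace found
            Q, alpha, beta = Q[:, :j + 1], alpha[:j + 1], beta[:j + 1]
            break
        q_prev, q = q, v / beta[j]
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return Q, T

def subspace_cubic_step(hvp, g, k=10, sigma=1.0):
    Q, T = lanczos(hvp, g, k)
    ghat = np.zeros(T.shape[0])
    ghat[0] = np.linalg.norm(g)                  # Q^T g = ||g|| e_1 here
    lam, U = np.linalg.eigh(T)                   # diagonalize the small T
    gtil = U.T @ ghat
    w = np.array([min_scalar_cubic(gi, li, sigma)   # scalar solver from above
                  for gi, li in zip(gtil, lam)])
    return Q @ (U @ w)                           # step back in R^n
```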