    New convergence results for the scaled gradient projection method

    The aim of this paper is to deepen the convergence analysis of the scaled gradient projection (SGP) method, proposed by Bonettini et al. in a recent paper for constrained smooth optimization. The main feature of SGP is the presence of a variable scaling matrix multiplying the gradient, which may change at each iteration. In recent years, extensive numerical experimentation has shown that SGP, equipped with a suitable choice of the scaling matrix, is a very effective tool for solving large-scale variational problems arising in image and signal processing. Despite the very reliable numerical results observed, only a weak, though very general, convergence theorem had been provided, establishing that any limit point of the sequence generated by SGP is stationary. Here, under the sole assumptions that the objective function is convex and that a solution exists, we prove that the sequence generated by SGP converges to a minimum point, provided the sequence of scaling matrices satisfies a simple and implementable condition. Moreover, assuming that the gradient of the objective function is Lipschitz continuous, we also prove an O(1/k) convergence rate with respect to the objective function values. Finally, we present the results of numerical experiments on relevant image restoration problems, showing that the proposed scaling matrix selection rule also performs well from the computational point of view.
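
    For intuition, here is a minimal sketch of an SGP-style iteration for a nonnegativity-constrained problem, assuming a diagonal scaling matrix, in which case the scaled projection onto the nonnegative orthant reduces to elementwise clipping. The function names, defaults, and the demo problem are illustrative, not the paper's implementation:

```python
import numpy as np

def sgp(f, grad, project, x0, alpha=1e-2, beta=0.5, sigma=1e-4,
        scaling=None, max_iter=500, tol=1e-8):
    """Minimal SGP-style iteration (illustrative names and defaults).

    `project` is the Euclidean projection onto the feasible set; for a
    nonnegativity constraint and a *diagonal* scaling matrix, the scaled
    projection used by SGP coincides with this elementwise clipping.
    """
    x = project(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        g = grad(x)
        # diagonal scaling matrix D_k; the paper's condition keeps its
        # eigenvalues in [1/L_k, L_k] with L_k -> 1 at a suitable rate
        D = scaling(x) if scaling is not None else np.ones_like(x)
        y = project(x - alpha * D * g)   # scaled gradient projection
        d = y - x                        # feasible descent direction
        if np.linalg.norm(d) < tol:
            break
        lam, fx = 1.0, f(x)
        # Armijo backtracking along d enforces sufficient decrease
        while f(x + lam * d) > fx + sigma * lam * g.dot(d):
            lam *= beta
        x = x + lam * d
    return x

# Toy usage: nonnegative least squares, min 0.5*||Ax - b||^2 s.t. x >= 0
rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
x_hat = sgp(lambda x: 0.5 * np.sum((A @ x - b) ** 2),
            lambda x: A.T @ (A @ x - b),
            lambda x: np.maximum(x, 0.0),
            np.ones(10))
```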

    Hessian barrier algorithms for linearly constrained optimization problems

    In this paper, we propose an interior-point method for linearly constrained optimization problems (possibly nonconvex). The method, which we call the Hessian barrier algorithm (HBA), combines a forward Euler discretization of Hessian Riemannian gradient flows with an Armijo backtracking step-size policy. In this way, HBA can be seen as an alternative to mirror descent (MD), and it contains as special cases the affine scaling algorithm, regularized Newton processes, and several other iterative solution methods. Our main result is that, modulo a non-degeneracy condition, the algorithm converges to the problem's set of critical points; hence, in the convex case, the algorithm converges globally to the problem's minimum set. In the case of linearly constrained quadratic programs (not necessarily convex), we also show that the method's convergence rate is $\mathcal{O}(1/k^\rho)$ for some $\rho \in (0,1]$ that depends only on the choice of kernel function (i.e., not on the problem's primitives). These theoretical results are validated by numerical experiments on standard nonconvex test functions and large-scale traffic assignment problems.
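
    As a concrete illustration, the following sketch instantiates a Hessian-barrier-style step on the positive orthant with the log-barrier kernel h(x) = -sum_i log x_i, whose Hessian is diag(1/x_i^2); the resulting direction -x^2 * grad f(x) is the affine scaling direction the abstract mentions as a special case. General linear constraints and other kernel choices are omitted, and all names and defaults are assumptions:

```python
import numpy as np

def hba(f, grad, x0, alpha0=1.0, beta=0.5, sigma=1e-4,
        max_iter=500, tol=1e-10):
    """Sketch of a Hessian-barrier-style step on the positive orthant.

    Uses the log-barrier kernel h(x) = -sum(log x), whose Hessian is
    diag(1/x_i^2). The step size is backtracked both to keep the iterate
    strictly feasible and to satisfy an Armijo decrease condition.
    Illustrative only; not the authors' implementation.
    """
    x = np.asarray(x0, dtype=float)
    assert np.all(x > 0), "start strictly inside the feasible region"
    for _ in range(max_iter):
        g = grad(x)
        d = -(x ** 2) * g          # -H(x)^{-1} grad f(x) for the log kernel
        if np.linalg.norm(d) < tol:
            break
        alpha, fx = alpha0, f(x)
        # shrink until the trial point is strictly feasible ...
        while np.any(x + alpha * d <= 0):
            alpha *= beta
        # ... and satisfies the Armijo sufficient-decrease condition
        while f(x + alpha * d) > fx + sigma * alpha * g.dot(d):
            alpha *= beta
        x = x + alpha * d
    return x
```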

    Variable metric inexact line-search based methods for nonsmooth optimization

    We develop a new proximal-gradient method for minimizing the sum of a differentiable, possibly nonconvex, function and a convex, possibly nondifferentiable, function. The key features of the proposed method are the definition of a suitable descent direction, based on the proximal operator associated with the convex part of the objective function, and an Armijo-like rule to determine the step size along this direction, ensuring sufficient decrease of the objective function. In this framework, we especially address the possibility of adopting a metric which may change at each iteration, together with an inexact computation of the proximal point defining the descent direction. For the more general nonconvex case, we prove that all limit points of the sequence of iterates are stationary, while for convex objective functions we prove convergence of the whole sequence to a minimizer, under the assumption that a minimizer exists. In the latter case, assuming also that the gradient of the smooth part of the objective function is Lipschitz continuous, we give a convergence rate estimate, showing O(1/k) complexity with respect to the function values. We also discuss verifiable sufficient conditions for the inexact proximal point, and we present the results of numerical experiments on a convex total-variation-based image restoration problem, showing that the proposed approach is competitive with another state-of-the-art method.
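
    The following toy sketch shows the shape of such an iteration for F = f + tau*||.||_1 with a diagonal variable metric, in which case the scaled proximal point is an exact elementwise soft-threshold and the inexactness machinery can be skipped; the Armijo-like acceptance test along d = y - x follows the rule described above. Names, defaults, and the acceptance constant are assumptions, not the authors' code:

```python
import numpy as np

def soft_threshold(u, t):
    """Elementwise soft-thresholding: the proximal operator of t*||.||_1."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def vm_prox_grad(f, grad_f, x0, tau=0.1, alpha=1.0, beta=0.5,
                 sigma=1e-4, metric=None, max_iter=300, tol=1e-8):
    """Toy variable-metric proximal-gradient loop for F = f + tau*||.||_1.

    With a diagonal metric D = diag(d), the scaled proximal point of the
    l1 term is still an exact elementwise soft-threshold, with
    per-coordinate thresholds tau*alpha/d_i.
    """
    x = np.asarray(x0, dtype=float)
    F = lambda z: f(z) + tau * np.abs(z).sum()
    for _ in range(max_iter):
        gf = grad_f(x)
        d_diag = metric(x) if metric is not None else np.ones_like(x)
        # scaled forward-backward point and the resulting direction
        y = soft_threshold(x - alpha * gf / d_diag, tau * alpha / d_diag)
        d = y - x
        # model decrease used in the Armijo-like acceptance test
        delta = gf.dot(d) + tau * (np.abs(y).sum() - np.abs(x).sum())
        if np.linalg.norm(d) < tol or delta >= 0:
            break
        lam, Fx = 1.0, F(x)
        while F(x + lam * d) > Fx + sigma * lam * delta:
            lam *= beta
        x = x + lam * d
    return x
```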

    Solving the dual subproblem of the Method of Moving Asymptotes using a trust-region scheme

    An alternative strategy for solving the subproblems of the Method of Moving Asymptotes (MMA) is presented, based on a trust-region scheme applied to the dual of the MMA subproblem. At each iteration, the objective function of the dual problem is approximated by a regularized spectral model. A globally convergent modification of the MMA is also suggested, in which the conservative condition is relaxed by means of a summable controlled forcing sequence. Another modification of the MMA previously proposed by the authors [Optim. Methods Softw., 25 (2010), pp. 883-893] is recalled for use in the numerical tests; it is based on the spectral parameter for updating the MMA models, so as to improve their quality. The numerical experiments performed confirm the efficiency of the indicated modifications, especially when used jointly. Funding: Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP); Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq).
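
    To make the trust-region ingredient concrete, here is a generic sketch of a trust-region loop driven by a regularized spectral (Barzilai-Borwein-type) model, applied to a smooth function q standing in for the (negated, hence minimized) dual objective. It is a stand-in for the scheme named above, not the authors' algorithm, and every name and constant in it is illustrative:

```python
import numpy as np

def spectral_trust_region(q, grad_q, lam0, delta0=1.0, eta_min=1e-8,
                          accept=0.1, grow=2.0, shrink=0.5,
                          max_iter=200, tol=1e-8):
    """Generic trust-region loop with a regularized spectral model.

    The model around lam is m(s) = q(lam) + g.s + 0.5*eta*||s||^2, with
    eta a Barzilai-Borwein-type curvature estimate; the trust-region
    subproblem then has the closed-form step computed below.
    """
    lam = np.asarray(lam0, dtype=float)
    g, delta, eta = grad_q(lam), delta0, 1.0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # minimizer of the regularized spectral model within ||s|| <= delta
        s = -g / max(eta, eta_min)
        if np.linalg.norm(s) > delta:
            s = -delta * g / np.linalg.norm(g)
        pred = -(g.dot(s) + 0.5 * eta * s.dot(s))   # model decrease
        ared = q(lam) - q(lam + s)                  # actual decrease
        rho = ared / pred if pred > 0 else -np.inf
        if rho >= accept:                           # accept the step
            g_new = grad_q(lam + s)
            y = g_new - g
            # Barzilai-Borwein spectral update of the model curvature
            eta = max(y.dot(s) / s.dot(s), eta_min)
            lam, g = lam + s, g_new
            delta *= grow
        else:                                       # reject and shrink
            delta *= shrink
    return lam
```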

    A smoothing Newton method for minimizing a sum of Euclidean norms


    New bundle methods and U-Lagrangian for generic nonsmooth optimization

    Nonsmooth optimization consists of minimizing a continuous function by systematically choosing iterates from the feasible set via the computation of function values and generalized gradients (called subgradients). Broadly speaking, this thesis contains two research themes: nonsmooth optimization algorithms and theories about the substructure of special nonsmooth functions. In terms of algorithms, we develop new bundle methods and bundle trust region methods for generic nonsmooth optimization. On the theoretical side, we generalize the notion of the U-Lagrangian and investigate its connections with some subsmooth structures.

    This PhD project develops trust region methods for generic nonsmooth optimization. It assumes that the functions are Lipschitz continuous and that the optimization problem is not necessarily convex; the project also assumes the objective function is prox-regular but that no structural information is given. Trust region methods create a local model of the problem in a neighborhood of the current iterate (the "trust region"), minimize the model over this region, and take the minimizer as a trial point for the next iteration. If the model is an appropriate approximation of the objective function, the trial point is expected to yield a reduction in function value. Since the model problem is usually easy to solve, trust region methods compare the reduction in the model's value with that of the real problem and adjust the radius of the trust region accordingly.

    At the end of this project, it is clear that it is possible to develop a pure bundle method with linear subproblems and without trust region updates for convex optimization problems; such a method converges to minimizers if it generates an infinite sequence of serious steps, and otherwise it generates a sequence of minor updates and the last serious step is a minimizer.

    First, this project develops a bundle trust region algorithm with a linear model and a linear subproblem for minimizing a prox-regular, Lipschitz function. It adopts a convexification technique from the redistributed bundle method. Global convergence of the algorithm is established in the sense that the sequence of iterates converges to the fixed point of the proximal-point mapping, provided that convexification is successful. Preliminary numerical tests on standard academic nonsmooth problems show that the algorithm is comparable to bundle methods with quadratic subproblems. Second, following the philosophy behind bundle methods of making full use of the information from previous iterations and obtaining a flexible understanding of the function's structure, the project revises this algorithm by applying the nonmonotone trust region method. We study the performance of the numerical implementation and successively refine the algorithm in an effort to improve its practical performance. These revisions include allowing the convexification parameter to decrease and allowing the algorithm to restart after a finite process determined by various heuristics.

    The second theme of this project concerns the theory of nonsmooth analysis, focusing on the U-Lagrangian. When restricted to a subspace, a nonsmooth function may be differentiable within that subspace. It is known that for a nonsmooth convex function, at a given point, the Euclidean space can be decomposed into two subspaces: U, over which a special Lagrangian (called the U-Lagrangian) can be defined and has nice smoothness properties, and V, the orthogonal complement of U. In this thesis we generalize the definition of the UV-decomposition and the U-Lagrangian to the context of nonconvex functions, specifically prox-regular functions. Similar work in the literature includes a quadratic sub-Lagrangian; our interest is in the feasibility of a linear localized U-Lagrangian. We also study the connections between the new U-Lagrangian and other subsmooth structures, including fast tracks and partly smooth functions. This part of the project addresses the following questions: (1) based on a generalized UV-decomposition, can we develop a linear U-Lagrangian of a prox-regular function that maintains prox-regularity? (2) through the new U-Lagrangian, can we show that partial smoothness and fast tracks are equivalent under prox-regularity? At the end of this project, it is clear that for a function f that is properly prox-regular at a point x*, a new linear localized U-Lagrangian can be defined and its value at 0 coincides with f(x*); under some conditions, the U-Lagrangian is also prox-regular at 0; moreover, partial smoothness and fast tracks are equivalent under prox-regularity and other mild conditions.
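
    As a concrete, much-simplified instance of the "bundle trust region algorithm with a linear model and a linear subproblem" described above, the sketch below minimizes a convex function by solving cutting-plane LPs over an infinity-norm trust region, with serious and null steps. Prox-regularity, convexification, and nonmonotonicity are all omitted, and every name and constant is hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

def bundle_tr(f, subgrad, x0, delta=1.0, m=0.1, shrink=0.5,
              max_iter=100, tol=1e-6):
    """Toy bundle trust-region method with linear subproblems (convex f).

    The cutting-plane model of f is minimized over an infinity-norm trust
    region via an LP in (z, t); a serious step is taken when the actual
    decrease is at least a fraction m of the model decrease, otherwise a
    null step enriches the bundle and shrinks the region.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    bundle = [(x.copy(), f(x), subgrad(x))]   # (point, value, subgradient)
    for _ in range(max_iter):
        fx = f(x)
        # LP: min t  s.t.  f(xj) + gj.(z - xj) <= t,  ||z - x||_inf <= delta
        A = np.array([np.append(g, -1.0) for (_, _, g) in bundle])
        b = np.array([g.dot(xj) - fj for (xj, fj, g) in bundle])
        bounds = [(x[i] - delta, x[i] + delta) for i in range(n)] + [(None, None)]
        res = linprog(np.append(np.zeros(n), 1.0), A_ub=A, b_ub=b, bounds=bounds)
        if not res.success:
            break
        z, model_val = res.x[:n], res.x[n]
        pred = fx - model_val                 # model (predicted) decrease
        if pred < tol:
            break
        if fx - f(z) >= m * pred:             # serious step: move to z
            x = z
        else:                                 # null step: keep x, shrink
            delta *= shrink
        bundle.append((z.copy(), f(z), subgrad(z)))
    return x
```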

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS), Berlin, and included a summer school and a conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program: the scientific program both in overview and in full detail, together with information on the social program, the venue, special meetings, and more.