
    A nonmonotone trust-region method of conic model for unconstrained optimization

    Abstract: In this paper, we present a nonmonotone trust-region method of conic model for unconstrained optimization. The new method combines a new trust-region subproblem of conic model proposed in [Y. Ji, S.J. Qu, Y.J. Wang, H.M. Li, A conic trust-region method for optimization with nonlinear equality and inequality constraints via active-set strategy, Appl. Math. Comput. 183 (2006) 217–231] with a nonmonotone technique for solving unconstrained optimization. The local and global convergence properties are proved under reasonable assumptions. Numerical experiments are conducted to compare the new method with the conic trust-region method of Ji et al. [Appl. Math. Comput. 183 (2006) 217–231].
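    For orientation, the sketch below shows the generic nonmonotone acceptance test that such methods are built on, in Python; the function names and the memory parameter are illustrative assumptions, and the conic-model subproblem itself is not reproduced here.

```python
def nonmonotone_ratio(f_history, f_trial, predicted_reduction, memory=10):
    """Generic nonmonotone trust-region acceptance ratio.

    Progress is measured against the maximum of the last `memory`
    function values rather than f(x_k) alone, so occasional increases
    in the objective are tolerated between iterations.
    """
    f_ref = max(f_history[-memory:])          # nonmonotone reference value
    actual_reduction = f_ref - f_trial
    return actual_reduction / predicted_reduction
```

    A trial step is typically accepted when this ratio exceeds a small threshold (e.g. 0.1), and the trust-region radius is enlarged when the ratio is close to 1 and shrunk when it is small or negative.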

    The recent development of non-monotone trust region methods

    Abstract: Trust region methods are a class of numerical methods for optimization. They compute a trial step by solving a trust region sub-problem in which a model function is minimized within a trust region. In this paper, we review recent results on non-monotone trust region methods for unconstrained optimization problems. Non-monotone trust region algorithms are generally more effective than traditional monotone ones, especially when coping with highly nonlinear optimization problems. Results on trust region sub-problems and regularization methods are also discussed.
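    For reference, here is a minimal Python sketch of the classical monotone trust region iteration that the review takes as its starting point, with the sub-problem solved approximately at the Cauchy point; the update constants are standard textbook choices, not values from the paper.

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Approximate subproblem solution: minimize g^T s + 0.5 s^T B s
    over ||s|| <= delta along the steepest-descent direction."""
    gnorm = np.linalg.norm(g)
    gBg = g @ B @ g
    tau = 1.0 if gBg <= 0 else min(1.0, gnorm**3 / (delta * gBg))
    return -(tau * delta / gnorm) * g

def trust_region(f, grad, hess, x, delta=1.0, eta=0.1, tol=1e-8, max_iter=500):
    """Basic monotone trust region loop with standard radius updates."""
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        s = cauchy_point(g, B, delta)
        predicted = -(g @ s + 0.5 * s @ B @ s)    # model reduction (> 0)
        rho = (f(x) - f(x + s)) / predicted       # model/function agreement
        if rho < 0.25:
            delta *= 0.25                         # poor model: shrink region
        elif rho > 0.75 and np.isclose(np.linalg.norm(s), delta):
            delta *= 2.0                          # good model: expand region
        if rho > eta:
            x = x + s                             # accept the trial step
    return x
```

    The non-monotone variants surveyed in the paper replace f(x) in the numerator of rho with the maximum (or a weighted average) of recent function values.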

    A Nonmonotone Adaptive Trust Region Method Based on Conic Model for Unconstrained Optimization


    New bundle methods and U-Lagrangian for generic nonsmooth optimization

    Nonsmooth optimization consists of minimizing a continuous function by systematically choosing iterative points from the feasible set via the computation of function values and generalized gradients (called subgradients). Broadly speaking, this thesis contains two research themes: nonsmooth optimization algorithms and theories about the substructure of special nonsmooth functions. In terms of algorithms, we develop new bundle methods and bundle trust region methods for generic nonsmooth optimization; in terms of theory, we generalize the notion of the U-Lagrangian and investigate its connections with some subsmooth structures.

    This PhD project develops trust region methods for generic nonsmooth optimization. It assumes the functions are Lipschitz continuous and the optimization problem is not necessarily convex; the objective function is further assumed to be prox-regular, but no structural information is given. Trust region methods create a local model of the problem in a neighborhood of the current iterate (the "trust region"). They minimize the model over the trust region and take the minimizer as the trial point for the next iteration. If the model approximates the objective function well, the trial point is expected to produce a reduction in the function value. The model problem is usually easy to solve, so by comparing the reduction in the model's value with that of the real problem, trust region methods adjust the radius of the trust region and continue to obtain reduction by solving model problems.

    At the end of this project, it is clear that it is possible to develop a pure bundle method with linear subproblems and without a trust region update for convex optimization problems; such a method converges to minimizers if it generates an infinite sequence of serious steps, and otherwise it generates a sequence of minor updates in which the last serious step is a minimizer.

    First, the project develops a bundle trust region algorithm with a linear model and a linear subproblem for minimizing a prox-regular and Lipschitz function. It adopts a convexification technique from the redistributed bundle method. Global convergence of the algorithm is established in the sense that the sequence of iterates converges to a fixed point of the proximal-point mapping, provided convexification is successful. Preliminary numerical tests on standard academic nonsmooth problems show that the algorithm is comparable to bundle methods with quadratic subproblems. Second, following the bundle-method philosophy of making full use of the information gathered during the iteration process and obtaining a flexible understanding of the function structure, the project revises the algorithm developed in the first part by applying the nonmonotone trust region method. We study the performance of the numerical implementation and successively refine the algorithm to improve its practical performance; the revisions include allowing the convexification parameter to decrease and allowing the algorithm to restart after a finite process determined by various heuristics.

    The second theme of this project concerns the theory of nonsmooth analysis, focusing on the U-Lagrangian. When restricted to a subspace, a nonsmooth function can be differentiable within this space. It is known that for a nonsmooth convex function, at a given point, the Euclidean space can be decomposed into two subspaces: U, over which a special Lagrangian (called the U-Lagrangian) can be defined and has nice smoothness properties, and V, the orthogonal complement of U. In this thesis we generalize the definition of the UV-decomposition and the U-Lagrangian to the context of nonconvex functions, specifically prox-regular functions. Similar work in the literature includes a quadratic sub-Lagrangian; our interest is in the feasibility of a linear localized U-Lagrangian. We also study the connections between the new U-Lagrangian and other subsmooth structures, including fast tracks and partly smooth functions. This part of the project addresses two questions: (1) based on a generalized UV-decomposition, can we develop a linear U-Lagrangian of a prox-regular function that maintains prox-regularity? (2) through the new U-Lagrangian, can we show that partial smoothness and fast tracks are equivalent under prox-regularity? At the end of this project, it is clear that for a function f that is properly prox-regular at a point x*, a new linear localized U-Lagrangian can be defined whose value at 0 coincides with f(x*); under some conditions, the U-Lagrangian is also prox-regular at 0; moreover, partial smoothness and fast tracks are equivalent under prox-regularity and other mild conditions.
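    To make the "linear model, linear subproblem" idea concrete, here is a minimal Python sketch of one cutting-plane step over a box-shaped trust region, posed as a linear program in SciPy; the interface, the infinity-norm trust region, and all names are illustrative assumptions, and the convexification step of the redistributed bundle method is omitted.

```python
import numpy as np
from scipy.optimize import linprog

def bundle_lp_step(xk, delta, points, fvals, subgrads):
    """One bundle trust-region step with a piecewise-linear model.

    Minimizes v subject to the cutting planes
        v >= f(y_i) + g_i^T (x - y_i)   for each bundle element (y_i, f_i, g_i)
    and the box trust region ||x - xk||_inf <= delta.
    The variables are z = (x, v) and the objective is simply v.
    """
    n, m = len(xk), len(points)
    c = np.concatenate([np.zeros(n), [1.0]])
    # cutting plane i rearranged:  g_i^T x - v <= g_i^T y_i - f_i
    A_ub = np.hstack([np.asarray(subgrads), -np.ones((m, 1))])
    b_ub = np.array([g @ y - fv for y, fv, g in zip(points, fvals, subgrads)])
    bounds = [(xi - delta, xi + delta) for xi in xk] + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]                              # trial point
```

    In a serious step the trial point replaces xk and its value and subgradient enter the bundle; in a null (minor) step only the bundle is enriched.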

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program: the scientific program both in survey form and in full detail, together with information on the social program, the venue, special meetings, and more.

    A trust region-type normal map-based semismooth Newton method for nonsmooth nonconvex composite optimization

    We propose a novel trust region method for solving a class of nonsmooth and nonconvex composite-type optimization problems. The approach embeds inexact semismooth Newton steps for finding zeros of a normal map-based stationarity measure for the problem in a trust region framework. Based on a new merit function and acceptance mechanism, global convergence and transition to fast local q-superlinear convergence are established under standard conditions. In addition, we verify that the proposed trust region globalization is compatible with the Kurdyka-Łojasiewicz (KL) inequality, yielding finer convergence results. We further derive new normal map-based representations of the associated second-order optimality conditions that have direct connections to the local assumptions required for fast convergence. Finally, we study the behavior of our algorithm when the Hessian matrix of the smooth part of the objective function is approximated by BFGS updates. We successfully link the KL theory, properties of the BFGS approximations, and a Dennis-Moré-type condition to show superlinear convergence of the quasi-Newton version of our method. Numerical experiments on sparse logistic regression and image compression illustrate the efficiency of the proposed algorithm.
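    Since the quasi-Newton analysis rests on BFGS approximations of the Hessian of the smooth part, a minimal Python sketch of the standard BFGS update with the usual curvature safeguard is given below; the skipping rule and its tolerance are common conventions, not details taken from the paper.

```python
import numpy as np

def bfgs_update(B, s, y, curvature_tol=1e-8):
    """Standard BFGS update of a Hessian approximation B, given the step
    s = x_{k+1} - x_k and the gradient difference y = g_{k+1} - g_k.

    The update is skipped when the curvature condition s^T y > 0 fails,
    which preserves positive definiteness of B.
    """
    sy = s @ y
    if sy <= curvature_tol * np.linalg.norm(s) * np.linalg.norm(y):
        return B                        # skip: insufficient curvature
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy
```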

    Hessian barrier algorithms for linearly constrained optimization problems

    In this paper, we propose an interior-point method for linearly constrained optimization problems (possibly nonconvex). The method, which we call the Hessian barrier algorithm (HBA), combines a forward Euler discretization of Hessian Riemannian gradient flows with an Armijo backtracking step-size policy. In this way, HBA can be seen as an alternative to mirror descent (MD), and contains as special cases the affine scaling algorithm, regularized Newton processes, and several other iterative solution methods. Our main result is that, modulo a non-degeneracy condition, the algorithm converges to the problem's set of critical points; hence, in the convex case, the algorithm converges globally to the problem's minimum set. In the case of linearly constrained quadratic programs (not necessarily convex), we also show that the method's convergence rate is O(1/k^ρ) for some ρ ∈ (0,1] that depends only on the choice of kernel function (i.e., not on the problem's primitives). These theoretical results are validated by numerical experiments on standard non-convex test functions and large-scale traffic assignment problems.
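    To illustrate the forward Euler plus Armijo construction, the sketch below takes one HBA-type step on the positive orthant under the assumption of a log-barrier kernel h(x) = -sum_i log x_i, for which H(x) = diag(1/x_i^2) and the flow direction reduces to the affine-scaling direction; the constants and the feasibility check are illustrative, not the paper's general polytope setting.

```python
import numpy as np

def hba_step(f, grad, x, alpha0=1.0, beta=0.5, sigma=1e-4, max_backtracks=50):
    """One Hessian-barrier-type step on {x > 0} with a log-barrier kernel.

    The Hessian Riemannian flow direction is -H(x)^{-1} grad f(x); for
    h(x) = -sum(log x) this is -x**2 * grad f(x) componentwise. A forward
    Euler step with Armijo backtracking keeps the iterate strictly
    feasible and enforces sufficient decrease.
    """
    g = grad(x)
    d = -(x ** 2) * g                     # descent direction: g @ d <= 0
    fx, alpha = f(x), alpha0
    for _ in range(max_backtracks):
        x_new = x + alpha * d
        if np.all(x_new > 0) and f(x_new) <= fx + sigma * alpha * (g @ d):
            return x_new
        alpha *= beta                     # infeasible or insufficient decrease
    return x                              # fall back to the current point
```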