
    A descent subgradient method using Mifflin line search for nonsmooth nonconvex optimization

    We propose a descent subgradient algorithm for minimizing a real-valued function that is assumed to be locally Lipschitz but not necessarily smooth or convex. To find an effective descent direction, the Goldstein subdifferential is approximated through an iterative process. The method uses a new two-point variant of the Mifflin line search in which the subgradients are arbitrary, so the line search procedure is easy to implement. Moreover, in comparison with bundle methods, the quadratic subproblems have a simple structure, and no algorithmic modification is required to handle nonconvexity. We study the global convergence of the method and prove that any accumulation point of the generated sequence is Clarke stationary, assuming that the objective f is weakly upper semismooth. We illustrate the efficiency and effectiveness of the proposed algorithm on a collection of academic and semi-academic test problems.
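
    A minimal sketch of the two ingredients described above, not the authors' implementation: the Goldstein subdifferential is approximated by a growing set of subgradients collected near the current point, and the search direction is the negated minimum-norm element of their convex hull, obtained from a small quadratic subproblem over the unit simplex. A simple sufficient-decrease probe stands in for the two-point Mifflin line search, and gradients at nearby points serve as the arbitrary subgradients; function names, the sampling rule, and all parameters are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def min_norm_element(G):
        """Minimum-norm point of conv{rows of G}: min ||lam^T G||^2 over the unit simplex."""
        m = G.shape[0]
        Q = G @ G.T
        res = minimize(lambda lam: lam @ Q @ lam, np.full(m, 1.0 / m),
                       bounds=[(0.0, 1.0)] * m,
                       constraints=({'type': 'eq', 'fun': lambda lam: lam.sum() - 1.0},),
                       method='SLSQP')
        return res.x @ G

    def descent_direction(f, grad, x, eps=1e-2, max_pts=10):
        """Enrich a subgradient set from the eps-ball around x until -g gives sufficient decrease."""
        G = [grad(x)]
        for _ in range(max_pts):
            g = min_norm_element(np.array(G))
            d = -g
            t = eps / (np.linalg.norm(g) + 1e-12)         # probe step of length about eps
            if f(x + t * d) <= f(x) - 0.5 * t * (g @ g):  # crude stand-in for the Mifflin test
                return d
            y = x + eps * np.random.randn(x.size) / np.sqrt(x.size)  # point roughly eps away from x
            G.append(grad(y))                             # enrich the Goldstein approximation
        return -min_norm_element(np.array(G))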

    Global rates of convergence for nonconvex optimization on manifolds

    We consider the minimization of a cost function f on a manifold M using Riemannian gradient descent and Riemannian trust regions (RTR). We focus on satisfying necessary optimality conditions within a tolerance ε. Specifically, we show that, under Lipschitz-type assumptions on the pullbacks of f to the tangent spaces of M, both of these algorithms produce points with Riemannian gradient smaller than ε in O(1/ε^2) iterations. Furthermore, RTR returns a point where, in addition, the least eigenvalue of the Riemannian Hessian is larger than -ε in O(1/ε^3) iterations. There are no assumptions on initialization. The rates match their (sharp) unconstrained counterparts as a function of the accuracy ε (up to constants) and hence are sharp in that sense. These are the first deterministic results for global rates of convergence to approximate first- and second-order Karush-Kuhn-Tucker points on manifolds. They apply in particular to optimization constrained to compact submanifolds of R^n, under simpler assumptions.
    Comment: 33 pages, IMA Journal of Numerical Analysis, 201
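
    As a concrete illustration of the setting (my example, not taken from the paper), the sketch below runs Riemannian gradient descent on the unit sphere for the Rayleigh quotient f(x) = x^T A x: the Euclidean gradient is projected onto the tangent space to obtain the Riemannian gradient, the iterate is retracted to the sphere by normalization, and the loop stops at an approximate first-order point where the Riemannian gradient norm is below ε, the quantity the O(1/ε^2) bound controls. The step size and stopping rule are illustrative assumptions.

    import numpy as np

    def riemannian_gradient_descent(A, x0, eps=1e-6, step=None, max_iter=10000):
        """Minimize f(x) = x^T A x over the unit sphere with a fixed-step Riemannian gradient method."""
        step = 1.0 / (2 * np.linalg.norm(A, 2)) if step is None else step  # roughly 1/L
        x = x0 / np.linalg.norm(x0)
        for k in range(max_iter):
            egrad = 2 * A @ x                    # Euclidean gradient of x^T A x
            rgrad = egrad - (x @ egrad) * x      # projection onto the tangent space at x
            if np.linalg.norm(rgrad) <= eps:     # approximate first-order stationary point
                return x, k
            x = x - step * rgrad
            x = x / np.linalg.norm(x)            # retraction: pull the iterate back to the sphere
        return x, max_iter

    # Example: approximate the eigenvector of the smallest eigenvalue of a symmetric matrix
    A = np.array([[2.0, 0.4], [0.4, 1.0]])
    x, iters = riemannian_gradient_descent(A, np.array([1.0, 1.0]))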

    Randomized Algorithms for Nonconvex Nonsmooth Optimization

    Nonsmooth optimization problems arise in a variety of applications, including robust control, robust optimization, eigenvalue optimization, compressed sensing, and decomposition methods for large-scale or complex optimization problems. When convexity is present, such problems are relatively easy to solve, and methods for convex nonsmooth optimization have been studied for decades; bundle methods, for example, are a leading technique for convex nonsmooth minimization. However, these and other methods developed for convex problems are either inapplicable or can be inefficient when applied to nonconvex problems. The motivation of the work in this thesis is to design robust and efficient algorithms for solving nonsmooth optimization problems, particularly when nonconvexity is present.

    First, we propose an adaptive gradient sampling (AGS) algorithm, based on a recently developed technique known as the gradient sampling (GS) algorithm, which improves the computational efficiency of GS in critical ways. Then, we propose a BFGS gradient sampling (BFGS-GS) algorithm, a hybrid of the standard Broyden-Fletcher-Goldfarb-Shanno (BFGS) method and the GS method; it is more efficient than the AGS algorithm and competitive with (and in some ways outperforms) other contemporary solvers for nonsmooth nonconvex optimization. Finally, we propose a few additional extensions of the GS framework: one that merges GS ideas with those of bundle methods, one that incorporates smoothing techniques in order to minimize potentially non-Lipschitz objective functions, and one that tailors GS methods to regularization problems. We describe all the proposed algorithms in detail, prove global convergence guarantees for every variant under suitable assumptions, and perform numerical experiments to illustrate their efficiency. The test problems in our experiments include academic test problems as well as practical problems arising in applications of nonsmooth optimization.
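
    For orientation, the sketch below shows one iteration of the classical gradient sampling framework that the thesis builds on; it is not the proposed AGS or BFGS-GS variants, and the parameter values are illustrative. Gradients are evaluated at the iterate and at randomly sampled points in a ball around it, the minimum-norm element of their convex hull acts as a stabilized steepest-descent surrogate, a backtracking line search is run along its negative, and the sampling radius is reduced when that norm is small. The objective is assumed differentiable at the sampled points.

    import numpy as np
    from scipy.optimize import minimize

    def gs_step(f, grad, x, radius, m=None, beta=1e-4, theta=0.1, nu=1e-6):
        """One gradient sampling iteration: returns the updated point and sampling radius."""
        n = x.size
        m = 2 * n if m is None else m
        Z = np.random.randn(m, n)
        Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)       # random directions
        R = radius * np.random.rand(m, 1) ** (1.0 / n)         # radii, uniform in the ball
        pts = np.vstack([x, x + Z * R])                        # iterate plus m sampled points
        G = np.array([grad(p) for p in pts])
        Q = G @ G.T
        k = len(pts)
        res = minimize(lambda lam: lam @ Q @ lam, np.full(k, 1.0 / k),   # min-norm element of conv{G}
                       bounds=[(0.0, 1.0)] * k,
                       constraints=({'type': 'eq', 'fun': lambda lam: lam.sum() - 1.0},),
                       method='SLSQP')
        g = res.x @ G
        if np.linalg.norm(g) <= nu:
            return x, theta * radius                           # nearly stationary: shrink the radius
        d = -g / np.linalg.norm(g)
        t = 1.0
        while f(x + t * d) > f(x) - beta * t * np.linalg.norm(g):   # backtracking Armijo search
            t *= 0.5
            if t < 1e-12:
                return x, theta * radius
        return x + t * d, radius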

    Forward-backward truncated Newton methods for convex composite optimization

    This paper proposes two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a reformulation of the original nonsmooth problem as the unconstrained minimization of a continuously differentiable function, namely the forward-backward envelope (FBE). The first algorithm uses a standard line search strategy, whereas the second one retains the global efficiency estimates of the corresponding first-order methods while achieving fast asymptotic convergence rates. Furthermore, both are computationally attractive, since each Newton iteration requires only the approximate solution of a linear system of usually small dimension.
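
    To make the central object concrete, here is a small self-contained sketch (my notation and example, not the paper's code) of the forward-backward envelope for the lasso-type problem F(x) = 0.5*||Ax - b||^2 + lam*||x||_1. With z = prox_{gamma*g}(x - gamma*grad f(x)), the envelope is FBE_gamma(x) = f(x) + grad f(x)^T (z - x) + ||z - x||^2 / (2*gamma) + g(z); it is continuously differentiable even though F is not, which is what lets Newton-type methods minimize it with smooth machinery, and it never exceeds F.

    import numpy as np

    def soft_threshold(w, tau):
        """Proximal operator of tau*||.||_1: componentwise shrinkage."""
        return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

    def fbe(x, A, b, lam, gamma):
        """Forward-backward envelope of F(x) = 0.5*||Ax - b||^2 + lam*||x||_1 at x."""
        r = A @ x - b
        fx = 0.5 * r @ r
        gradf = A.T @ r
        z = soft_threshold(x - gamma * gradf, gamma * lam)     # forward-backward step
        return fx + gradf @ (z - x) + (z - x) @ (z - x) / (2 * gamma) + lam * np.abs(z).sum()

    # Sanity check: the envelope never exceeds F (they agree at fixed points of the
    # forward-backward map); gamma is chosen below 1/L as in the FBE framework.
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
    x = rng.standard_normal(5)
    gamma = 0.9 / np.linalg.norm(A, 2) ** 2
    F = 0.5 * np.linalg.norm(A @ x - b) ** 2 + 0.1 * np.abs(x).sum()
    assert fbe(x, A, b, 0.1, gamma) <= F + 1e-12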

    Nonsmooth optimization techniques with applications in automatic control and contact mechanics

    Nonsmooth optimization is an active branch of modern nonlinear programming in which the objective and constraints are continuous but not necessarily differentiable functions. Generalized subgradients are available as a substitute for the missing derivative information and are used within the framework of descent algorithms to approximate locally optimal solutions. Under hypotheses that are realistic in practice, we prove certificates of convergence to local optima or critical points from an arbitrary starting point. In this thesis we develop, in particular, nonsmooth optimization techniques of bundle type, where the challenge is to prove convergence certificates without convexity hypotheses. Satisfactory results are obtained for two classes of nonsmooth functions important in applications, lower-C1 and upper-C1 functions. Our methods are applied to design problems in control system theory and in unilateral contact mechanics, in particular to destructive mechanical testing for the delamination of composite materials. We show how these fields lead to typical nonsmooth optimization problems, and we develop bundle algorithms suited to addressing these problems successfully.
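
    As a rough illustration of the kind of subproblem a bundle method of this type solves (a sketch under my own simplifying assumptions, not the algorithms developed in the thesis), the snippet below builds a cutting-plane model from stored trial points, values, and subgradients, downshifts the linearization errors so that the model remains usable when the function is nonconvex (as for lower-C1 functions), and solves the resulting proximal quadratic program in epigraph form. In a full bundle method, the candidate step d would then either be accepted as a serious step or used only to enrich the bundle (a null step), depending on the actual decrease it achieves.

    import numpy as np
    from scipy.optimize import minimize

    def bundle_direction(x, fx, Y, FY, G, mu=1.0, c=0.1):
        """Solve  min_d  max_j [ -e_j + g_j^T d ] + (mu/2)*||d||^2  in epigraph form.

        Y, FY, G hold the trial points y_j, the values f(y_j), and subgradients g_j.
        """
        n = x.size
        e = fx - FY - np.einsum('jk,jk->j', G, x - Y)                 # linearization errors
        e = np.maximum(e, c * np.linalg.norm(x - Y, axis=1) ** 2)     # downshift for nonconvexity

        def obj(v):                                                   # v = (d, t), t the epigraph variable
            d, t = v[:n], v[n]
            return t + 0.5 * mu * (d @ d)

        cons = [{'type': 'ineq', 'fun': (lambda v, j=j: v[n] - (G[j] @ v[:n] - e[j]))}
                for j in range(len(Y))]                               # t >= g_j^T d - e_j for every plane
        res = minimize(obj, np.zeros(n + 1), constraints=cons, method='SLSQP')
        return res.x[:n]                                              # candidate step d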