    Nonsmooth trust-region algorithm with applications to robust stability of uncertain systems

    We propose a bundle trust-region algorithm to minimize locally Lipschitz functions which are potentially nonsmooth and nonconvex. We prove global convergence of our method and show by way of an example that the classical convergence argument in trust-region methods based on the Cauchy point fails in the nonsmooth setting. Our method is tested experimentally on three problems in automatic control.
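
    To make the mechanics concrete, here is a minimal 1-D sketch of one bundle trust-region iteration on f(x) = |x|. The bundle, cutting-plane model, and acceptance ratio below are generic textbook ingredients, assumed for illustration; they are not the paper's exact algorithm.

    ```python
    import numpy as np

    def f(x): return abs(x)
    def subgrad(x): return np.sign(x) if x != 0 else 1.0  # any element of [-1, 1]

    x, radius = 2.0, 1.5
    bundle = [(y, f(y), subgrad(y)) for y in (x, -0.5)]

    # Cutting-plane model from the bundle: m(z) = max_i f(y_i) + g_i * (z - y_i).
    def model(z):
        return max(fy + g * (z - y) for y, fy, g in bundle)

    # Minimize the piecewise-linear model over the trust region
    # (a grid search suffices for this 1-D illustration).
    zs = np.linspace(x - radius, x + radius, 1001)
    z = zs[np.argmin([model(zz) for zz in zs])]

    # Serious step if actual decrease roughly matches predicted decrease;
    # otherwise a null step keeps x and would enrich the bundle instead.
    pred = f(x) - model(z)
    rho = (f(x) - f(z)) / pred if pred > 0 else 0.0
    x = z if rho >= 0.25 else x
    ```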

    A Self-Correcting Variable-Metric Algorithm Framework for Nonsmooth Optimization

    An algorithm framework is proposed for minimizing nonsmooth functions. The framework is variable-metric in that, in each iteration, a step is computed using a symmetric positive definite matrix whose value is updated as in a quasi-Newton scheme. However, unlike previously proposed variable-metric algorithms for minimizing nonsmooth functions, the framework exploits self-correcting properties made possible through BFGS-type updating. In so doing, the framework does not overly restrict the manner in which the step computation matrices are updated, yet the scheme is controlled well enough that global convergence guarantees can be established. The results of numerical experiments for a few algorithms are presented to demonstrate the self-correcting behaviors that are guaranteed by the framework.
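
    The updates here are BFGS-type; a standard way to keep such an update well defined when the curvature condition fails, as it routinely does for nonsmooth functions, is Powell-style damping. The sketch below shows that generic safeguard on the inverse-Hessian approximation. It illustrates the flavor of "controlled" quasi-Newton updating only; it is not the paper's self-correcting scheme.

    ```python
    import numpy as np

    def damped_bfgs_update(H, s, y, theta_min=0.2):
        """One BFGS update of the inverse Hessian approximation H, with
        Powell-style damping so H stays positive definite even when the
        curvature pair (s, y) fails s^T y > 0."""
        Bs = np.linalg.solve(H, s)          # B = H^{-1}, so Bs = B s
        sBs, sy = s @ Bs, s @ y
        if sy >= theta_min * sBs:
            theta = 1.0                     # ordinary BFGS update
        else:
            theta = (1 - theta_min) * sBs / (sBs - sy)
        r = theta * y + (1 - theta) * Bs    # damped 'y' with s^T r > 0
        rho = 1.0 / (s @ r)
        V = np.eye(len(s)) - rho * np.outer(s, r)
        return V @ H @ V.T + rho * np.outer(s, s)

    H = np.eye(2)
    s, y = np.array([1.0, 0.0]), np.array([-0.5, 0.0])  # s @ y < 0: curvature fails
    H = damped_bfgs_update(H, s, y)
    print(np.linalg.eigvalsh(H))  # eigenvalues stay positive
    ```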

    Minimization of nonsmooth nonconvex functions using inexact evaluations and its worst-case complexity

    An adaptive regularization algorithm using inexact function and derivative evaluations is proposed for the solution of composite nonsmooth nonconvex optimization problems. It is shown that this algorithm needs at most $O(|\log(\epsilon)|\,\epsilon^{-2})$ evaluations of the problem's functions and their derivatives for finding an $\epsilon$-approximate first-order stationary point. This complexity bound therefore generalizes that provided by [Bellavia, Gurioli, Morini and Toint, 2018] for inexact methods for smooth nonconvex problems, and is within a factor $|\log(\epsilon)|$ of the optimal bound known for smooth and nonsmooth nonconvex minimization with exact evaluations. A practically more restrictive variant of the algorithm with worst-case complexity $O(|\log(\epsilon)| + \epsilon^{-2})$ is also presented.
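
    As a concrete (and much simplified) instance, the sketch below takes a first-order adaptive-regularization step in which an artificially perturbed gradient stands in for the inexact oracle. The names sigma, delta, and eta and the plain accept/reject rule are assumptions for illustration; the paper's accuracy-control tests and composite structure are substantially richer.

    ```python
    import numpy as np

    def inexact_grad(grad, x, delta, rng):
        # Bounded-noise stand-in for an inexact derivative evaluation.
        g = grad(x)
        return g + delta * rng.standard_normal(g.shape)

    def ar_step(f, grad, x, sigma, delta, rng, eta=0.1):
        g = inexact_grad(grad, x, delta, rng)
        s = -g / sigma                          # minimizes g^T s + (sigma/2)||s||^2
        pred = -(g @ s + 0.5 * sigma * (s @ s)) # predicted decrease (> 0)
        if f(x + s) <= f(x) - eta * pred:       # sufficient decrease: accept
            return x + s, max(sigma / 2, 1e-8)
        return x, 2 * sigma                     # reject and regularize more

    rng = np.random.default_rng(0)
    f = lambda x: (x ** 2).sum()
    grad = lambda x: 2 * x
    x, sigma = np.ones(2), 1.0
    for _ in range(20):
        x, sigma = ar_step(f, grad, x, sigma, delta=1e-3, rng=rng)
    print(x)  # near the minimizer, up to gradient-noise level
    ```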

    A version of bundle method with linear programming

    Bundle methods have been intensively studied for solving both convex and nonconvex optimization problems. In most of the bundle methods developed thus far, at least one quadratic programming (QP) subproblem needs to be solved in each iteration. In this paper, we explore the feasibility of developing a bundle algorithm that solves only linear subproblems. We start from minimization of a convex function and show that the sequence of major iterates converges to a minimizer. For nonconvex functions we consider functions that are locally Lipschitz continuous and prox-regular on a bounded level set, and minimize the cutting-plane model over an infinity-norm trust region. The para-convexity of such functions allows us to use the locally convexified model and its convexity properties. Under suitable conditions and assumptions, we establish convergence of the proposed algorithm through the outer semicontinuity of the proximal mapping. Encouraging results of preliminary numerical experiments on standard test sets are provided.
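
    The LP subproblem such a method solves can be stated directly: minimize the cutting-plane model over an infinity-norm trust region using an epigraph variable. The sketch below sets this up with scipy.optimize.linprog; the surrounding machinery (serious/null steps, bundle aggregation, convexification) is omitted.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def lp_bundle_step(x, bundle, Delta):
        """Minimize max_i f(y_i) + g_i^T (x + d - y_i) over ||d||_inf <= Delta.
        Variables are (d, t), with t the epigraph variable."""
        n = len(x)
        c = np.concatenate([np.zeros(n), [1.0]])   # objective: minimize t
        # Each cut: g_i^T d - t <= -(f(y_i) + g_i^T (x - y_i)).
        A = np.array([np.concatenate([g, [-1.0]]) for _, _, g in bundle])
        b = np.array([-(fy + g @ (x - y)) for y, fy, g in bundle])
        bounds = [(-Delta, Delta)] * n + [(None, None)]
        res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
        return res.x[:n]                            # the trial step d

    # Example on f(x) = |x_1| + |x_2|: bundle elements are (y, f(y), g).
    x = np.array([1.0, -2.0])
    bundle = [(x, 3.0, np.array([1.0, -1.0])),
              (x + np.array([0.1, -0.1]), 3.2, np.array([1.0, -1.0]))]
    print(lp_bundle_step(x, bundle, Delta=0.5))     # -> [-0.5, 0.5]
    ```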

    A Fast Gradient and Function Sampling Method for Finite Max-Functions

    This paper tackles the unconstrained minimization of a class of nonsmooth and nonconvex functions that can be written as finite max-functions. A gradient and function-based sampling method is proposed which, under special circumstances, either moves superlinearly to a minimizer of the problem of interest or superlinearly improves the optimality certificate. Global and local convergence analyses are presented, as well as illustrative examples that corroborate and elucidate the obtained theoretical results.
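
    For a finite max-function f(x) = max_i f_i(x), a classical descent direction at x is the negative of the minimum-norm element of the convex hull of the gradients of the (nearly) active pieces. The sketch below computes that textbook object with a small QP; the paper's sampling strategy and superlinear acceleration are not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def min_norm_direction(grads):
        """Return -argmin ||sum_i lam_i g_i|| over the unit simplex.
        The direction is zero exactly at (Clarke) stationary points."""
        G = np.asarray(grads)                # rows: gradients of active pieces
        m = len(G)
        obj = lambda lam: np.sum((lam @ G) ** 2)
        cons = ({"type": "eq", "fun": lambda lam: lam.sum() - 1.0},)
        res = minimize(obj, np.full(m, 1.0 / m), bounds=[(0, 1)] * m,
                       constraints=cons, method="SLSQP")
        return -(res.x @ G)

    # Example: f(x) = max(x_1 + x_2, x_1 - x_2) near the kink x_2 = 0.
    print(min_norm_direction([np.array([1.0, 1.0]), np.array([1.0, -1.0])]))
    # -> approximately [-1, 0]
    ```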

    A proximal method for composite minimization

    We consider minimization of functions that are compositions of convex or prox-regular functions (possibly extended-valued) with smooth vector functions. A wide variety of important optimization problems fall into this framework. We describe an algorithmic framework based on a subproblem constructed from a linearized approximation to the objective and a regularization term. Properties of local solutions of this subproblem underlie both a global convergence result and an identification property of the active manifold containing the solution of the original problem. Preliminary computational results on both convex and nonconvex examples are promising.
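
    The core step of such prox-linear schemes can be written down compactly. With h convex or prox-regular, c smooth, and a proximal parameter $\mu_k > 0$ (generic notation, not necessarily the paper's), the subproblem and update are:

    ```latex
    \[
      d_k \in \operatorname*{arg\,min}_{d}\;
          h\bigl(c(x_k) + \nabla c(x_k)\, d\bigr) + \frac{\mu_k}{2}\,\|d\|^2,
      \qquad x_{k+1} = x_k + d_k .
    \]
    ```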

    A Smoothing SQP Framework for a Class of Composite $L_q$ Minimization over Polyhedron

    The composite $L_q\ (0<q<1)$ minimization problem over a general polyhedron has found a variety of applications in machine learning, wireless communications, image restoration, signal reconstruction, etc. This paper aims to provide a theoretical study of this problem. Firstly, we show that for any fixed $0<q<1$, finding the global minimizer of the problem, even of its unconstrained counterpart, is strongly NP-hard. Secondly, we derive Karush-Kuhn-Tucker (KKT) optimality conditions for local minimizers of the problem. Thirdly, we propose a smoothing sequential quadratic programming framework for solving this problem; the framework requires an (approximate) solution of a convex quadratic program at each iteration. Finally, we analyze the worst-case iteration complexity of the framework for returning an $\epsilon$-KKT point, i.e., a feasible point that satisfies a perturbed version of the derived KKT optimality conditions. To the best of our knowledge, the proposed framework is the first with a worst-case iteration complexity guarantee for solving composite $L_q$ minimization over a general polyhedron.
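
    A common smoothing device for such objectives replaces $|t|^q$ by $(t^2+\mu^2)^{q/2}$, which is smooth for $\mu > 0$ and recovers $|t|^q$ as $\mu \to 0$. The sketch below shows that generic device and its gradient only; the paper's SQP framework, constraint handling, and schedule for driving $\mu$ to zero are not reproduced.

    ```python
    import numpy as np

    def smoothed_lq(t, q=0.5, mu=1e-2):
        # Smooth surrogate for |t|^q: equals (t^2 + mu^2)^(q/2).
        return (t ** 2 + mu ** 2) ** (q / 2)

    def smoothed_lq_grad(t, q=0.5, mu=1e-2):
        # d/dt (t^2 + mu^2)^(q/2) = q t (t^2 + mu^2)^(q/2 - 1).
        return q * t * (t ** 2 + mu ** 2) ** (q / 2 - 1)

    t = np.linspace(-1.0, 1.0, 5)
    print(smoothed_lq(t))        # close to |t|^q away from 0, smooth at 0
    print(smoothed_lq_grad(t))   # bounded, unlike the gradient of |t|^q
    ```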

    Proximal Gradient Method for Nonsmooth Optimization over the Stiefel Manifold

    We consider optimization problems over the Stiefel manifold whose objective function is the sum of a smooth function and a nonsmooth function. Existing methods for solving problems of this kind can be classified into three classes. Algorithms in the first class rely on information of the subgradients of the objective function and thus tend to converge slowly in practice. Algorithms in the second class are proximal point algorithms, which involve subproblems that can be as difficult as the original problem. Algorithms in the third class are based on operator-splitting techniques, but they usually lack rigorous convergence guarantees. In this paper, we propose a retraction-based proximal gradient method for solving this class of problems. We prove that the proposed method globally converges to a stationary point. Iteration complexity for obtaining an $\epsilon$-stationary solution is also analyzed. Numerical results on solving sparse PCA and compressed modes problems are reported to demonstrate the advantages of the proposed method.
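
    The three ingredients of such a step are easy to state: a Riemannian gradient (the Euclidean gradient projected onto the tangent space), the proximal map of the nonsmooth term, and a retraction back to the manifold. The naive sketch below simply chains them for F(X) = f(X) + lambda*||X||_1 with a QR retraction; the paper's method instead couples them through a tangent-space subproblem, so treat this as illustrative only.

    ```python
    import numpy as np

    def riem_grad(X, egrad):
        # Project the Euclidean gradient onto the tangent space at X in St(n, p).
        XtG = X.T @ egrad
        return egrad - X @ (XtG + XtG.T) / 2

    def soft_threshold(Z, tau):
        # Proximal map of tau * ||.||_1 applied entrywise.
        return np.sign(Z) * np.maximum(np.abs(Z) - tau, 0.0)

    def qr_retract(Y):
        # QR-based retraction onto the Stiefel manifold (signs fixed for uniqueness).
        Q, R = np.linalg.qr(Y)
        return Q * np.sign(np.sign(np.diag(R)) + 0.5)

    rng = np.random.default_rng(0)
    X = np.linalg.qr(rng.standard_normal((6, 2)))[0]  # a point on St(6, 2)
    egrad = rng.standard_normal((6, 2))               # stand-in for grad f(X)
    step, lam = 0.1, 0.05
    Y = soft_threshold(X - step * riem_grad(X, egrad), step * lam)
    X_next = qr_retract(Y)
    print(np.allclose(X_next.T @ X_next, np.eye(2)))  # True: back on St(6, 2)
    ```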

    Derivative-free optimization methods

    In many optimization problems arising from scientific, engineering and artificial intelligence applications, objective and constraint functions are available only as the output of a black-box or simulation oracle that does not provide derivative information. Such settings necessitate the use of methods for derivative-free, or zeroth-order, optimization. We provide a review and perspectives on developments in these methods, with an emphasis on highlighting recent developments and on unifying the treatment of such problems in the nonlinear optimization and machine learning literature. We categorize methods based on assumed properties of the black-box functions, as well as features of the methods. We first overview the primary setting of deterministic methods applied to unconstrained, nonconvex optimization problems where the objective function is defined by a deterministic black-box oracle. We then discuss developments in randomized methods, methods that assume some additional structure about the objective (including convexity, separability and general nonsmooth compositions), methods for problems where the output of the black-box oracle is stochastic, and methods for handling different types of constraints.
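
    The simplest zeroth-order method makes the setting concrete: estimate the gradient by forward differences through the black-box oracle and take a fixed-step descent step, as sketched below. The practical derivative-free methods surveyed here (model-based trust-region, direct search, randomized) are far more sophisticated.

    ```python
    import numpy as np

    def fd_grad(oracle, x, h=1e-6):
        # Forward-difference gradient estimate using n + 1 oracle calls.
        fx = oracle(x)
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (oracle(x + e) - fx) / h
        return g

    oracle = lambda x: (x - 1.0) @ (x - 1.0)  # black box; derivatives never used
    x = np.zeros(3)
    for _ in range(100):
        x -= 0.1 * fd_grad(oracle, x)
    print(x)  # approaches [1, 1, 1]
    ```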

    Convergence Analysis of a Proximal Point Algorithm for Minimizing Differences of Functions

    Several optimization schemes are known for convex optimization problems. However, numerical algorithms for solving nonconvex optimization problems are still underdeveloped. Progress beyond convexity was made by considering the class of functions representable as differences of convex functions. In this paper, we introduce a generalized proximal point algorithm to minimize the difference of a nonconvex function and a convex function. We also study convergence of this algorithm under the main assumption that the objective function satisfies the Kurdyka-Łojasiewicz property.
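
    A minimal sketch of such a step for f = g - h: linearize the convex part h at the current iterate via a subgradient, then solve a proximal subproblem for what remains. The names g, h, and t are generic DC-programming notation, assumed for illustration rather than taken from the paper.

    ```python
    from scipy.optimize import minimize_scalar

    g = lambda x: x ** 4 / 4   # first component
    h = lambda x: x ** 2 / 2   # convex component being subtracted
    h_sub = lambda x: x        # a subgradient of h
    t = 0.5                    # proximal parameter

    x = 3.0
    for _ in range(50):
        v = h_sub(x)  # linearize h at the current iterate
        # Proximal subproblem: strictly convex, so it has a unique minimizer.
        sub = lambda z, x=x, v=v: g(z) - v * (z - x) + (z - x) ** 2 / (2 * t)
        x = minimize_scalar(sub).x
    print(x)  # approaches a critical point of f(x) = x^4/4 - x^2/2 (here x = 1)
    ```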