    The method of codifferential descent for convex and global piecewise affine optimization

    The class of nonsmooth codifferentiable functions was introduced by Professor V.F. Demyanov in the late 1980s. He also proposed a method for minimizing these functions, called the method of codifferential descent (MCD). However, until now almost no theoretical results on the performance of this method on particular classes of nonsmooth optimization problems have been known. In the first part of the paper, we study the performance of the method of codifferential descent on a class of nonsmooth convex functions satisfying certain regularity assumptions, which in the smooth case reduce to the Lipschitz continuity of the gradient. We prove that in this case the MCD has the iteration complexity bound $\mathcal{O}(1/\varepsilon)$. In the second part of the paper, we obtain new global optimality conditions for piecewise affine functions in terms of codifferentials. With the use of these conditions, we propose a modification of the MCD for minimizing piecewise affine functions (called the method of global codifferential descent) that does not use a line search and discards those "pieces" of the objective function that are no longer useful for the optimization process. We then prove that both the MCD and the proposed modification find a point of global minimum of a nonconvex piecewise affine function in a finite number of steps.
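
    As a rough illustration of the ideas at play (not the authors' algorithm), the sketch below minimizes a convex max-of-affine function, for which the codifferential reduces to the collection of affine pieces themselves: at each iterate it minimizes the exact piecewise affine model over a box around the current point via the standard LP epigraph reformulation. The problem data and all names are invented for the example.

        import numpy as np
        from scipy.optimize import linprog

        def max_affine(A, b, x):
            # f(x) = max_i (A[i] @ x + b[i]): a convex piecewise affine function
            return np.max(A @ x + b)

        def descent_step(A, b, x, radius=1.0):
            # Minimize the (exact) piecewise affine model of f over a box of the
            # given radius around x, via the LP epigraph reformulation:
            #   min t   s.t.   A @ (x + d) + b <= t,   |d_j| <= radius
            m, n = A.shape
            c = np.zeros(n + 1)
            c[-1] = 1.0                                 # variables are (d, t)
            A_ub = np.hstack([A, -np.ones((m, 1))])     # A @ d - t <= -(A @ x + b)
            b_ub = -(A @ x + b)
            bounds = [(-radius, radius)] * n + [(None, None)]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
            return x + res.x[:n]

        # Toy problem: f(x) = |x_1| + |x_2| written as a max of four affine pieces.
        A = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
        b = np.zeros(4)
        x = np.array([3.0, -2.0])
        while True:
            x_new = descent_step(A, b, x)
            if max_affine(A, b, x_new) >= max_affine(A, b, x) - 1e-12:
                break                                   # no further descent
            x = x_new
        print(x, max_affine(A, b, x))                   # -> (0, 0), value 0

    Because the piecewise affine model is exact, each LP step makes finite progress and the loop terminates after finitely many steps, which mirrors the finite-termination behaviour described in the abstract.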

    Adaptive exact penalty DC algorithms for nonsmooth DC optimization problems with equality and inequality constraints

    We propose and study two DC (difference of convex functions) algorithms based on exact penalty functions for solving nonsmooth DC optimization problems with nonsmooth DC equality and inequality constraints. Both methods employ adaptive penalty updating strategies to improve their performance. The first method is based on exact penalty functions with an individual penalty parameter for each constraint (i.e. a multidimensional penalty parameter) and utilizes a primal-dual approach to penalty updates. The second method is based on the so-called steering exact penalty methodology and relies on solving auxiliary convex subproblems to determine a suitable value of the penalty parameter. We present a detailed convergence analysis of both methods and give several simple numerical examples highlighting the peculiarities of the two different penalty updating strategies studied in this paper.
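
    To make the penalty mechanics concrete, here is a minimal sketch of a DCA-style loop with an exact penalty and a naive multiplicative penalty update, on an invented one-dimensional toy problem. It is not the primal-dual or steering strategy from the paper; all names and constants are illustrative.

        import numpy as np

        # Toy DC problem: minimize f(x) = x**2 - 2|x|  subject to  1 - x <= 0,
        # with DC structure f = g - h, g(x) = x**2, h(x) = 2|x| (both convex).
        # Exact penalty: F_c(x) = g(x) + c * max(0, 1 - x) - h(x); still DC,
        # since the penalty term is convex and is absorbed into the convex part.

        def dca_subproblem(y, c):
            # Minimize the convex majorant  x**2 + c * max(0, 1 - x) - y * x
            # (h linearized at the current iterate); 1-D, so solved in closed
            # form by checking each smooth piece's stationary point and the kink.
            candidates = [1.0]
            if y / 2.0 >= 1.0:
                candidates.append(y / 2.0)          # piece x >= 1, penalty inactive
            if (y + c) / 2.0 <= 1.0:
                candidates.append((y + c) / 2.0)    # piece x <= 1, penalty active
            obj = lambda x: x * x + c * max(0.0, 1.0 - x) - y * x
            return min(candidates, key=obj)

        x, c = -2.0, 1.0
        for _ in range(100):
            y = 2.0 * np.sign(x)                    # a subgradient of h(x) = 2|x|
            x_new = dca_subproblem(y, c)
            if max(0.0, 1.0 - x_new) > 1e-8:        # still infeasible:
                c *= 10.0                           # increase the penalty parameter
            elif abs(x_new - x) < 1e-10:
                break                               # feasible and stationary
            x = x_new
        print(x, x * x - 2.0 * abs(x))              # -> 1.0, -1.0

    The loop converges to the constrained minimizer x = 1 once the penalty parameter is large enough for the penalty function to be exact; the paper's adaptive strategies aim to find such a parameter value more economically than blind multiplicative increases.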

    DC Semidefinite Programming and Cone Constrained DC Optimization

    In the first part of this paper, we discuss possible extensions of the main ideas and results of constrained DC optimization to the case of nonlinear semidefinite programming problems (i.e. problems with matrix constraints). To this end, we analyse two different approaches to the definition of DC matrix-valued functions (namely, order-theoretic and componentwise), study some properties of convex and DC matrix-valued functions, and demonstrate how to compute DC decompositions of some nonlinear semidefinite constraints appearing in applications. We also compute a DC decomposition of the maximal eigenvalue of a DC matrix-valued function, which can be used to reformulate DC semidefinite constraints as DC inequality constraints. In the second part of the paper, we develop a general theory of cone constrained DC optimization problems. Namely, we obtain local optimality conditions for such problems and study an extension of the DC algorithm (the convex-concave procedure) to the case of general cone constrained DC optimization problems. We analyse the global convergence of this method and present a detailed study of a version of the DCA utilising exact penalty functions. In particular, we provide two types of sufficient conditions for the convergence of this method to a feasible and critical point of a cone constrained DC optimization problem from an infeasible starting point.
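
    The scalar reformulation of a semidefinite constraint can be checked numerically: M(x) ⪯ 0 holds exactly when the maximal eigenvalue of M(x) is nonpositive, and the maximal eigenvalue is convex and monotone with respect to the Loewner order. The snippet below illustrates this on a made-up affine matrix map (the matrices P, Q and the test points are arbitrary, not from the paper).

        import numpy as np

        def lam_max(M):
            # Largest eigenvalue of a symmetric matrix: convex in M and
            # Loewner-monotone, which is what makes the scalar reformulation work.
            return np.linalg.eigvalsh(M)[-1]

        # Hypothetical matrix-valued constraint M(x) <= 0 (Loewner order),
        # with M(x) = x_1 * P + x_2 * Q - I for fixed symmetric P, Q.
        P = np.array([[2.0, 1.0], [1.0, 2.0]])
        Q = np.array([[1.0, 0.0], [0.0, 3.0]])

        def M(x):
            return x[0] * P + x[1] * Q - np.eye(2)

        # The semidefinite constraint M(x) <= 0 is equivalent to the single
        # scalar inequality lam_max(M(x)) <= 0.
        x = np.array([0.1, 0.2])
        print(lam_max(M(x)) <= 0.0)     # True: this point is feasible

        # Midpoint spot-check of the convexity of lam_max:
        A1, B1 = M(np.array([0.5, 0.1])), M(np.array([-0.3, 0.4]))
        assert lam_max(0.5 * (A1 + B1)) <= 0.5 * (lam_max(A1) + lam_max(B1)) + 1e-12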

    Aggregate codifferential method for nonsmooth DC optimization

    A new algorithm is developed, based on the concept of the codifferential, for minimizing the difference of convex nonsmooth functions. Since the computation of the whole codifferential is not always possible, we use a fixed number of elements from the codifferential to compute the search directions. The convergence of the proposed algorithm is proved. The efficiency of the algorithm is demonstrated by comparing it with the subgradient, the truncated codifferential, and the proximal bundle methods on nonsmooth optimization test problems.
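
    A hedged sketch of the flavour of such fixed-size schemes (not the algorithm from the paper): for a convex max-of-affine function, keep only a fixed number of the most active affine pieces as a bundle, take the min-norm element of their convex hull as an aggregate search direction, and backtrack. All data and names are invented, and a truncated bundle can stall before optimality, which is the difficulty that genuine aggregation rules are designed to work around.

        import numpy as np
        from scipy.optimize import minimize

        def min_norm_in_hull(G):
            # Min-norm element of conv{rows of G}:
            #   min ||lam @ G||^2   s.t.   lam >= 0,  sum(lam) = 1,
            # a small QP, solved here with SciPy's SLSQP for simplicity.
            m = G.shape[0]
            res = minimize(lambda lam: np.dot(lam @ G, lam @ G),
                           np.full(m, 1.0 / m), method="SLSQP",
                           bounds=[(0.0, 1.0)] * m,
                           constraints=[{"type": "eq",
                                         "fun": lambda lam: np.sum(lam) - 1.0}])
            return res.x @ G

        # f(x) = max_i (a_i @ x + b_i); the bundle keeps only the 2 most active
        # pieces at each iterate, as a stand-in for a fixed-size element rule.
        A = np.array([[1.0, 2.0], [-1.0, 1.0], [0.0, -1.0]])
        b = np.array([0.0, 0.5, 0.0])
        x = np.array([1.0, 1.0])
        for _ in range(100):
            vals = A @ x + b
            bundle = A[np.argsort(vals)[-2:]]   # gradients of the 2 top pieces
            g = min_norm_in_hull(bundle)        # aggregate search direction
            if np.linalg.norm(g) < 1e-8:
                break
            t, fx = 1.0, np.max(vals)           # simple backtracking on f
            while t > 1e-12 and np.max(A @ (x - t * g) + b) > fx - 1e-4 * t * (g @ g):
                t *= 0.5
            if t <= 1e-12:
                break                           # the truncated bundle has stalled
            x = x - t * g
        print(x, np.max(A @ x + b))             # true minimizer: (0.3, -0.1), f = 0.1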
