    Regularized optimization methods for convex MINLP problems

    We propose regularized cutting-plane methods for solving mixed-integer nonlinear programming problems with nonsmooth convex objective and constraint functions. The methods iteratively search for trial points in certain localizer sets, constructed from linearizations of the involved functions. New trial points can be chosen in several ways; for instance, by minimizing a regularized cutting-plane model when function evaluations are costly. When dealing with hard-to-evaluate functions, the goal is to solve the optimization problem with as few function evaluations as possible. Numerical experiments comparing the proposed algorithms with classical methods in this area show the effectiveness of our approach.
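
    For intuition, here is a minimal, generic sketch of a regularized cutting-plane iteration of the kind described above. It is not the authors' algorithm (which also handles integer variables and nonlinear constraints); it assumes the cvxpy package for the master problem, and all names in it are illustrative.

    ```python
    # Sketch of a regularized cutting-plane step: minimize the cutting-plane
    # model of f plus a proximal term around the current stability center.
    import cvxpy as cp
    import numpy as np

    def regularized_master(points, values, subgrads, center, mu):
        """Solve min_x max_i [f(x_i) + g_i^T (x - x_i)] + (mu/2)||x - center||^2."""
        x = cp.Variable(center.size)
        r = cp.Variable()  # epigraph variable for the cutting-plane model
        cuts = [r >= v + g @ (x - p) for p, v, g in zip(points, values, subgrads)]
        cp.Problem(cp.Minimize(r + (mu / 2) * cp.sum_squares(x - center)), cuts).solve()
        return x.value

    # Toy run on f(x) = |x_1| + |x_2|, whose subgradient is the sign vector:
    f, g = (lambda x: np.abs(x).sum()), (lambda x: np.sign(x))
    pts, center = [np.array([1.0, -2.0])], np.array([1.0, -2.0])
    for _ in range(10):
        x_new = regularized_master(pts, [f(p) for p in pts], [g(p) for p in pts], center, mu=1.0)
        pts.append(x_new)
        if f(x_new) < f(center):  # descent test: move the stability center
            center = x_new
    print(center)  # approaches the minimizer (0, 0)
    ```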

    Standard Bundle Methods: Untrusted Models and Duality

    We review the basic ideas underlying the vast family of algorithms for nonsmooth convex optimization known as "bundle methods". In a nutshell, these approaches are based on constructing models of the function, but the lack of continuity of first-order information implies that these models cannot be trusted, not even close to an optimum. Therefore, many different forms of stabilization have been proposed to avoid being led to areas where the model is so inaccurate as to result in almost useless steps. In the development of these methods, duality arguments are useful, if not outright necessary, to better analyze the behaviour of the algorithms. Moreover, in many relevant applications the function at hand is itself a dual one, so that duality allows algorithmic concepts and results to be mapped back into a "primal space" where they can be exploited; in turn, structure in that space can be exploited to improve the algorithms' behaviour, e.g. by developing better models. We present an updated picture of the many developments around the basic idea along at least three different axes: the form of the stabilization, the form of the model, and approximate evaluation of the function.
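
    For reference, the three classical stabilization forms surveyed in this line of work can be summarized in generic bundle-method notation (a standard textbook sketch, not notation taken from the paper), with cutting-plane model \hat f_k, stability center \hat x_k, proximal weight \mu_k, trust-region radius \delta_k, and level \ell_k:

    ```latex
    \begin{align*}
      \text{proximal:}     \quad & x_{k+1} \in \operatorname*{arg\,min}_x \; \hat f_k(x) + \tfrac{\mu_k}{2}\,\lVert x - \hat x_k \rVert^2 \\
      \text{trust region:} \quad & x_{k+1} \in \operatorname*{arg\,min}_x \; \hat f_k(x) \quad \text{s.t. } \lVert x - \hat x_k \rVert \le \delta_k \\
      \text{level:}        \quad & x_{k+1} \in \operatorname*{arg\,min}_x \; \lVert x - \hat x_k \rVert^2 \quad \text{s.t. } \hat f_k(x) \le \ell_k
    \end{align*}
    ```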

    A unified analysis of a class of proximal bundle methods for solving hybrid convex composite optimization problems

    This paper presents a proximal bundle (PB) framework based on a generic bundle update scheme for solving the hybrid convex composite optimization (HCCO) problem and establishes a common iteration-complexity bound for any variant belonging to it. As a consequence, iteration-complexity bounds for three PB variants based on different bundle update schemes are obtained in the HCCO context for the first time and in a unified manner. While two of the PB variants are universal (i.e., their implementations do not require parameters associated with the HCCO instance), the third, newly proposed one (as far as the authors are aware) is not, but has the advantage that it generates simple, namely one-cut, bundle models. The paper also presents a universal adaptive PB variant (which is not necessarily an instance of the framework) based on one-cut models and shows that its iteration-complexity is the same as that of the two aforementioned universal PB variants.
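
    The one-cut idea admits a compact statement: rather than keeping every past linearization, the model is the maximum of a single aggregate affine function and the newest cut. A generic sketch of the standard construction follows (the paper's precise update rule may differ):

    ```latex
    % \bar f_k is the aggregate affine function produced by the previous master
    % problem, and g_{k+1} \in \partial f(x_{k+1}) is the newest subgradient.
    \hat f_{k+1}(x) \;=\; \max\bigl\{\, \bar f_k(x),\; f(x_{k+1}) + \langle g_{k+1},\, x - x_{k+1} \rangle \,\bigr\}
    ```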

    Optimal Convergence Rates for the Proximal Bundle Method

    We study convergence rates of the classic proximal bundle method for a variety of nonsmooth convex optimization problems. We show that, without any modification, this algorithm adapts to converge faster in the presence of smoothness or a Hölder growth condition. Our analysis reveals that with a constant stepsize, the bundle method is adaptive, yet it exhibits suboptimal convergence rates. We overcome this shortcoming by proposing nonconstant stepsize schemes with optimal rates. These schemes use function information, such as growth constants, which might be prohibitive in practice. We provide a parallelizable variant of the bundle method that can be applied without prior knowledge of function parameters while maintaining near-optimal rates; its practical impact is limited, since it incurs a (parallelizable) log factor in the complexity. These results improve on the scarce existing convergence rates and provide a unified analysis approach across problem settings and algorithmic details. Numerical experiments support our findings.
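
    For concreteness, a Hölder growth condition of the kind referred to here is usually written as follows (standard form; the paper's constants and exponent conventions may differ):

    ```latex
    % Growth of order p \ge 1 with modulus \mu > 0 around the solution set X^*;
    % p = 1 is a sharp minimum, p = 2 is quadratic growth.
    f(x) - \min f \;\ge\; \mu \, \operatorname{dist}(x, X^*)^{p} \qquad \text{for all } x
    ```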

    Uncontrolled inexact information within bundle methods

    We consider convex nonsmooth optimization problems where additional information with uncontrolled accuracy is readily available. This is often the case when the objective function is itself the output of an optimization solver, as in large-scale energy optimization problems tackled by decomposition. In this paper, we study how to incorporate the uncontrolled linearizations into (proximal and level) bundle algorithms with a view to generating better iterates and possibly accelerating the methods. We provide a convergence analysis of the algorithms using uncontrolled linearizations, and we present numerical illustrations showing that they indeed speed up the resolution of two stochastic optimization problems coming from energy optimization (two-stage linear problems and chance-constrained problems in reservoir management).
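
    A convenient way to picture such uncontrolled information is through inexact linearizations: at a point y, the extra oracle returns a value and a subgradient defining an affine minorant that is valid only up to an unknown error (a generic inexact-oracle sketch, not the paper's exact model):

    ```latex
    % The pair (f_y, g_y) returned at y satisfies, for some unknown \eta_y \ge 0,
    f(x) \;\ge\; f_y + \langle g_y,\, x - y \rangle - \eta_y \qquad \text{for all } x
    ```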

    An Oracle-Structured Bundle Method for Distributed Optimization

    We consider the problem of minimizing a function that is a sum of convex agent functions plus a convex common public function that couples them. The agent functions can only be accessed via a subgradient oracle; the public function is assumed to be structured and expressible in a domain-specific language (DSL) for convex optimization. We focus on the case where evaluating the agent oracles can require significant effort, which justifies solution methods that carry out significant computation in each iteration. We propose a cutting-plane or bundle-type method for the distributed optimization problem, which has a number of advantages over other methods compatible with this access model, such as proximal subgradient methods: it has very few parameters that need to be tuned; it often produces a reasonable approximate solution in just a few tens of iterations; and it tolerates agent failures. This paper is accompanied by an open-source package that implements the proposed method, available at https://github.com/cvxgrp/OSBDO.
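
    The flavor of the master problem is easy to sketch: per-agent epigraph variables collect the cuts returned by the subgradient oracles, while the public function is modeled directly. The following is a hypothetical illustration assuming cvxpy, not the OSBDO package's actual API (see the repository for that); the quadratic term h is a stand-in for the structured public function.

    ```python
    import cvxpy as cp
    import numpy as np

    def master_step(agent_bundles, center, mu):
        """agent_bundles[i] is a list of cuts (point p, value v, subgradient g)
        for agent i's oracle function; returns the next trial point."""
        x = cp.Variable(center.size)
        epi = cp.Variable(len(agent_bundles))  # one epigraph variable per agent
        cuts = [epi[i] >= v + g @ (x - p)
                for i, bundle in enumerate(agent_bundles)
                for (p, v, g) in bundle]
        h = cp.sum_squares(x)  # stand-in for the structured public function
        obj = cp.sum(epi) + h + (mu / 2) * cp.sum_squares(x - center)
        cp.Problem(cp.Minimize(obj), cuts).solve()
        return x.value

    # Two agents with f_i(x) = |a_i^T x|, each contributing one initial cut:
    a = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
    x0 = np.array([2.0, -1.0])
    bundles = [[(x0, abs(ai @ x0), np.sign(ai @ x0) * ai)] for ai in a]
    print(master_step(bundles, center=x0, mu=1.0))
    ```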

    The method of codifferential descent for convex and global piecewise affine optimization

    The class of nonsmooth codifferentiable functions was introduced by Professor V. F. Demyanov in the late 1980s. He also proposed a method for minimizing these functions called the method of codifferential descent (MCD). However, until now almost no theoretical results have been known about the performance of this method on particular classes of nonsmooth optimization problems. In the first part of the paper, we study the performance of the method of codifferential descent on a class of nonsmooth convex functions satisfying some regularity assumptions, which in the smooth case reduce to Lipschitz continuity of the gradient. We prove that in this case the MCD has the iteration complexity bound O(1/ε). In the second part of the paper, we obtain new global optimality conditions for piecewise affine functions in terms of codifferentials. Using these conditions, we propose a modification of the MCD for minimizing piecewise affine functions (called the method of global codifferential descent) that does not use a line search and discards those "pieces" of the objective function that are no longer useful for the optimization process. We then prove that both the MCD and the proposed modification find a point of global minimum of a nonconvex piecewise affine function in a finite number of steps.
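
    A useful background fact for the second part: every piecewise affine function can be written as a difference of two convex max-affine functions, which is exactly the kind of structure codifferential calculus manipulates (a standard representation, not the paper's notation):

    ```latex
    % For finite index sets I, J and affine pieces (a_i, b_i), (c_j, d_j):
    f(x) \;=\; \max_{i \in I} \bigl( \langle a_i, x \rangle + b_i \bigr) \;-\; \max_{j \in J} \bigl( \langle c_j, x \rangle + d_j \bigr)
    ```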

    A Cutting Plane and Level Stabilization Bundle Method with Inexact Data for Minimizing Nonsmooth Nonconvex Functions

    Under the condition that the values of the objective function and its subgradients are computed only approximately, we introduce a cutting-plane and level bundle method for minimizing nonsmooth nonconvex functions by combining the cutting-plane method with the ideas of proximity control and a level constraint. The proposed algorithm is based on constructing both a lower and an upper polyhedral approximation model of the objective function, and it computes new iterates by solving a subproblem in which the model is employed not only in the objective function but also in the constraints. Compared with other proximal bundle methods, the new variant updates the lower bound on the optimal value, providing an additional useful stopping test based on the optimality gap. Another merit is that our algorithm distinguishes between affine pieces that exhibit convex and concave behavior relative to the current iterate. Convergence to a stationary point (in a suitable sense) is proved under relatively weak conditions.
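
    The two models play complementary roles: the lower (cutting-plane) model enters the level constraint, while the gap between the upper and lower bounds drives the stopping test. A sketch of the convex-case level step this construction builds on (generic notation, not the paper's exact subproblem):

    ```latex
    % Lower model \check f_k, bounds f^{low}_k \le f^{up}_k, level parameter \lambda \in (0, 1):
    \ell_k = f^{low}_k + \lambda \,\bigl( f^{up}_k - f^{low}_k \bigr), \qquad
    x_{k+1} \in \operatorname*{arg\,min}_x \bigl\{\, \lVert x - \hat x_k \rVert^2 : \check f_k(x) \le \ell_k \,\bigr\}
    % The method stops once the gap f^{up}_k - f^{low}_k falls below a tolerance.
    ```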