510 research outputs found

    An inexact conic bundle variant suited to column generation

    Final version to appear in Mathematical Programming; available at www.springerlink.com, DOI 10.1007/s10107-007-0187-4. We give a bundle method for constrained convex optimization. Instead of using penalty functions, it shifts iterates towards feasibility by way of a Slater point, assumed to be known. Besides, the method accepts an oracle delivering function and subgradient values with unknown accuracy. Our approach is motivated by a number of applications in column generation, in which constraints are positively homogeneous -- so that 0 is a natural Slater point -- and an exact oracle may be time consuming. Finally, our convergence analysis employs arguments which have been little used so far in the bundle community. The method is illustrated on a number of cutting-stock problems.
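    One plausible reading of the feasibility shift (a sketch only; the paper's exact rule may differ): assume the feasible set is $\{x : c(x) \le 0\}$ for a convex constraint $c$ and the Slater point $\bar{x}$ satisfies $c(\bar{x}) < 0$. If a trial point $x_k$ violates the constraint, $c(x_k) > 0$, then convexity yields a feasible shifted point
        $$x_k^+ = (1 - t_k)\,\bar{x} + t_k\,x_k, \qquad t_k = \frac{-c(\bar{x})}{c(x_k) - c(\bar{x})} \in (0,1),$$
    since $c(x_k^+) \le (1 - t_k)\,c(\bar{x}) + t_k\,c(x_k) = 0$.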

    Standard Bundle Methods: Untrusted Models and Duality

    We review the basic ideas underlying the vast family of algorithms for nonsmooth convex optimization known as "bundle methods". In a nutshell, these approaches are based on constructing models of the function, but lack of continuity of first-order information implies that these models cannot be trusted, not even close to an optimum. Therefore, many different forms of stabilization have been proposed to try to avoid being led to areas where the model is so inaccurate as to result in almost useless steps. In the development of these methods, duality arguments are useful, if not outright necessary, to better analyze the behaviour of the algorithms. Also, in many relevant applications the function at hand is itself a dual one, so that duality allows one to map algorithmic concepts and results back into a "primal space" where they can be exploited; in turn, structure in that space can be exploited to improve the algorithms' behaviour, e.g. by developing better models. We present an updated picture of the many developments around the basic idea along at least three different axes: form of the stabilization, form of the model, and approximate evaluation of the function.
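    To make the basic scheme concrete, here is a minimal proximal bundle sketch in Python (a generic illustration, not any specific method from the survey): the cutting-plane model is stabilized by a quadratic term around the current center, the trial point minimizes the stabilized model, and a descent test decides between a serious step (move the center) and a null step (only enrich the model). The example oracle, parameters and tolerances are ours.

        import numpy as np
        from scipy.optimize import minimize

        def oracle(x):
            # example oracle: f(x) = max_i |x_i|, returning the value and one subgradient
            f = np.max(np.abs(x))
            i = int(np.argmax(np.abs(x)))
            g = np.zeros_like(x)
            g[i] = np.sign(x[i])
            return f, g

        def prox_bundle(x0, t=1.0, m=0.1, iters=30):
            center = np.array(x0, dtype=float)
            f_c, g_c = oracle(center)
            bundle = [(center.copy(), f_c, g_c)]          # (point, value, subgradient) triples
            for _ in range(iters):
                # stabilized master in epigraph form:
                #   min_{r,x}  r + ||x - center||^2 / (2t)   s.t.  r >= f_i + g_i.(x - y_i)
                def obj(z):
                    return z[0] + np.sum((z[1:] - center) ** 2) / (2.0 * t)
                cons = [{'type': 'ineq',
                         'fun': (lambda z, y=y, f=f, g=g: z[0] - f - g @ (z[1:] - y))}
                        for (y, f, g) in bundle]
                z = minimize(obj, np.concatenate(([f_c], center)),
                             constraints=cons, method='SLSQP').x
                trial, model_val = z[1:], z[0]
                predicted = f_c - model_val               # decrease promised by the model
                if predicted <= 1e-8:
                    break                                 # model says we are (nearly) optimal
                f_t, g_t = oracle(trial)
                if f_c - f_t >= m * predicted:            # serious step: accept the trial point
                    center, f_c = trial, f_t
                bundle.append((trial, f_t, g_t))          # null or serious: refine the model
            return center

        print(prox_bundle([3.0, -2.0]))                   # iterates approach the minimizer at the origin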

    Inexact Stabilized Benders' Decomposition Approaches, with Application to Chance-Constrained Problems with Finite Support

    We explore modifications of the standard cutting-plane approach for minimizing a convex nondifferentiable function, given by an oracle, over a combinatorial set, which is the basis of the celebrated (generalized) Benders' decomposition approach. Specifically, we combine stabilization (in two ways: via a trust region in the L1 norm, or via a level constraint) with inexact function computation (solution of the subproblems). Managing both features simultaneously requires a nontrivial convergence analysis; we provide it under very weak assumptions on the handling of the two parameters (target and accuracy) controlling the informative on-demand inexact oracle corresponding to the subproblem, strengthening earlier known results. This yields new versions of Benders' decomposition, whose numerical performance is assessed on a class of hybrid robust and chance-constrained problems that involve a random variable with an underlying discrete distribution, are convex in the decision variable, but have neither separable nor linear probabilistic constraints. The numerical results show that the approach has potential, especially for instances that are difficult to solve with standard techniques.
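    As a sketch of the two stabilizations mentioned above (the notation is ours, not the paper's): with $\check f_k$ the cutting-plane model built from the possibly inexact subproblem solutions, $\hat x_k$ the current stability center and $X$ the combinatorial set, the trust-region master and the level master read, respectively,
        $$\min_{x \in X} \check f_k(x) \ \ \text{s.t.}\ \ \|x - \hat x_k\|_1 \le \Delta_k, \qquad\qquad \min_{x \in X} \|x - \hat x_k\|^2 \ \ \text{s.t.}\ \ \check f_k(x) \le \ell_k,$$
    where $\Delta_k$ is a trust-region radius and $\ell_k$ a level parameter chosen between the best known objective value and a lower bound.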

    A decomposition procedure based on approximate Newton directions

    The efficient solution of large-scale linear and nonlinear optimization problems may require exploiting any special structure in them in an efficient manner. We describe and analyze some cases in which this special structure can be used with very little cost to obtain search directions from decomposed subproblems. We also study how to correct these directions using (decomposable) preconditioned conjugate gradient methods to ensure local convergence in all cases. The choice of appropriate preconditioners follows in a natural manner from the structure of the problem. Finally, we conduct computational experiments to compare the resulting procedures with direct methods, as well as to study the impact of different preconditioner choices.
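    The correction step alluded to above is, in spirit, a preconditioned conjugate gradient solve started from the cheap decomposed direction. The generic PCG routine below illustrates the mechanism only; the paper's block structure and preconditioner choices are not reproduced, and the Jacobi-style preconditioner and toy system are ours.

        import numpy as np

        def pcg(A, b, M_inv, x0, tol=1e-8, maxit=200):
            """Solve A x = b starting from an approximate direction x0,
            with M_inv(r) applying a preconditioner (e.g. block-diagonal)."""
            x = x0.copy()
            r = b - A @ x
            z = M_inv(r)
            p = z.copy()
            rz = r @ z
            for _ in range(maxit):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x = x + alpha * p
                r = r - alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv(r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # usage: correct a crude "decomposed" guess toward the Newton direction
        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        x0 = b / np.diag(A)                               # cheap diagonal approximation
        x = pcg(A, b, M_inv=lambda r: r / np.diag(A), x0=x0)
        print(x, np.allclose(A @ x, b))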

    Opt: A Domain Specific Language for Non-linear Least Squares Optimization in Graphics and Imaging

    Many graphics and vision problems can be expressed as non-linear least squares optimizations of objective functions over visual data, such as images and meshes. The mathematical descriptions of these functions are extremely concise, but their implementation in real code is tedious, especially when optimized for real-time performance on modern GPUs in interactive applications. In this work, we propose a new language, Opt (available under http://optlang.org), for writing these objective functions over image- or graph-structured unknowns concisely and at a high level. Our compiler automatically transforms these specifications into state-of-the-art GPU solvers based on Gauss-Newton or Levenberg-Marquardt methods. Opt can generate different variations of the solver, so users can easily explore tradeoffs in numerical precision, matrix-free methods, and solver approaches. In our results, we implement a variety of real-world graphics and vision applications. Their energy functions are expressible in tens of lines of code and yield highly optimized GPU solver implementations. These solvers have performance competitive with the best published hand-tuned, application-specific GPU solvers, and orders of magnitude beyond a general-purpose auto-generated solver.
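    For readers unfamiliar with the solver family Opt targets, the following is a generic Gauss-Newton iteration in Python. It is not Opt syntax (see http://optlang.org for that), and the toy exponential-fit problem is ours.

        import numpy as np

        def gauss_newton(residual, jacobian, x, iters=20):
            for _ in range(iters):
                r = residual(x)
                J = jacobian(x)
                # linearize r(x + dx) ~ r + J dx and solve the normal equations J^T J dx = -J^T r
                dx = np.linalg.solve(J.T @ J, -J.T @ r)
                x = x + dx
                if np.linalg.norm(dx) < 1e-10:
                    break
            return x

        # fit y = exp(a * t) to samples of exp(0.7 * t); residuals r_i = exp(a * t_i) - y_i
        t = np.linspace(0.0, 1.0, 20)
        y = np.exp(0.7 * t)
        res = lambda a: np.exp(a[0] * t) - y
        jac = lambda a: (t * np.exp(a[0] * t)).reshape(-1, 1)
        print(gauss_newton(res, jac, np.array([0.0])))    # converges to approximately [0.7]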

    Regularized optimization methods for convex MINLP problems

    We propose regularized cutting-plane methods for solving mixed-integer nonlinear programming problems with nonsmooth convex objective and constraint functions. The given methods iteratively search for trial points in certain localizer sets, constructed by employing linearizations of the involved functions. New trial points can be chosen in several ways; for instance, by minimizing a regularized cutting-plane model if functions are costly. When dealing with hard-to-evaluate functions, the goal is to solve the optimization problem by performing as few function evaluations as possible. Numerical experiments comparing the proposed algorithms with classical methods in this area show the effectiveness of our approach.
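    One way to write the regularized master problem suggested above (our notation, not necessarily the paper's): with $\check f_k$ and $\check c_k$ the cutting-plane models of the objective and constraints, $\hat x_k$ the current stability center, and $I$ the index set of integer variables, a new trial point solves
        $$x_{k+1} \in \arg\min_x \Big\{ \check f_k(x) + \tfrac{\mu_k}{2}\,\|x - \hat x_k\|^2 \;:\; \check c_k(x) \le 0,\ x \in X,\ x_j \in \mathbb{Z}\ (j \in I) \Big\},$$
    where the constraint set plays the role of the localizer set mentioned in the abstract.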

    Non-smooth Non-convex Bregman Minimization: Unification and new Algorithms

    We propose a unifying algorithm for non-smooth non-convex optimization. The algorithm approximates the objective function by a convex model function and finds an approximate (Bregman) proximal point of the convex model. This approximate minimizer of the model function yields a descent direction, along which the next iterate is found. Complemented with an Armijo-like line search strategy, we obtain a flexible algorithm for which we prove (subsequential) convergence to a stationary point under weak assumptions on the growth of the model function error. Special instances of the algorithm with a Euclidean distance function are, for example, Gradient Descent, Forward-Backward Splitting, and ProxDescent, without the common requirement of a "Lipschitz continuous gradient". In addition, we consider a broad class of Bregman distance functions (generated by Legendre functions) replacing the Euclidean distance. The algorithm has a wide range of applications including many linear and non-linear inverse problems in signal/image processing and machine learning.
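    In symbols (a sketch consistent with the abstract, not a verbatim statement of the algorithm): around the current iterate $x_k$ one picks a convex model $f_{x_k}$ of the objective and computes an approximate Bregman proximal point
        $$\hat x_k \approx \arg\min_x \; f_{x_k}(x) + \tfrac{1}{\tau_k}\, D_h(x, x_k), \qquad D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y\rangle,$$
    with $h$ a Legendre function generating the Bregman distance, and then line-searches along $\hat x_k - x_k$. With $h = \tfrac12\|\cdot\|^2$ and the linear model $f_{x_k}(x) = f(x_k) + \langle \nabla f(x_k), x - x_k\rangle$, the proximal point is the usual gradient step, recovering Gradient Descent.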

    A decomposition procedure based on approximate Newton directions

    The original publication is available at www.springerlink.com. The efficient solution of large-scale linear and nonlinear optimization problems may require exploiting any special structure in them in an efficient manner. We describe and analyze some cases in which this special structure can be used with very little cost to obtain search directions from decomposed subproblems. We also study how to correct these directions using (decomposable) preconditioned conjugate gradient methods to ensure local convergence in all cases. The choice of appropriate preconditioners follows in a natural manner from the structure of the problem. Finally, we conduct computational experiments to compare the resulting procedures with direct methods.