
    A Parametric Simplex Algorithm for Linear Vector Optimization Problems

    In this paper, a parametric simplex algorithm for solving linear vector optimization problems (LVOPs) is presented. This algorithm can be seen as a variant of the multi-objective simplex (Evans-Steuer) algorithm [12]. Unlike that algorithm, the proposed one works in the parameter space and does not aim to find the set of all efficient solutions. Instead, it finds a solution in the sense of Loehne [16], that is, a subset of efficient solutions that allows one to generate the whole efficient frontier. In that sense, it can also be seen as a generalization of the parametric self-dual simplex algorithm, which was originally designed for single-objective linear optimization problems and was modified by Ruszczynski and Vanderbei [21] to solve bounded two-objective LVOPs with the positive orthant as the ordering cone. The algorithm proposed here works in any dimension, for any solid pointed polyhedral ordering cone C, and for bounded as well as unbounded problems. Numerical results are provided to compare the proposed algorithm with an objective-space-based LVOP algorithm (the Benson algorithm in [13]), which also provides a solution in the sense of [16], and with the Evans-Steuer algorithm [12]. The results show that for non-degenerate problems the proposed algorithm outperforms the Benson algorithm and is on par with the Evans-Steuer algorithm. For highly degenerate problems Benson's algorithm [13] outperforms the simplex-type algorithms; even for these problems, however, the parametric simplex algorithm is computationally much more efficient than the Evans-Steuer algorithm.
    Comment: 27 pages, 4 figures, 5 tables
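
    For reference, the problem class treated above can be stated in a generic form (the notation here is mine, not taken from the paper). An LVOP asks to minimize a vector of linear objectives with respect to the partial order induced by the ordering cone:

        \begin{align*}
        \text{minimize } \; & Px \quad \text{w.r.t. } \leq_C \\
        \text{subject to } \; & Ax \geq b,
        \end{align*}

    where $P \in \mathbb{R}^{q \times n}$ stacks the $q$ linear objectives, $C \subseteq \mathbb{R}^q$ is a solid pointed polyhedral cone, and $y^1 \leq_C y^2$ means $y^2 - y^1 \in C$. For $q = 1$ and $C = \mathbb{R}_+$ this reduces to an ordinary linear program.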

    Polynomial Optimization with Applications to Stability Analysis and Control - Alternatives to Sum of Squares

    In this paper, we explore the merits of various algorithms for polynomial optimization problems, focusing on alternatives to sum of squares programming. While we touch on the advantages and disadvantages of Quantifier Elimination, Reformulation-Linearization Techniques, Blossoming and Groebner basis methods, our main focus is on algorithms defined by Polya's theorem, Bernstein's theorem and Handelman's theorem. We first formulate polynomial optimization problems as verifying the feasibility of semi-algebraic sets. Then, we discuss how Polya's algorithm, Bernstein's algorithm and Handelman's algorithm reduce the intractable problem of feasibility of semi-algebraic sets to linear and/or semi-definite programming. We apply these algorithms to different problems in robust stability analysis and stability of nonlinear dynamical systems. As one contribution of this paper, we apply Polya's algorithm to the problem of H_infinity control of systems with parametric uncertainty. Numerical examples are provided to compare the accuracy of these algorithms with other polynomial optimization algorithms in the literature.
    Comment: AIMS Journal of Discrete and Continuous Dynamical Systems - Series
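
    To illustrate the type of reduction referred to above, consider Handelman's theorem on a polytope (the notation and the degree bound are mine, not the paper's). If $\Gamma = \{x : w_i^T x + u_i \geq 0, \; i = 1, \dots, K\}$ is compact and the polynomial $f$ is strictly positive on $\Gamma$, then there exist scalars $b_\alpha \geq 0$, only finitely many nonzero, such that

        f(x) = \sum_{\alpha \in \mathbb{N}^K} b_\alpha \prod_{i=1}^{K} (w_i^T x + u_i)^{\alpha_i}.

    Fixing a degree bound $d$ on $|\alpha|$ and matching monomial coefficients on both sides yields linear equality constraints in the unknowns $b_\alpha \geq 0$, i.e. a feasibility LP; maximizing $\gamma$ such that $f - \gamma$ admits such a representation gives a lower bound on $\min_{x \in \Gamma} f(x)$ that improves as $d$ grows.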

    An Exponential Lower Bound on the Complexity of Regularization Paths

    For a variety of regularized optimization problems in machine learning, algorithms computing the entire solution path have been developed recently. Most of these problems are quadratic programs parameterized by a single regularization parameter, the Support Vector Machine (SVM) being the prime example. Solution path algorithms compute not just the solution for one particular value of the regularization parameter but the entire path of solutions, making the selection of an optimal parameter much easier. It has been assumed that these piecewise linear solution paths have only linear complexity, i.e. linearly many bends. We prove that for the support vector machine this complexity can be exponential in the number of training points in the worst case. More strongly, we construct a single instance of n input points in d dimensions for an SVM such that at least \Theta(2^{n/2}) = \Theta(2^d) distinct subsets of support vectors occur as the regularization parameter varies.
    Comment: Journal version, 28 pages, 5 figures
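
    The effect described above, the active set of support vectors changing as the regularization parameter moves, is easy to observe empirically with any off-the-shelf SVM solver. Below is a minimal sketch assuming scikit-learn is available; it merely illustrates the path concept and is not the worst-case construction from the paper.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(20, 2))                       # 20 random points in 2 dimensions
        y = np.sign(X[:, 0] + 0.3 * rng.normal(size=20))   # noisy, roughly linearly separable labels

        seen = set()
        for C in np.logspace(-3, 3, 200):                  # sweep the regularization parameter
            clf = SVC(C=C, kernel="linear").fit(X, y)
            seen.add(frozenset(clf.support_))              # record the current set of support vectors

        print(len(seen), "distinct support vector sets along the sampled path")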

    Learning by mirror averaging

    Given a finite collection of estimators or classifiers, we study the problem of model selection type aggregation, that is, we construct a new estimator or classifier, called the aggregate, which is nearly as good as the best among them with respect to a given risk criterion. We define our aggregate by a simple recursive procedure which solves an auxiliary stochastic linear programming problem related to the original nonlinear one and constitutes a special case of the mirror averaging algorithm. We show that the aggregate satisfies sharp oracle inequalities under some general assumptions. The results are applied to several problems including regression, classification and density estimation.
    Comment: Published at http://dx.doi.org/10.1214/07-AOS546 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
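
    For intuition, a bare-bones version of an exponentially weighted, averaged aggregate under squared loss might look as follows (the function name, the temperature beta, and the choice of loss are mine for illustration; the paper's procedure and assumptions are more general).

        import numpy as np

        def mirror_averaging(preds, y, beta=1.0):
            # preds: array of shape (n, M); column j holds the predictions of estimator j
            # y:     array of shape (n,) of observed responses
            # returns a weight vector in the simplex defining the aggregate
            n, M = preds.shape
            cum_loss = np.zeros(M)                  # cumulative squared losses of each estimator
            avg_weights = np.zeros(M)
            for t in range(n):
                w = np.exp(-beta * cum_loss)        # exponential weights based on past losses
                w /= w.sum()
                avg_weights += w                    # averaging step ("mirror averaging")
                cum_loss += (preds[t] - y[t]) ** 2
            return avg_weights / n

        # usage: weights = mirror_averaging(preds, y); the aggregate predicts preds_new @ weights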

    Lagrangean decomposition for large-scale two-stage stochastic mixed 0-1 problems

    In this paper we study solution methods for the dual problem corresponding to the Lagrangean Decomposition of two-stage stochastic mixed 0-1 models. We represent the two-stage stochastic mixed 0-1 problem by a splitting-variable representation of the deterministic equivalent model, where 0-1 and continuous variables appear at any stage. Lagrangean Decomposition is proposed for satisfying both the integrality constraints on the 0-1 variables and the non-anticipativity constraints. We compare the performance of four iterative algorithms based on dual Lagrangean Decomposition schemes, namely the Subgradient method, the Volume algorithm, the Progressive Hedging algorithm and the Dynamic Constrained Cutting Plane scheme. We test the conditions and properties of convergence on medium- and large-scale stochastic problems. Computational results are reported.
    Keywords: Progressive Hedging algorithm, Volume algorithm, Lagrangean decomposition, Subgradient method
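
    As a point of reference for the first of the four schemes, a generic subgradient ascent on the Lagrangean dual obtained by relaxing the non-anticipativity constraints might look as follows (the function and the step-size rule are placeholders of mine, not the paper's implementation).

        import numpy as np

        def subgradient_dual(solve_subproblems, mu0, iters=100, step0=1.0):
            # solve_subproblems(mu) is assumed to solve the scenario subproblems for
            # multipliers mu and return (dual_value, violation), where violation is the
            # residual of the relaxed non-anticipativity constraints, i.e. a subgradient
            # of the concave dual function at mu.
            mu = np.asarray(mu0, dtype=float)
            best = -np.inf
            for k in range(iters):
                value, violation = solve_subproblems(mu)
                best = max(best, value)                       # keep the best dual bound seen
                mu = mu + (step0 / (k + 1)) * violation       # diminishing step along the subgradient
            return best, mu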

    OSQP: An Operator Splitting Solver for Quadratic Programs

    We present a general-purpose solver for convex quadratic programs based on the alternating direction method of multipliers, employing a novel operator splitting technique that requires the solution of a quasi-definite linear system with the same coefficient matrix at almost every iteration. Our algorithm is very robust, placing no requirements on the problem data such as positive definiteness of the objective function or linear independence of the constraint functions. It can be configured to be division-free once an initial matrix factorization is carried out, making it suitable for real-time applications in embedded systems. In addition, our technique is the first operator splitting method for quadratic programs able to reliably detect primal and dual infeasible problems from the algorithm iterates. The method also supports factorization caching and warm starting, making it particularly efficient when solving parametrized problems arising in finance, control, and machine learning. Our open-source C implementation OSQP has a small footprint, is library-free, and has been extensively tested on many problem instances from a wide variety of application areas. It is typically ten times faster than competing interior-point methods, and sometimes much faster when factorization caching or warm starting is used. OSQP has already shown a large impact, with tens of thousands of users both in academia and in large corporations.
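
    For context, OSQP targets problems of the form minimize 0.5 x'Px + q'x subject to l <= Ax <= u. A minimal usage sketch through the solver's Python bindings (assuming the osqp package is installed; the problem data below are toy values):

        import numpy as np
        import scipy.sparse as sparse
        import osqp

        # minimize 0.5 x'Px + q'x   subject to   l <= Ax <= u
        P = sparse.csc_matrix([[4.0, 1.0], [1.0, 2.0]])
        q = np.array([1.0, 1.0])
        A = sparse.csc_matrix([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
        l = np.array([1.0, 0.0, 0.0])
        u = np.array([1.0, 0.7, 0.7])

        prob = osqp.OSQP()
        prob.setup(P, q, A, l, u, warm_start=True)   # the matrix factorization is carried out here
        res = prob.solve()
        print(res.info.status, res.x)

        # for parametrized problems: update the data in place and re-solve,
        # reusing the cached factorization when only the vectors change
        prob.update(q=np.array([2.0, 3.0]))
        res = prob.solve()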