
    Decomposition-Based Method for Sparse Semidefinite Relaxations of Polynomial Optimization Problems

    We consider polynomial optimization problems whose data exhibit a structured sparsity pattern. It has been shown in [1, 2] that the optimal solution of a polynomial programming problem with structured sparsity can be computed by solving a series of semidefinite relaxations that possess the same kind of sparsity. We aim at solving these relaxations with a decomposition-based method, which partitions the relaxations according to their sparsity pattern. The decomposition-based method that we propose is an extension to semidefinite programming of the Benders decomposition for linear programs [3].
    Polynomial optimization; Semidefinite programming; Sparse SDP relaxations; Benders decomposition
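
    The clique-wise structure that such sparse relaxations expose, and that a decomposition method can exploit, can be illustrated with a toy example. The sketch below (my own illustration with made-up data, not the paper's algorithm) checks the decomposed positive-semidefiniteness condition: instead of requiring one large matrix to be PSD, only the principal submatrices indexed by overlapping cliques of the sparsity pattern are constrained.

        # Minimal sketch, assuming a 5x5 moment-like matrix and two overlapping
        # cliques taken from a (chordal) sparsity pattern; all numbers are toy data.
        import numpy as np

        cliques = [[0, 1, 2], [2, 3, 4]]   # overlapping index sets from the sparsity pattern

        def clique_blocks_psd(X, cliques, tol=1e-9):
            """Decomposed condition: X restricted to each clique is PSD."""
            return all(np.linalg.eigvalsh(X[np.ix_(c, c)]).min() >= -tol for c in cliques)

        X = np.eye(5)
        X[0, 1] = X[1, 0] = 0.5
        X[3, 4] = X[4, 3] = 0.3

        print(clique_blocks_psd(X, cliques))   # True: each clique block is PSD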

    Linear Programming Relaxations of Quadratically Constrained Quadratic Programs

    We investigate the use of linear programming tools for solving semidefinite programming relaxations of quadratically constrained quadratic problems. Classes of valid linear inequalities are presented, including sparse PSD cuts and principal-minor PSD cuts. Computational results based on instances from the literature are presented.
    Comment: Published in IMA Volumes in Mathematics and its Applications, 2012, Volume 15
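
    A PSD cut of the kind referred to above can be separated from any candidate solution that violates positive semidefiniteness: an eigenvector v associated with a negative eigenvalue of the candidate matrix yields the valid linear inequality v^T X v >= 0. The sketch below (toy data, not the paper's cut-selection or sparsification rules) shows this basic separation step.

        # Minimal sketch of PSD-cut separation; Xhat is a hypothetical LP-relaxation solution.
        import numpy as np

        def psd_cut(Xhat, tol=1e-9):
            """Return (v, violation) for a violated cut v^T X v >= 0, or None if Xhat is PSD."""
            vals, vecs = np.linalg.eigh(Xhat)
            if vals[0] >= -tol:
                return None                      # Xhat already PSD: no cut needed
            v = vecs[:, 0]                       # eigenvector of the most negative eigenvalue
            return v, float(v @ Xhat @ v)        # v^T Xhat v < 0, while v^T X v >= 0 for every PSD X

        Xhat = np.array([[1.0, 2.0],
                         [2.0, 1.0]])            # eigenvalues -1 and 3, so not PSD
        print(psd_cut(Xhat))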

    On Semidefinite Programming Relaxations of the Travelling Salesman Problem (Replaced by DP 2008-96)

    AMS classification: 90C22, 20Cxx, 70-08
    traveling salesman problem; semidefinite programming; quadratic assignment problem

    A Polynomial Optimization Approach to Constant Rebalanced Portfolio Selection

    We address the multi-period portfolio optimization problem with a constant rebalancing strategy. This problem is formulated as a polynomial optimization problem (POP) using a mean-variance criterion. In order to solve POPs of high degree, we develop a cutting-plane algorithm based on semidefinite programming. Our algorithm can solve problems that cannot be handled by any known polynomial optimization solver.
    Multi-period portfolio optimization; Polynomial optimization problem; Constant rebalancing; Semidefinite programming; Mean-variance criterion
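
    The reason a constant-rebalancing model leads to a high-degree POP is that terminal wealth is a product of per-period portfolio returns, hence a degree-T polynomial in the fixed weight vector. The sketch below (made-up returns and toy sizes, not the paper's cutting-plane algorithm) makes this explicit.

        # Minimal sketch: with fixed weights w held each period, terminal wealth is a
        # degree-T polynomial in w. Gross returns below are hypothetical numbers.
        import sympy as sp

        T, n = 3, 2                                      # periods, assets (toy sizes)
        w = sp.symbols('w0 w1')                          # constant rebalancing weights
        R = [[1.05, 0.98], [0.97, 1.06], [1.02, 1.01]]   # hypothetical gross returns

        wealth = sp.Integer(1)
        for t in range(T):
            # each period, wealth is multiplied by the portfolio gross return w . R_t
            wealth *= sum(w[i] * R[t][i] for i in range(n))

        print(sp.expand(wealth))                         # degree-T polynomial in (w0, w1)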

    Equivalent relaxations of optimal power flow

    Several convex relaxations of the optimal power flow (OPF) problem have recently been developed using both bus injection models and branch flow models. In this paper, we prove relations among three convex relaxations: a semidefinite relaxation that computes a full matrix, a chordal relaxation based on a chordal extension of the network graph, and a second-order cone relaxation that computes the smallest partial matrix. We prove a bijection between the feasible sets of the OPF in the bus injection model and the branch flow model, establishing the equivalence of these two models and their second-order cone relaxations. Our results imply that, for radial networks, all these relaxations are equivalent and one should always solve the second-order cone relaxation. For mesh networks, the semidefinite relaxation is tighter than the second-order cone relaxation but requires heavier computational effort, while the chordal relaxation strikes a good balance. Simulations are used to illustrate these results.
    Comment: 12 pages, 7 figures
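
    The second-order cone relaxation mentioned above constrains only the entries of the partial matrix that correspond to network lines: for each line (i, j), the 2x2 principal block [[W_ii, W_ij], [conj(W_ij), W_jj]] must be PSD, i.e. |W_ij|^2 <= W_ii * W_jj, a rotated second-order cone constraint. The sketch below (a hypothetical 3-bus partial matrix, not the paper's OPF formulation) checks that per-line condition.

        # Minimal sketch of the per-line condition behind the SOC relaxation; toy data only.
        import numpy as np

        def line_soc_ok(W, i, j, tol=1e-9):
            """Per-line condition of the partial-matrix relaxation: |W_ij|^2 <= W_ii * W_jj."""
            return abs(W[i, j])**2 <= W[i, i].real * W[j, j].real + tol

        # Hypothetical 3-bus partial matrix (only diagonal and line entries are used).
        W = np.array([[1.00, 0.95 + 0.10j, 0.0],
                      [0.95 - 0.10j, 1.02, 0.98 - 0.05j],
                      [0.0, 0.98 + 0.05j, 1.01]])

        print(all(line_soc_ok(W, i, j) for i, j in [(0, 1), (1, 2)]))   # True for this toy matrix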

    Online Local Learning via Semidefinite Programming

    In many online learning problems we are interested in predicting local information about some universe of items. For example, we may want to know whether two items are in the same cluster rather than computing an assignment of items to clusters; we may want to know which of two teams will win a game rather than computing a ranking of teams. Although finding the optimal clustering or ranking is typically intractable, it may be possible to predict the relationships between items as well as if one could solve the global optimization problem exactly. Formally, we consider an online learning problem in which a learner repeatedly guesses a pair of labels (l(x), l(y)) and receives an adversarial payoff depending on those labels. The learner's goal is to receive a payoff nearly as good as the best fixed labeling of the items. We show that a simple algorithm based on semidefinite programming can obtain asymptotically optimal regret in the case where the number of possible labels is O(1), resolving an open problem posed by Hazan, Kale, and Shalev-Shwartz. Our main technical contribution is a novel use and analysis of the log determinant regularizer, exploiting the observation that log det(A + I) upper bounds the entropy of any distribution with covariance matrix A.
    Comment: 10 pages
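
    To make the idea of local prediction via a semidefinite program concrete, the sketch below (my own toy illustration, not the authors' algorithm or its regret analysis) maintains a PSD matrix K with unit diagonal over the items and answers a pair query (x, y) directly from the entry K[x, y], without computing any global clustering; the projection onto {K : K PSD, diag(K) = 1} is a small SDP solved here with cvxpy (assumed available).

        # Minimal sketch: project hypothetical pairwise scores onto the elliptope and
        # read off a local prediction for a single pair of items.
        import cvxpy as cp
        import numpy as np

        n = 4
        M = np.array([[ 1.0,  0.8, -0.6, -0.5],
                      [ 0.8,  1.0, -0.7, -0.4],
                      [-0.6, -0.7,  1.0,  0.6],
                      [-0.5, -0.4,  0.6,  1.0]])   # hypothetical noisy pairwise scores

        K = cp.Variable((n, n), symmetric=True)
        prob = cp.Problem(cp.Minimize(cp.norm(K - M, 'fro')),
                          [K >> 0, cp.diag(K) == 1])   # project onto the elliptope
        prob.solve()

        x, y = 0, 1
        print("same cluster?", K.value[x, y] > 0)      # local prediction for the pair (x, y)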