
    Rounding Sum-of-Squares Relaxations

    We present a general approach to rounding semidefinite programming relaxations obtained by the Sum-of-Squares method (Lasserre hierarchy). Our approach is based on using the connection between these relaxations and the Sum-of-Squares proof system to transform a *combining algorithm* -- an algorithm that maps a distribution over solutions into a (possibly weaker) solution -- into a *rounding algorithm* that maps a solution of the relaxation to a solution of the original problem. Using this approach, we obtain algorithms that yield improved results for natural variants of three well-known problems: 1) We give a quasipolynomial-time algorithm that approximates the maximum of a low-degree multivariate polynomial with non-negative coefficients over the Euclidean unit sphere. Beyond being of interest in its own right, this is related to an open question in quantum information theory, and our techniques have already led to improved results in this area (Brandão and Harrow, STOC '13). 2) We give a polynomial-time algorithm that, given a $d$-dimensional subspace of $\mathbb{R}^n$ that (almost) contains the characteristic function of a set of size $n/k$, finds a vector $v$ in the subspace satisfying $|v|_4^4 > c(k/d^{1/3})\,|v|_2^4$, where $|v|_p = (\mathbb{E}_i v_i^p)^{1/p}$. Aside from being a natural relaxation, this is also motivated by a connection to the Small Set Expansion problem shown by Barak et al. (STOC 2012), and our results yield a certain improvement for that problem. 3) We use this notion of $L_4$ vs. $L_2$ sparsity to obtain a polynomial-time algorithm with substantially improved guarantees for recovering a planted $\mu$-sparse vector $v$ in a random $d$-dimensional subspace of $\mathbb{R}^n$. If $v$ has $\mu n$ nonzero coordinates, we can recover it with high probability whenever $\mu < O(\min(1, n/d^2))$, improving, for $d < n^{2/3}$, on prior methods which intrinsically required $\mu < O(1/\sqrt{d})$.
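
    A rough numerical illustration (not the paper's SOS-based algorithm) of why the $L_4$-vs-$L_2$ ratio acts as a sparsity proxy: the sketch below plants a $\mu$-sparse vector in a random $d$-dimensional subspace of $\mathbb{R}^n$ and compares its scale-invariant ratio $|v|_4^4 / |v|_2^4$ against that of a generic vector from the same subspace. The dimensions and the comparison are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ratio_4_vs_2(v):
    """Return |v|_4^4 / |v|_2^4 with the normalized norms |v|_p = (E_i v_i^p)^(1/p)."""
    l4_4 = np.mean(v ** 4)          # |v|_4^4
    l2_4 = np.mean(v ** 2) ** 2     # |v|_2^4
    return l4_4 / l2_4

n, d, mu = 10_000, 50, 0.01         # illustrative sizes (assumptions)

# Planted mu-sparse vector: mu*n nonzero coordinates of equal magnitude.
v0 = np.zeros(n)
support = rng.choice(n, size=int(mu * n), replace=False)
v0[support] = rng.choice([-1.0, 1.0], size=len(support))

# Random d-dimensional subspace containing v0: span of v0 and d-1 Gaussian vectors.
basis = np.column_stack([v0] + [rng.standard_normal(n) for _ in range(d - 1)])

# A "generic" vector in the subspace (random combination of the basis).
generic = basis @ rng.standard_normal(d)

print("sparse  vector ratio:", ratio_4_vs_2(v0))       # ~ 1/mu = 100
print("generic vector ratio:", ratio_4_vs_2(generic))  # ~ 3 (Gaussian-like coordinates)
```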

    Approximation Limits of Linear Programs (Beyond Hierarchies)

    We develop a framework for proving approximation limits of polynomial-size linear programs from lower bounds on the nonnegative ranks of suitably defined matrices. This framework yields unconditional impossibility results that are applicable to any linear program, as opposed to only programs generated by hierarchies. Using our framework, we prove that $O(n^{1/2-\varepsilon})$-approximations for CLIQUE require linear programs of size $2^{n^{\Omega(\varepsilon)}}$. (This lower bound applies to linear programs using a certain encoding of CLIQUE as a linear optimization problem.) Moreover, we establish a similar result for approximations of semidefinite programs by linear programs. Our main ingredient is a quantitative improvement of Razborov's rectangle corruption lemma for the high-error regime, which gives strong lower bounds on the nonnegative rank of certain perturbations of the unique disjointness matrix.
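
    The central object here is the nonnegative rank of perturbed unique disjointness matrices. As a hedged illustration, one perturbation used in this line of work is (to my understanding) $M_{a,b} = (1 - a^\top b)^2$ over $a, b \in \{0,1\}^n$; the sketch below builds it for a tiny $n$ and prints its ordinary rank, which is only the trivial lower bound on nonnegative rank and far weaker than what the corruption lemma yields.

```python
import numpy as np
from itertools import product

def perturbed_udisj(n):
    """Matrix M_{a,b} = (1 - a.b)^2 over all pairs a, b in {0,1}^n.

    Entries are 1 on disjoint pairs and 0 on pairs intersecting in exactly
    one element -- the unique disjointness pattern -- and nonnegative
    everywhere else.  (Exact form of the perturbation is an assumption here.)
    """
    cube = [np.array(a) for a in product([0, 1], repeat=n)]
    return np.array([[(1 - a @ b) ** 2 for b in cube] for a in cube], dtype=float)

n = 4
M = perturbed_udisj(n)                      # 2^n x 2^n matrix
print("shape:", M.shape)
print("entrywise nonnegative:", bool((M >= 0).all()))

# Ordinary rank is only the trivial lower bound on nonnegative rank,
# since rank_+(M) >= rank(M) always holds.
print("rank:", np.linalg.matrix_rank(M))
```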

    Faster SDP hierarchy solvers for local rounding algorithms

    Convex relaxations based on different hierarchies of linear/semidefinite programs have been used recently to devise approximation algorithms for various optimization problems. The approximation guarantee of these algorithms improves with the number of *rounds* $r$ in the hierarchy, though the complexity of solving (or even writing down the solution for) the $r$-th level program grows as $n^{\Omega(r)}$, where $n$ is the input size. In this work, we observe that many of these algorithms are based on *local* rounding procedures that only use a small part of the SDP solution (of size $n^{O(1)} 2^{O(r)}$ instead of $n^{\Omega(r)}$). We give an algorithm to find the requisite portion in time polynomial in its size. The challenge in achieving this is that the required portion of the solution is not fixed a priori but depends on other parts of the solution, sometimes in a complicated iterative manner. Our solver leads to $n^{O(1)} 2^{O(r)}$-time algorithms that obtain the same guarantees in many cases as the earlier $n^{O(r)}$-time algorithms based on $r$ rounds of the Lasserre hierarchy. In particular, guarantees based on $O(\log n)$ rounds can be realized in polynomial time. We develop and describe our algorithm in a fairly general abstract framework. The main technical tool in our work, which might be of independent interest in convex optimization, is an efficient ellipsoid-algorithm-based separation oracle for convex programs that can output a *certificate of infeasibility with restricted support*. This is used in a recursive manner to find a sequence of consistent points in nested convex bodies that "fools" local rounding algorithms.
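
    For background on the main technical tool (this is not the paper's restricted-support variant): a minimal central-cut ellipsoid feasibility loop driven by a generic separation oracle. The oracle interface, starting radius, iteration cap, and tolerances below are illustrative assumptions.

```python
import numpy as np

def ellipsoid_feasibility(oracle, n, R=10.0, max_iters=10_000):
    """Textbook central-cut ellipsoid method.

    oracle(x) returns None if x is feasible, otherwise a pair (a, b) such that
    a.y <= b holds for every feasible y but is violated by x (a separating
    hyperplane).  Starts from the ball of radius R around the origin.
    """
    x = np.zeros(n)
    P = (R ** 2) * np.eye(n)       # ellipsoid {y : (y - x)^T P^{-1} (y - x) <= 1}
    for _ in range(max_iters):
        cut = oracle(x)
        if cut is None:
            return x               # feasible point found
        a, _b = cut
        Pa = P @ a
        denom = np.sqrt(a @ Pa)
        if denom < 1e-12:          # ellipsoid has collapsed along direction a
            break
        g = Pa / denom
        x = x - g / (n + 1)
        P = (n ** 2 / (n ** 2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(g, g))
    return None                    # declared infeasible (within the starting ball)

# Toy usage: find a point with x[i] >= 1 for every coordinate i.
def toy_oracle(x):
    for i, xi in enumerate(x):
        if xi < 1.0:
            a = np.zeros(len(x))
            a[i] = -1.0            # violated constraint -x[i] <= -1
            return a, -1.0
    return None

print(ellipsoid_feasibility(toy_oracle, n=3))
```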

    Sticky Brownian Rounding and its Applications to Constraint Satisfaction Problems

    Semidefinite programming is a powerful tool in the design and analysis of approximation algorithms for combinatorial optimization problems. In particular, the random hyperplane rounding method of Goemans and Williamson has been extensively studied for more than two decades, resulting in various extensions to the original technique and beautiful algorithms for a wide range of applications. Despite the fact that this approach yields tight approximation guarantees for some problems, e.g., Max-Cut, for many others, e.g., Max-SAT and Max-DiCut, the tight approximation ratio is still unknown. One of the main reasons for this is that very few techniques for rounding semidefinite relaxations are known. In this work, we present a new, general, and simple method for rounding semidefinite programs, based on Brownian motion. Our approach is inspired by recent results in algorithmic discrepancy theory. We develop and present tools for analyzing our new rounding algorithms, utilizing mathematical machinery from the theory of Brownian motion, complex analysis, and partial differential equations. Focusing on constraint satisfaction problems, we apply our method to several classical problems, including Max-Cut, Max-2SAT, and Max-DiCut, and derive new algorithms that are competitive with the best known results. To illustrate the versatility and general applicability of our approach, we give new approximation algorithms for the Max-Cut problem with side constraints that crucially utilize measure concentration results for the Sticky Brownian Motion, a feature missing from hyperplane rounding and its generalizations.
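
    A discretized sketch of the rounding idea as I read it from the abstract, under the assumption that each variable $i$ carries a value $X_i$ driven by correlated increments $v_i^\top dB_t$ (with $v_i$ its SDP vector) and frozen once it hits $\pm 1$. The step size and the toy Max-Cut instance are made up for illustration, and the "SDP" vectors are placeholders rather than an actual SDP solution.

```python
import numpy as np

rng = np.random.default_rng(1)

def sticky_brownian_round(V, dt=1e-3):
    """Round SDP unit vectors V (one row per vertex) to +/-1 labels.

    Each vertex i carries a value X_i started at 0; in each step all values
    move by correlated Gaussian increments v_i . dB with dB ~ N(0, dt*I),
    and a value freezes ("sticks") once it reaches +1 or -1.
    """
    n, k = V.shape
    X = np.zeros(n)
    frozen = np.zeros(n, dtype=bool)
    while not frozen.all():
        dB = rng.normal(scale=np.sqrt(dt), size=k)
        X[~frozen] += V[~frozen] @ dB
        hit = np.abs(X) >= 1.0
        X[hit] = np.sign(X[hit])   # stick to the boundary that was hit
        frozen |= hit
    return X

# Toy Max-Cut instance: a 4-cycle, with hand-made unit vectors in R^2
# standing in for an SDP solution.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
angles = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
V = np.column_stack([np.cos(angles), np.sin(angles)])

labels = sticky_brownian_round(V)
cut = sum(labels[u] != labels[v] for u, v in edges)
print("labels:", labels, "cut value:", cut)
```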

    New Dependencies of Hierarchies in Polynomial Optimization

    We compare four key hierarchies for solving Constrained Polynomial Optimization Problems (CPOP): the Sum of Squares (SOS), Sum of Diagonally Dominant Polynomials (SDSOS), Sum of Nonnegative Circuits (SONC), and Sherali-Adams (SA) hierarchies. We prove a collection of dependencies among these hierarchies, both for general CPOPs and for optimization problems on the Boolean hypercube. Key results for the general case include that the SONC and SOS hierarchies are polynomially incomparable, while SDSOS is contained in SONC. A direct consequence is the non-existence of a Putinar-like Positivstellensatz for SDSOS. On the Boolean hypercube, we show as a main result that Schmüdgen-like versions of the hierarchies, SDSOS*, SONC*, and SA*, are polynomially equivalent. Moreover, we show that SA* is contained in any Schmüdgen-like hierarchy that provides an $O(n)$ degree bound.
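
    To make two of the building blocks concrete: a polynomial $p(x) = m(x)^\top Q\, m(x)$ (with $m(x)$ a monomial vector) is SOS when some Gram matrix $Q$ is positive semidefinite, while the DSOS/SDSOS-style relaxations replace that condition by (scaled) diagonal dominance, which is cheaper to check but more restrictive. The toy quadratic below, whose degree-1 Gram matrix is unique so the fixed-$Q$ check is exact, is SOS yet fails plain diagonal dominance; this is only a hedged illustration of the SOS-versus-(S)DSOS gap, not of the SONC or SA hierarchies.

```python
import numpy as np

def is_diagonally_dominant(Q):
    """Diagonal-dominance check used by DSOS-style inner approximations."""
    Q = np.asarray(Q, dtype=float)
    off = np.abs(Q).sum(axis=1) - np.abs(np.diag(Q))
    return bool(np.all(np.diag(Q) >= off))

def is_psd(Q, tol=1e-9):
    """SOS certificate for a fixed Gram matrix: Q is positive semidefinite."""
    return bool(np.all(np.linalg.eigvalsh(np.asarray(Q, dtype=float)) >= -tol))

# p(x, y) = x^2 + 3xy + 3y^2 written as m^T Q m with m = (x, y).
Q = np.array([[1.0, 1.5],
              [1.5, 3.0]])

print("diagonally dominant (DSOS-style):", is_diagonally_dominant(Q))  # False: row 0 fails
print("positive semidefinite (SOS):     ", is_psd(Q))                  # True: p is SOS
```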