    Guruswami-Sinop Rounding without Higher Level Lasserre

    Guruswami and Sinop give an O(1/delta) approximation guarantee for the non-uniform Sparsest Cut problem by solving O(r)-level Lasserre semidefinite constraints, provided that the generalized eigenvalues of the Laplacians of the cost and demand graphs satisfy a certain spectral condition, namely that the (r+1)-th generalized eigenvalue is at least OPT/(1-delta). Their key idea is a rounding technique that first maps a vector-valued solution to [0,1] using appropriately scaled projections onto Lasserre vectors. In this paper, we show that similar projections and analysis can be obtained using only l_2^2 triangle inequality constraints. This yields an O(r/delta^2) approximation guarantee for the non-uniform Sparsest Cut problem by adding only l_2^2 triangle inequality constraints to the usual semidefinite program, provided that the same spectral condition holds, namely that the (r+1)-th generalized eigenvalue is at least OPT/(1-delta).
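
    As a concrete illustration of the spectral condition in this abstract, the sketch below builds cost and demand Laplacians for a made-up toy instance and checks whether the (r+1)-th generalized eigenvalue clears OPT/(1-delta). The graph, r, delta, the OPT estimate, and the eigenvalue indexing convention are all assumptions made for illustration; the rounding procedure itself is not implemented.

```python
# Numerical check of the spectral condition only -- not the Guruswami-Sinop rounding.
# The toy instance, r, delta, and the OPT estimate below are all placeholders.
import numpy as np
from scipy.linalg import eigh

def laplacian(W):
    """Graph Laplacian of a symmetric weight matrix W."""
    return np.diag(W.sum(axis=1)) - W

def generalized_eigenvalues(L_cost, L_demand):
    """Generalized eigenvalues of (L_cost, L_demand), restricted to the space
    orthogonal to the all-ones vector so the demand Laplacian is invertible there."""
    n = L_cost.shape[0]
    Q, _ = np.linalg.qr(np.eye(n) - np.ones((n, n)) / n)
    Q = Q[:, : n - 1]                      # orthonormal basis of the complement of span{1}
    return eigh(Q.T @ L_cost @ Q, Q.T @ L_demand @ Q, eigvals_only=True)

# Toy instance: cost graph = 6-cycle, uniform (complete-graph) demands.
n = 6
W_cost = np.zeros((n, n))
for i in range(n):
    W_cost[i, (i + 1) % n] = W_cost[(i + 1) % n, i] = 1.0
W_demand = np.ones((n, n)) - np.eye(n)

lam = generalized_eigenvalues(laplacian(W_cost), laplacian(W_demand))
r, delta = 2, 0.5                          # placeholder parameters
opt_estimate = 2.0 / 9.0                   # cut into two 3-vertex arcs: 2 cost edges / 9 demand pairs
# Indexing convention (an assumption): lam is sorted ascending on the space orthogonal
# to the all-ones vector, and lam[r] is taken as the (r+1)-th generalized eigenvalue.
print("spectral condition holds:", lam[r] >= opt_estimate / (1 - delta))
```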

    Towards a better approximation for sparsest cut?

    We give a new $(1+\epsilon)$-approximation for the sparsest cut problem on graphs where small sets expand significantly more than the sparsest cut (sets of size $n/r$ expand by a factor $\sqrt{\log n \log r}$ bigger, for some small $r$; this condition holds for many natural graph families). We give two different algorithms. One involves Guruswami-Sinop rounding on the level-$r$ Lasserre relaxation. The other is combinatorial and involves a new notion called Small Set Expander Flows (inspired by the expander flows of ARV), which we show exist in the input graph. Both algorithms run in time $2^{O(r)} \mathrm{poly}(n)$. We also show similar approximation algorithms in graphs of genus $g$ with an analogous local expansion condition. This is the first algorithm we know of that achieves a $(1+\epsilon)$-approximation on such a general family of graphs.
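
    The local expansion condition in this abstract can be checked by brute force on a small example. The sketch below does that on a made-up random graph, using edge expansion |E(S, V\S)|/|S| for both quantities; the paper's exact normalization may differ, and the parameters are arbitrary, so this is only meant to make the condition concrete rather than to reproduce either algorithm.

```python
# Exponential-time sanity check of the local expansion condition, for intuition only.
# The notion of expansion used here (|E(S, V\S)| / |S|) and the toy graph/parameters
# are assumptions, not taken from the paper.
import itertools
import math
import numpy as np

def expansion(adj, S, n):
    """Edge expansion |E(S, V\\S)| / |S| of a vertex subset S."""
    S = set(S)
    cut = sum(adj[u, v] for u in S for v in range(n) if v not in S)
    return cut / len(S)

n, r = 10, 5
rng = np.random.default_rng(0)
upper = np.triu((rng.random((n, n)) < 0.5).astype(int), 1)
adj = upper + upper.T                                  # random undirected toy graph

phi_star = min(expansion(adj, S, n)                    # sparsest cut, measured by expansion
               for k in range(1, n // 2 + 1)
               for S in itertools.combinations(range(n), k))
phi_small = min(expansion(adj, S, n)                   # least-expanding set of size n/r
                for S in itertools.combinations(range(n), n // r))

factor = math.sqrt(math.log(n) * math.log(r))
print(f"phi(small sets) = {phi_small:.2f}, sqrt(log n log r) * phi* = {factor * phi_star:.2f}")
print("condition holds:", phi_small >= factor * phi_star)
```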

    Faster SDP hierarchy solvers for local rounding algorithms

    Convex relaxations based on different hierarchies of linear/semidefinite programs have been used recently to devise approximation algorithms for various optimization problems. The approximation guarantee of these algorithms improves with the number of rounds $r$ in the hierarchy, though the complexity of solving (or even writing down the solution for) the $r$-th level program grows as $n^{\Omega(r)}$, where $n$ is the input size. In this work, we observe that many of these algorithms are based on local rounding procedures that only use a small part of the SDP solution (of size $n^{O(1)} 2^{O(r)}$ instead of $n^{\Omega(r)}$). We give an algorithm to find the requisite portion in time polynomial in its size. The challenge in achieving this is that the required portion of the solution is not fixed a priori but depends on other parts of the solution, sometimes in a complicated iterative manner. Our solver leads to $n^{O(1)} 2^{O(r)}$ time algorithms that obtain, in many cases, the same guarantees as the earlier $n^{O(r)}$ time algorithms based on $r$ rounds of the Lasserre hierarchy. In particular, guarantees based on $O(\log n)$ rounds can be realized in polynomial time. We develop and describe our algorithm in a fairly general abstract framework. The main technical tool in our work, which might be of independent interest in convex optimization, is an efficient ellipsoid-algorithm-based separation oracle for convex programs that can output a certificate of infeasibility with restricted support. This is used in a recursive manner to find a sequence of consistent points in nested convex bodies that "fools" local rounding algorithms. (Comment: 30 pages, 8 figures.)
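
    To make the size savings concrete, here is a toy sketch of the kind of local, adaptive access pattern the abstract describes: a loop that grows a seed set of at most r vertices and only ever queries solution entries indexed by a subset of the seeds plus one extra vertex. The oracle, scoring rule, and seed-selection heuristic below are placeholders rather than the paper's algorithm; the point is only that the number of entries touched is about n·2^r instead of n^Θ(r).

```python
# Toy illustration of a "local" access pattern: only entries indexed by
# (subset of seeds) + (one extra vertex) are ever requested from the solver.
# Everything below (oracle, score, seed choice) is a placeholder.
import itertools
import random

def local_rounding_footprint(n, r, query):
    """Adaptively pick up to r seed vertices; return how many entries were touched."""
    seeds, touched = [], set()
    for _ in range(r):
        best, best_score = None, None
        for v in range(n):
            if v in seeds:
                continue
            for k in range(len(seeds) + 1):
                for sub in itertools.combinations(seeds, k):
                    key = tuple(sorted(set(sub) | {v}))
                    touched.add(key)
                    score = query(key)      # stand-in for "how well v conditions the SDP"
                    if best_score is None or score > best_score:
                        best, best_score = v, score
        seeds.append(best)
    return len(touched)

n, r = 50, 4
rng = random.Random(0)
touched = local_rounding_footprint(n, r, query=lambda key: rng.random())
print(f"entries touched: {touched}  (compare n*2^r = {n * 2 ** r} and n^r = {n ** r})")
```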

    Directed Steiner Tree and the Lasserre Hierarchy

    The goal for the Directed Steiner Tree problem is to find a minimum cost tree in a directed graph G=(V,E) that connects all terminals X to a given root r. It is well known that, modulo a logarithmic factor, it suffices to consider acyclic graphs where the nodes are arranged in L <= log |X| levels. Unfortunately, the natural LP formulation has a |X|^(1/2) integrality gap already for 5 levels. We show that for every L, the O(L)-round Lasserre strengthening of this LP has integrality gap O(L log |X|). This provides a polynomial-time |X|^{epsilon}-approximation and an O(log^3 |X|) approximation in O(n^{log |X|}) time, matching the best known approximation guarantee obtained by a greedy algorithm of Charikar et al. (Comment: 23 pages, 1 figure.)
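
    To make the relaxation being strengthened concrete, here is a minimal sketch of the standard flow LP for Directed Steiner Tree (fractional edge capacities that must support one unit of flow from the root to each terminal), written with cvxpy on a made-up toy instance. Treating this flow formulation as the "natural LP" of the abstract is an assumption, and the Lasserre lift itself is not modelled.

```python
# Flow-based LP relaxation of Directed Steiner Tree on a made-up toy instance.
import cvxpy as cp

edges = {("root", "a"): 1.0, ("root", "b"): 2.0, ("a", "t1"): 1.0,
         ("b", "t1"): 1.0, ("b", "t2"): 1.0, ("a", "t2"): 3.0}   # edge -> cost
root, terminals = "root", ["t1", "t2"]
nodes = {u for e in edges for u in e}

x = {e: cp.Variable(nonneg=True) for e in edges}                 # fractional capacities
flows = {t: {e: cp.Variable(nonneg=True) for e in edges} for t in terminals}

constraints = [x[e] <= 1 for e in edges]
for t in terminals:
    f = flows[t]
    constraints += [f[e] <= x[e] for e in edges]                 # flow fits under capacity
    for v in nodes:
        out_f = sum(f[e] for e in edges if e[0] == v)
        in_f = sum(f[e] for e in edges if e[1] == v)
        demand = 1 if v == root else (-1 if v == t else 0)
        constraints.append(out_f - in_f == demand)               # one unit from root to t

problem = cp.Problem(cp.Minimize(sum(edges[e] * x[e] for e in edges)), constraints)
problem.solve()
print("LP value:", round(problem.value, 3))
print({e: round(float(x[e].value), 2) for e in edges})
```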

    Approximating CSPs with Outliers


    Polynomial integrality gaps for strong SDP relaxations of Densest k-subgraph

    The Densest k-subgraph problem (i.e. find a size-k subgraph with the maximum number of edges) is one of the notorious problems in approximation algorithms. There is a significant gap between known upper and lower bounds for Densest k-subgraph: the current best algorithm gives an $\approx O(n^{1/4})$ approximation, while even showing a small constant factor hardness requires significantly stronger assumptions than $P \neq NP$. In addition to interest in designing better algorithms, a number of recent results have exploited the conjectured hardness of Densest k-subgraph and its variants. Thus, understanding the approximability of Densest k-subgraph is an important challenge. In this work, we give evidence for the hardness of approximating Densest k-subgraph within polynomial factors. Specifically, we expose the limitations of strong semidefinite programs from SDP hierarchies in solving Densest k-subgraph. Our results include: • A lower bound of $\Omega(n^{1/4}/\log^3 n)$ on the integrality gap for $\Omega(\log n / \log\log n)$ rounds of the Sherali-Adams relaxation for Densest k-subgraph. This also holds for the relaxation obtained from Sherali-Adams with an added SDP constraint. Our gap instances are ...
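
    For context, the sketch below writes down the basic LP relaxation of Densest k-subgraph that hierarchies such as Sherali-Adams start from: pick-variables y_i summing to k and edge variables z_ij bounded by both endpoints, on a made-up toy graph with cvxpy. This is only the base relaxation, not the lifted programs whose integrality gaps the abstract analyzes.

```python
# Basic LP relaxation of Densest k-subgraph on a small made-up graph.
import cvxpy as cp

n, k = 6, 3
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5)]   # toy graph: a triangle plus a path

y = cp.Variable(n)                 # y[i] ~ "vertex i is picked"
z = cp.Variable(len(edges))        # z[idx] ~ "edge idx has both endpoints picked"
constraints = [y >= 0, y <= 1, z >= 0, cp.sum(y) == k]
for idx, (i, j) in enumerate(edges):
    constraints += [z[idx] <= y[i], z[idx] <= y[j]]

problem = cp.Problem(cp.Maximize(cp.sum(z)), constraints)
problem.solve()
# The best integral 3-subgraph here is the triangle {0, 1, 2} with 3 edges.
print("LP bound:", round(problem.value, 3), "vs best integral value 3")
```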

    Rounding Sum-of-Squares Relaxations

    We present a general approach to rounding semidefinite programming relaxations obtained by the Sum-of-Squares method (Lasserre hierarchy). Our approach is based on using the connection between these relaxations and the Sum-of-Squares proof system to transform a *combining algorithm* -- an algorithm that maps a distribution over solutions into a (possibly weaker) solution -- into a *rounding algorithm* that maps a solution of the relaxation to a solution of the original problem. Using this approach, we obtain algorithms that yield improved results for natural variants of three well-known problems: 1) We give a quasipolynomial-time algorithm that approximates the maximum of a low degree multivariate polynomial with non-negative coefficients over the Euclidean unit sphere. Beyond being of interest in its own right, this is related to an open question in quantum information theory, and our techniques have already led to improved results in this area (Brandão and Harrow, STOC '13). 2) We give a polynomial-time algorithm that, given a d-dimensional subspace of R^n that (almost) contains the characteristic function of a set of size n/k, finds a vector $v$ in the subspace satisfying $|v|_4^4 > c(k/d^{1/3}) |v|_2^2$, where $|v|_p = (E_i v_i^p)^{1/p}$. Aside from being a natural relaxation, this is also motivated by a connection to the Small Set Expansion problem shown by Barak et al. (STOC 2012), and our results yield a certain improvement for that problem. 3) We use this notion of L_4 vs. L_2 sparsity to obtain a polynomial-time algorithm with substantially improved guarantees for recovering a planted $\mu$-sparse vector $v$ in a random d-dimensional subspace of R^n. If $v$ has $\mu n$ nonzero coordinates, we can recover it with high probability whenever $\mu < O(\min(1, n/d^2))$, improving for $d < n^{2/3}$ on prior methods which intrinsically required $\mu < O(1/\sqrt{d})$.
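
    The L_4-versus-L_2 sparsity notion used in items 2 and 3 is easy to see numerically. The sketch below compares the scale-invariant ratio |v|_4^4 / |v|_2^4 (a normalized variant of the quantity in the abstract, with the expectation norms defined there) for the characteristic vector of a set of measure 1/k and for a dense Gaussian vector; the parameters are arbitrary and nothing here reproduces the paper's algorithms.

```python
# Numerical illustration of L4-vs-L2 sparsity, using |v|_p = (mean_i v_i^p)^{1/p}.
# A set indicator of measure 1/k scores about k on this ratio; a dense vector scores O(1).
import numpy as np

def ratio_4_vs_2(v):
    """Scale-invariant sparsity proxy |v|_4^4 / |v|_2^4 with expectation norms."""
    return np.mean(v ** 4) / np.mean(v ** 2) ** 2

n, k = 10_000, 100
rng = np.random.default_rng(0)

indicator = np.zeros(n)
indicator[: n // k] = 1.0                     # characteristic vector of a size-n/k set
gaussian = rng.standard_normal(n)             # a typical dense vector

print("sparse indicator:", round(ratio_4_vs_2(indicator), 1))   # ~ k = 100
print("dense gaussian:  ", round(ratio_4_vs_2(gaussian), 1))    # ~ 3
```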