
    Sparsest Cut on Bounded Treewidth Graphs: Algorithms and Hardness Results

    We give a 2-approximation algorithm for Non-Uniform Sparsest Cut that runs in time $n^{O(k)}$, where $k$ is the treewidth of the graph. This improves on the previous $2^{2^k}$-approximation in time $\mathrm{poly}(n)\,2^{O(k)}$ due to Chlamtáč et al. To complement this algorithm, we show the following hardness results: If the Non-Uniform Sparsest Cut problem has a $\rho$-approximation for series-parallel graphs (where $\rho \geq 1$), then the Max Cut problem has an algorithm with approximation factor arbitrarily close to $1/\rho$. Hence, even for such restricted graphs (which have treewidth 2), the Sparsest Cut problem is NP-hard to approximate better than $17/16 - \epsilon$ for $\epsilon > 0$; assuming the Unique Games Conjecture the hardness becomes $1/\alpha_{GW} - \epsilon$. For graphs with large (but constant) treewidth, we show a hardness result of $2 - \epsilon$ assuming the Unique Games Conjecture. Our algorithm rounds a linear program based on (a subset of) the Sherali-Adams lift of the standard Sparsest Cut LP. We show that even for treewidth-2 graphs, the LP has an integrality gap close to 2 even after polynomially many rounds of Sherali-Adams. Hence our approach cannot be improved even on such restricted graphs without using a stronger relaxation.
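
    For reference (a standard definition, not quoted from this abstract, with my own choice of notation), the Non-Uniform Sparsest Cut objective that such an LP relaxes can be written as

    % Non-Uniform Sparsest Cut: c gives edge capacities, D gives demands between
    % vertex pairs; a cut S is scored by capacity cut divided by demand separated.
    \[
      \Phi(G) \;=\; \min_{\emptyset \ne S \subset V}\;
      \frac{\sum_{\{u,v\} \in E} c(u,v)\,\left|\mathbf{1}_S(u) - \mathbf{1}_S(v)\right|}
           {\sum_{u,v \in V} D(u,v)\,\left|\mathbf{1}_S(u) - \mathbf{1}_S(v)\right|}.
    \]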

    On the optimality of gluing over scales

    We show that for every $\alpha > 0$, there exist $n$-point metric spaces $(X,d)$ where every "scale" admits a Euclidean embedding with distortion at most $\alpha$, but the whole space requires distortion at least $\Omega(\sqrt{\alpha \log n})$. This shows that the scale-gluing lemma [Lee, SODA 2005] is tight, and disproves a conjecture stated there. This matching upper bound was known to be tight at both endpoints, i.e. when $\alpha = \Theta(1)$ and $\alpha = \Theta(\log n)$, but nowhere in between. More specifically, we exhibit $n$-point spaces with doubling constant $\lambda$ requiring Euclidean distortion $\Omega(\sqrt{\log \lambda \log n})$, which also shows that the technique of "measured descent" [Krauthgamer et al., Geometric and Functional Analysis] is optimal. We extend this to obtain a similar tight result for $L_p$ spaces with $p > 1$.
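
    For context (a standard definition, not restated in the abstract): a non-expansive embedding $f \colon X \to \ell_2$ has distortion at most $D$ if

    % Distortion of a map f from a metric space (X,d) into Euclidean space:
    % distances shrink by at most a factor D and never grow.
    \[
      \frac{d(x,y)}{D} \;\le\; \|f(x) - f(y)\|_2 \;\le\; d(x,y)
      \qquad \mbox{for all } x, y \in X,
    \]
    and $c_2(X)$ denotes the least such $D$ over all embeddings $f$; the lower bounds above are bounds on $c_2(X)$.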

    Integrality gaps of semidefinite programs for Vertex Cover and relations to $\ell_1$ embeddability of Negative Type metrics

    We study various SDP formulations for {\sc Vertex Cover} by adding different constraints to the standard formulation. We show that {\sc Vertex Cover} cannot be approximated better than $2-o(1)$ even when we add the so-called pentagonal inequality constraints to the standard SDP formulation, en route answering an open question of Karakostas~\cite{Karakostas}. We further show the surprising fact that by strengthening the SDP with the (intractable) requirement that the metric interpretation of the solution is an $\ell_1$ metric, we get an exact relaxation (integrality gap is 1), while on the other hand if the solution is arbitrarily close to being $\ell_1$ embeddable, the integrality gap may be as big as $2-o(1)$. Finally, inspired by the above findings, we use ideas from the integrality gap construction of Charikar~\cite{Char02} to provide a family of simple examples of negative type metrics that cannot be embedded into $\ell_1$ with distortion better than $8/7-\epsilon$. To this end we prove a new isoperimetric inequality for the hypercube.
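
    The "standard SDP formulation" for Vertex Cover is commonly the Kleinberg-Goemans relaxation; the following is a reconstruction under that assumption, not text from the paper:

    % Unit vectors v_i for vertices plus a reference vector v_0; a vertex i is
    % counted toward the cover according to how closely v_i aligns with v_0.
    \[
      \min \sum_{i \in V} \frac{1 + v_0 \cdot v_i}{2}
      \quad \mbox{s.t.} \quad
      (v_0 - v_i) \cdot (v_0 - v_j) = 0 \ \ \forall\, \{i,j\} \in E,
      \qquad \|v_i\| = 1 \ \ \forall\, i \in V \cup \{0\}.
    \]
    The pentagonal inequalities mentioned above are constraints satisfied by every $\ell_1$ metric, imposed here on each set of five vectors.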

    Towards a better approximation for sparsest cut?

    We give a new $(1+\epsilon)$-approximation for the Sparsest Cut problem on graphs where small sets expand significantly more than the sparsest cut (sets of size $n/r$ expand by a factor of $\sqrt{\log n \log r}$ more, for some small $r$; this condition holds for many natural graph families). We give two different algorithms. One involves Guruswami-Sinop rounding on the level-$r$ Lasserre relaxation. The other is combinatorial and involves a new notion called {\em Small Set Expander Flows} (inspired by the {\em expander flows} of ARV), which we show exist in the input graph. Both algorithms run in time $2^{O(r)} \mathrm{poly}(n)$. We also show similar approximation algorithms for graphs of genus $g$ with an analogous local expansion condition. This is the first algorithm we know of that achieves a $(1+\epsilon)$-approximation on such a general family of graphs.
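
    One way to formalize the local expansion hypothesis described above (my own normalization and constants; the paper's exact condition may differ):

    % Every small set must expand by roughly a sqrt(log n log r) factor more
    % than the sparsest cut of the whole graph.
    \[
      \phi(S) \;\ge\; C \sqrt{\log n \,\log r}\;\cdot\;\phi(G)
      \qquad \mbox{for all } S \subseteq V \mbox{ with } |S| \le n/r,
    \]
    where $\phi(S)$ denotes the edge expansion of $S$, $\phi(G) = \min_{|S| \le n/2} \phi(S)$ is the sparsest-cut value, and $C$ is an absolute constant.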