
    Gap Amplification for Small-Set Expansion via Random Walks

    In this work, we achieve gap amplification for the Small-Set Expansion problem. Specifically, we show that an instance of Small-Set Expansion with completeness $\epsilon$ and soundness $\frac{1}{2}$ is at least as difficult as Small-Set Expansion with completeness $\epsilon$ and soundness $f(\epsilon)$, for any function $f(\epsilon)$ that grows faster than $\sqrt{\epsilon}$. We achieve this amplification via random walks: our gadget is the graph whose adjacency matrix corresponds to a random walk on the original graph. An interesting feature of our reduction is that, unlike gap amplification via parallel repetition, the size of the instances (number of vertices) produced by the reduction remains the same.
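    To make the gadget concrete, here is a minimal sketch (the helper names walk_gadget and expansion are illustrative, not from the paper, and irregular graphs are handled only schematically, since $P^t$ is then asymmetric): it builds the t-step walk matrix $P^t$ from a dense numpy adjacency matrix and evaluates how the expansion of a fixed set changes. Note that the vertex set is unchanged, which is the feature contrasted with parallel repetition above.

        import numpy as np

        def walk_gadget(A, t):
            """t-step random-walk matrix P^t, where P = D^{-1} A is row-stochastic."""
            P = A / A.sum(axis=1)[:, None]
            return np.linalg.matrix_power(P, t)

        def expansion(W, S):
            """Fraction of the weight incident to vertex set S that leaves S."""
            mask = np.zeros(W.shape[0], dtype=bool)
            mask[list(S)] = True
            return W[mask][:, ~mask].sum() / W[mask].sum()

        # Example: two triangles joined by a bridge; S = {0,1,2} is one triangle.
        # After two walk steps more weight crosses the bridge, so expansion grows.
        A = np.zeros((6, 6))
        for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
            A[i, j] = A[j, i] = 1.0
        print(expansion(A, [0, 1, 2]))                  # 1/7 in the original graph
        print(expansion(walk_gadget(A, 2), [0, 1, 2]))  # 5/27 in the walk gadget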

    A New Regularity Lemma and Faster Approximation Algorithms for Low Threshold Rank Graphs

    Kolla and Tulsiani [KT07, Kolla11] and Arora, Barak and Steurer [ABS10] introduced the technique of subspace enumeration, which gives approximation algorithms for graph problems such as Unique Games and Small-Set Expansion; the running time of such algorithms is exponential in the threshold rank of the graph. Guruswami and Sinop [GS11, GS12], and Barak, Raghavendra, and Steurer [BRS11] developed an alternative approach to the design of approximation algorithms for graphs of bounded threshold rank, based on semidefinite programming relaxations in the Lasserre hierarchy and on novel rounding techniques. These algorithms are faster than the ones based on subspace enumeration and work on a broad class of problems. In this paper we develop a third approach to the design of such algorithms. We show, constructively, that graphs of bounded threshold rank satisfy a weak Szemerédi regularity lemma analogous to the one proved by Frieze and Kannan [FK99] for dense graphs. The existence of efficient approximation algorithms is then a consequence of the regularity lemma, as shown by Frieze and Kannan. Applying our method to the Max Cut problem, we devise an algorithm that is faster than all previous algorithms and is easier to describe and analyze.
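    The parameter governing all of these running times is easy to compute directly. Below is a minimal sketch (the function name threshold_rank is illustrative, and it assumes the common convention that the threshold rank at tau counts eigenvalues of the normalized adjacency matrix of magnitude at least tau; some papers count only eigenvalues above tau):

        import numpy as np

        def threshold_rank(A, tau):
            """Number of eigenvalues of D^{-1/2} A D^{-1/2} with |lambda| >= tau."""
            D = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
            eigs = np.linalg.eigvalsh(D @ A @ D)   # symmetric, so real spectrum
            return int(np.sum(np.abs(eigs) >= tau))

        # Example: the complete graph K_8 has normalized eigenvalues 1 and -1/7,
        # so its threshold rank is 1 for any tau above 1/7.
        A = np.ones((8, 8)) - np.eye(8)
        print(threshold_rank(A, 0.5))   # -> 1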

    Many Sparse Cuts via Higher Eigenvalues

    Cheeger's fundamental inequality states that any edge-weighted graph has a vertex subset $S$ such that its expansion (a.k.a. conductance) is bounded as follows: $\phi(S) := \frac{w(S,\bar{S})}{\min\{w(S), w(\bar{S})\}} \leq 2\sqrt{\lambda_2}$, where $w$ is the total edge weight of a subset or a cut and $\lambda_2$ is the second smallest eigenvalue of the normalized Laplacian of the graph. Here we prove the following natural generalization: for any integer $k \in [n]$, there exist $ck$ disjoint subsets $S_1, \ldots, S_{ck}$ such that $\max_i \phi(S_i) \leq C \sqrt{\lambda_k \log k}$, where $\lambda_i$ is the $i^{th}$ smallest eigenvalue of the normalized Laplacian and $c, C > 0$ are suitable absolute constants. Our proof is via a polynomial-time algorithm to find such subsets, consisting of a spectral projection and a randomized rounding. As a consequence, we get the same upper bound for the small set expansion problem, namely for any $k$, there is a subset $S$ whose weight is at most an $O(1/k)$ fraction of the total weight and $\phi(S) \le C \sqrt{\lambda_k \log k}$. Both results are the best possible up to constant factors. The underlying algorithmic problem, namely finding $k$ subsets such that the maximum expansion is minimized, besides extending sparse cuts to more than one subset, appears to be a natural clustering problem in its own right.
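    As a small illustration of the $k = 2$ case, the following sketch implements the classical Cheeger sweep (not the paper's spectral projection and randomized rounding, which handles general $k$): it computes $\lambda_2$ of the normalized Laplacian and returns a sweep cut $S$ with $\phi(S) \leq 2\sqrt{\lambda_2}$.

        import numpy as np

        def conductance(A, mask):
            """phi(S) = w(S, S-bar) / min(w(S), w(S-bar))."""
            return A[mask][:, ~mask].sum() / min(A[mask].sum(), A[~mask].sum())

        def cheeger_sweep(A):
            """Return lambda_2 and the best sweep cut along the second eigenvector."""
            d = A.sum(axis=1)
            D = np.diag(1.0 / np.sqrt(d))
            L = np.eye(len(d)) - D @ A @ D               # normalized Laplacian
            lam, vecs = np.linalg.eigh(L)                # ascending eigenvalues
            order = np.argsort(vecs[:, 1] / np.sqrt(d))  # sweep along D^{-1/2} v_2
            n = len(d)
            masks = (np.isin(np.arange(n), order[:i + 1]) for i in range(n - 1))
            return lam[1], min(conductance(A, m) for m in masks)

        # Example: two 4-cliques joined by one edge; the sweep recovers the
        # bridge cut, whose conductance 1/13 satisfies the Cheeger bound.
        A = np.zeros((8, 8))
        pairs = [(i, j) for c in (range(4), range(4, 8)) for i in c for j in c if i < j]
        for i, j in pairs + [(3, 4)]:
            A[i, j] = A[j, i] = 1.0
        lam2, phi = cheeger_sweep(A)
        assert phi <= 2 * np.sqrt(lam2)
        print(lam2, phi)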