    Fast Distributed Approximation for Max-Cut

    Finding a maximum cut is a fundamental task in many computational settings. Surprisingly, it has been insufficiently studied in the classic distributed settings, where vertices communicate by synchronously sending messages to their neighbors according to the underlying graph, known as the $\mathcal{LOCAL}$ or $\mathcal{CONGEST}$ models. We amend this by obtaining almost optimal algorithms for Max-Cut on a wide class of graphs in these models. In particular, for any $\epsilon > 0$, we develop randomized approximation algorithms achieving a ratio of $(1-\epsilon)$ to the optimum for Max-Cut on bipartite graphs in the $\mathcal{CONGEST}$ model, and on general graphs in the $\mathcal{LOCAL}$ model. We further present efficient deterministic algorithms, including a $1/3$-approximation for Max-Dicut in our models, thus improving the best known (randomized) ratio of $1/4$. Our algorithms make non-trivial use of the greedy approach of Buchbinder et al. (SIAM Journal on Computing, 2015) for maximizing an unconstrained (non-monotone) submodular function, which may be of independent interest.
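
    For intuition, the double-greedy of Buchbinder et al. cited above can be sketched sequentially for Max-Dicut, where $f(S)$ is the total weight of arcs leaving $S$ (a non-monotone submodular function). The function names and graph representation below are illustrative, and the distributed adaptation in the paper is more involved; this is only the centralized idea.

```python
# Minimal sequential sketch of the deterministic double-greedy of
# Buchbinder et al. for unconstrained submodular maximization,
# instantiated for Max-Dicut with f(S) = weight of arcs leaving S.

def dicut_value(arcs, side):
    """Weight of arcs (u, v, w) with u inside the chosen set and v outside."""
    return sum(w for u, v, w in arcs if side[u] and not side[v])

def double_greedy_max_dicut(vertices, arcs):
    # X starts empty, Y starts as the full vertex set; X grows, Y shrinks.
    in_x = {v: False for v in vertices}
    in_y = {v: True for v in vertices}
    for v in vertices:
        # Marginal gain of adding v to X.
        in_x[v] = True
        a = dicut_value(arcs, in_x)
        in_x[v] = False
        a -= dicut_value(arcs, in_x)
        # Marginal gain of removing v from Y.
        in_y[v] = False
        b = dicut_value(arcs, in_y)
        in_y[v] = True
        b -= dicut_value(arcs, in_y)
        # Commit to whichever change looks at least as good.
        if a >= b:
            in_x[v] = True
        else:
            in_y[v] = False
    # At the end X = Y; return it as the Dicut side.
    return {v for v in vertices if in_x[v]}

# Example: a directed 3-cycle with unit arc weights.
# S = double_greedy_max_dicut([0, 1, 2], [(0, 1, 1.0), (1, 2, 1.0), (2, 0, 1.0)])
```

    The deterministic variant sketched here carries the $1/3$ guarantee for unconstrained submodular maximization, which matches the Max-Dicut ratio cited in the abstract; the randomized variant of Buchbinder et al. improves this to $1/2$ in expectation.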

    Parameterized Distributed Algorithms

    In this work, we initiate a thorough study of graph optimization problems parameterized by the output size in the distributed setting. In such a problem, an algorithm decides whether a solution of size bounded by $k$ exists and, if so, finds one. We study fundamental problems, including Minimum Vertex Cover (MVC), Maximum Independent Set (MaxIS), Maximum Matching (MaxM), and many others, in both the LOCAL and CONGEST distributed computation models. We present lower bounds for the round complexity of solving parameterized problems in both models, together with optimal and near-optimal upper bounds. Our results extend beyond the scope of parameterized problems. We show that any LOCAL $(1+\epsilon)$-approximation algorithm for the above problems must take $\Omega(\epsilon^{-1})$ rounds. Combined with the $(\epsilon^{-1}\log n)^{O(1)}$-round algorithm of [Ghaffari et al., 2017] and the $\Omega(\sqrt{\log n/\log\log n})$ lower bound of [Kuhn et al., 2016], the lower bounds match the upper bound up to polynomial factors in both parameters. We also show that our parameterized approach reduces the runtime of exact and approximate CONGEST algorithms for MVC and MaxM if the optimal solution is small, without knowing its size beforehand. Finally, we propose the first $o(n^2)$-round CONGEST algorithms that approximate MVC within a factor strictly smaller than 2.
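
    As a point of reference for the parameterized formulation ("decide whether a solution of size at most $k$ exists and, if so, find one"), here is a minimal sequential sketch of the classic bounded-search-tree algorithm for $k$-Vertex Cover. It is not one of the paper's distributed algorithms, and the function name is illustrative.

```python
# Classic sequential branching for parameterized Minimum Vertex Cover:
# either endpoint of any uncovered edge must be in the cover, so branch
# on both choices and decrement the budget k.

def vertex_cover_at_most_k(edges, k):
    """Return a vertex cover of size <= k for the given edge list, or None."""
    uncovered = list(edges)
    if not uncovered:
        return set()          # nothing left to cover
    if k == 0:
        return None           # budget exhausted but edges remain
    u, v = uncovered[0]
    # Any cover must contain u or v; try each choice.
    for chosen in (u, v):
        rest = [(a, b) for (a, b) in uncovered if a != chosen and b != chosen]
        sub = vertex_cover_at_most_k(rest, k - 1)
        if sub is not None:
            return sub | {chosen}
    return None
```

    The branching tree has at most $2^k$ leaves, giving an $O(2^k m)$ running time; the paper asks how this kind of output-size parameterization behaves under LOCAL and CONGEST round complexity instead of sequential time.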

    Almost-Tight Distributed Minimum Cut Algorithms

    We study the problem of computing the minimum cut in weighted distributed message-passing networks (the CONGEST model). Let $\lambda$ be the minimum cut, $n$ be the number of nodes in the network, and $D$ be the network diameter. Our algorithm can compute $\lambda$ exactly in $O((\sqrt{n}\log^{*} n+D)\lambda^4 \log^2 n)$ time. To the best of our knowledge, this is the first paper that explicitly studies computing the exact minimum cut in the distributed setting. Previously, non-trivial sublinear-time algorithms for this problem were known only for unweighted graphs when $\lambda\leq 3$, due to Pritchard and Thurimella's $O(D)$-time and $O(D+n^{1/2}\log^* n)$-time algorithms for computing $2$-edge-connected and $3$-edge-connected components. By using Karger's edge sampling technique, we can convert this algorithm into a $(1+\epsilon)$-approximation $O((\sqrt{n}\log^{*} n+D)\epsilon^{-5}\log^3 n)$-time algorithm for any $\epsilon>0$. This improves over the previous $(2+\epsilon)$-approximation $O((\sqrt{n}\log^{*} n+D)\epsilon^{-5}\log^2 n\log\log n)$-time algorithm and $O(\epsilon^{-1})$-approximation $O(D+n^{\frac{1}{2}+\epsilon}\,\mathrm{poly}\log n)$-time algorithm of Ghaffari and Kuhn. Due to the lower bound of $\Omega(D+n^{1/2}/\log n)$ by Das Sarma et al., which holds for any approximation algorithm, this running time is tight up to a $\mathrm{poly}\log n$ factor. To get the stated running time, we developed an approximation algorithm which combines the ideas of Thorup's algorithm and Matula's contraction algorithm. It saves an $\epsilon^{-9}\log^{7} n$ factor compared to applying Thorup's tree packing theorem directly. Then, we combine Kutten and Peleg's tree partitioning algorithm and Karger's dynamic programming to achieve an efficient distributed algorithm that finds the minimum cut when we are given a spanning tree that crosses the minimum cut exactly once.
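
    For context on the sequential baseline, a minimal sketch of Karger's randomized contraction algorithm for the global minimum cut follows (unweighted, union-find implementation). It is not the distributed CONGEST algorithm described above, which instead combines tree packing, contraction, and edge-sampling ideas.

```python
# Karger's random contraction for the global minimum cut: repeatedly contract
# random edges until two super-vertices remain, then count the crossing edges.
# Processing a random permutation of the edges with union-find is a standard
# equivalent implementation of picking uniformly random remaining edges.

import random

def karger_min_cut(edges, n, trials=None):
    """edges: list of (u, v) pairs over vertices 0..n-1. Returns the best cut size found."""
    trials = trials or n * n
    best = len(edges)
    for _ in range(trials):
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        components = n
        pool = edges[:]
        random.shuffle(pool)
        for u, v in pool:
            if components == 2:
                break
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv    # contract the edge
                components -= 1
        # Edges whose endpoints lie in different super-vertices cross the cut.
        cut = sum(1 for u, v in edges if find(u) != find(v))
        best = min(best, cut)
    return best
```

    A single contraction run finds a fixed minimum cut with probability at least $2/(n(n-1))$, so the $n^2$ repetitions above succeed with constant probability; $\Theta(n^2\log n)$ repetitions give success with high probability.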

    Sampling from large matrices: an approach through geometric functional analysis

    We study random submatrices of a large matrix $A$. We show how to approximately compute $A$ from a random submatrix of the smallest possible size $O(r \log r)$ with a small error in the spectral norm, where $r = \|A\|_F^2 / \|A\|_2^2$ is the numerical rank of $A$. The numerical rank is always bounded by, and is a stable relaxation of, the rank of $A$. This yields an asymptotically optimal guarantee in an algorithm for computing low-rank approximations of $A$. We also prove asymptotically optimal estimates on the spectral norm and the cut-norm of random submatrices of $A$. The result for the cut-norm yields a slight improvement on the best known sample complexity for an approximation algorithm for MAX-2CSP problems. We use methods of probability in Banach spaces, in particular the law of large numbers for operator-valued random variables.
    Comment: Our initial claim about Max-2-CSP problems is corrected. We added an exponential failure probability bound for the low-rank approximation algorithm. Proofs are explained in a little more detail.
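
    To make the quantities concrete, a rough numpy sketch follows: it computes the numerical rank $r = \|A\|_F^2/\|A\|_2^2$, draws an $O(r\log r)$ norm-proportional column sample, and projects onto the sampled column space. The sampling constant and the reconstruction step are generic illustrations of this style of sampling, not the paper's exact procedure.

```python
# Numerical (stable) rank and norm-proportional column sampling.

import numpy as np

def numerical_rank(A):
    spectral = np.linalg.norm(A, 2)        # largest singular value
    frobenius = np.linalg.norm(A, 'fro')
    return (frobenius / spectral) ** 2

def sample_columns(A, rng=None):
    """Sample roughly O(r log r) columns, rescaled so E[C C^T] = A A^T."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = A.shape[1]
    r = numerical_rank(A)
    s = min(n, int(np.ceil(4 * r * np.log(max(r, 2)))))   # illustrative constant
    probs = np.sum(A * A, axis=0) / np.linalg.norm(A, 'fro') ** 2
    idx = rng.choice(n, size=s, replace=True, p=probs)
    return A[:, idx] / np.sqrt(s * probs[idx])

def low_rank_from_sample(A, C, k):
    """Rank-k approximation of A by projecting onto the top-k sampled directions."""
    U, _, _ = np.linalg.svd(C, full_matrices=False)
    Uk = U[:, :k]
    return Uk @ (Uk.T @ A)
```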