
    Distributed Maximum Matching in Bounded Degree Graphs

    We present deterministic distributed algorithms for computing approximate maximum cardinality matchings and approximate maximum weight matchings. Our algorithm for the unweighted case computes a matching whose size is at least $(1-\epsilon)$ times the optimal in $\Delta^{O(1/\epsilon)} + O(1/\epsilon^2) \cdot \log^*(n)$ rounds, where $n$ is the number of vertices in the graph and $\Delta$ is the maximum degree. Our algorithm for the edge-weighted case computes a matching whose weight is at least $(1-\epsilon)$ times the optimal in $\log(\min\{1/w_{\min}, n/\epsilon\})^{O(1/\epsilon)} \cdot (\Delta^{O(1/\epsilon)} + \log^*(n))$ rounds for edge weights in $[w_{\min}, 1]$. The best previous algorithms for both the unweighted case and the weighted case are by Lotker, Patt-Shamir, and Pettie (SPAA 2008). For the unweighted case they give a randomized $(1-\epsilon)$-approximation algorithm that runs in $O(\log(n)/\epsilon^3)$ rounds. For the weighted case they give a randomized $(1/2-\epsilon)$-approximation algorithm that runs in $O(\log(\epsilon^{-1}) \cdot \log(n))$ rounds. Hence, our results improve on the previous ones when the parameters $\Delta$, $\epsilon$ and $w_{\min}$ are constants (where we reduce the number of rounds from $O(\log(n))$ to $O(\log^*(n))$), and more generally when $\Delta$, $1/\epsilon$ and $1/w_{\min}$ are sufficiently slowly increasing functions of $n$. Moreover, our algorithms are deterministic rather than randomized. Comment: arXiv admin note: substantial text overlap with arXiv:1402.379
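    A minimal sequential sketch of the structural fact underlying $(1-\epsilon)$-approximate matching algorithms of this kind: a matching that admits no short augmenting paths is close to optimal. This is not the paper's distributed algorithm; it only illustrates the simplest special case, a maximal matching (no length-1 augmenting paths), which is already a 1/2-approximation to maximum matching.

```python
def greedy_maximal_matching(edges):
    """Scan edges once, keeping each edge whose endpoints are both free."""
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.add(u)
            matched.add(v)
    return matching

# Example: a path a-b-c-d, with the middle edge scanned first. Greedy
# picks (b, c) and stops at size 1, while the maximum matching
# {(a, b), (c, d)} has size 2 -- exactly the factor-2 gap that
# eliminating longer augmenting paths (up to length O(1/eps), as in
# the paper) shrinks to (1 - eps).
print(greedy_maximal_matching([("b", "c"), ("a", "b"), ("c", "d")]))
```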

    Coresets Meet EDCS: Algorithms for Matching and Vertex Cover on Massive Graphs

    As massive graphs become more prevalent, there is a rapidly growing need for scalable algorithms that solve classical graph problems, such as maximum matching and minimum vertex cover, on large datasets. For massive inputs, several different computational models have been introduced, including the streaming model, the distributed communication model, and the massively parallel computation (MPC) model that is a common abstraction of MapReduce-style computation. In each model, algorithms are analyzed in terms of resources such as space used or rounds of communication needed, in addition to the more traditional approximation ratio. In this paper, we give a single unified approach that yields better approximation algorithms for matching and vertex cover in all these models. The highlights include:
    * The first one-pass, significantly-better-than-2 approximation for matching in random-arrival streams that uses subquadratic space, namely a $(1.5+\epsilon)$-approximation streaming algorithm that uses $O(n^{1.5})$ space for constant $\epsilon > 0$.
    * The first 2-round, better-than-2 approximation for matching in the MPC model that uses subquadratic space per machine, namely a $(1.5+\epsilon)$-approximation algorithm with $O(\sqrt{mn} + n)$ memory per machine for constant $\epsilon > 0$.
    By building on our unified approach, we further develop parallel algorithms in the MPC model that give a $(1+\epsilon)$-approximation to matching and an $O(1)$-approximation to vertex cover in only $O(\log\log n)$ MPC rounds and $O(n/\mathrm{polylog}(n))$ memory per machine. These results settle multiple open questions posed in the recent paper of Czumaj et al. [STOC 2018].
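    A hedged sketch of the edge-degree constrained subgraph (EDCS) at the heart of the paper's unified approach: a subgraph H of G where every edge of H has degree sum (within H) at most beta, and every edge of G outside H has degree sum at least beta_minus. The local-fixing loop below is the textbook construction of such a subgraph, not the paper's space- or round-efficient variants, and the parameter names are our own.

```python
from collections import defaultdict

def build_edcs(edges, beta, beta_minus):
    """Repair violations until H is a (beta, beta_minus)-EDCS of G.

    Assumes each undirected edge appears once as a fixed (u, v) tuple.
    """
    assert beta_minus <= beta - 1  # needed for the loop to terminate
    H = set()
    deg = defaultdict(int)  # vertex degrees within H
    changed = True
    while changed:  # terminates by a standard potential-function argument
        changed = False
        for u, v in edges:
            if (u, v) in H and deg[u] + deg[v] > beta:
                # Edge in H is over-saturated: drop it.
                H.remove((u, v)); deg[u] -= 1; deg[v] -= 1
                changed = True
            elif (u, v) not in H and deg[u] + deg[v] < beta_minus:
                # Edge outside H is under-covered: add it.
                H.add((u, v)); deg[u] += 1; deg[v] += 1
                changed = True
    return H
```

    The resulting H is sparse (degrees are bounded by beta) yet provably preserves a large matching of G, which is what lets the paper run matching computations on a small coreset instead of the full graph.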

    Set Cover in Sub-linear Time

    We study the classic set cover problem from the perspective of sub-linear algorithms. Given access to a collection of $m$ sets over $n$ elements in the query model, we show that sub-linear algorithms derived from existing techniques have almost tight query complexities. On one hand, we first show an adaptation of the streaming algorithm presented in Har-Peled et al. [2016] to the sub-linear query model, which returns an $\alpha$-approximate cover using $\tilde{O}(m(n/k)^{1/(\alpha-1)} + nk)$ queries to the input, where $k$ denotes the value of a minimum set cover. We then complement this upper bound by proving that for lower values of $k$, the required number of queries is $\tilde{\Omega}(m(n/k)^{1/(2\alpha)})$, even for estimating the optimal cover size. Moreover, we prove that even checking whether a given collection of sets covers all the elements would require $\Omega(nk)$ queries. These two lower bounds provide strong evidence that the upper bound is almost tight for certain values of the parameter $k$. On the other hand, we show that this bound is not optimal for larger values of the parameter $k$, as there exists a $(1+\varepsilon)$-approximation algorithm with $\tilde{O}(mn/k\varepsilon^2)$ queries. We show that this bound is essentially tight for sufficiently small constant $\varepsilon$, by establishing a lower bound of $\tilde{\Omega}(mn/k)$ on the query complexity.
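    A minimal sketch of the flavor of query model involved (the EltOf-style access below follows the sub-linear set cover literature; treat the interface names as an assumption, not the paper's exact API). The point is that cost is counted per query, so even the classical greedy cover below, which inspects every set fully and thus spends on the order of the total input size in queries, is expensive next to the paper's sub-linear bounds.

```python
class SetSystem:
    """Query access to a set system; each method call counts as one query."""

    def __init__(self, sets):
        self.sets = [sorted(s) for s in sets]

    def elt_of(self, i, j):
        """EltOf-style query: the j-th element of the i-th set."""
        return self.sets[i][j]

    def set_size(self, i):
        return len(self.sets[i])

def greedy_cover(system, universe):
    """Classical ln(n)-approximate greedy, phrased over element queries."""
    uncovered, cover = set(universe), []
    while uncovered:
        best, gain = None, 0
        for i in range(len(system.sets)):
            g = sum(1 for j in range(system.set_size(i))
                    if system.elt_of(i, j) in uncovered)
            if g > gain:
                best, gain = i, g
        cover.append(best)
        uncovered -= {system.elt_of(best, j)
                      for j in range(system.set_size(best))}
    return cover

# Picks set 0 ({1,2,3}), then set 2 ({4,5}).
print(greedy_cover(SetSystem([[1, 2, 3], [3, 4], [4, 5]]), {1, 2, 3, 4, 5}))
```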

    Average Sensitivity of Graph Algorithms

    In modern applications of graph algorithms, where the graphs of interest are large and dynamic, it is unrealistic to assume that an input representation contains the full information of the graph being studied. Hence, it is desirable to use algorithms that, even when only a (large) subgraph is available, output solutions that are close to the solutions output when the whole graph is available. We formalize this idea by introducing the notion of average sensitivity of graph algorithms, which is the average earth mover's distance between the output distributions of an algorithm on a graph and on its subgraph obtained by removing an edge, where the average is over the edges removed and the distance between two outputs is the Hamming distance. In this work, we initiate a systematic study of average sensitivity. After deriving basic properties of average sensitivity such as composition, we provide efficient approximation algorithms with low average sensitivities for concrete graph problems, including the minimum spanning forest problem, the global minimum cut problem, the minimum $s$-$t$ cut problem, and the maximum matching problem. In addition, we prove that the average sensitivity of our global minimum cut algorithm is almost optimal, by showing a nearly matching lower bound. We also show that every algorithm for the 2-coloring problem has average sensitivity linear in the number of vertices. One of the main ideas involved in designing our algorithms with low average sensitivity is the following fact: if the presence of a vertex or an edge in the solution output by an algorithm can be decided locally, then the algorithm has a low average sensitivity, allowing us to reuse the analyses of known sublinear-time algorithms and local computation algorithms (LCAs). Using this connection, we show that every LCA for 2-coloring has linear query complexity, thereby answering an open question. Comment: 39 pages, 1 figure
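    A hedged sketch of measuring average sensitivity empirically. For a deterministic algorithm the earth mover's distance between output distributions collapses to the plain distance between the two outputs, so one can delete each edge in turn and compare; comparing solutions as edge sets via symmetric difference is our illustrative stand-in for the Hamming distance, and the greedy algorithm and graph below are our choices, not the paper's.

```python
def greedy_matching(edges):
    """Deterministic greedy matching over a fixed (sorted) edge order."""
    matched, out = set(), set()
    for u, v in sorted(edges):
        if u not in matched and v not in matched:
            out.add((u, v)); matched.update((u, v))
    return out

def average_sensitivity(edges):
    """Average output distance over all single-edge deletions."""
    edges = set(edges)
    base = greedy_matching(edges)
    total = 0
    for e in edges:
        other = greedy_matching(edges - {e})
        total += len(base ^ other)  # symmetric difference of edge sets
    return total / len(edges)

# A 5-cycle: deleting one edge shifts which edges greedy picks, and the
# average size of that shift is exactly what the notion quantifies.
cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]
print(average_sensitivity(cycle))
```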