
    A Tight Bound for Shortest Augmenting Paths on Trees

    The shortest augmenting path technique is one of the fundamental ideas used in maximum matching and maximum flow algorithms. Since being introduced by Edmonds and Karp in 1972, it has been widely applied in many different settings. Surprisingly, despite this extensive usage, it is still not well understood even in the simplest case: the online bipartite matching problem on trees. In this problem a bipartite tree $T = (W \uplus B, E)$ is revealed online, i.e., in each round one vertex from $B$ arrives with its incident edges. Chaudhuri et al. [K. Chaudhuri, C. Daskalakis, R. D. Kleinberg, and H. Lin. Online bipartite perfect matching with augmentations. In INFOCOM 2009] conjectured that the total length of all shortest augmenting paths found is $O(n \log n)$. In this paper, we prove a tight $O(n \log n)$ upper bound on the total length of shortest augmenting paths for trees, improving over the $O(n \log^2 n)$ bound of [B. Bosek, D. Leniowski, P. Sankowski, and A. Zych. Shortest augmenting paths for online matchings on trees. In WAOA 2015].
    Comment: 22 pages, 10 figures
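
    To make the process concrete, here is a minimal sketch (an illustration, not the paper's algorithm or analysis) of online bipartite matching on a tree with shortest augmenting paths: the white side is fixed, black vertices arrive with their incident edges, and each arrival is matched by augmenting along a shortest alternating path found by BFS. The function name and input encoding are hypothetical.

```python
from collections import deque

def online_sap_matching(black_arrivals):
    """black_arrivals: one neighbor list (of white-vertex ids) per arriving
    black vertex. Returns (matching, total length of augmenting paths)."""
    match_w = {}          # white vertex -> black vertex
    match_b = {}          # black vertex -> white vertex
    adj = []              # adj[b] = white neighbors of black vertex b
    total_length = 0
    for b, neighbors in enumerate(black_arrivals):
        adj.append(list(neighbors))
        # BFS over alternating paths: unmatched edges leave black vertices,
        # matched edges leave white vertices; stop at the first free white.
        parent = {('b', b): None}
        queue, free_white = deque([b]), None
        while queue and free_white is None:
            cur = queue.popleft()
            for w in adj[cur]:
                if ('w', w) in parent:
                    continue
                parent[('w', w)] = ('b', cur)
                if w not in match_w:
                    free_white = w
                    break
                nxt = match_w[w]
                parent[('b', nxt)] = ('w', w)
                queue.append(nxt)
        if free_white is None:
            continue              # b stays unmatched for now
        # Flip the path: every (black, white) BFS-tree edge becomes matched.
        node, nodes_on_path = ('w', free_white), 0
        while node is not None:
            prev = parent[node]
            if node[0] == 'w':
                match_w[node[1]] = prev[1]
                match_b[prev[1]] = node[1]
            node = prev
            nodes_on_path += 1
        total_length += nodes_on_path - 1   # path length in edges
    return match_b, total_length
```

    The paper's result says that over a tree revealed this way, the lengths accumulated in total_length sum to $O(n \log n)$ in the worst case.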

    Online matching with blocked input

    In this paper, we examine the problem of "blocked online bipartite matching". This problem is similar to the online matching problem except that the vertices arrive in blocks instead of one at a time. Previously studied problems exist as special cases of this problem; the case where each block contains only a single vertex is the standard online matching problem studied by Karp et al. (1990), and the case where there is only one block (containing all vertices of the graph) is the offline matching problem (see, for example, the work by Aho et al. (1985)). The main result of this paper is that no performance gain (except in low-order terms) is possible by revealing the vertices in blocks, unless the number of blocks remains constant as n (the number of vertices) grows. Specifically, we show that if the number of vertices in a block is k = o(n), then the expected size of the matching produced by any algorithm (on its worst-case input) is at most (1 - 1/e)n + o(n). This is exactly the bound achieved in the original online matching problem, so no improvement is possible when k = o(n). This result follows from a more general upper bound that applies for all k ≤ n; however, the bound does not appear to be tight for some values of k which are a constant fraction of n (in particular, for k = n/3). We also give an algorithm that makes use of the blocked structure of the input. On inputs with k = o(n), this algorithm can be shown to perform at least as well as running the algorithm from Karp et al. (1990) while ignoring blocking. Hence, by the upper bound, our algorithm is optimal up to low-order terms for k = o(n), and in some cases considerably outperforms the algorithm of Karp et al. (1990). The algorithm also trivially has optimal performance for k = n; furthermore, it appears to have optimal performance for k = n/2, but a proof of this performance has not been found. Unfortunately, the algorithm does not meet the upper bound for all block sizes, as is shown by a simple example with block size n/3. We conjecture that the algorithm we present is actually optimal, and that the upper bound is not tight.
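
    As a point of reference for the model, here is a toy baseline (an assumption for illustration, not the algorithm proposed in the paper): each block is matched maximally against the still-free offline vertices using augmenting paths before the next block arrives, with rerouting allowed only within the current block.

```python
def blocked_matching(blocks):
    """blocks: each block is a list of neighbor lists (offline-vertex ids),
    one per online vertex arriving in that block. Returns matching size."""
    match_off = {}                  # offline vertex -> finalized online match
    size = 0
    for bi, block in enumerate(blocks):
        local = {}                  # offline -> online, revocable this block

        def try_augment(v, seen):
            # Kuhn-style augmenting path restricted to this block's vertices
            for u in block[v]:
                if u in match_off or u in seen:
                    continue        # taken by an earlier block, or visited
                seen.add(u)
                if u not in local or try_augment(local[u], seen):
                    local[u] = v
                    return True
            return False

        for v in range(len(block)):
            if try_augment(v, set()):
                size += 1
        for u, v in local.items():  # block ends: these matches become final
            match_off[u] = (bi, v)
    return size
```

    With single-vertex blocks this degenerates to greedy online matching, and with one block it computes an offline maximum matching, matching the two extremes discussed above.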

    Distributed Maximum Matching in Bounded Degree Graphs

    We present deterministic distributed algorithms for computing approximate maximum cardinality matchings and approximate maximum weight matchings. Our algorithm for the unweighted case computes a matching whose size is at least $(1-\epsilon)$ times the optimal in $\Delta^{O(1/\epsilon)} + O(1/\epsilon^2) \cdot \log^*(n)$ rounds, where $n$ is the number of vertices in the graph and $\Delta$ is the maximum degree. Our algorithm for the edge-weighted case computes a matching whose weight is at least $(1-\epsilon)$ times the optimal in $\log(\min\{1/w_{\min}, n/\epsilon\})^{O(1/\epsilon)} \cdot (\Delta^{O(1/\epsilon)} + \log^*(n))$ rounds for edge-weights in $[w_{\min}, 1]$. The best previous algorithms for both the unweighted case and the weighted case are by Lotker, Patt-Shamir, and Pettie (SPAA 2008). For the unweighted case they give a randomized $(1-\epsilon)$-approximation algorithm that runs in $O(\log(n)/\epsilon^3)$ rounds. For the weighted case they give a randomized $(1/2-\epsilon)$-approximation algorithm that runs in $O(\log(\epsilon^{-1}) \cdot \log(n))$ rounds. Hence, our results improve on the previous ones when the parameters $\Delta$, $\epsilon$ and $w_{\min}$ are constants (where we reduce the number of rounds from $O(\log(n))$ to $O(\log^*(n))$), and more generally when $\Delta$, $1/\epsilon$ and $1/w_{\min}$ are sufficiently slowly increasing functions of $n$. Moreover, our algorithms are deterministic rather than randomized.
    Comment: arXiv admin note: substantial text overlap with arXiv:1402.379
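
    For contrast with these bounds, the following simulates a much simpler deterministic distributed routine (a folklore proposal rule, not the paper's $(1-\epsilon)$-approximation algorithm): in each synchronous round every unmatched vertex proposes to its lowest-ID unmatched neighbor, and mutual proposals become matched edges. The result is a maximal, hence 1/2-approximate, matching.

```python
def distributed_maximal_matching(adj):
    """adj: dict vertex -> set of neighbor ids. Simulates synchronous rounds;
    returns the matching as a set of (u, v) edges with u < v."""
    matched = {}                              # vertex -> partner
    while True:
        proposals = {}
        for v, nbrs in adj.items():           # round, part 1: propose
            if v not in matched:
                free = [u for u in nbrs if u not in matched]
                if free:
                    proposals[v] = min(free)
        progress = False
        for v, u in proposals.items():        # round, part 2: accept mutual
            if proposals.get(u) == v and v not in matched:
                matched[v] = u
                matched[u] = v
                progress = True
        if not progress:
            return {tuple(sorted(e)) for e in matched.items()}
```

    In every round the globally smallest unmatched vertex with an unmatched neighbor gets a mutual proposal, so the loop terminates, though only after O(n) rounds in the worst case; the point of results like the one above is to cut such round counts down to functions of $\Delta$, $\epsilon$ and $\log^* n$.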

    Cell-Probe Lower Bounds from Online Communication Complexity

    In this work, we introduce an online model for communication complexity. Analogous to how online algorithms receive their input piece-by-piece, our model presents one of the players, Bob, his input piece-by-piece, and has the players Alice and Bob cooperate to compute a result each time before the next piece is revealed to Bob. This model has a closer and more natural correspondence to dynamic data structures than classic communication models do, and hence presents a new perspective on data structures. We first present a tight lower bound for the online set intersection problem in the online communication model, demonstrating a general approach for proving online communication lower bounds. The online communication model prevents a batching trick that classic communication complexity allows, and yields a stronger lower bound. We then apply the online communication model to prove data structure lower bounds for two dynamic data structure problems: the Group Range problem and the Dynamic Connectivity problem for forests. Both of the problems admit a worst-case $O(\log n)$-time data structure. Using online communication complexity, we prove a tight cell-probe lower bound for each: spending $o(\log n)$ (even amortized) time per operation results in at best an $\exp(-\delta^2 n)$ probability of correctly answering a $(1/2+\delta)$-fraction of the $n$ queries.
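
    The upper-bound side mentioned for the Group Range problem is achievable with a standard segment tree: maintain $n$ group elements under point updates and range-product queries in $O(\log n)$ worst-case time per operation. A sketch (integers under addition here; any associative operation with an identity slots in the same way):

```python
class GroupRange:
    """Iterative segment tree: O(log n) point update and range product."""

    def __init__(self, values, op=lambda a, b: a + b, identity=0):
        self.n = len(values)
        self.op, self.identity = op, identity
        self.tree = [identity] * (2 * self.n)
        self.tree[self.n:] = list(values)
        for i in range(self.n - 1, 0, -1):    # build parents bottom-up
            self.tree[i] = op(self.tree[2 * i], self.tree[2 * i + 1])

    def update(self, i, value):
        """Set position i to value; recompute its O(log n) ancestors."""
        i += self.n
        self.tree[i] = value
        while i > 1:
            i //= 2
            self.tree[i] = self.op(self.tree[2 * i], self.tree[2 * i + 1])

    def query(self, lo, hi):
        """Product of positions lo..hi-1, accumulated left and right."""
        left, right = self.identity, self.identity
        lo += self.n
        hi += self.n
        while lo < hi:
            if lo & 1:
                left = self.op(left, self.tree[lo])
                lo += 1
            if hi & 1:
                hi -= 1
                right = self.op(self.tree[hi], right)
            lo //= 2
            hi //= 2
        return self.op(left, right)
```

    The lower bound above says this is essentially the end of the story: $o(\log n)$ time per operation, even amortized, forces mostly-wrong answers.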

    On Online Labeling with Polynomially Many Labels

    In the online labeling problem with parameters n and m we are presented with a sequence of n keys from a totally ordered universe U and must assign each arriving key a label from the label set {1,2,...,m} so that the order of labels (strictly) respects the ordering on U. As new keys arrive it may be necessary to change the labels of some items; such changes may be done at any time at unit cost for each change. The goal is to minimize the total cost. An alternative formulation of this problem is the file maintenance problem, in which the items, instead of being labeled, are maintained in sorted order in an array of length m, and we pay unit cost for moving an item. For the case m = cn for constant c > 1, there are known algorithms that use at most $O(n \log^2 n)$ relabelings in total [Itai, Konheim, Rodeh, 1981], and it was shown recently that this is asymptotically optimal [Bulánek, Koucký, Saks, 2012]. For the case of $m = \Theta(n^C)$ for C > 1, algorithms are known that use $O(n \log n)$ relabelings. A matching lower bound was claimed in [Dietz, Seiferas, Zhang, 2004]. That proof involved two distinct steps: a lower bound for a problem they call prefix bucketing and a reduction from prefix bucketing to online labeling. The reduction seems to be incorrect, leaving a (seemingly significant) gap in the proof. In this paper we close the gap by presenting a correct reduction to prefix bucketing. Furthermore we give a simplified and improved analysis of the prefix bucketing lower bound. This improvement allows us to extend the lower bounds for online labeling to the case where the number m of labels is superpolynomial in n. In particular, for superpolynomial m we get an asymptotically optimal lower bound $\Omega((n \log n) / (\log \log m - \log \log n))$.
    Comment: 15 pages, Presented at European Symposium on Algorithms 201
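
    In the file maintenance formulation, a naive strategy looks as follows (a toy illustration of the cost model, nowhere near the guarantees above): keys sit in sorted order in an array with empty cells, and an arriving key is placed by shifting items toward the nearer empty cell, paying one unit per moved item.

```python
def insert_key(arr, key):
    """arr: length-m list, None marks empty cells, occupied cells sorted.
    Inserts key (assumed distinct; arr must have a free cell) and returns
    the number of unit-cost item moves."""
    m = len(arr)
    p = 0                                   # key belongs just after the
    for i in range(m):                      # last stored key smaller than it
        if arr[i] is not None and arr[i] < key:
            p = i + 1
    right = next((i for i in range(p, m) if arr[i] is None), None)
    left = next((i for i in range(min(p, m - 1), -1, -1)
                 if arr[i] is None), None)
    moves = 0
    if right is not None and (left is None or right - p <= p - left):
        for i in range(right, p, -1):       # shift p..right-1 one cell right
            arr[i] = arr[i - 1]
            moves += 1
        arr[p] = key
    else:
        for i in range(left, p - 1):        # shift left+1..p-1 one cell left
            arr[i] = arr[i + 1]
            moves += 1
        arr[p - 1] = key
    return moves
```

    An adversarial arrival order can force this naive rule to pay on the order of n moves per insertion; the algorithms cited above instead redistribute carefully chosen windows of the array, achieving $O(\log^2 n)$ amortized moves when m = cn.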

    Best of Two Local Models: Local Centralized and Local Distributed Algorithms

    We consider two models of computation: centralized local algorithms and local distributed algorithms. Algorithms in one model are adapted to the other model to obtain improved algorithms. Distributed vertex coloring is employed to design improved centralized local algorithms for: maximal independent set, maximal matching, and an approximation scheme for maximum (weighted) matching over bounded-degree graphs. The improvement is threefold: the algorithms are deterministic, stateless, and the number of probes grows polynomially in $\log^* n$, where $n$ is the number of vertices of the input graph. The recursive centralized local improvement technique of Nguyen and Onak (2008) is employed to obtain an improved distributed approximation scheme for maximum (weighted) matching. The improvement is twofold: we reduce the number of rounds from $O(\log n)$ to $O(\log^* n)$ for a wide range of instances, and our algorithms are deterministic rather than randomized.
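
    A minimal example of what "centralized local" means here (the classic recursive rule, not the paper's coloring-based improvement): to answer "is v in the maximal independent set?" one probes only v's neighborhood, recursing on lower-ID neighbors, and v joins the greedy MIS iff no lower-ID neighbor does.

```python
from functools import lru_cache

def make_mis_oracle(adj):
    """adj: dict vertex -> iterable of comparable vertex ids.
    Returns a query oracle for the lexicographically-first MIS."""
    @lru_cache(maxsize=None)
    def in_mis(v):
        # v is in the greedy MIS iff every smaller neighbor is out of it
        return all(not in_mis(u) for u in adj[v] if u < v)
    return in_mis
```

    Answers are consistent across queries without any global computation, but the number of probes per query can blow up along long chains of decreasing IDs; keeping the probe count polynomial in $\log^* n$ on bounded-degree graphs is exactly the kind of improvement claimed above.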

    Beating the Folklore Algorithm for Dynamic Matching

    The maximum matching problem in dynamic graphs subject to edge updates (insertions and deletions) has received much attention over the last few years; a multitude of approximation/time tradeoffs were obtained, improving upon the folklore algorithm, which maintains a maximal (and hence 2-approximate) matching in O(n) worst-case update time in n-node graphs. We present the first deterministic algorithm which outperforms the folklore algorithm in terms of both approximation ratio and worst-case update time. Specifically, we give a $(2-\Omega(1))$-approximate algorithm with $O(m^{3/8}) = O(n^{3/4})$ worst-case update time in n-node, m-edge graphs. For sufficiently small constant $\epsilon > 0$, no deterministic $(2+\epsilon)$-approximate algorithm with worst-case update time $O(n^{0.99})$ was known. Our second result is the first deterministic $(2+\epsilon)$-approximate weighted matching algorithm with $O_\epsilon(1) \cdot O(m^{1/4}) = O_\epsilon(1) \cdot O(\sqrt{n})$ worst-case update time. Neither of our results was previously known to be achievable by a randomized algorithm against an adaptive adversary. Our main technical contributions are threefold: first, we characterize the tight cases for kernels, which are the well-studied matching sparsifiers underlying much of the $(2+\epsilon)$-approximate dynamic matching literature. This characterization, together with multiple ideas, old and new, underlies our result for breaking the approximation barrier of 2. Our second technical contribution is the first example of a dynamic matching algorithm whose running time is improved due to improving the recourse of other dynamic matching algorithms. Finally, we show how to use dynamic bipartite matching algorithms as black-box subroutines for dynamic matching in general graphs without incurring the 3/2 factor in the approximation ratio which such approaches naturally incur (reminiscent of the integrality gap of the fractional matching polytope in general graphs).
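
    For reference, the folklore baseline itself is simple to state (a sketch under the usual adjacency-set representation; this is the algorithm being beaten, not the paper's): keep a maximal matching, match a newly inserted edge if both endpoints are free, and when a matched edge is deleted, rescan both endpoints' neighborhoods, in O(n) worst-case time, for free partners.

```python
class FolkloreMatching:
    """Maximal (2-approximate) matching with O(n) worst-case update time."""

    def __init__(self):
        self.adj = {}         # vertex -> set of neighbors
        self.mate = {}        # vertex -> matched partner

    def _try_match(self, v):
        if v in self.mate:
            return
        for u in self.adj.get(v, ()):     # O(n) scan for a free neighbor
            if u not in self.mate:
                self.mate[v] = u
                self.mate[u] = v
                return

    def insert(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)
        if u not in self.mate and v not in self.mate:
            self.mate[u] = v
            self.mate[v] = u

    def delete(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        if self.mate.get(u) == v:         # a matched edge disappeared:
            del self.mate[u], self.mate[v]
            self._try_match(u)            # greedily rematch both endpoints
            self._try_match(v)
```

    Every update preserves maximality, which is where the 2-approximation comes from; the paper's contribution is to beat both the factor 2 and the O(n) update time of exactly this routine, deterministically.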

    Sensitivity Analysis of the Maximum Matching Problem

    We consider the sensitivity of algorithms for the maximum matching problem against edge and vertex modifications. Algorithms with low sensitivity are desirable because they are robust to edge failure or attack. In this work, we show a randomized $(1-\epsilon)$-approximation algorithm with worst-case sensitivity $O_\epsilon(1)$, which substantially improves upon the $(1-\epsilon)$-approximation algorithm of Varma and Yoshida (arXiv 2020), whose average sensitivity is $n^{O(1/(1+\epsilon^2))}$, and we show a deterministic $1/2$-approximation algorithm with sensitivity $\exp(O(\log^* n))$ for bounded-degree graphs. We show that any deterministic constant-factor approximation algorithm must have sensitivity $\Omega(\log^* n)$. Our results imply that randomized algorithms are strictly more powerful than deterministic ones, in that the former can achieve sensitivity independent of $n$ whereas the latter cannot. We also show analogous results for vertex sensitivity, where we remove a vertex instead of an edge. As an application of our results, we give an algorithm for online maximum matching with $O_\epsilon(n)$ total replacements in the vertex-arrival model. By comparison, Bernstein et al. (J. ACM 2019) gave an online algorithm that always outputs the maximum matching, but only for bipartite graphs and with $O(n \log n)$ total replacements. Finally, we introduce the notion of normalized weighted sensitivity, a natural generalization of sensitivity that accounts for the weights of deleted edges. We show that if all edges in a graph have polynomially bounded weight, then given a trade-off parameter $\alpha > 2$, there exists an algorithm that outputs a $\frac{1}{4\alpha}$-approximation to the maximum weighted matching in $O(m \log_\alpha n)$ time, with normalized weighted sensitivity $O(1)$. See paper for full abstract.
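
    To make the sensitivity measure concrete, here is a small harness (an illustration of the definition, not any algorithm from the paper): run an algorithm on G and on G minus each single edge, and take the largest symmetric difference between output matchings. Greedy-by-edge-order stands in for the algorithm and shows how sensitive a natural deterministic rule can be.

```python
def greedy_matching(edges):
    matched, out = set(), set()
    for u, v in edges:                    # scan edges in a fixed order
        if u not in matched and v not in matched:
            matched.update((u, v))
            out.add((u, v))
    return out

def edge_sensitivity(edges):
    base = greedy_matching(edges)
    return max(len(base ^ greedy_matching(edges[:i] + edges[i + 1:]))
               for i in range(len(edges)))

# Deleting the first edge of a path shifts every greedy choice by one,
# so greedy's sensitivity grows linearly with the path length.
path = [(i, i + 1) for i in range(10)]
print(edge_sensitivity(path))             # 10: the whole matching flips
```

    The results above say this linear blow-up is avoidable: a randomized $(1-\epsilon)$-approximation can keep the worst-case difference at $O_\epsilon(1)$, independent of n.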