
    Fully dynamic approximate maximum matching and minimum vertex cover in O(log^3 n) worst case update time

    We consider the problem of maintaining an approximately maximum (fractional) matching and an approximately minimum vertex cover in a dynamic graph. Starting with the seminal paper by Onak and Rubinfeld [STOC 2010], this problem has received significant attention in recent years. There remains, however, a polynomial gap between the best known worst case update time and the best known amortised update time for this problem, even after allowing for randomisation. Specifically, Bernstein and Stein [ICALP 2015, SODA 2016] have the best known worst case update time. They present a deterministic data structure with approximation ratio (3/2 + ∊) and worst case update time O(m^{1/4}/∊^2), where m is the number of edges in the graph. Earlier, Gupta and Peng [FOCS 2013] gave a deterministic data structure with approximation ratio (1 + ∊) and worst case update time O(m^{1/2}/∊^2). No known randomised data structure beats the worst case update times of these two results. In contrast, the paper by Onak and Rubinfeld [STOC 2010] gave a randomised data structure with approximation ratio O(1) and amortised update time O(log^2 n), where n is the number of nodes in the graph. This was later improved by Baswana, Gupta and Sen [FOCS 2011] and Solomon [FOCS 2016], leading to a randomised data structure with approximation ratio 2 and amortised update time O(1). We bridge the polynomial gap between the worst case and amortised update times for this problem, without using any randomisation. We present a deterministic data structure with approximation ratio (2 + ∊) and worst case update time O(log^3 n), for all sufficiently small constants ∊.
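
    For reference, the underlying linear programs can be written out explicitly; the following standard formulation (not taken from the paper itself) shows the fractional matching LP and its dual, the fractional vertex cover LP:

        \[
        \text{matching:}\quad \max \sum_{e \in E} x_e
        \;\;\text{s.t.}\;\; \sum_{e \ni v} x_e \le 1 \;\;\forall v \in V,\quad x_e \ge 0,
        \]
        \[
        \text{vertex cover:}\quad \min \sum_{v \in V} y_v
        \;\;\text{s.t.}\;\; y_u + y_v \ge 1 \;\;\forall \{u,v\} \in E,\quad y_v \ge 0.
        \]

    By LP duality the two optima coincide, and rounding every y_v \ge 1/2 up to 1 shows that a minimum integral vertex cover costs at most twice the maximum fractional matching; this duality is what lets primal-dual style dynamic algorithms certify approximation guarantees for the matching and the vertex cover simultaneously.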

    Dynamic Matching: Reducing Integral Algorithms to Approximately-Maximal Fractional Algorithms

    We present a simple randomized reduction from fully-dynamic integral matching algorithms to fully-dynamic "approximately-maximal" fractional matching algorithms. Applying this reduction to the recent fractional matching algorithm of Bhattacharya, Henzinger, and Nanongkai (SODA 2017), we obtain a novel result for the integral problem. Specifically, our main result is a randomized fully-dynamic (2+epsilon)-approximate integral matching algorithm with small polylog worst-case update time. For the (2+epsilon)-approximation regime, only a fractional fully-dynamic (2+epsilon)-matching algorithm with worst-case polylog update time was previously known, due to Bhattacharya et al. (SODA 2017). This is the first algorithm that maintains approximate matchings with worst-case update time better than polynomial, for any constant approximation ratio. As a consequence, we also obtain the first constant-approximate worst-case polylogarithmic update time maximum weight matching algorithm.
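
    As a rough, generic illustration of how a fractional matching can be rounded to an integral one (this is a static sampling sketch with invented names, not the randomized reduction of the paper):

        import random

        def round_fractional_matching(fractional, trials=10):
            """Illustrative static rounding: sample each edge independently with
            probability equal to its fractional value x_e, then greedily discard
            edges that share an endpoint; return the best of several trials.
            `fractional` maps edges (u, v) to values x_e in [0, 1]."""
            best = set()
            for _ in range(trials):
                sampled = [e for e, x in fractional.items() if random.random() < x]
                matching, used = set(), set()
                for (u, v) in sampled:
                    if u not in used and v not in used:
                        matching.add((u, v))
                        used.update((u, v))
                if len(matching) > len(best):
                    best = matching
            return best

        # Example: a triangle with x_e = 1/2 on every edge (fractional value 3/2);
        # any integral matching in a triangle has size at most 1.
        print(round_fractional_matching({(0, 1): 0.5, (1, 2): 0.5, (0, 2): 0.5}))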

    On Fully Dynamic Graph Sparsifiers

    We initiate the study of dynamic algorithms for graph sparsification problems and obtain fully dynamic algorithms, allowing both edge insertions and edge deletions, that take polylogarithmic time after each update in the graph. Our three main results are as follows. First, we give a fully dynamic algorithm for maintaining a (1 \pm \epsilon)-spectral sparsifier with amortized update time poly(\log{n}, \epsilon^{-1}). Second, we give a fully dynamic algorithm for maintaining a (1 \pm \epsilon)-cut sparsifier with \emph{worst-case} update time poly(\log{n}, \epsilon^{-1}). Both sparsifiers have size n \cdot poly(\log{n}, \epsilon^{-1}). Third, we apply our dynamic sparsifier algorithm to obtain a fully dynamic algorithm for maintaining a (1 + \epsilon)-approximation to the value of the maximum flow in an unweighted, undirected, bipartite graph with amortized update time poly(\log{n}, \epsilon^{-1}).
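
    A minimal static sketch of the sampling-and-reweighting idea behind cut sparsifiers: keep each edge with some probability p and scale the survivors by 1/p, so every cut keeps its weight in expectation. The dynamic algorithms above are substantially more involved, and p here is just a free parameter of the illustration:

        import random

        def sample_cut_sparsifier(edges, p):
            """Keep each (unit-weight) edge independently with probability p and
            assign weight 1/p to the survivors, preserving cut weights in expectation."""
            return {e: 1.0 / p for e in edges if random.random() < p}

        def cut_weight(weighted_edges, side):
            """Total weight of edges with exactly one endpoint inside `side`."""
            return sum(w for (u, v), w in weighted_edges.items()
                       if (u in side) != (v in side))

        # Example: a 4-cycle; compare the cut {0, 1} before and after sparsification.
        cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
        sparse = sample_cut_sparsifier(cycle, p=0.5)
        print(cut_weight({e: 1.0 for e in cycle}, {0, 1}),
              cut_weight(sparse, {0, 1}))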

    Adaptive Out-Orientations with Applications

    We give simple algorithms for maintaining edge-orientations of a fully-dynamic graph, such that the out-degree of each vertex is bounded. On one hand, we show how to orient the edges such that the out-degree of each vertex is proportional to the arboricity \alpha of the graph, in a worst-case update time of O(\log^2 n \log \alpha). On the other hand, motivated by applications in dynamic maximal matching, we obtain a different trade-off, namely the improved worst-case update time of O(\log n \log \alpha) for the problem of maintaining an edge-orientation with at most O(\alpha + \log n) out-edges per vertex. Since our algorithms have update times with worst-case guarantees, the number of changes to the solution (i.e. the recourse) is naturally limited. Our algorithms make choices based entirely on local information, which makes them automatically adaptive to the current arboricity of the graph. In other words, they are arboricity-oblivious (they never need to know \alpha), while being arboricity-sensitive (their guarantees scale with the current \alpha). This both simplifies and improves upon previous work, by having fewer assumptions or better asymptotic guarantees. As a consequence, one obtains an algorithm with improved efficiency for maintaining a (1+\varepsilon)-approximation of the maximum subgraph density, and an algorithm for dynamic maximal matching whose worst-case update time is guaranteed to be upper bounded by O(\alpha + \log n \log \alpha), where \alpha is the arboricity at the time of the update.
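
    To make the maintained object concrete, here is a toy insertion-only sketch that orients each new edge out of the endpoint with the smaller current out-degree; it only illustrates what an out-orientation is and carries none of the paper's worst-case or arboricity-sensitive guarantees:

        from collections import defaultdict

        class GreedyOutOrientation:
            """Maintains a direction for every inserted edge, always orienting
            the new edge away from the endpoint with fewer out-edges."""

            def __init__(self):
                self.out_edges = defaultdict(set)   # vertex -> set of out-neighbours

            def insert(self, u, v):
                # Orient out of whichever endpoint currently has smaller out-degree.
                if len(self.out_edges[u]) <= len(self.out_edges[v]):
                    self.out_edges[u].add(v)
                else:
                    self.out_edges[v].add(u)

            def max_out_degree(self):
                return max((len(s) for s in self.out_edges.values()), default=0)

        # Example: a star centred at vertex 0 plus a triangle on vertices 1, 2, 3.
        g = GreedyOutOrientation()
        for e in [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (1, 3)]:
            g.insert(*e)
        print(g.max_out_degree())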

    An O(1)-Approximation Algorithm for Dynamic Weighted Vertex Cover with Soft Capacity

    This study considers the soft capacitated vertex cover problem in a dynamic setting. This problem generalizes the dynamic model of the vertex cover problem, which has been intensively studied in recent years. Given a dynamically changing vertex-weighted graph G=(V,E), which allows edge insertions and edge deletions, the goal is to design a data structure that maintains an approximate minimum vertex cover while satisfying the capacity constraint of each vertex. That is, when picking a copy of a vertex v in the cover, the number of v's incident edges covered by the copy is up to a given capacity of v. We extend Bhattacharya et al.'s work [SODA'15 and ICALP'15] to obtain a deterministic primal-dual algorithm for maintaining a constant-factor approximate minimum capacitated vertex cover with O(log n / epsilon) amortized update time, where n is the number of vertices in the graph. The algorithm can be extended to (1) a more general model in which each edge is associated with a non-uniform and unsplittable demand, and (2) the more general capacitated set cover problem.
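
    For intuition about the primal-dual framework being extended here, a static and uncapacitated sketch (the classical 2-approximation for weighted vertex cover: raise the dual of each uncovered edge until an endpoint becomes tight; capacities and dynamic updates are ignored in this illustration):

        def primal_dual_vertex_cover(edges, weight):
            """Static primal-dual 2-approximation for weighted vertex cover.
            `weight` maps each vertex to a non-negative weight."""
            residual = dict(weight)   # weight left before a vertex becomes tight
            cover = set()
            for (u, v) in edges:
                if u in cover or v in cover:
                    continue                      # edge is already covered
                pay = min(residual[u], residual[v])
                residual[u] -= pay                # raise this edge's dual variable
                residual[v] -= pay
                if residual[u] == 0:
                    cover.add(u)
                if residual[v] == 0:
                    cover.add(v)
            return cover

        # Example: a path 0-1-2 with vertex weights 3, 1, 3 -> returns the cover {1}.
        print(primal_dual_vertex_cover([(0, 1), (1, 2)], {0: 3, 1: 1, 2: 3}))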

    Fully Dynamic Effective Resistances

    In this paper we consider the \emph{fully-dynamic} All-Pairs Effective Resistance problem, where the goal is to maintain effective resistances on a graph G among any pair of query vertices under an intermixed sequence of edge insertions and deletions in G. The effective resistance between a pair of vertices is a physics-motivated quantity that encapsulates both the congestion and the dilation of a flow. It is directly related to random walks, and it has been instrumental in the recent works for designing fast algorithms for combinatorial optimization problems, graph sparsification, and network science. We give a data-structure that maintains (1+\epsilon)-approximations to all-pair effective resistances of a fully-dynamic unweighted, undirected multi-graph G with \tilde{O}(m^{4/5}\epsilon^{-4}) expected amortized update and query time, against an oblivious adversary. Key to our result is the maintenance of a dynamic \emph{Schur complement} (also known as vertex resistance sparsifier) onto a set of terminal vertices of our choice. This maintenance is obtained (1) by interpreting the Schur complement as a sum of random walks and (2) by randomly picking the vertex subset into which the sparsifier is constructed. We can then show that each update in the graph affects a small number of such walks, which in turn leads to our sub-linear update time. We believe that this local representation of vertex sparsifiers may be of independent interest.
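
    To ground the quantity being maintained, a minimal static baseline computes the effective resistance exactly from the Laplacian pseudoinverse, R(u,v) = (e_u - e_v)^T L^+ (e_u - e_v); this takes cubic time per rebuild, which is precisely the kind of recomputation the dynamic data structure above avoids:

        import numpy as np

        def effective_resistance(n, edges, u, v):
            """Exact effective resistance between u and v in an unweighted graph
            on vertices 0..n-1, via the Moore-Penrose pseudoinverse of the Laplacian."""
            L = np.zeros((n, n))
            for a, b in edges:
                L[a, a] += 1.0
                L[b, b] += 1.0
                L[a, b] -= 1.0
                L[b, a] -= 1.0
            chi = np.zeros(n)
            chi[u], chi[v] = 1.0, -1.0
            return float(chi @ np.linalg.pinv(L) @ chi)

        # Example: on the path 0-1-2 the resistance between the endpoints is 2.0,
        # matching two unit resistors in series.
        print(effective_resistance(3, [(0, 1), (1, 2)], 0, 2))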

    Connectivity Oracles for Graphs Subject to Vertex Failures

    We introduce new data structures for answering connectivity queries in graphs subject to batched vertex failures. A deterministic structure processes a batch of d \le d_{\star} failed vertices in \tilde{O}(d^3) time and thereafter answers connectivity queries in O(d) time. It occupies space O(d_{\star} m \log n). We develop a randomized Monte Carlo version of our data structure with update time \tilde{O}(d^2), query time O(d), and space \tilde{O}(m) for any failure bound d \le n. This is the first connectivity oracle for general graphs that can efficiently deal with an unbounded number of vertex failures. We also develop a more efficient Monte Carlo edge-failure connectivity oracle. Using space O(n \log^2 n), d edge failures are processed in O(d \log d \log\log n) time and thereafter connectivity queries are answered in O(\log\log n) time, with answers correct w.h.p. Our data structures are based on a new decomposition theorem for an undirected graph G=(V,E), which is of independent interest. It states that for any terminal set U \subseteq V we can remove a set B of |U|/(s-2) vertices such that the remaining graph contains a Steiner forest for U - B with maximum degree s.
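
    For contrast with the oracle's per-query bounds, the naive baseline simply re-runs a graph search with the failed vertices removed, paying O(n + m) per query; this is the recomputation the data structures above are designed to avoid:

        from collections import deque

        def connected_after_failures(adj, failed, s, t):
            """Breadth-first search in the graph with the vertices in `failed` removed.
            `adj` maps each vertex to an iterable of its neighbours."""
            failed = set(failed)
            if s in failed or t in failed:
                return False
            seen, queue = {s}, deque([s])
            while queue:
                x = queue.popleft()
                if x == t:
                    return True
                for y in adj[x]:
                    if y not in failed and y not in seen:
                        seen.add(y)
                        queue.append(y)
            return False

        # Example: a 4-cycle stays connected after one failure, but not after two
        # opposite vertices fail.
        cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
        print(connected_after_failures(cycle, {1}, 0, 2),
              connected_after_failures(cycle, {1, 3}, 0, 2))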

    Deterministic dynamic matching in worst-case update time

    We present deterministic algorithms for maintaining a (3/2+\epsilon)- and a (2+\epsilon)-approximate maximum matching in a fully dynamic graph with worst-case update times \hat{O}(\sqrt{n}) and \tilde{O}(1), respectively. The fastest known deterministic worst-case update time algorithms for achieving approximation ratio (2-\delta) (for any \delta > 0) and (2+\epsilon) were both shown by Roghani et al. [2021], with update times O(n^{3/4}) and O_\epsilon(\sqrt{n}), respectively. We close the gap between worst-case and amortized algorithms for the two approximation ratios, as the best deterministic amortized update times for the problem are O_\epsilon(\sqrt{n}) and \tilde{O}(1), shown by Bernstein and Stein [SODA'2021] and Bhattacharya and Kiss [ICALP'2021], respectively. In order to achieve both results we explicitly state a method implicitly used in Nanongkai and Saranurak [STOC'2017] and Bernstein et al. [arXiv'2020], which allows one to transform dynamic algorithms capable of processing the input in batches into dynamic algorithms with worst-case update time guarantees. \textbf{Independent Work:} Independently and concurrently to our work, Grandoni et al. [arXiv'2021] presented a fully dynamic algorithm for maintaining a (3/2+\epsilon)-approximate maximum matching with deterministic worst-case update time O_\epsilon(\sqrt{n}).
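
    For a feel of the batch-to-worst-case idea, the toy structure below answers "how many inserted values are at most x" and, instead of re-merging its buffer into the sorted snapshot all at once when the buffer fills, performs a bounded number of merge steps on every insert while queries consult the old snapshot plus the small buffers. This is only a schematic instance of the general technique under simplifying assumptions (insertions only, invented names), not the transformation stated in the paper:

        import bisect

        class SpreadRebuildCounter:
            """Counts how many inserted values are <= x.  A full buffer is merged
            into the sorted snapshot a few steps per insert (bounded work per
            update) rather than all at once; queries read the finished snapshot,
            the buffer currently being merged, and the live buffer."""

            def __init__(self, batch_size=4):
                self.batch = batch_size
                self.snapshot = []    # finished sorted array
                self.merging = []     # frozen buffer currently being merged in
                self.live = []        # recent inserts, kept sorted, small
                self.partial = []     # next snapshot under construction
                self.rebuild = None   # generator performing the merge step by step

            def _merge_steps(self):
                i = j = 0
                while i < len(self.snapshot) or j < len(self.merging):
                    if j == len(self.merging) or (i < len(self.snapshot)
                                                  and self.snapshot[i] <= self.merging[j]):
                        self.partial.append(self.snapshot[i])
                        i += 1
                    else:
                        self.partial.append(self.merging[j])
                        j += 1
                    yield True

            def insert(self, x):
                bisect.insort(self.live, x)
                if self.rebuild is not None:
                    # Advance the background merge by a bounded number of steps.
                    for _ in range((len(self.snapshot) + len(self.merging)) // self.batch + 1):
                        if not next(self.rebuild, False):          # merge finished
                            self.snapshot, self.partial = self.partial, []
                            self.merging, self.rebuild = [], None
                            break
                elif len(self.live) >= self.batch:
                    # Freeze the live buffer and start merging it in lazily.
                    self.merging, self.live = self.live, []
                    self.partial, self.rebuild = [], self._merge_steps()

            def count_at_most(self, x):
                return sum(bisect.bisect_right(part, x)
                           for part in (self.snapshot, self.merging, self.live))

        # Example usage.
        c = SpreadRebuildCounter(batch_size=3)
        for v in [5, 1, 7, 3, 9, 2, 8]:
            c.insert(v)
        print(c.count_at_most(5))   # 5, 1, 3 and 2 are <= 5, so this prints 4

    Over any window of batch_size inserts the scheduled steps add up to at least the length of the pending merge, so the new snapshot is always finished before the next buffer fills; this per-update bounding of rebuild work is the flavour of the amortized-to-worst-case transformations referenced above.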