
    A Time Hierarchy Theorem for the LOCAL Model

    The celebrated Time Hierarchy Theorem for Turing machines states, informally, that more problems can be solved given more time. The extent to which a time hierarchy-type theorem holds in the distributed LOCAL model has been open for many years. It is consistent with previous results that all natural problems in the LOCAL model can be classified according to a small constant number of complexities, such as $O(1)$, $O(\log^* n)$, $O(\log n)$, $2^{O(\sqrt{\log n})}$, etc. In this paper we establish the first time hierarchy theorem for the LOCAL model and prove that several gaps exist in the LOCAL time hierarchy. 1. We define an infinite set of simple coloring problems called Hierarchical $2\frac{1}{2}$-Coloring. A correctly colored graph can be confirmed by simply checking the neighborhood of each vertex, so this problem fits into the class of locally checkable labeling (LCL) problems. However, the complexity of the $k$-level Hierarchical $2\frac{1}{2}$-Coloring problem is $\Theta(n^{1/k})$, for $k \in \mathbb{Z}^+$. The upper and lower bounds hold for both general graphs and trees, and for both randomized and deterministic algorithms. 2. Consider any LCL problem on bounded degree trees. We prove an automatic-speedup theorem that states that any randomized $n^{o(1)}$-time algorithm solving the LCL can be transformed into a deterministic $O(\log n)$-time algorithm. Together with a previous result, this establishes that on trees, there are no natural deterministic complexities in the ranges $\omega(\log^* n)$--$o(\log n)$ or $\omega(\log n)$--$n^{o(1)}$. 3. We expose a gap in the randomized time hierarchy on general graphs. Any randomized algorithm that solves an LCL problem in sublogarithmic time can be sped up to run in $O(T_{LLL})$ time, which is the complexity of the distributed Lovász local lemma problem, currently known to be $\Omega(\log\log n)$ and $O(\log n)$.
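
    The LCL framework underlying these results is simple to state: a labeling is valid exactly when every vertex accepts after inspecting only its constant-radius neighborhood. The sketch below is illustrative only, not from the paper; `vertex_accepts` and the proper-coloring check are stand-ins showing this local verification for the special case of a proper vertex coloring.

    ```python
    # Minimal sketch (not from the paper): what "locally checkable" means for a
    # labeling such as a proper vertex coloring. Each vertex accepts or rejects
    # based only on its own label and its neighbors' labels; the labeling is
    # valid iff every vertex accepts.

    def vertex_accepts(v, color, neighbor_colors):
        """Radius-1 check for proper coloring: reject if any neighbor shares v's color."""
        return all(c != color for c in neighbor_colors)

    def is_valid_lcl(adj, coloring):
        """adj: dict vertex -> list of neighbors; coloring: dict vertex -> label."""
        return all(
            vertex_accepts(v, coloring[v], [coloring[u] for u in adj[v]])
            for v in adj
        )

    if __name__ == "__main__":
        # A 4-cycle properly 2-colored: every local check passes.
        adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
        print(is_valid_lcl(adj, {0: 0, 1: 1, 2: 0, 3: 1}))  # True
        print(is_valid_lcl(adj, {0: 0, 1: 0, 2: 0, 3: 1}))  # False
    ```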

    The Complexity of Distributed Edge Coloring with Small Palettes

    The complexity of distributed edge coloring depends heavily on the palette size as a function of the maximum degree $\Delta$. In this paper we explore the complexity of edge coloring in the LOCAL model in different palette size regimes. 1. We simplify the \emph{round elimination} technique of Brandt et al. and prove that $(2\Delta-2)$-edge coloring requires $\Omega(\log_\Delta \log n)$ time w.h.p. and $\Omega(\log_\Delta n)$ time deterministically, even on trees. The simplified technique is based on two ideas: the notion of an irregular running time and some general observations that transform weak lower bounds into stronger ones. 2. We give a randomized edge coloring algorithm that can use palette sizes as small as $\Delta + \tilde{O}(\sqrt{\Delta})$, which is a natural barrier for randomized approaches. The running time of the algorithm is at most $O(\log\Delta \cdot T_{LLL})$, where $T_{LLL}$ is the complexity of a permissive version of the constructive Lovász local lemma. 3. We develop a new distributed Lovász local lemma algorithm for tree-structured dependency graphs, which leads to a $(1+\epsilon)\Delta$-edge coloring algorithm for trees running in $O(\log\log n)$ time. This algorithm arises from two new results: a deterministic $O(\log n)$-time LLL algorithm for tree-structured instances, and a randomized $O(\log\log n)$-time graph shattering method for breaking the dependency graph into independent $O(\log n)$-size LLL instances. 4. A natural approach to computing $(\Delta+1)$-edge colorings (Vizing's theorem) is to extend partial colorings by iteratively re-coloring parts of the graph. We prove that this approach may be viable, but in the worst case requires recoloring subgraphs of diameter $\Omega(\Delta\log n)$. This stands in contrast to distributed algorithms for Brooks' theorem, which exploit the existence of $O(\log_\Delta n)$-length augmenting paths.
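
    For context on the palette sizes discussed above, the classical sequential greedy argument shows that $2\Delta-1$ colors always suffice: every edge is adjacent to at most $2(\Delta-1)$ other edges, so some color is always free. The sketch below is that folklore argument only, not the paper's distributed algorithm; all function names are illustrative.

    ```python
    # Illustrative sketch (not the paper's algorithm): the classical sequential
    # greedy argument showing that a palette of size 2*Delta - 1 always suffices.
    # Each edge sees at most 2*(Delta - 1) already-colored adjacent edges, so some
    # color in {0, ..., 2*Delta - 2} is always free. The paper studies how far
    # below this (down to Delta + ~sqrt(Delta)) distributed algorithms can go.

    from collections import defaultdict

    def greedy_edge_coloring(edges):
        """edges: list of (u, v) pairs. Returns dict edge -> color."""
        incident = defaultdict(set)   # vertex -> colors used on its incident edges
        coloring = {}
        for (u, v) in edges:
            used = incident[u] | incident[v]
            color = next(c for c in range(len(used) + 1) if c not in used)
            coloring[(u, v)] = color
            incident[u].add(color)
            incident[v].add(color)
        return coloring

    if __name__ == "__main__":
        # Triangle plus a pendant edge (Delta = 3): greedy uses at most 2*3 - 1 colors.
        print(greedy_edge_coloring([(0, 1), (1, 2), (2, 0), (2, 3)]))
    ```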

    Streaming Complexity of Spanning Tree Computation

    The semi-streaming model is a variant of the streaming model frequently used for the computation of graph problems. It allows the edges of an n-node input graph to be read sequentially in p passes using Õ(n) space. If the list of edges includes deletions, then the model is called the turnstile model; otherwise it is called the insertion-only model. In both models, some graph problems, such as spanning trees, k-connectivity, densest subgraph, degeneracy, cut-sparsifier, and (Δ+1)-coloring, can be exactly solved or (1+ε)-approximated in a single pass, while other graph problems, such as triangle detection and unweighted all-pairs shortest paths, are known to require Ω̃(n) passes to compute. For many fundamental graph problems, the tractability in these models is open. In this paper, we study the tractability of computing some standard spanning trees, including BFS, DFS, and maximum-leaf spanning trees. Our results, in both the insertion-only and the turnstile models, are as follows. Maximum-Leaf Spanning Trees: This problem is known to be APX-complete with inapproximability constant ρ ∈ [245/244, 2). By constructing an ε-MLST sparsifier, we show that for every constant ε > 0, MLST can be approximated in a single pass to within a factor of 1+ε w.h.p. (albeit in super-polynomial time for ε ≤ ρ-1 assuming P ≠ NP) and can be approximated in polynomial time in a single pass to within a factor of ρ_n+ε w.h.p., where ρ_n is the supremum constant to within which MLST cannot be approximated in polynomial time using Õ(n) space. In the insertion-only model, these algorithms can be deterministic. BFS Trees: It is known that BFS trees require ω(1) passes to compute, but the naïve approach needs O(n) passes. We devise a new randomized algorithm that reduces the pass complexity to O(√n), and it offers a smooth tradeoff between pass complexity and space usage. This gives a polynomial separation between single-source and all-pairs shortest paths for unweighted graphs. DFS Trees: It is unknown whether DFS trees require more than one pass. The current best algorithm by Khan and Mehta [STACS 2019] takes Õ(h) passes, where h is the height of computed DFS trees. Note that h can be as large as Ω(m/n) for n-node m-edge graphs. Our contribution is twofold. First, we provide a simple alternative proof of this result, via a new connection to sparse certificates for k-node-connectivity. Second, we present a randomized algorithm that reduces the pass complexity to O(√n), and it also offers a smooth tradeoff between pass complexity and space usage.
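
    As background for the single-pass results mentioned above, a spanning forest is computable in one insertion-only pass with O(n) words of space by keeping a union-find structure and retaining exactly the edges that merge two components. The sketch below illustrates that folklore baseline (all names are mine), not the paper's MLST/BFS/DFS algorithms.

    ```python
    # Minimal sketch, for context only: in the insertion-only model a spanning
    # forest is computable in one pass with O(n) words of space, by keeping a
    # union-find structure and retaining exactly the edges that merge two
    # components. (The paper's contributions concern harder trees: MLST, BFS, DFS.)

    def spanning_forest_one_pass(n, edge_stream):
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        tree_edges = []
        for (u, v) in edge_stream:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                tree_edges.append((u, v))
        return tree_edges

    if __name__ == "__main__":
        stream = [(0, 1), (1, 2), (0, 2), (3, 4)]
        print(spanning_forest_one_pass(5, stream))  # [(0, 1), (1, 2), (3, 4)]
    ```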

    Algorithms for Fast Aggregated Convergecast in Sensor Networks

    Fast and periodic collection of aggregated data is of considerable interest for mission-critical and continuous monitoring applications in sensor networks. In the many-to-one communication paradigm, referred to as convergecast, we focus on applications wherein data packets are aggregated at each hop en route to the sink along a tree-based routing topology, and address the problem of minimizing the convergecast schedule length by utilizing multiple frequency channels. The primary hindrance in minimizing the schedule length is the presence of interfering links. We prove that it is NP-complete to determine whether all the interfering links in an arbitrary network can be removed using at most a constant number of frequencies. We give a sufficient condition on the number of frequencies for which all the interfering links can be removed, and propose a polynomial time algorithm that minimizes the schedule length in this case. We also prove that minimizing the schedule length for a given number of frequencies on an arbitrary network is NP-complete, and describe a greedy scheme that gives a constant factor approximation on unit disk graphs. When the routing tree is not given as an input to the problem, we prove that a constant factor approximation is still achievable for degree-bounded trees. Finally, we evaluate our algorithms through simulations and compare their performance under different network parameters.
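
    To make the scheduling setting concrete, the sketch below shows a generic greedy slot assignment: each tree link receives the smallest time slot not used by any conflicting link, where a conflict models a shared endpoint or an interfering transmission. This is an illustrative baseline under simplifications of my own, not the paper's approximation algorithm.

    ```python
    # Illustrative sketch (assumptions, not the paper's exact scheme): greedy
    # time-slot assignment for tree links in a convergecast schedule. Each link is
    # assigned the smallest slot not already taken by a link it conflicts with,
    # where 'conflicts' captures both shared endpoints and interference.

    def greedy_slot_assignment(links, conflicts):
        """links: list of link ids; conflicts: dict link -> set of conflicting links."""
        slot = {}
        for link in links:
            used = {slot[other] for other in conflicts.get(link, set()) if other in slot}
            slot[link] = next(s for s in range(len(used) + 1) if s not in used)
        return slot

    if __name__ == "__main__":
        # Three links where A conflicts with B and C, but B and C are compatible.
        conflicts = {"A": {"B", "C"}, "B": {"A"}, "C": {"A"}}
        print(greedy_slot_assignment(["A", "B", "C"], conflicts))  # {'A': 0, 'B': 1, 'C': 1}
    ```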

    Finding Cycles and Trees in Sublinear Time

    We present sublinear-time (randomized) algorithms for finding simple cycles of length at least $k \geq 3$ and tree-minors in bounded-degree graphs. The complexity of these algorithms is related to the distance of the graph from being $C_k$-minor-free (resp., free from having the corresponding tree-minor). In particular, if the graph is far (i.e., $\Omega(1)$-far) from being cycle-free, i.e., if one has to delete a constant fraction of edges to make it cycle-free, then the algorithm finds a cycle of polylogarithmic length in time $\tilde{O}(\sqrt{N})$, where $N$ denotes the number of vertices. This time complexity is optimal up to polylogarithmic factors. The foregoing results are the outcome of our study of the complexity of \emph{one-sided error} property testing algorithms in the bounded-degree graphs model. For example, we show that cycle-freeness of $N$-vertex graphs can be tested with one-sided error within time complexity $\tilde{O}(\mathrm{poly}(1/\epsilon)\cdot\sqrt{N})$. This matches the known $\Omega(\sqrt{N})$ query lower bound, and contrasts with the fact that any minor-free property admits a \emph{two-sided error} tester of query complexity that only depends on the proximity parameter $\epsilon$. For any constant $k \geq 3$, we extend this result to testing whether the input graph has a simple cycle of length at least $k$. On the other hand, for any fixed tree $T$, we show that $T$-minor-freeness has a one-sided error tester of query complexity that only depends on the proximity parameter $\epsilon$. Our algorithm for finding cycles in bounded-degree graphs extends to general graphs, where distances are measured with respect to the actual number of edges. Such an extension is not possible with respect to finding tree-minors in $o(\sqrt{N})$ complexity.
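
    The one-sided-error flavor of these testers can be illustrated with a toy procedure: sample start vertices, explore a small ball around each, and report a cycle only when a non-tree edge is actually seen, so a positive answer is never wrong. The sketch below is just this illustration (a simplification of my own), not the paper's $\tilde{O}(\sqrt{N})$-time algorithm.

    ```python
    # Heavily hedged sketch: not the paper's algorithm, just an illustration of the
    # one-sided-error flavor. Sample random start vertices and explore a small ball
    # around each; report a cycle only if a non-tree edge is actually seen inside
    # the ball, so a "cycle found" answer is never wrong.

    import random

    def find_short_cycle(adj, samples, radius):
        """adj: dict mapping vertices 0..n-1 to neighbor lists (simple graph)."""
        n = len(adj)
        for _ in range(samples):
            start = random.randrange(n)
            parent = {start: None}
            frontier, depth = [start], 0
            while frontier and depth < radius:
                nxt = []
                for v in frontier:
                    for u in adj[v]:
                        if u not in parent:
                            parent[u] = v
                            nxt.append(u)
                        elif u != parent[v]:
                            return (v, u)  # non-tree edge: witnesses a cycle
                frontier, depth = nxt, depth + 1
        return None  # may miss cycles, but never reports one that isn't there

    if __name__ == "__main__":
        adj = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4], 4: [3]}
        print(find_short_cycle(adj, samples=5, radius=3))
    ```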

    Computing in Additive Networks with Bounded-Information Codes

    This paper studies the theory of the additive wireless network model, in which the received signal is abstracted as an addition of the transmitted signals. Our central observation is that the crucial challenge for computing in this model is not high contention, as assumed previously, but rather guaranteeing a bounded amount of \emph{information} in each neighborhood per round, a property that we show is achievable using a new random coding technique. Technically, we provide efficient algorithms for fundamental distributed tasks in additive networks, such as solving various symmetry breaking problems, approximating network parameters, and solving an \emph{asymmetry revealing} problem such as computing a maximal input. The key method used is a novel random coding technique that allows a node to successfully decode the received information, as long as it does not contain too many distinct values. We then design our algorithms to produce a limited amount of information in each neighborhood in order to leverage our enriched toolbox for computing in additive networks.
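
    Below is a toy version of the random-coding idea, under strong simplifying assumptions of my own (a small known universe of values, each value transmitted by at most one neighbor, and at most k distinct values per round): each value is assigned a random codeword over a prime field, the channel delivers only the coordinate-wise sum, and the receiver decodes by matching that sum against small candidate subsets, which collide only with tiny probability. This is an illustration, not the paper's coding scheme.

    ```python
    # Toy sketch of the random-coding idea (assumptions: small known value universe,
    # each value transmitted by at most one neighbor, at most k distinct values per
    # round). Each value gets a random codeword over a prime field; the receiver
    # sees only the coordinate-wise sum and decodes by brute-force matching against
    # sums of small candidate subsets.

    import itertools
    import random

    P = (1 << 61) - 1          # a large prime modulus
    DIM = 8                    # codeword length

    def codeword(value):
        rng = random.Random(value)   # deterministic "shared randomness" per value
        return [rng.randrange(P) for _ in range(DIM)]

    def transmit(values):
        """Channel: coordinate-wise sum of the senders' codewords, mod P."""
        total = [0] * DIM
        for v in values:
            total = [(a + b) % P for a, b in zip(total, codeword(v))]
        return total

    def decode(received, universe, k):
        """Brute-force over subsets of the universe with at most k elements."""
        for size in range(k + 1):
            for subset in itertools.combinations(universe, size):
                if transmit(subset) == received:
                    return set(subset)
        return None

    if __name__ == "__main__":
        universe = range(10)
        signal = transmit([2, 5, 7])
        print(decode(signal, universe, k=3))   # {2, 5, 7} w.h.p.
    ```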

    Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable

    There has been significant recent interest in parallel graph processing due to the need to quickly analyze the large graphs available today. Many graph codes have been designed for distributed memory or external memory. However, today even the largest publicly-available real-world graph (the Hyperlink Web graph with over 3.5 billion vertices and 128 billion edges) can fit in the memory of a single commodity multicore server. Nevertheless, most experimental work in the literature reports results on much smaller graphs, and the work on the Hyperlink graph uses distributed or external memory. Therefore, it is natural to ask whether we can efficiently solve a broad class of graph problems on this graph in memory. This paper shows that theoretically-efficient parallel graph algorithms can scale to the largest publicly-available graphs using a single machine with a terabyte of RAM, processing them in minutes. We give implementations of theoretically-efficient parallel algorithms for 20 important graph problems. We also present the optimizations and techniques that we used in our implementations, which were crucial in enabling us to process these large graphs quickly. We show that the running times of our implementations outperform existing state-of-the-art implementations on the largest real-world graphs. For many of the problems that we consider, this is the first time they have been solved on graphs at this scale. We have made the implementations developed in this work publicly available as the Graph-Based Benchmark Suite (GBBS).
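
    The sketch below illustrates the frontier-by-frontier pattern that such shared-memory graph frameworks are built around, using BFS as the running example; the inner loop over the frontier's out-edges is what a real implementation parallelizes across cores. This is an illustration only, not the GBBS API.

    ```python
    # Illustration only (not the GBBS API): the frontier-based pattern that such
    # frameworks build on. BFS proceeds level by level; processing the edges out of
    # the current frontier is the step these libraries parallelize across cores.

    def frontier_bfs(adj, source):
        """adj: dict vertex -> list of neighbors. Returns dict vertex -> BFS level."""
        level = {source: 0}
        frontier = [source]
        depth = 0
        while frontier:
            depth += 1
            next_frontier = []
            for v in frontier:                 # parallel loop in a real framework
                for u in adj[v]:
                    if u not in level:         # done atomically (e.g. CAS) in parallel code
                        level[u] = depth
                        next_frontier.append(u)
            frontier = next_frontier
        return level

    if __name__ == "__main__":
        adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
        print(frontier_bfs(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
    ```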
