
    On the Complexity of Local Distributed Graph Problems

    Full text link
    This paper is centered on the complexity of graph problems in the well-studied LOCAL model of distributed computing, introduced by Linial [FOCS '87]. It is widely known that for many of the classic distributed graph problems (including maximal independent set (MIS) and $(\Delta+1)$-vertex coloring), the randomized complexity is at most polylogarithmic in the size $n$ of the network, while the best deterministic complexity is typically $2^{O(\sqrt{\log n})}$. Understanding and narrowing down this exponential gap is considered to be one of the central long-standing open questions in the area of distributed graph algorithms. We investigate the problem by introducing a complexity-theoretic framework that allows us to shed some light on the role of randomness in the LOCAL model. We define the SLOCAL model as a sequential version of the LOCAL model. Our framework allows us to prove completeness results with respect to the class of problems which can be solved efficiently in the SLOCAL model, implying that if any of the complete problems can be solved deterministically in $\log^{O(1)} n$ rounds in the LOCAL model, we can deterministically solve all efficient SLOCAL problems (including MIS and $(\Delta+1)$-coloring) in $\log^{O(1)} n$ rounds in the LOCAL model. We show that a rather rudimentary-looking graph coloring problem is complete in the above sense: color the nodes of a graph with colors red and blue such that each node of sufficiently large polylogarithmic degree has at least one neighbor of each color. The problem admits a trivial zero-round randomized solution. The result can be viewed as showing that the only obstacle to getting efficient deterministic algorithms in the LOCAL model is an efficient algorithm to approximately round fractional values into integer values.
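    As an illustration of the "trivial zero-round randomized solution" mentioned above, here is a minimal Python sketch (not code from the paper): every node independently flips a fair coin, and a node of degree $d$ then misses one of the two colors in its neighborhood with probability $2 \cdot 2^{-d}$, which is negligible once $d$ is polylogarithmic in $n$. The use of networkx, the random instance, and the concrete degree threshold are illustrative assumptions.

    import math
    import random

    import networkx as nx  # assumed available; used only for graph bookkeeping


    def zero_round_coloring(graph, seed=0):
        """Zero rounds of communication: each node picks red or blue at random."""
        rng = random.Random(seed)
        return {v: rng.choice(("red", "blue")) for v in graph.nodes}


    def violating_nodes(graph, coloring, degree_threshold):
        """Nodes of degree >= degree_threshold whose neighborhood misses a color."""
        bad = []
        for v in graph.nodes:
            if graph.degree(v) < degree_threshold:
                continue  # the constraint only applies to high-degree nodes
            neighbor_colors = {coloring[u] for u in graph.neighbors(v)}
            if neighbor_colors != {"red", "blue"}:
                bad.append(v)
        return bad


    if __name__ == "__main__":
        n = 2000
        g = nx.gnp_random_graph(n, 200 / n, seed=1)  # dense enough that typical degrees exceed the threshold
        threshold = int(math.log2(n) ** 2)           # illustrative stand-in for "sufficiently large polylog degree"
        colors = zero_round_coloring(g)
        print("violating nodes:", violating_nodes(g, colors, threshold))  # empty with overwhelming probability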

    Distributed Computation of Large-scale Graph Problems

    Full text link
    Motivated by the increasing need for fast distributed processing of large-scale graphs such as the Web graph and various social networks, we study a message-passing distributed computing model for graph processing and present lower bounds and algorithms for several graph problems. This work is inspired by recent large-scale graph processing systems (e.g., Pregel and Giraph) which are designed based on the message-passing model of distributed computing. Our model consists of a point-to-point communication network of $k$ machines interconnected by bandwidth-restricted links. Communicating data between the machines is the costly operation (as opposed to local computation). The network is used to process an arbitrary $n$-node input graph (typically $n \gg k > 1$) that is randomly partitioned among the $k$ machines (a common implementation in many real-world systems). Our goal is to study fundamental complexity bounds for solving graph problems in this model. We present techniques for obtaining lower bounds on the distributed time complexity. Our lower bounds develop and use new bounds in random-partition communication complexity. We first show a lower bound of $\Omega(n/k)$ rounds for computing a spanning tree (ST) of the input graph. This result also implies the same bound for other fundamental problems such as computing a minimum spanning tree (MST). We also show an $\Omega(n/k^2)$ lower bound for connectivity, ST verification and other related problems. We give algorithms for various fundamental graph problems in our model. We show that problems such as PageRank, MST, connectivity, and graph covering can be solved in $\tilde{O}(n/k)$ time, whereas for shortest paths, we present algorithms that run in $\tilde{O}(n/\sqrt{k})$ time (for $(1+\epsilon)$-factor approximation) and in $\tilde{O}(n/k)$ time (for $O(\log n)$-factor approximation), respectively. Comment: In Proceedings of SODA 201
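    As a minimal sketch of the model setup described above (not code from the paper), the following Python snippet assigns the $n$ vertices to the $k$ machines uniformly at random and counts the edges whose endpoints land on different machines, i.e., the edges whose data would have to cross the bandwidth-restricted links. The cycle instance and all identifiers are illustrative assumptions.

    import random
    from collections import defaultdict


    def random_vertex_partition(num_vertices, k, seed=0):
        """Assign each vertex to one of k machines uniformly at random."""
        rng = random.Random(seed)
        home = {v: rng.randrange(k) for v in range(num_vertices)}
        machines = defaultdict(set)
        for v, m in home.items():
            machines[m].add(v)
        return home, machines


    def cross_machine_edges(edges, home):
        """Edges whose endpoints live on different machines (the costly ones)."""
        return [(u, v) for u, v in edges if home[u] != home[v]]


    if __name__ == "__main__":
        n, k = 1000, 8
        edges = [(i, (i + 1) % n) for i in range(n)]  # an n-cycle, just for illustration
        home, machines = random_vertex_partition(n, k)
        # In expectation, a (1 - 1/k) fraction of the edges cross machine boundaries.
        print("cross-machine edges:", len(cross_machine_edges(edges, home)))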

    Locally checkable proofs

    Get PDF
    This work studies decision problems from the perspective of nondeterministic distributed algorithms. For a yes-instance there must exist a proof that can be verified with a distributed algorithm: all nodes must accept a valid proof, and at least one node must reject an invalid proof. We focus on locally checkable proofs that can be verified with a constant-time distributed algorithm. For example, it is easy to prove that a graph is bipartite: the locally checkable proof gives a 2-colouring of the graph, which only takes 1 bit per node. However, it is more difficult to prove that a graph is not bipartite: it turns out that any locally checkable proof requires Ω(log n) bits per node. In this work we classify graph problems according to their local proof complexity, i.e., how many bits per node are needed in a locally checkable proof. We establish tight or near-tight results for classical graph properties such as the chromatic number. We show that the proof complexities form a natural hierarchy of complexity classes: for many classical graph problems, the proof complexity is either 0, Θ(1), Θ(log n), or poly(n) bits per node. Among the most difficult graph properties are symmetric graphs, which require Ω(n²) bits per node, and non-3-colourable graphs, which require Ω(n²/log n) bits per node; any pure graph property admits a trivial proof of size O(n²).
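    The bipartiteness example above can be made concrete with a short Python sketch (an illustration, not code from the paper): the proof is one bit per node (a 2-colouring), and each node runs a constant-time check that every neighbour carries the opposite bit. On a bipartite graph with a correct 2-colouring all nodes accept; on a non-bipartite graph no assignment of bits can make every node accept. The adjacency-dict representation is an assumption for the demo.

    def local_verdicts(adjacency, proof_bits):
        """adjacency: dict node -> neighbours; proof_bits: dict node -> 0 or 1."""
        return {
            v: all(proof_bits[u] != proof_bits[v] for u in adjacency[v])
            for v in adjacency
        }


    if __name__ == "__main__":
        even_cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
        good_proof = {0: 0, 1: 1, 2: 0, 3: 1}
        print(local_verdicts(even_cycle, good_proof))  # every node accepts

        triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # not bipartite
        some_proof = {0: 0, 1: 1, 2: 0}
        print(local_verdicts(triangle, some_proof))    # nodes 0 and 2 reject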

    Memory and communication efficient algorithm for decentralized counting of nodes in networks

    Get PDF
    Node counting on a graph is subject to some fundamental theoretical limitations, yet a solution to such problems is necessary in many applications of graph theory to real-world systems, such as collective robotics and distributed sensor networks. Thus several stochastic and naïve deterministic algorithms for distributed graph size estimation or calculation have been provided. Here we present a deterministic and distributed algorithm that allows every node of a connected graph to determine the graph size in finite time, if an upper bound on the graph size is provided. The algorithm consists of the iterative aggregation of information in local hubs, which then broadcast it throughout the whole graph. The proposed node-counting algorithm is on average more efficient in terms of node memory and communication cost than its previous deterministic counterpart for node counting, and appears comparable or more efficient in terms of average-case time complexity. Beyond node counting, the algorithm is more broadly applicable to problems such as summation over graphs, quorum sensing, and spontaneous hierarchy creation.
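    For contrast with the hub-based algorithm summarized above, here is a minimal Python sketch of the kind of naive deterministic baseline such work improves on (this is not the paper's algorithm): given an upper bound on the graph size, every node floods the identifiers it has seen for that many synchronous rounds; since the bound also bounds the diameter, each node ends up knowing all identifiers and outputs their count. The per-node memory and message sizes grow with the full identifier set, which is the kind of cost the proposed algorithm is reported to reduce. The adjacency-dict representation is an assumption for the demo.

    def naive_count(adjacency, upper_bound):
        """Synchronous flooding baseline: every node eventually learns every identifier."""
        known = {v: {v} for v in adjacency}  # each node starts knowing only itself
        for _ in range(upper_bound):         # upper_bound rounds suffice, as it bounds the diameter
            known = {
                v: set().union(known[v], *(known[u] for u in adjacency[v]))
                for v in adjacency
            }
        return {v: len(ids) for v, ids in known.items()}


    if __name__ == "__main__":
        path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a 4-node path
        print(naive_count(path, upper_bound=10))        # every node reports 4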

    The Distributed Complexity of Locally Checkable Labeling Problems Beyond Paths and Trees

    Full text link
    We consider locally checkable labeling (LCL) problems in the LOCAL model of distributed computing. Since 2016, there has been a substantial body of work examining the possible complexities of LCL problems. For example, it has been established that there are no LCL problems exhibiting deterministic complexities falling between $\omega(\log^* n)$ and $o(\log n)$. This line of inquiry has yielded a wealth of algorithmic techniques and insights that are useful for algorithm designers. While the complexity landscape of LCL problems on general graphs, trees, and paths is now well understood, graph classes beyond these three cases remain largely unexplored. Indeed, recent research trends have shifted towards a fine-grained study of special instances within the domains of paths and trees. In this paper, we generalize the line of research on characterizing the complexity landscape of LCL problems to a much broader range of graph classes. We propose a conjecture that characterizes the complexity landscape of LCL problems for an arbitrary class of graphs that is closed under minors, and we prove a part of the conjecture. Some highlights of our findings are as follows. 1. We establish a simple characterization of the minor-closed graph classes sharing the same deterministic complexity landscape as paths, where $O(1)$, $\Theta(\log^* n)$, and $\Theta(n)$ are the only possible complexity classes. 2. It is natural to conjecture that any minor-closed graph class shares the same complexity landscape as trees if and only if the graph class has bounded treewidth and unbounded pathwidth. We prove the "only if" part of the conjecture. 3. In addition to the well-known complexity landscapes for paths, trees, and general graphs, there are infinitely many different complexity landscapes among minor-closed graph classes.

    A Time Hierarchy Theorem for the LOCAL Model

    Full text link
    The celebrated Time Hierarchy Theorem for Turing machines states, informally, that more problems can be solved given more time. The extent to which a time hierarchy-type theorem holds in the distributed LOCAL model has been open for many years. It is consistent with previous results that all natural problems in the LOCAL model can be classified according to a small constant number of complexities, such as $O(1)$, $O(\log^* n)$, $O(\log n)$, $2^{O(\sqrt{\log n})}$, etc. In this paper we establish the first time hierarchy theorem for the LOCAL model and prove that several gaps exist in the LOCAL time hierarchy. 1. We define an infinite set of simple coloring problems called Hierarchical $2\frac{1}{2}$-Coloring. A correctly colored graph can be confirmed by simply checking the neighborhood of each vertex, so this problem fits into the class of locally checkable labeling (LCL) problems. However, the complexity of the $k$-level Hierarchical $2\frac{1}{2}$-Coloring problem is $\Theta(n^{1/k})$, for $k\in\mathbb{Z}^+$. The upper and lower bounds hold for both general graphs and trees, and for both randomized and deterministic algorithms. 2. Consider any LCL problem on bounded degree trees. We prove an automatic-speedup theorem that states that any randomized $n^{o(1)}$-time algorithm solving the LCL can be transformed into a deterministic $O(\log n)$-time algorithm. Together with a previous result, this establishes that on trees, there are no natural deterministic complexities in the ranges $\omega(\log^* n)$ to $o(\log n)$ or $\omega(\log n)$ to $n^{o(1)}$. 3. We expose a gap in the randomized time hierarchy on general graphs. Any randomized algorithm that solves an LCL problem in sublogarithmic time can be sped up to run in $O(T_{LLL})$ time, which is the complexity of the distributed Lovász local lemma problem, currently known to be $\Omega(\log\log n)$ and $O(\log n)$.

    Exponential Speedup over Locality in MPC with Optimal Memory

    Get PDF
    Locally Checkable Labeling (LCL) problems are graph problems in which a solution is correct if it satisfies some given constraints in the local neighborhood of each node. Example problems in this class include maximal matching, maximal independent set, and coloring problems. A successful line of research has been studying the complexities of LCL problems on paths/cycles, trees, and general graphs, providing many interesting results for the LOCAL model of distributed computing. In this work, we initiate the study of LCL problems in the low-space Massively Parallel Computation (MPC) model. In particular, on forests, we provide a method that, given the complexity of an LCL problem in the LOCAL model, automatically yields an exponentially faster algorithm for the low-space MPC setting that uses optimal global memory, that is, truly linear. While restricting to forests may seem to weaken the result, we emphasize that all known (conditional) lower bounds for the MPC setting are obtained by lifting lower bounds obtained in the distributed setting in tree-like networks (either forests or high-girth graphs), and hence the problems that we study are challenging already on forests. Moreover, the most important technical feature of our algorithms is that they use optimal global memory, that is, memory linear in the number of edges of the graph. In contrast, most of the state-of-the-art algorithms use more than linear global memory. Further, they typically start with a dense graph, sparsify it, and then solve the problem on the residual graph, exploiting the relative increase in global memory. On forests, this is not possible, because the given graph is already as sparse as it can be, and using optimal memory requires new solutions.
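    As a concrete illustration of what "locally checkable" means for one of the example problems named above (a sketch under the assumption of a plain adjacency-dict graph representation, unrelated to any particular MPC implementation), the following Python snippet verifies a maximal independent set by having each node inspect only its radius-1 neighbourhood: a selected node must have no selected neighbour, and an unselected node must have at least one selected neighbour. The labeling is globally correct exactly when every node accepts.

    def mis_locally_ok(adjacency, in_set):
        """Per-node verdicts for the maximal-independent-set LCL."""
        verdicts = {}
        for v in adjacency:
            if in_set[v]:
                # Independence: a selected node has no selected neighbour.
                verdicts[v] = not any(in_set[u] for u in adjacency[v])
            else:
                # Maximality: an unselected node has a selected neighbour.
                verdicts[v] = any(in_set[u] for u in adjacency[v])
        return verdicts


    if __name__ == "__main__":
        path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
        print(mis_locally_ok(path, {0: True, 1: False, 2: True, 3: False}))   # all nodes accept
        print(mis_locally_ok(path, {0: True, 1: False, 2: False, 3: False}))  # nodes 2 and 3 reject (not maximal)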