
    Sublinear-Time Distributed Algorithms for Detecting Small Cliques and Even Cycles

    In this paper we give sublinear-time distributed algorithms in the CONGEST model for subgraph detection for two classes of graphs: cliques and even-length cycles. We show for the first time that all copies of 4-cliques and 5-cliques in the network graph can be listed in sublinear time, O(n^{5/6+o(1)}) rounds and O(n^{21/22+o(1)}) rounds, respectively. Prior to our work, it was not known whether it was possible even to check whether the network contains a 4-clique or a 5-clique in sublinear time. For even-length cycles, C_{2k}, we give an improved sublinear-time algorithm, which exploits a new connection to extremal combinatorics. For example, for 6-cycles we improve the running time from O~(n^{5/6}) to O~(n^{3/4}) rounds. We also show two obstacles to proving lower bounds for C_{2k}-freeness: First, we use the new connection to extremal combinatorics to show that the current lower bound of Omega~(sqrt{n}) rounds for 6-cycle freeness cannot be improved using partition-based reductions from 2-party communication complexity, the technique by which all known lower bounds on subgraph detection have been proven to date. Second, we show that there is some fixed constant delta in (0,1/2) such that for any k, an Omega(n^{1/2+delta}) lower bound on C_{2k}-freeness implies new lower bounds in circuit complexity. For general subgraphs, it was shown in [Orr Fischer et al., 2018] that for any fixed k, there exists a subgraph H of size k such that H-freeness requires Omega~(n^{2-Theta(1/k)}) rounds. It was left as an open problem whether this is tight, or whether some constant-sized subgraph requires truly quadratic time to detect. We show that, in fact, for any subgraph H of constant size k, the H-freeness problem can be solved in O(n^{2-Theta(1/k)}) rounds, nearly matching the lower bound of [Orr Fischer et al., 2018].
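    As a purely illustrative aside, the sketch below fixes what "listing all copies of 4-cliques" asks for: a naive centralized enumeration over common neighbourhoods. It is not the paper's sublinear-round CONGEST algorithm, and the function name and toy graph are made up for illustration.

        # Hypothetical illustration of the 4-clique listing problem.
        # Naive centralized enumeration, NOT the paper's CONGEST algorithm.
        from itertools import combinations

        def list_4_cliques(adj):
            """adj: dict mapping each vertex to the set of its neighbours."""
            cliques = []
            for u, v in combinations(adj, 2):
                if v not in adj[u]:
                    continue
                common = adj[u] & adj[v]          # candidates completing the clique
                for w, x in combinations(sorted(common), 2):
                    if x in adj[w]:
                        cliques.append(tuple(sorted((u, v, w, x))))
            return sorted(set(cliques))           # deduplicate copies found via different edges

        # Example: K_4 plus a pendant vertex contains exactly one 4-clique.
        adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3}}
        print(list_4_cliques(adj))                # [(0, 1, 2, 3)]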

    Complexity of Liveness in Parameterized Systems

    We investigate the fine-grained complexity of liveness verification for leader-contributor systems. These consist of a designated leader thread and an arbitrary number of identical contributor threads communicating via a shared memory. The liveness verification problem asks whether there is an infinite computation of the system in which the leader reaches a final state infinitely often. Like its reachability counterpart, the problem is known to be NP-complete. Our results show that, even from a fine-grained point of view, the complexities differ only by a polynomial factor. Liveness verification decomposes into reachability and cycle detection. We present a fixed-point iteration solving the latter in polynomial time. For reachability, we reconsider the two standard parameterizations. When parameterized by the number of states of the leader L and the size of the data domain D, we show an (L + D)^{O(L + D)}-time algorithm. It improves on a previous algorithm, thereby settling an open problem. When parameterized by the number of states of the contributor C, we reuse an O^*(2^C)-time algorithm. We show how to connect both algorithms with the cycle detection to obtain algorithms for liveness verification. The running times of the composed algorithms match those of reachability, proving that the fine-grained lower bounds for liveness verification are met.
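    For intuition about the decomposition into reachability and cycle detection, the sketch below checks the liveness condition on an explicit finite transition system: some final state must be reachable from the initial state and must lie on a cycle. This is a plain graph check with invented names, not the paper's fixed-point iteration for parameterized leader-contributor systems, which never builds the system's state space explicitly.

        # Hypothetical sketch: a final state is visited infinitely often in some
        # run iff it is reachable from the initial state and lies on a cycle.
        from collections import deque

        def reachable(graph, source):
            """Vertices reachable from source (source included), via BFS."""
            seen, queue = {source}, deque([source])
            while queue:
                u = queue.popleft()
                for v in graph.get(u, ()):
                    if v not in seen:
                        seen.add(v)
                        queue.append(v)
            return seen

        def has_live_run(graph, init, finals):
            """True iff some final state is reachable from init and can reach
            itself again by a non-empty path (i.e. it lies on a cycle)."""
            from_init = reachable(graph, init)
            for f in finals:
                if f not in from_init:
                    continue
                # f lies on a cycle iff f is reachable from one of its successors
                if any(f in reachable(graph, s) for s in graph.get(f, ())):
                    return True
            return False

        # Example: init -> a -> f -> a gives a run visiting f infinitely often.
        graph = {"init": ["a"], "a": ["f"], "f": ["a"]}
        print(has_live_run(graph, "init", {"f"}))   # True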

    Exact Weight Subgraphs and the k-Sum Conjecture

    We consider the Exact-Weight-H problem of finding a (not necessarily induced) subgraph H of weight 0 in an edge-weighted graph G. We show that for every H, the complexity of this problem is strongly related to that of the infamous k-Sum problem. In particular, we show that under the k-Sum Conjecture, we can achieve tight upper and lower bounds for the Exact-Weight-H problem for various subgraphs H such as matching, star, path, and cycle. One interesting consequence is that improving on the O(n^3) upper bound for Exact-Weight-4-Path or Exact-Weight-5-Path will imply improved algorithms for 3-Sum, 5-Sum, All-Pairs Shortest Paths and other fundamental problems. This is in sharp contrast to the minimum-weight and (unweighted) detection versions, which can be solved easily in time O(n^2). We also show that a faster algorithm for any of the following three problems would yield faster algorithms for the others: 3-Sum, Exact-Weight-3-Matching, and Exact-Weight-3-Star.
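    For readers unfamiliar with the central problem in these reductions, here is the classical O(n^2) algorithm for 3-Sum (sort, then sweep with two pointers). It is included only to fix what "3-Sum" means; the reductions and conditional bounds themselves are in the paper.

        # Classical quadratic 3-Sum: find three entries summing to zero, or None.
        def three_sum(nums):
            nums = sorted(nums)
            n = len(nums)
            for i in range(n - 2):
                lo, hi = i + 1, n - 1
                while lo < hi:
                    s = nums[i] + nums[lo] + nums[hi]
                    if s == 0:
                        return nums[i], nums[lo], nums[hi]
                    if s < 0:
                        lo += 1          # need a larger sum
                    else:
                        hi -= 1          # need a smaller sum
            return None

        print(three_sum([8, -25, 4, 10, -3, 17, -1]))   # (-25, 8, 17)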

    Algorithms and Lower Bounds for Cycles and Walks: Small Space and Sparse Graphs


    Equivalence Classes and Conditional Hardness in Massively Parallel Computations

    The Massively Parallel Computation (MPC) model serves as a common abstraction of many modern large-scale data processing frameworks, and has been receiving increasing attention over the past few years, especially in the context of classical graph problems. So far, the only way to argue lower bounds for this model is to condition on conjectures about the hardness of some specific problems, such as graph connectivity on promise graphs that are either one cycle or two cycles, usually called the one cycle vs. two cycles problem. This is unlike the traditional arguments based on conjectures about complexity classes (e.g., P ≠ NP), which are often more robust in the sense that refuting them would lead to groundbreaking algorithms for a whole class of problems. In this paper we present connections between problems and classes of problems that allow the latter type of arguments. These connections concern the class of problems solvable in a sublogarithmic number of rounds in the MPC model, denoted by MPC(o(log N)), and some standard classes concerning space complexity, namely L and NL, and suggest conjectures that are robust in the sense that refuting them would lead to many surprisingly fast new algorithms in the MPC model. We also obtain new conditional lower bounds, and prove new reductions and equivalences between problems in the MPC model.
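    As a point of reference, the "one cycle vs. two cycles" promise problem mentioned above is trivial sequentially: the input is promised to be a disjoint union of cycles, and the task is to count whether there is one component or two. The conjectured hardness concerns o(log N)-round MPC algorithms, not this kind of centralized check. The union-find sketch below, with made-up names and a toy input, only illustrates the problem statement.

        # Hypothetical illustration: count components of a disjoint union of cycles.
        def count_cycle_components(edges):
            parent = {}

            def find(x):
                parent.setdefault(x, x)
                while parent[x] != x:
                    parent[x] = parent[parent[x]]   # path halving
                    x = parent[x]
                return x

            for u, v in edges:
                parent[find(u)] = find(v)           # union the two endpoints
            return len({find(x) for x in parent})

        one_cycle = [(0, 1), (1, 2), (2, 0)]
        two_cycles = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
        print(count_cycle_components(one_cycle), count_cycle_components(two_cycles))  # 1 2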

    Algebraic Methods in the Congested Clique

    In this work, we use algebraic methods for studying distance computation and subgraph detection tasks in the congested clique model. Specifically, we adapt parallel matrix multiplication implementations to the congested clique, obtaining an O(n^{1-2/ω})-round matrix multiplication algorithm, where ω < 2.3728639 is the exponent of matrix multiplication. In conjunction with known techniques from centralised algorithmics, this gives significant improvements over previous best upper bounds in the congested clique model. The highlight results include:
    -- triangle and 4-cycle counting in O(n^{0.158}) rounds, improving upon the O(n^{1/3}) triangle detection algorithm of Dolev et al. [DISC 2012],
    -- a (1 + o(1))-approximation of all-pairs shortest paths in O(n^{0.158}) rounds, improving upon the O~(n^{1/2})-round (2 + o(1))-approximation algorithm of Nanongkai [STOC 2014], and
    -- computing the girth in O(n^{0.158}) rounds, which is the first non-trivial solution in this model.
    In addition, we present a novel constant-round combinatorial algorithm for detecting 4-cycles. Comment: This work is a merger of arxiv:1412.2109 and arxiv:1412.266
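    The algebraic idea behind triangle counting is the standard identity that the number of triangles in a simple undirected graph equals trace(A^3)/6 for its adjacency matrix A. The snippet below shows that identity centrally with NumPy; the paper's contribution is carrying out the matrix multiplication inside the congested clique model, which is not reproduced here.

        # Illustrative only: triangle count via trace(A^3)/6.
        import numpy as np

        def count_triangles(adj_matrix):
            a = np.asarray(adj_matrix)
            return int(np.trace(np.linalg.matrix_power(a, 3)) // 6)

        # Example: K_4 minus one edge contains exactly two triangles.
        a = np.array([[0, 1, 1, 1],
                      [1, 0, 1, 1],
                      [1, 1, 0, 0],
                      [1, 1, 0, 0]])
        print(count_triangles(a))   # 2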

    Homology-based Distributed Coverage Hole Detection in Wireless Sensor Networks

    Homology theory provides new and powerful solutions to address coverage problems in wireless sensor networks (WSNs). They are based on algebraic objects such as the Cech complex and the Rips complex. The Cech complex gives accurate information about coverage quality but requires precise knowledge of the relative locations of nodes. This assumption is rather strong and hard to satisfy in practical deployments. The Rips complex provides an approximation of the Cech complex. It is easier to build and does not require any knowledge of node locations. This simplicity comes at the expense of accuracy: the Rips complex cannot always detect all coverage holes, so it is necessary to evaluate its accuracy. This work proposes to use the proportion of the area of undiscovered coverage holes as the performance criterion. Investigations show that this proportion depends on the ratio between the communication and sensing radii of a sensor. Closed-form expressions for lower and upper bounds on the accuracy are also derived. For those coverage holes that can be discovered by the Rips complex, a homology-based distributed algorithm is proposed to detect them. Simulation results are consistent with the proposed analytical lower bound, with a maximum difference of 0.5%. Upper-bound performance depends on the ratio of communication and sensing radii. Simulations also show that the algorithm can localize about 99% of coverage holes in about 99% of cases.
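    To make the Rips construction concrete, the sketch below builds a Vietoris-Rips complex up to dimension 2 from sensor positions: an edge appears whenever two nodes are within the communication radius r_c, and a triangle whenever all three pairwise distances are within r_c. The function name, parameters and toy positions are invented for illustration; the paper's distributed hole-detection algorithm, which works on the homology of this complex, is not reproduced.

        # Hypothetical sketch: Rips complex (edges and triangles) from positions.
        from itertools import combinations
        from math import dist

        def rips_complex(positions, r_c):
            """positions: dict node -> (x, y); returns (edges, triangles)."""
            close = lambda u, v: dist(positions[u], positions[v]) <= r_c
            edges = [(u, v) for u, v in combinations(positions, 2) if close(u, v)]
            triangles = [(u, v, w) for u, v, w in combinations(positions, 3)
                         if close(u, v) and close(v, w) and close(u, w)]
            return edges, triangles

        # Example: three nodes pairwise within range form one filled triangle.
        positions = {"a": (0.0, 0.0), "b": (1.0, 0.0), "c": (0.5, 0.8)}
        print(rips_complex(positions, r_c=1.2))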
