
    Lower Bounds for Structuring Unreliable Radio Networks

    In this paper, we study lower bounds for randomized solutions to the maximal independent set (MIS) and connected dominating set (CDS) problems in the dual graph model of radio networks---a generalization of the standard graph-based model that now includes unreliable links controlled by an adversary. We begin by proving that a natural geographic constraint on the network topology is required to solve these problems efficiently (i.e., in time polylogarithmic in the network size). We then prove the importance of the assumption that nodes are provided advance knowledge of their reliable neighbors (i.e., neighbors connected by reliable links). Combined, these results answer an open question by proving that the efficient MIS and CDS algorithms from [Censor-Hillel, PODC 2011] are optimal with respect to their dual graph model assumptions. They also provide insight into what properties of an unreliable network enable efficient local computation.
    Comment: An extended abstract of this work appears in the 2014 proceedings of the International Symposium on Distributed Computing (DISC).
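    The dual graph model referenced above can be pictured as a pair of graphs over the same vertex set: reliable links that always deliver, plus unreliable links whose behavior each round is chosen by an adversary. The following toy round simulator is our own illustration of that setup (all names are ours, not from the paper):

    ```python
    # Toy single-round simulator for a dual-graph-style network: reliable
    # edges always deliver, and an adversary picks which unreliable edges
    # deliver this round. Purely illustrative; not the paper's formal model.
    def run_round(nodes, reliable, unreliable, msg, adversary):
        active = adversary(unreliable)        # adversarial link schedule
        inbox = {v: [] for v in nodes}
        for u, v in list(reliable) + list(active):
            inbox[v].append(msg[u])           # u's broadcast reaches v
            inbox[u].append(msg[v])           # links are undirected
        return inbox

    # A worst-case-flavoured adversary that silences every unreliable link.
    drop_all = lambda unreliable: []
    print(run_round([0, 1, 2], [(0, 1)], [(1, 2)],
                    {0: "a", 1: "b", 2: "c"}, drop_all))
    ```

    Note how the advance-knowledge assumption discussed in the abstract corresponds to each node knowing which of its incident edges are in the reliable set.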

    Distributed Symmetry Breaking in Hypergraphs

    Fundamental local symmetry breaking problems such as Maximal Independent Set (MIS) and coloring have been recognized as important by the community, and studied extensively in (standard) graphs. In particular, fast (i.e., logarithmic run time) randomized algorithms are well-established for MIS and $(\Delta+1)$-coloring in both the LOCAL and CONGEST distributed computing models. On the other hand, comparatively much less is known on the complexity of distributed symmetry breaking in hypergraphs. In particular, a key question is whether a fast (randomized) algorithm for MIS exists for hypergraphs. In this paper, we study the distributed complexity of symmetry breaking in hypergraphs by presenting distributed randomized algorithms for a variety of fundamental problems under a natural distributed computing model for hypergraphs. We first show that MIS in hypergraphs (of arbitrary dimension) can be solved in $O(\log^2 n)$ rounds ($n$ is the number of nodes of the hypergraph) in the LOCAL model. We then present a key result of this paper---an $O(\Delta^{\epsilon}\,\mathrm{polylog}(n))$-round hypergraph MIS algorithm in the CONGEST model, where $\Delta$ is the maximum node degree of the hypergraph and $\epsilon > 0$ is any arbitrarily small constant. To demonstrate the usefulness of hypergraph MIS, we present applications of our hypergraph algorithm to solving problems in (standard) graphs. In particular, the hypergraph MIS yields fast distributed algorithms for the balanced minimal dominating set problem (left open in Harris et al. [ICALP 2013]) and the minimal connected dominating set problem. We also present distributed algorithms for coloring, maximal matching, and maximal clique in hypergraphs.
    Comment: Changes from the previous version: More references added.
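    For contrast with the hypergraph setting, here is a minimal sketch of the classical Luby-style randomized MIS routine for ordinary graphs, the well-established logarithmic-round baseline the abstract mentions (this is not the paper's hypergraph algorithm):

    ```python
    import random

    # Luby-style MIS sketch: in each round every surviving node draws a
    # random priority; local minima join the MIS, and they and their
    # neighbours drop out. Finishes in O(log n) rounds with high probability.
    def luby_mis(adj):
        alive, mis = set(adj), set()
        while alive:
            r = {v: random.random() for v in alive}
            winners = {v for v in alive
                       if all(r[v] < r[u] for u in adj[v] if u in alive)}
            mis |= winners
            alive -= winners | {u for v in winners for u in adj[v]}
        return mis

    # Example: a 4-cycle; any returned set is maximal independent.
    print(luby_mis({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}))
    ```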

    Exact bounds for distributed graph colouring

    We prove exact bounds on the time complexity of distributed graph colouring. If we are given a directed path that is properly coloured with $n$ colours, by prior work it is known that we can find a proper 3-colouring in $\frac{1}{2}\log^*(n) \pm O(1)$ communication rounds. We close the gap between upper and lower bounds: we show that for infinitely many $n$ the time complexity is precisely $\frac{1}{2}\log^* n$ communication rounds.
    Comment: 16 pages, 3 figures.
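    The upper bound in this regime rests on iterated colour reduction in the style of Cole and Vishkin. As a rough illustration (our own sketch of the standard reduction step, not the paper's tight construction), each node compares its colour with its predecessor's and keeps only the index of the lowest differing bit together with its own bit at that position:

    ```python
    # One Cole-Vishkin-style reduction round on a directed path: a proper
    # colouring with b-bit colours becomes a proper colouring with values
    # below 2b, so iterating O(log* n) times reaches a constant palette.
    def cv_step(colours):
        new = []
        for i, c in enumerate(colours):
            pred = colours[i - 1] if i > 0 else c + 1        # head: dummy predecessor
            k = ((c ^ pred) & -(c ^ pred)).bit_length() - 1  # lowest differing bit
            new.append(2 * k + ((c >> k) & 1))
        return new

    cols = [5, 9, 2, 14, 7, 0, 11]   # unique IDs form a proper colouring
    for _ in range(3):
        cols = cv_step(cols)
    print(cols)                      # values now in a constant-size range
    ```

    A constant number of further rounds is still needed to push the constant-size palette down to 3 colours; the paper's contribution is pinning down the exact constant in front of $\log^* n$.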

    Limitations to Fréchet's Metric Embedding Method

    Fréchet's classical isometric embedding argument has evolved to become a major tool in the study of metric spaces. An important example of a Fréchet embedding is Bourgain's embedding. The authors have recently shown that for every $\varepsilon > 0$ any $n$-point metric space contains a subset of size at least $n^{1-\varepsilon}$ which embeds into $\ell_2$ with distortion $O(\log(2/\varepsilon)/\varepsilon)$. The embedding we used is non-Fréchet, and the purpose of this note is to show that this is not coincidental. Specifically, for every $\varepsilon > 0$, we construct arbitrarily large $n$-point metric spaces such that the distortion of any Fréchet embedding into $\ell_p$ on subsets of size at least $n^{1/2+\varepsilon}$ is $\Omega((\log n)^{1/p})$.
    Comment: 10 pages, 1 figure.
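    For readers unfamiliar with the terminology, the following standard definition (ours, not quoted from the paper) pins down what a Fréchet embedding is:

    ```latex
    % Standard definition: given a metric space (X,d) and subsets
    % A_1, \dots, A_k \subseteq X, the associated Frechet embedding is
    \[
      f \colon X \to \mathbb{R}^k, \qquad
      f(x) = \bigl( d(x, A_1),\, d(x, A_2),\, \dots,\, d(x, A_k) \bigr),
      \quad d(x, A) = \min_{a \in A} d(x, a).
    \]
    % Each coordinate is 1-Lipschitz, so f never expands distances (up to
    % normalization); proving an embedding theorem such as Bourgain's
    % amounts to bounding how much f can contract them.
    ```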

    A strong direct product theorem for quantum query complexity

    We show that quantum query complexity satisfies a strong direct product theorem. This means that computing $k$ copies of a function with less than $k$ times the quantum queries needed to compute one copy of the function implies that the overall success probability will be exponentially small in $k$. For a boolean function $f$ we also show an XOR lemma---computing the parity of $k$ copies of $f$ with less than $k$ times the queries needed for one copy implies that the advantage over random guessing will be exponentially small. We do this by showing that the multiplicative adversary method, which inherently satisfies a strong direct product theorem, is always at least as large as the additive adversary method, which is known to characterize quantum query complexity.
    Comment: V2: 19 pages (various additions and improvements, in particular: improved parameters in the main theorems due to a finer analysis of the output condition, and addition of an XOR lemma and a threshold direct product theorem in the boolean case). V3: 19 pages (added grant information).
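    In schematic form (our paraphrase of the abstract, with all constants suppressed), the strong direct product theorem says:

    ```latex
    % If Q(f) bounded-error quantum queries are needed for one copy of f,
    % then any algorithm for the k-fold problem
    % f^{(k)}(x_1, \dots, x_k) = (f(x_1), \dots, f(x_k))
    % that uses at most \gamma k \, Q(f) queries, for a small enough
    % constant \gamma > 0, succeeds with probability
    \[
      \Pr\bigl[\text{all } k \text{ outputs correct}\bigr] \le 2^{-\Omega(k)},
    \]
    % and the XOR lemma bounds the advantage over random guessing for
    % f(x_1) \oplus \cdots \oplus f(x_k) by 2^{-\Omega(k)} as well.
    ```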

    How Long It Takes for an Ordinary Node with an Ordinary ID to Output?

    In the context of distributed synchronous computing, processors perform in rounds, and the time-complexity of a distributed algorithm is classically defined as the number of rounds before all computing nodes have output. Hence, this complexity measure captures the running time of the slowest node(s). In this paper, we are interested in the running time of the ordinary nodes, to be compared with the running time of the slowest nodes. The node-averaged time-complexity of a distributed algorithm on a given instance is defined as the average, taken over every node of the instance, of the number of rounds before that node outputs. We compare the node-averaged time-complexity with the classical one in the standard LOCAL model for distributed network computing. We show that there can be an exponential gap between the node-averaged time-complexity and the classical time-complexity, as witnessed by, e.g., leader election. Our first main result is a positive one, stating that, in fact, the two time-complexities behave the same for a large class of problems on very sparse graphs. In particular, we show that, for LCL problems on cycles, the node-averaged time complexity is of the same order of magnitude as the slowest node time-complexity. In addition, in the LOCAL model, the time-complexity is computed as a worst case over all possible identity assignments to the nodes of the network. In this paper, we also investigate the ID-averaged time-complexity, when the number of rounds is averaged over all possible identity assignments. Our second main result is that the ID-averaged time-complexity is essentially the same as the expected time-complexity of randomized algorithms (where the expectation is taken over all possible random bits used by the nodes, and the number of rounds is measured for the worst-case identity assignment). Finally, we study the node-averaged ID-averaged time-complexity.
    Comment: (Submitted) Journal version.
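    On a fixed instance the two measures are easy to state side by side; the toy computation below (ours, purely illustrative) shows how a single straggler dominates the classical measure while barely moving the average:

    ```python
    # Classical time-complexity = output round of the slowest node;
    # node-averaged time-complexity = mean output round over all nodes.
    def classical(output_round):
        return max(output_round.values())

    def node_averaged(output_round):
        return sum(output_round.values()) / len(output_round)

    # One slow node among 1024 (leader-election-flavoured example):
    rounds = {v: 2 for v in range(1, 1024)}
    rounds[0] = 100
    print(classical(rounds))       # 100
    print(node_averaged(rounds))   # about 2.1
    ```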

    Locality of not-so-weak coloring

    Many graph problems are locally checkable: a solution is globally feasible if it looks valid in all constant-radius neighborhoods. This idea is formalized in the concept of locally checkable labelings (LCLs), introduced by Naor and Stockmeyer (1995). Recently, Chang et al. (2016) showed that in bounded-degree graphs, every LCL problem belongs to one of the following classes:
    - "Easy": solvable in $O(\log^* n)$ rounds with both deterministic and randomized distributed algorithms.
    - "Hard": requires at least $\Omega(\log n)$ rounds with deterministic and $\Omega(\log\log n)$ rounds with randomized distributed algorithms.
    Hence for any parameterized LCL problem, when we move from local problems towards global problems, there is some point at which complexity suddenly jumps from easy to hard. For example, for vertex coloring in $d$-regular graphs it is now known that this jump is at precisely $d$ colors: coloring with $d+1$ colors is easy, while coloring with $d$ colors is hard. However, it is currently poorly understood where this jump takes place when one looks at defective colorings. To study this question, we define $k$-partial $c$-coloring as follows: nodes are labeled with numbers between $1$ and $c$, and every node is incident to at least $k$ properly colored edges. It is known that $1$-partial $2$-coloring (a.k.a. weak $2$-coloring) is easy for any $d \ge 1$. As our main result, we show that $k$-partial $2$-coloring becomes hard as soon as $k \ge 2$, no matter how large a $d$ we have. We also show that this is fundamentally different from $k$-partial $3$-coloring: no matter which $k \ge 3$ we choose, the problem is always hard for $d = k$ but it becomes easy when $d \gg k$. The same was known previously for partial $c$-coloring with $c \ge 4$, but the case of $c < 4$ was open.
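    The definition of $k$-partial $c$-coloring is local and easy to verify; a minimal checker (our own, following the definition above) looks like this:

    ```python
    # Check a k-partial c-colouring: labels lie in {1, ..., c} and every
    # node is incident to at least k properly coloured edges.
    def is_k_partial_colouring(adj, colour, k, c):
        if any(not 1 <= colour[v] <= c for v in adj):
            return False
        return all(sum(colour[u] != colour[v] for u in adj[v]) >= k
                   for v in adj)

    # Weak 2-colouring (k = 1, c = 2) on a 3-node path:
    adj = {0: {1}, 1: {0, 2}, 2: {1}}
    print(is_k_partial_colouring(adj, {0: 1, 1: 2, 2: 1}, k=1, c=2))  # True
    ```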

    Pseudorandomness for Regular Branching Programs via Fourier Analysis

    We present an explicit pseudorandom generator for oblivious, read-once, permutation branching programs of constant width that can read their input bits in any order. The seed length is $O(\log^2 n)$, where $n$ is the length of the branching program. The previous best seed length known for this model was $n^{1/2+o(1)}$, which follows as a special case of a generator due to Impagliazzo, Meka, and Zuckerman (FOCS 2012) (which gives a seed length of $s^{1/2+o(1)}$ for arbitrary branching programs of size $s$). Our techniques also give seed length $n^{1/2+o(1)}$ for general oblivious, read-once branching programs of width $2^{n^{o(1)}}$, which is incomparable to the results of Impagliazzo et al. Our pseudorandom generator is similar to the one used by Gopalan et al. (FOCS 2012) for read-once CNFs, but the analysis is quite different; ours is based on Fourier analysis of branching programs. In particular, we show that an oblivious, read-once, regular branching program of width $w$ has Fourier mass at most $(2w^2)^k$ at level $k$, independent of the length of the program.
    Comment: RANDOM 2013.
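    For orientation, here is the quantity the last sentence refers to, in the standard matrix-valued Fourier notation (our paraphrase, not an excerpt from the paper):

    ```latex
    % Write M(x) for the product of the transition matrices of a length-n
    % branching program on input x \in \{0,1\}^n, and expand
    \[
      M(x) = \sum_{s \subseteq [n]} \widehat{M}[s]\, (-1)^{\sum_{i \in s} x_i},
      \qquad
      \widehat{M}[s] = \mathop{\mathbb{E}}_{x}\bigl[ M(x)\, (-1)^{\sum_{i \in s} x_i} \bigr].
    \]
    % The Fourier mass at level k is the total norm of the level-k coefficients,
    \[
      L_k = \sum_{|s| = k} \bigl\lVert \widehat{M}[s] \bigr\rVert,
    \]
    % and the bound quoted above states that L_k <= (2w^2)^k for oblivious,
    % read-once, regular branching programs of width w, independent of n.
    ```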

    FPTAS for Weighted Fibonacci Gates and Its Applications

    Fibonacci gate problems have served as computation primitives to solve other problems via holographic algorithms, and they play an important role in the dichotomy of exact counting for the Holant and CSP frameworks. We generalize them to the weighted case and allow each vertex function to have different parameters, which yields a much broader family of problems that is #P-hard to count exactly. We design a fully polynomial-time approximation scheme (FPTAS) for this generalization using the correlation decay technique. This is the first deterministic FPTAS for approximate counting in the general Holant framework without a degree bound. We also formally introduce holographic reduction in the study of approximate counting, and these weighted Fibonacci gate problems serve as computation primitives for approximate counting. Under holographic reduction, we obtain FPTASes for other Holant problems and spin problems. One important application is an FPTAS for a large range of ferromagnetic two-state spin systems. This is the first deterministic FPTAS in the ferromagnetic range for two-state spin systems without a degree bound. Besides these algorithms, we also develop several new tools and techniques to establish the correlation decay property, which are applicable to other problems.
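    To give a feel for the correlation decay technique invoked here, the sketch below applies it to the simplest textbook example, the hardcore model on a tree (this is not the paper's Fibonacci-gate algorithm; all names and the boundary value are our choices):

    ```python
    # Correlation decay on a tree for the hardcore model with fugacity lam:
    # the occupation ratio satisfies R(v) = lam * prod_u 1 / (1 + R(u))
    # over v's children u. Truncating the recursion at a fixed depth gives
    # a deterministic estimate; when correlation decay holds, the error
    # shrinks geometrically with the truncation depth.
    def ratio(children, v, lam, depth):
        if depth == 0:
            return lam                       # arbitrary boundary value
        r = lam
        for u in children.get(v, ()):
            r /= 1.0 + ratio(children, u, lam, depth - 1)
        return r

    def marginal(children, root, lam, depth):
        r = ratio(children, root, lam, depth)
        return r / (1.0 + r)                 # estimated Pr[root occupied]

    # Binary tree of height 2; the depth-6 truncation is already exact here.
    tree = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
    print(marginal(tree, 0, lam=1.0, depth=1), marginal(tree, 0, lam=1.0, depth=6))
    ```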

    Computing in Additive Networks with Bounded-Information Codes

    This paper studies the theory of the additive wireless network model, in which the received signal is abstracted as an addition of the transmitted signals. Our central observation is that the crucial challenge for computing in this model is not high contention, as assumed previously, but rather guaranteeing a bounded amount of information in each neighborhood per round, a property that we show is achievable using a new random coding technique. Technically, we provide efficient algorithms for fundamental distributed tasks in additive networks, such as solving various symmetry breaking problems, approximating network parameters, and solving an asymmetry revealing problem such as computing a maximal input. The key method used is a novel random coding technique that allows a node to successfully decode the received information, as long as it does not contain too many distinct values. We then design our algorithms to produce a limited amount of information in each neighborhood in order to leverage our enriched toolbox for computing in additive networks.
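    A toy version of the model (ours, just to make the abstraction concrete) already shows why additive reception is informative: if every node transmits the constant 1, each node receives exactly its number of transmitting neighbours:

    ```python
    # One round in the additive model: each node hears the sum of its
    # neighbours' transmissions (None means the neighbour stays silent).
    def additive_round(adj, transmit):
        return {v: sum(transmit[u] for u in adj[v] if transmit[u] is not None)
                for v in adj}

    # All nodes transmit 1, so every node reads off its degree.
    adj = {0: {1, 2}, 1: {0}, 2: {0}}
    print(additive_round(adj, {0: 1, 1: 1, 2: 1}))   # {0: 2, 1: 1, 2: 1}
    ```

    The paper's bounded-information codes are a much more refined use of the same channel, letting a node decode the actual set of transmitted values as long as few distinct ones are present.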