
    Finding the Median (Obliviously) with Bounded Space

    We prove that any oblivious algorithm using space $S$ to find the median of a list of $n$ integers from $\{1,\ldots,2n\}$ requires time $\Omega(n \log\log_S n)$. This bound also applies to the problem of determining whether the median is odd or even. It is nearly optimal, since Chan, following Munro and Raman, has shown that there is a (randomized) selection algorithm using only $s$ registers, each of which can store an input value or an $O(\log n)$-bit counter, that makes only $O(\log\log_s n)$ passes over the input. The bound also implies a size lower bound for read-once branching programs computing the low-order bit of the median, and implies the analog of $P \ne NP \cap coNP$ for length-$o(n \log\log n)$ oblivious branching programs.
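
    For readers less familiar with the model, the following is a minimal Python sketch of small-space, multi-pass selection by interval narrowing: each pass uses only $s$ bucket counters plus a few registers, and the candidate interval shrinks by a factor of $s$, giving $O(\log_s n)$ passes. It only illustrates the pass/space setting of the abstract; it is not Chan's $O(\log\log_s n)$-pass algorithm or the paper's lower-bound construction, and all names are ours.

    # A simple multi-pass, small-space selection routine (illustrative only).
    def select_multipass(stream_factory, k, lo, hi, s):
        """Return the k-th smallest stream value (1-indexed), assuming it lies in [lo, hi].
        stream_factory() must yield a fresh pass over the input; s counters are used per pass."""
        while hi > lo:
            width = (hi - lo + s) // s            # split [lo, hi] into s buckets
            counts = [0] * s                      # the per-pass working memory: s counters
            below = 0                             # elements strictly below the interval
            for x in stream_factory():            # one sequential pass over the input
                if x < lo:
                    below += 1
                elif x <= hi:
                    counts[min((x - lo) // width, s - 1)] += 1
            running = below                       # locate the bucket holding the k-th element
            for b in range(s):
                if running + counts[b] >= k:
                    lo = lo + b * width
                    hi = min(lo + width - 1, hi)
                    break
                running += counts[b]
        return lo

    # Example: median of n integers from {1, ..., 2n}, found with s = 4 counters.
    data = [7, 3, 15, 1, 9, 4, 12, 8, 2, 16]
    n = len(data)
    print(select_multipass(lambda: iter(data), (n + 1) // 2, 1, 2 * n, s=4))  # 7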

    Streaming Complexity of Spanning Tree Computation

    The semi-streaming model is a variant of the streaming model frequently used for the computation of graph problems. It allows the edges of an n-node input graph to be read sequentially in p passes using Õ(n) space. If the list of edges includes deletions, then the model is called the turnstile model; otherwise it is called the insertion-only model. In both models, some graph problems, such as spanning trees, k-connectivity, densest subgraph, degeneracy, cut-sparsifier, and (Δ+1)-coloring, can be exactly solved or (1+ε)-approximated in a single pass, while other graph problems, such as triangle detection and unweighted all-pairs shortest paths, are known to require Ω̃(n) passes to compute. For many fundamental graph problems, the tractability in these models is open. In this paper, we study the tractability of computing some standard spanning trees, including BFS, DFS, and maximum-leaf spanning trees. Our results, in both the insertion-only and the turnstile models, are as follows.

    Maximum-Leaf Spanning Trees: This problem is known to be APX-complete with inapproximability constant ρ ∈ [245/244, 2). By constructing an ε-MLST sparsifier, we show that for every constant ε > 0, MLST can be approximated in a single pass to within a factor of 1+ε w.h.p. (albeit in super-polynomial time for ε ≤ ρ-1 assuming P ≠ NP) and can be approximated in polynomial time in a single pass to within a factor of ρ_n+ε w.h.p., where ρ_n is the supremum constant to within which MLST cannot be approximated using polynomial time and Õ(n) space. In the insertion-only model, these algorithms can be deterministic.

    BFS Trees: It is known that BFS trees require ω(1) passes to compute, but the naïve approach needs O(n) passes. We devise a new randomized algorithm that reduces the pass complexity to O(√n), and it offers a smooth tradeoff between pass complexity and space usage. This gives a polynomial separation between single-source and all-pairs shortest paths for unweighted graphs.

    DFS Trees: It is unknown whether DFS trees require more than one pass. The current best algorithm by Khan and Mehta [STACS 2019] takes Õ(h) passes, where h is the height of the computed DFS trees. Note that h can be as large as Ω(m/n) for n-node m-edge graphs. Our contribution is twofold. First, we provide a simple alternative proof of this result, via a new connection to sparse certificates for k-node-connectivity. Second, we present a randomized algorithm that reduces the pass complexity to O(√n), and it also offers a smooth tradeoff between pass complexity and space usage.
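
    As background for the single-pass results mentioned above, here is a minimal sketch (ours, not the paper's) of the folklore insertion-only semi-streaming algorithm for spanning trees: keep a union-find structure over the n vertices, which fits in Õ(n) space, and retain an edge exactly when it merges two components.

    # One-pass spanning forest in the insertion-only semi-streaming model (illustrative).
    class UnionFind:
        def __init__(self, n):
            self.parent = list(range(n))

        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x

        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra == rb:
                return False                      # a and b are already connected
            self.parent[ra] = rb
            return True

    def spanning_forest_one_pass(n, edge_stream):
        """Return the edges of a spanning forest of the streamed n-vertex graph."""
        uf = UnionFind(n)
        forest = []
        for u, v in edge_stream:                  # a single sequential pass over the edges
            if uf.union(u, v):                    # keep the edge iff it joins two components
                forest.append((u, v))
        return forest

    # Example: a 5-vertex graph streamed edge by edge.
    edges = [(0, 1), (1, 2), (0, 2), (3, 4), (2, 3)]
    print(spanning_forest_one_pass(5, iter(edges)))   # [(0, 1), (1, 2), (3, 4), (2, 3)]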

    Algorithms and Lower Bounds for Cycles and Walks: Small Space and Sparse Graphs

    Space-Time Tradeoffs for Distributed Verification

    Verifying that a network configuration satisfies a given boolean predicate is a fundamental problem in distributed computing. Many variations of this problem have been studied, for example, in the context of proof labeling schemes (PLS), locally checkable proofs (LCP), and non-deterministic local decision (NLD). In all of these contexts, verification time is assumed to be constant. Korman, Kutten and Masuzawa [PODC 2011] presented a proof-labeling scheme for MST, with poly-logarithmic verification time, and logarithmic memory at each vertex. In this paper we introduce the notion of a $t$-PLS, which allows the verification procedure to run for super-constant time. Our work analyzes the tradeoffs of $t$-PLS between time, label size, message length, and computation space. We construct a universal $t$-PLS and prove that it uses the same amount of total communication as a known one-round universal PLS, with labels smaller by a factor of $t$. In addition, we provide a general technique to prove lower bounds for space-time tradeoffs of $t$-PLS. We use this technique to show an optimal tradeoff for testing that a network is acyclic (cycle free). Our optimal $t$-PLS for acyclicity uses label size and computation space $O((\log n)/t)$. We further describe a recursive $O(\log^* n)$-space verifier for acyclicity which does not assume previous knowledge of the run-time $t$. Comment: Pre-proceedings version of a paper presented at the 24th International Colloquium on Structural Information and Communication Complexity (SIROCCO 2017).
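
    To make the PLS notion concrete, here is a small simulation (ours) of a standard one-round distance-labeling scheme for acyclicity: the prover roots each tree and labels every vertex with its distance to its root, and each vertex locally accepts iff its label is 0 and all neighbors carry label 1, or exactly one neighbor carries its label minus one and every other neighbor carries its label plus one. This uses $O(\log n)$-bit labels and one round; it is background only, not the paper's $t$-round construction.

    # A toy simulation of a one-round proof-labeling scheme for acyclicity (illustrative).
    from collections import deque

    def prover_labels(n, adj, roots):
        """Honest prover: BFS distance labels from the chosen roots (one per tree)."""
        label = [None] * n
        queue = deque(roots)
        for r in roots:
            label[r] = 0
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if label[w] is None:
                    label[w] = label[u] + 1
                    queue.append(w)
        return label

    def verifier_accepts(adj, label):
        """Each vertex checks only its own label and its neighbors' labels."""
        for v, nbrs in enumerate(adj):
            if label[v] is None:
                return False
            if label[v] == 0:
                if any(label[w] != 1 for w in nbrs):
                    return False
            else:
                parents = sum(1 for w in nbrs if label[w] == label[v] - 1)
                others_ok = all(label[w] in (label[v] - 1, label[v] + 1) for w in nbrs)
                if parents != 1 or not others_ok:
                    return False
        return True

    # A forest (path 0-1-2 plus edge 3-4) is accepted with honest labels;
    # a triangle is rejected (here we feed it the honest BFS labels).
    forest = [[1], [0, 2], [1], [4], [3]]
    print(verifier_accepts(forest, prover_labels(5, forest, roots=[0, 3])))   # True
    triangle = [[1, 2], [0, 2], [0, 1]]
    print(verifier_accepts(triangle, prover_labels(3, triangle, roots=[0])))  # False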

    The Random-Query Model and the Memory-Bounded Coupon Collector

    Self-Improving Algorithms

    We investigate ways in which an algorithm can improve its expected performance by fine-tuning itself automatically with respect to an unknown input distribution D. We assume here that D is of product type. More precisely, suppose that we need to process a sequence I_1, I_2, ... of inputs I = (x_1, x_2, ..., x_n) of some fixed length n, where each x_i is drawn independently from some arbitrary, unknown distribution D_i. The goal is to design an algorithm for these inputs so that eventually the expected running time will be optimal for the input distribution D = D_1 * D_2 * ... * D_n. We give such self-improving algorithms for two problems: (i) sorting a sequence of numbers and (ii) computing the Delaunay triangulation of a planar point set. Both algorithms achieve optimal expected limiting complexity. The algorithms begin with a training phase during which they collect information about the input distribution, followed by a stationary regime in which the algorithms settle to their optimized incarnations. Comment: 26 pages, 8 figures, preliminary versions appeared at SODA 2006 and SoCG 2008. Thorough revision to improve the presentation of the paper.
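
    A much-simplified Python sketch (ours) of the two-phase structure described above: the training phase learns a list of bucket boundaries from sample inputs drawn from D, and the stationary phase sorts each new input by bucketing against those boundaries and sorting the small buckets. The real algorithm additionally learns, for each coordinate i, an entropy-optimal search structure for locating x_i among the boundaries, which this sketch omits.

    # Self-improving sorting, heavily simplified (illustrative only).
    import bisect
    import random

    def train(sample_inputs):
        """Training phase: keep roughly n evenly spaced order statistics of the pooled
        samples as bucket boundaries (a stand-in for the algorithm's learned structures)."""
        pooled = sorted(x for inp in sample_inputs for x in inp)
        n = len(sample_inputs[0])
        step = max(1, len(pooled) // n)
        return pooled[step - 1::step]

    def sort_with_boundaries(inp, boundaries):
        """Stationary phase: bucket each x_i by binary search, sort each (typically small)
        bucket, and concatenate; the output is correctly sorted for any boundary list."""
        buckets = [[] for _ in range(len(boundaries) + 1)]
        for x in inp:
            buckets[bisect.bisect_left(boundaries, x)].append(x)
        out = []
        for b in buckets:
            out.extend(sorted(b))
        return out

    # Toy product distribution D = D_1 * ... * D_n: coordinate i is Gaussian around 10 * i.
    random.seed(0)
    n = 50

    def draw():
        return [random.gauss(10 * i, 5) for i in range(n)]

    boundaries = train([draw() for _ in range(20)])                          # training phase
    new_input = draw()
    print(sort_with_boundaries(new_input, boundaries) == sorted(new_input))  # True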

    New Bounds for the Garden-Hose Model

    We show new results about the garden-hose model. Our main results include improved lower bounds based on non-deterministic communication complexity (leading to the previously unknown $\Theta(n)$ bounds for Inner Product mod 2 and Disjointness), as well as an $O(n \cdot \log^3 n)$ upper bound for the Distributed Majority function (previously conjectured to have quadratic complexity). We show an efficient simulation of formulae made of AND, OR, XOR gates in the garden-hose model, which implies that lower bounds on the garden-hose complexity $GH(f)$ of the order $\Omega(n^{2+\epsilon})$ will be hard to obtain for explicit functions. Furthermore, we study a time-bounded variant of the model, in which even modest savings in time can lead to exponential lower bounds on the size of garden-hose protocols. Comment: In FSTTCS 201
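
    To illustrate the model (not any construction from the paper): in a garden-hose protocol Alice and Bob share some pipes, Alice pairs up some of her pipe ends (and a water tap) depending on x, Bob pairs up some of his ends depending on y, and the value of f(x, y) is read off from the side on which the water spills out of an open end. The toy Python simulation below runs a 3-pipe protocol for equality of single bits, where the water exits on Alice's side exactly when x = y; the simulator and the particular protocol are ours.

    # A toy garden-hose simulator with a 3-pipe protocol for 1-bit equality (illustrative).
    def run_garden_hose(alice_matching, bob_matching):
        """Follow the water from Alice's tap and return the side ('Alice' or 'Bob') where it exits.
        A matching maps each connected port to its partner; ports are 'tap' or pipe numbers."""
        side, port = 'Alice', 'tap'
        while True:
            matching = alice_matching if side == 'Alice' else bob_matching
            if port not in matching:
                return side                       # open end: the water spills out on this side
            pipe = matching[port]                 # the water flows into this pipe...
            side = 'Bob' if side == 'Alice' else 'Alice'
            port = pipe                           # ...and arrives at the same pipe's other end

    def equality_exit_side(x, y):
        # Alice (knows x): connect the tap to pipe 1 if x = 0, to pipe 2 if x = 1; pipe 3 stays open.
        alice = {'tap': 1} if x == 0 else {'tap': 2}
        # Bob (knows y): connect pipe 3 to pipe 1 if y = 0, to pipe 2 if y = 1.
        bob = {1: 3, 3: 1} if y == 0 else {2: 3, 3: 2}
        return run_garden_hose(alice, bob)

    for x in (0, 1):
        for y in (0, 1):
            print(x, y, equality_exit_side(x, y))   # 'Alice' exactly when x == y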

    On the Complexity of Decomposable Randomized Encodings, Or: How Friendly Can a Garbling-Friendly PRF Be?

    Randomized vs. Deterministic Separation in Time-Space Tradeoffs of Multi-Output Functions

    We prove the first polynomial separation between randomized and deterministic time-space tradeoffs of multi-output functions. In particular, we present a total function that, on an input of $n$ elements in $[n]$, outputs $O(n)$ elements, such that: (1) there exists a randomized oblivious algorithm with space $O(\log n)$, time $O(n\log n)$ and one-way access to randomness, that computes the function with probability $1-O(1/n)$; (2) any deterministic oblivious branching program with space $S$ and time $T$ that computes the function must satisfy $T^2 S \geq \Omega(n^{2.5}/\log n)$. This implies that logspace randomized algorithms for multi-output functions cannot be black-box derandomized without an $\widetilde{\Omega}(n^{1/4})$ overhead in time. Since all previously known polynomial time-space tradeoffs for multi-output functions were proved via the Borodin-Cook method, a probabilistic method that inherently gives the same lower bound for randomized and deterministic branching programs, our lower bound proof is intrinsically different from previous works. We also examine other natural candidates for proving such separations, and show that any polynomial separation for these problems would resolve the long-standing open problem of proving an $n^{1+\Omega(1)}$ time lower bound for decision problems with $\mathrm{polylog}(n)$ space. Comment: 15 pages
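
    To see where the stated $\widetilde{\Omega}(n^{1/4})$ overhead comes from (our arithmetic, simply combining the two bounds above): for space $S = \mathrm{polylog}(n)$, the deterministic tradeoff forces

        $T \;\ge\; \sqrt{\Omega(n^{2.5}/\log n)\,/\,S} \;=\; \widetilde{\Omega}(n^{1.25}),$

    while the randomized algorithm runs in time $O(n \log n)$, so any black-box derandomization must pay a factor of at least $\widetilde{\Omega}(n^{1.25}/(n\log n)) = \widetilde{\Omega}(n^{1/4})$ in time.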