
    Efficient pebbling for list traversal synopses

    We show how to support efficient back traversal in a unidirectional list, using small memory and with essentially no slowdown in forward steps. Using $O(\log n)$ memory for a list of size $n$, the $i$'th back-step from the farthest point reached so far takes $O(\log i)$ time in the worst case, while the overhead per forward step is at most $\epsilon$ for an arbitrarily small constant $\epsilon > 0$. An arbitrary sequence of forward and back steps is allowed. A full trade-off between memory usage and time per back-step is presented: $k$ vs. $k n^{1/k}$, and vice versa. Our algorithms are based on a novel pebbling technique which moves pebbles on a virtual binary, or $t$-ary, tree that can only be traversed in a pre-order fashion. The compact data structures used by the pebbling algorithms, called list traversal synopses, extend to general directed graphs, and have other interesting applications, including memory-efficient hash-chain implementation. Perhaps the most surprising application is in showing that for any program, arbitrary rollback steps can be efficiently supported with small overhead in memory and marginal overhead in its ordinary execution. More concretely: let $P$ be a program that runs for at most $T$ steps, using memory of size $M$. Then, at the cost of recording the input used by the program, and increasing the memory by a factor of $O(\log T)$ to $O(M \log T)$, the program $P$ can be extended to support an arbitrary sequence of forward execution and rollback steps: the $i$'th rollback step takes $O(\log i)$ time in the worst case, while forward steps take $O(1)$ time in the worst case, and $1 + \epsilon$ amortized time per step. Comment: 27 pages.
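
    To make the checkpoint-and-replay idea concrete, here is a minimal Python sketch of the naive trade-off the paper improves upon: saving a node every `spacing` forward steps lets a back-step replay at most `spacing` forward moves from the nearest saved node. With spacing $\sqrt{n}$ this costs $O(\sqrt{n})$ memory and $O(\sqrt{n})$ time per back-step; the paper's pebbling synopses shrink the memory to $O(\log n)$. The `Node` and `ListWalker` names are illustrative, not from the paper.

        class Node:
            def __init__(self, value, nxt=None):
                self.value = value
                self.next = nxt

        class ListWalker:
            def __init__(self, head, spacing):
                self.spacing = spacing
                self.checkpoints = [head]  # node at position k*spacing, for each k reached
                self.node = head
                self.pos = 0

            def forward(self):
                self.node = self.node.next
                self.pos += 1
                # record a checkpoint the first time a multiple of `spacing` is reached
                if self.pos % self.spacing == 0 and self.pos // self.spacing == len(self.checkpoints):
                    self.checkpoints.append(self.node)

            def back(self):
                target = self.pos - 1
                base = target // self.spacing
                self.node, self.pos = self.checkpoints[base], base * self.spacing
                while self.pos < target:   # replay forward from the nearest checkpoint
                    self.node = self.node.next
                    self.pos += 1

    For instance, after 50 calls to forward() with spacing 10, a single back() jumps to the checkpoint at position 40 and replays 9 forward moves to reach position 49.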

    Energy-Efficient Algorithms

    We initiate the systematic study of the energy complexity of algorithms (in addition to time and space complexity) based on Landauer's Principle in physics, which gives a lower bound on the amount of energy a system must dissipate if it destroys information. We propose energy-aware variations of three standard models of computation: circuit RAM, word RAM, and transdichotomous RAM. On top of these models, we build familiar high-level primitives such as control logic, memory allocation, and garbage collection with zero energy complexity and only constant-factor overheads in space and time complexity, enabling simple expression of energy-efficient algorithms. We analyze several classic algorithms in our models and develop low-energy variations: comparison sort, insertion sort, counting sort, breadth-first search, Bellman-Ford, Floyd-Warshall, matrix all-pairs shortest paths, AVL trees, binary heaps, and dynamic arrays. We explore the time/space/energy trade-off and develop several general techniques for analyzing algorithms and reducing their energy complexity. These results lay a theoretical foundation for a new field of semi-reversible computing and provide a new framework for the investigation of algorithms. Comment: 40 pages, 8 pdf figures, full version of work published in ITCS 2016.
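
    As a toy illustration of the kind of accounting involved (my own simplification, not the paper's formal circuit-RAM or word-RAM models), one can instrument a memory with an energy counter that charges for bits irreversibly destroyed on overwrite, while reversible operations such as swaps are free. The XOR-based charge below is one plausible accounting choice, assumed here for illustration only.

        class EnergyRAM:
            """Toy word RAM charging 'energy' for destroyed bits, Landauer-style.

            Accounting choice (an assumption of this sketch): an overwrite
            destroys exactly the bits in which old and new contents differ;
            reads and swaps destroy no information and cost nothing.
            """

            def __init__(self, size):
                self.mem = [0] * size
                self.energy = 0

            def read(self, addr):
                return self.mem[addr]                  # reading is free

            def write(self, addr, value):
                destroyed = self.mem[addr] ^ value     # bits that change are lost
                self.energy += bin(destroyed).count("1")
                self.mem[addr] = value

            def swap(self, a, b):                      # reversible, hence free
                self.mem[a], self.mem[b] = self.mem[b], self.mem[a]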

    Parallel Weighted Random Sampling

    Data structures for efficient sampling from a set of weighted items are an important building block of many applications. However, few parallel solutions are known. We close many of these gaps, both for shared-memory and distributed-memory machines. We give efficient, fast, and practical algorithms for sampling single items, k items with/without replacement, permutations, subsets, and reservoirs. We also give improved sequential algorithms for alias table construction and for sampling with replacement. Experiments on shared-memory parallel machines with up to 158 threads show near-linear speedups for both construction and queries.
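
    For reference, the sequential baseline that alias tables provide looks like this. The sketch below is the textbook construction (Vose's method), which gives O(1)-time weighted sampling after O(n) preprocessing; it is not the paper's improved sequential or parallel construction.

        import random

        def build_alias(weights):
            """Textbook alias-table construction (Vose's method)."""
            n = len(weights)
            total = sum(weights)
            prob = [w * n / total for w in weights]   # scale so the average is 1
            alias = [0] * n
            small = [i for i, p in enumerate(prob) if p < 1.0]
            large = [i for i, p in enumerate(prob) if p >= 1.0]
            while small and large:
                s, l = small.pop(), large.pop()
                alias[s] = l                   # bucket s is topped up by item l
                prob[l] -= 1.0 - prob[s]       # l donates probability mass to s
                (small if prob[l] < 1.0 else large).append(l)
            while large:                       # clean up floating-point leftovers
                prob[large.pop()] = 1.0
            while small:
                prob[small.pop()] = 1.0
            return prob, alias

        def sample(prob, alias):
            i = random.randrange(len(prob))    # choose a bucket uniformly
            return i if random.random() < prob[i] else alias[i]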

    On space efficiency of algorithms working on structural decompositions of graphs

    Dynamic programming on path and tree decompositions of graphs is a technique that is ubiquitous in the field of parameterized and exponential-time algorithms. However, one of its drawbacks is that the space usage is exponential in the decomposition's width. Following the work of Allender et al. [Theory of Computing, '14], we investigate whether this space complexity explosion is unavoidable. Using the idea of reparameterization of Cai and Juedes [J. Comput. Syst. Sci., '03], we prove that the question is closely related to a conjecture that the Longest Common Subsequence problem, parameterized by the number of input strings, does not admit an algorithm that simultaneously uses XP time and FPT space. Moreover, we complete the complexity landscape sketched for pathwidth and treewidth by Allender et al. by considering the parameter tree-depth. We prove that computations on tree-depth decompositions correspond to a model of non-deterministic machines that work in polynomial time and logarithmic space, with access to an auxiliary stack of maximum height equal to the decomposition's depth. Together with the results of Allender et al., this describes a hierarchy of complexity classes for polynomial-time non-deterministic machines with different restrictions on the access to working space, which mirrors the classic relations between treewidth, pathwidth, and tree-depth. Comment: An extended abstract appeared in the proceedings of STACS'16. The new version is augmented with a space-efficient algorithm for Dominating Set using the Chinese remainder theorem.
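
    To see where the exponential space comes from, consider the textbook dynamic program for Maximum Independent Set over a nice path decomposition: the table is indexed by subsets of the current bag, so its size is exponential in the width. The event-list interface below is an illustrative simplification assumed for this sketch, not code or an API from the paper.

        def mis_path_decomposition(events, adj):
            """Max independent set via DP on a nice path decomposition.

            events: ('introduce', v) / ('forget', v) pairs; every vertex is
            introduced once and later forgotten, and every edge of the graph
            appears within some bag (the path-decomposition guarantee).
            adj: dict mapping each vertex to its set of neighbours.
            """
            table = {frozenset(): 0}   # bag subset -> best size so far
            for op, v in events:
                new = {}
                if op == 'introduce':
                    for s, best in table.items():
                        new[s] = max(new.get(s, -1), best)       # v not taken
                        if not (adj[v] & s):                     # v independent of s
                            t = s | {v}                          # v taken
                            new[t] = max(new.get(t, -1), best + 1)
                else:                                            # forget v
                    for s, best in table.items():
                        t = s - {v}
                        new[t] = max(new.get(t, -1), best)
                table = new
            return max(table.values())

    Each bag of size w contributes up to 2^w table entries, which is exactly the space explosion whose avoidability the paper investigates.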

    Linear pattern matching on sparse suffix trees

    Packing several characters into one computer word is a simple and natural way to compress the representation of a string and to speed up its processing. Exploiting this idea, we propose an index for a packed string, based on a sparse suffix tree [KU-96] with appropriately defined suffix links. Assuming, under the standard unit-cost RAM model, that a word can store up to $\log_{\sigma} n$ characters ($\sigma$ being the alphabet size), our index takes $O(n / \log_{\sigma} n)$ space, i.e. the same space as the packed string itself. The resulting pattern matching algorithm runs in time $O(m + r^2 + r \cdot occ)$, where $m$ is the length of the pattern, $r$ is the actual number of characters stored in a word, and $occ$ is the number of pattern occurrences.
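
    The packing idea itself is easy to illustrate (a toy sketch, not the paper's index): with a 4-letter alphabet, a 64-bit word holds 32 two-bit character codes, so a single integer comparison inspects up to 32 characters at once. The DNA encoding and word size below are assumptions of this sketch.

        CODE = {'A': 0, 'C': 1, 'G': 2, 'T': 3}   # 2-bit codes, sigma = 4

        def pack(s, chars_per_word=32):
            """Pack string s into words of `chars_per_word` characters each."""
            words = []
            for i in range(0, len(s), chars_per_word):
                w = 0
                for c in s[i:i + chars_per_word]:
                    w = (w << 2) | CODE[c]        # append one 2-bit code
                words.append(w)
            return words

        # Equal-length packed blocks compare with single integer comparisons:
        assert pack("ACGTACGT") == pack("ACGTACGT")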

    Traversing combinatorial 0/1-polytopes via optimization

    In this paper, we present a new framework that exploits combinatorial optimization for efficiently generating a large variety of combinatorial objects based on graphs, matroids, posets and polytopes. Our method relies on a simple and versatile algorithm for computing a Hamilton path on the skeleton of any 0/1-polytope $\mathrm{conv}(X)$, where $X \subseteq \{0,1\}^n$. The algorithm uses as a black box any algorithm that solves a variant of the classical linear optimization problem $\min\{w \cdot x \mid x \in X\}$, and the resulting delay, i.e., the running time per visited vertex on the Hamilton path, is only a factor of $\log n$ larger than the running time of the optimization algorithm. When $X$ encodes a particular class of combinatorial objects, traversing the skeleton of the polytope $\mathrm{conv}(X)$ along a Hamilton path corresponds to listing the combinatorial objects by local change operations, i.e., we obtain Gray code listings. As concrete results of our general framework, we obtain efficient algorithms for generating all ($c$-optimal) bases and independent sets in a matroid; ($c$-optimal) spanning trees, forests, matchings, maximum matchings, and $c$-optimal matchings in a general graph; vertex covers, minimum vertex covers, $c$-optimal vertex covers, stable sets, maximum stable sets and $c$-optimal stable sets in a bipartite graph; as well as antichains, maximum antichains, $c$-optimal antichains, and $c$-optimal ideals of a poset. Specifically, the delay and space required by these algorithms are polynomial in the size of the matroid ground set, graph, or poset, respectively. Furthermore, all of these listings correspond to Hamilton paths on the corresponding combinatorial polytopes, namely the base polytope, matching polytope, vertex cover polytope, stable set polytope, chain polytope and order polytope, respectively. As another corollary of our framework, we obtain an $O(t_{\mathrm{LP}} \log n)$ delay algorithm for the vertex enumeration problem on 0/1-polytopes $\{x \in \mathbb{R}^n \mid Ax \leq b\}$, where $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$, and $t_{\mathrm{LP}}$ is the time needed to solve the linear program $\min\{w \cdot x \mid Ax \leq b\}$. This improves upon the 25-year-old $O(t_{\mathrm{LP}} \cdot n)$ delay algorithm due to Bussieck and Lübbecke.
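
    The simplest instance of the skeleton-traversal viewpoint is the full hypercube: when $X = \{0,1\}^n$, $\mathrm{conv}(X)$ is the n-cube, and the classical binary reflected Gray code is a Hamilton path on its skeleton that changes one coordinate per step. The sketch below shows only this degenerate case, not the paper's optimization-driven traversal.

        def gray_code(n):
            """Yield all of {0,1}^n so consecutive vectors differ in one bit."""
            x = [0] * n
            yield tuple(x)
            for k in range(1, 2 ** n):
                i = (k & -k).bit_length() - 1   # position of lowest set bit of k
                x[i] ^= 1                        # the single local change
                yield tuple(x)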