
    Efficient Circuit Simulation in MapReduce

    The MapReduce framework has firmly established itself as one of the most widely used parallel computing platforms for processing big data on the terabyte and petabyte scale. Approaching it from a theoretical standpoint, however, has proved to be notoriously difficult. Continuing Goodrich et al.'s early efforts, and explicitly espousing the goal of putting the MapReduce framework on a footing equal to that of long-established models such as the PRAM, we investigate the obvious complexity question of how the computational power of MapReduce algorithms compares to that of combinational Boolean circuits commonly used for parallel computations. Relying on the standard MapReduce model introduced by Karloff et al. a decade ago, we develop an intricate simulation technique to show that any problem in NC (i.e., a problem solved by a logspace-uniform family of Boolean circuits of polynomial size and depth polylogarithmic in the input size) can be solved by a MapReduce computation in $O(T(n)/\log n)$ rounds, where $n$ is the input size and $T(n)$ is the depth of the witnessing circuit family. Thus, we closely relate the standard, uniform NC hierarchy modeling parallel computations to the deterministic MapReduce hierarchy DMRC by proving that $NC^{i+1} \subseteq DMRC^i$ for all $i \in \mathbb{N}$. Besides its theoretical significance, this result has important applied aspects as well. In particular, we show how to solve every problem in $NC^1$ (among them many practically relevant ones, such as integer multiplication and division and the parity function) in a constant number of deterministic MapReduce rounds.
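    To make the round structure concrete, here is a toy sketch (all names hypothetical, and emphatically not the paper's construction): it evaluates a layered Boolean circuit one layer per simulated map/shuffle/reduce round, whereas the simulation above packs $\Theta(\log n)$ circuit layers into each round to reach $O(T(n)/\log n)$ rounds.

```python
# Toy layer-by-layer circuit evaluation in MapReduce style (hypothetical
# encoding, not the paper's construction).
from collections import defaultdict

OPS = {"AND": all, "OR": any, "NOT": lambda bits: not bits[0]}

def mapreduce_round(wire_values, layer):
    # "Map" phase: route each input wire's value to the gate consuming it;
    # the shuffle groups one gate's inputs at a single reducer.
    shuffle = defaultdict(list)
    for gate_id, op, inputs in layer:
        for pos, src in enumerate(inputs):
            shuffle[gate_id].append((pos, wire_values[src]))
    # "Reduce" phase: each reducer evaluates one gate.
    out = dict(wire_values)  # earlier wires stay visible to later layers
    for gate_id, op, inputs in layer:
        bits = [v for _, v in sorted(shuffle[gate_id])]
        out[gate_id] = OPS[op](bits)
    return out

def simulate(layers, input_bits):
    values = {("x", i): bool(b) for i, b in enumerate(input_bits)}
    for layer in layers:  # one round per layer in this toy
        values = mapreduce_round(values, layer)
    return values

# Example circuit: out = (x0 AND x1) OR (NOT x2)
layers = [
    [("a", "AND", [("x", 0), ("x", 1)]), ("n", "NOT", [("x", 2)])],
    [("out", "OR", ["a", "n"])],
]
print(simulate(layers, [1, 1, 0])["out"])  # True
```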

    Parallel algorithms and concentration bounds for the Lovasz Local Lemma via witness DAGs

    The Lovász Local Lemma (LLL) is a cornerstone principle in the probabilistic method of combinatorics, and a seminal algorithm of Moser & Tardos (2010) provides an efficient randomized algorithm to implement it. This can be parallelized to give an algorithm that uses polynomially many processors and runs in $O(\log^3 n)$ time on an EREW PRAM, stemming from $O(\log n)$ adaptive computations of a maximal independent set (MIS). Chung et al. (2014) developed faster local and parallel algorithms, potentially running in time $O(\log^2 n)$, but these algorithms require more stringent conditions than the LLL. We give a new parallel algorithm that works under essentially the same conditions as the original algorithm of Moser & Tardos but uses only a single MIS computation, thus running in $O(\log^2 n)$ time on an EREW PRAM. This can be derandomized to give an NC algorithm running in time $O(\log^2 n)$ as well, speeding up a previous NC LLL algorithm of Chandrasekaran et al. (2013). We also provide improved and tighter bounds on the run-times of the sequential and parallel resampling-based algorithms originally developed by Moser & Tardos. These apply to any problem instance in which the tighter Shearer LLL criterion is satisfied.
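    For reference, here is a minimal sketch of the sequential Moser & Tardos resampling algorithm that these parallel variants build on (the event encoding below is a hypothetical toy): while some bad event holds, resample just the variables it depends on. The parallel algorithms instead resample a maximal independent set of currently violated events in each round.

```python
# Sequential Moser-Tardos resampling (toy encoding; variable names hypothetical).
import random

def moser_tardos(num_vars, bad_events):
    # bad_events: list of (vars, predicate); predicate(assignment) is True
    # exactly when the bad event (a constraint violation) occurs.
    x = [random.random() < 0.5 for _ in range(num_vars)]
    while True:
        violated = [(vs, p) for vs, p in bad_events if p(x)]
        if not violated:
            return x
        vs, _ = violated[0]          # sequential: fix one bad event at a time
        for v in vs:                 # resample only that event's variables
            x[v] = random.random() < 0.5

# Example: no clause of this small CNF may have all three literals false.
clauses = [(0, 1, 2), (1, 2, 3), (0, 2, 3)]
events = [(c, (lambda x, c=c: not any(x[v] for v in c))) for c in clauses]
print(moser_tardos(4, events))
```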

    Tight bounds for some problems in computational geometry: the complete sub-logarithmic parallel time range

    There are a number of fundamental problems in computational geometry for which work-optimal algorithms exist with a parallel running time of $O(\log n)$ in the PRAM model. These include problems like two- and three-dimensional convex hulls, trapezoidal decomposition, arrangement construction, and dominance, among others. Further improvements in running time to the sub-logarithmic range were not considered likely because of their close relationship to sorting, for which an $\Omega(\log n/\log\log n)$ lower bound is known to hold even with a polynomial number of processors. However, with recent progress in padded-sort algorithms, which circumvent the conventional lower bounds, there arises a natural question about speeding up algorithms for the above-mentioned geometric problems (with appropriate modifications in the output specification). We present randomized parallel algorithms for some fundamental problems like convex hulls and trapezoidal decomposition which execute in time $O(\log n/\log k)$ on an $nk$ ($k > 1$) processor CRCW PRAM. Our algorithms do not make any assumptions about the input distribution. Our work relies heavily on results on padded sorting and some earlier results of Reif and Sen [28, 27]. We further prove a matching lower bound for these problems in the bounded-degree decision tree model.
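    For context only, here is a sketch of the classic sequential $O(n\log n)$ convex-hull baseline (Andrew's monotone chain) that such parallel algorithms accelerate; this is not the paper's algorithm.

```python
# Andrew's monotone-chain convex hull: the standard sequential baseline.
def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means a left turn.
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for chain, seq in ((lower, pts), (upper, reversed(pts))):
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
    # each chain's last point starts the other chain, so drop it
    return lower[:-1] + upper[:-1]

print(convex_hull([(0,0), (1,1), (2,2), (2,0), (0,2), (1,0)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]
```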

    Deterministic parallel algorithms for bilinear objective functions

    Many randomized algorithms can be derandomized efficiently using either the method of conditional expectations or probability spaces with low independence. A series of papers, beginning with work by Luby (1988), showed that in many cases these techniques can be combined to give deterministic parallel (NC) algorithms for a variety of combinatorial optimization problems, with low time- and processor-complexity. We extend and generalize a technique of Luby for efficiently handling bilinear objective functions. One noteworthy application is an NC algorithm for maximal independent set. On a graph $G$ with $m$ edges and $n$ vertices, this takes $\tilde{O}(\log^2 n)$ time and $(m + n)n^{o(1)}$ processors, nearly matching the best randomized parallel algorithms. Other applications include reduced processor counts for algorithms of Berger (1997) for maximum acyclic subgraph and Gale-Berlekamp switching games. This bilinear factorization also gives better algorithms for problems involving discrepancy. An important application of this is to automata-fooling probability spaces, which are the basis of a notable derandomization technique of Sivakumar (2002). Our method leads to a large reduction in processor complexity for a number of derandomization algorithms based on automata-fooling, including set discrepancy and the Johnson-Lindenstrauss Lemma.
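    As a minimal illustration of the method of conditional expectations, one of the two derandomization tools combined above, here is the textbook greedy derandomization of the random-cut argument for MAX-CUT (a toy example, not the paper's parallel bilinear technique).

```python
# Derandomized MAX-CUT via conditional expectations: a uniformly random cut
# has expected size m/2; fixing vertices one at a time so the conditional
# expectation never decreases yields a cut of size >= m/2 deterministically.
def derandomized_cut(n, edges):
    side = {}
    for v in range(n):
        def cond_gain(s):
            # expected crossing contribution of edges touching v, given side s
            g = 0.0
            for a, b in edges:
                if v not in (a, b):
                    continue                  # unaffected by this choice
                other = b if a == v else a
                if other == v:
                    continue                  # self-loop, never cut
                if other in side:
                    g += 1.0 if side[other] != s else 0.0
                else:
                    g += 0.5                  # other endpoint still random
            return g
        side[v] = max((False, True), key=cond_gain)
    return side

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
cut = derandomized_cut(4, edges)
print(cut, sum(cut[a] != cut[b] for a, b in edges))  # cut size >= 2 (= m/2)
```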

    Realizing degree sequences in parallel

    A sequence $d$ of integers is a degree sequence if there exists a (simple) graph $G$ such that the components of $d$ are equal to the degrees of the vertices of $G$. The graph $G$ is said to be a realization of $d$. We provide an efficient parallel algorithm to realize $d$. Before our result, it was not known whether the problem of realizing $d$ is in NC.
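    For comparison, the classic sequential test-and-construct procedure of Havel and Hakimi, sketched below, realizes a degree sequence greedily or reports that none exists; the paper's contribution is doing this in NC, which this sequential baseline does not achieve.

```python
# Havel-Hakimi: repeatedly connect the highest-degree vertex to the next
# highest-degree vertices, or report the sequence is not graphical.
def havel_hakimi(degrees):
    # Returns an edge list realizing `degrees`, or None if not graphical.
    nodes = sorted(((d, v) for v, d in enumerate(degrees)), reverse=True)
    edges = []
    while nodes and nodes[0][0] > 0:
        d, v = nodes.pop(0)
        if d > len(nodes):
            return None             # not enough remaining vertices
        for i in range(d):          # connect v to the d largest remainders
            du, u = nodes[i]
            if du == 0:
                return None         # would need an edge to a finished vertex
            nodes[i] = (du - 1, u)
            edges.append((v, u))
        nodes.sort(reverse=True)
    return edges

print(havel_hakimi([3, 2, 2, 2, 1]))  # a realization (list of edges)
print(havel_hakimi([3, 3, 1, 1]))     # None: not graphical
```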

    Randomized Parallel Selection

    We show that selection on an input of size $N$ can be performed on a $P$-node hypercube ($P = N/\log N$) in time $O(N/P)$ with high probability, provided each node can process all its incident edges in one unit of time (this model is called the parallel model and has been assumed by previous researchers, e.g., [17]). This result is important in view of a lower bound of Plaxton that implies selection takes $\Omega((N/P)\log\log P + \log P)$ time on a $P$-node hypercube if each node can process only one edge at a time (this model is referred to as the sequential model).
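    The sequential primitive underlying such parallel selection algorithms is randomized quickselect; a minimal sketch (not the hypercube algorithm itself) follows.

```python
# Randomized quickselect: expected linear-time selection with a random pivot.
import random

def select(a, k):
    # Return the k-th smallest (0-indexed) element of list a.
    while True:
        pivot = random.choice(a)
        lo = [x for x in a if x < pivot]
        eq = [x for x in a if x == pivot]
        if k < len(lo):
            a = lo                      # answer lies among smaller elements
        elif k < len(lo) + len(eq):
            return pivot                # pivot itself is the k-th smallest
        else:
            k -= len(lo) + len(eq)      # answer lies among larger elements
            a = [x for x in a if x > pivot]

data = [5, 3, 8, 1, 9, 2, 7]
print(select(data, len(data) // 2))  # median = 5
```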

    Design and analysis of sequential and parallel single-source shortest-paths algorithms

    We study the performance of algorithms for the Single-Source Shortest-Paths (SSSP) problem on graphs with n nodes and m edges with nonnegative random weights. All previously known SSSP algorithms for directed graphs required superlinear time. We give the first SSSP algorithms that provably achieve linear $O(n + m)$ average-case execution time on arbitrary directed graphs with random edge weights. For independent edge weights, the linear-time bound holds with high probability, too. Additionally, our result implies improved average-case bounds for the All-Pairs Shortest-Paths (APSP) problem on sparse graphs, and it yields the first theoretical average-case analysis for the "Approximate Bucket Implementation" of Dijkstra's SSSP algorithm (ABI-Dijkstra). Furthermore, we give constructive proofs for the existence of graph classes with random edge weights on which ABI-Dijkstra and several other well-known SSSP algorithms require superlinear average-case time. Besides the classical sequential (single-processor) model of computation, we also consider parallel computing: we give the currently fastest average-case linear-work parallel SSSP algorithms for large graph classes with random edge weights, e.g., sparse random graphs and graphs modeling the WWW, telephone calls, or social networks.
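    A minimal sketch in the spirit of the bucket-based ABI-Dijkstra analyzed above, assuming a hypothetical bucket width delta: tentative distances live in buckets of width delta, and the smallest nonempty bucket is repeatedly emptied. The paper's actual implementation and analysis are more refined.

```python
# Bucket-based Dijkstra sketch (ABI-Dijkstra flavor; delta is hypothetical).
import math

def abi_dijkstra(graph, source, delta=0.25):
    # graph: {u: [(v, weight), ...]} with nonnegative weights
    dist = {u: math.inf for u in graph}
    dist[source] = 0.0
    buckets = {0: {source}}
    i = 0
    while buckets:
        while i not in buckets:
            i = min(buckets)                 # advance to next nonempty bucket
        frontier = buckets.pop(i)
        for u in frontier:
            if int(dist[u] / delta) != i:    # stale entry; u moved buckets
                continue
            for v, w in graph[u]:
                nd = dist[u] + w
                if nd < dist[v]:             # relax edge, rebucket v
                    dist[v] = nd
                    buckets.setdefault(int(nd / delta), set()).add(v)
    return dist

g = {0: [(1, 0.3), (2, 1.0)], 1: [(2, 0.4)], 2: [(3, 0.2)], 3: []}
print(abi_dijkstra(g, 0))  # {0: 0.0, 1: 0.3, 2: 0.7, 3: 0.9}
```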

    Efficient parallel computation on multiprocessors with optical interconnection networks

    This dissertation studies optical interconnection networks: their architecture, addressing schemes, and computation and communication capabilities. We focus on a simple but powerful optical interconnection network model, the Linear Array with a Reconfigurable Pipelined Bus System (LARPBS). We extend the LARPBS model to a simplified higher-dimensional LARPBS and provide a set of basic computation operations. We then study the following two groups of parallel computation problems on both one-dimensional and multi-dimensional LARPBSs: parallel comparison problems, including sorting, merging, and selection; and Boolean matrix multiplication, transitive closure, and their applications to connected component problems. We implement an optimal sorting algorithm on an n-processor LARPBS. With this optimal sorting algorithm at our disposal, we study the sorting problem for higher-dimensional LARPBSs and obtain the following results:
    • An optimal basic Columnsort algorithm on a 2D LARPBS.
    • Two optimal two-way merge sort algorithms on a 2D LARPBS.
    • An optimal multi-way merge sort algorithm on a 2D LARPBS.
    • An optimal generalized column sort algorithm on a 2D LARPBS.
    • An optimal generalized column sort algorithm on a 3D LARPBS.
    • An optimal 5-phase sorting algorithm on a 3D LARPBS.
    Results for selection problems are as follows:
    • A constant-time maximum-finding algorithm on an LARPBS.
    • An optimal maximum-finding algorithm on an LARPBS.
    • An $O((\log\log n)^2)$-time parallel selection algorithm on an LARPBS.
    • An $O(k(\log\log n)^2)$-time parallel multi-selection algorithm on an LARPBS.
    While studying the computation and communication properties of the LARPBS model, we find that Boolean matrix multiplication and its applications to graph problems are another set of problems that can be solved efficiently on the LARPBS. The following is a list of results we have obtained in this area:
    • A constant-time Boolean matrix multiplication algorithm.
    • An $O(\log n)$-time transitive closure algorithm.
    • An $O(\log n)$-time connected components algorithm.
    • An $O(\log n)$-time strongly connected components algorithm.
    The results provided in this dissertation demonstrate the strong computation and communication power of optical interconnection networks.
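    A model-independent sketch of the repeated-squaring idea behind the $O(\log n)$-time transitive closure result above: given constant-time Boolean matrix multiplication (as on the LARPBS), about $\log_2 n$ squarings of $A \lor I$ yield the reflexive transitive closure. The toy below runs sequentially, of course.

```python
# Reflexive transitive closure by repeated Boolean squaring of (A or I);
# ceil(log2 n) squarings cover all paths of length up to n - 1.
def bool_matmul(A, B):
    n = len(A)
    return [[any(A[i][k] and B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def transitive_closure(A):
    n = len(A)
    R = [[A[i][j] or i == j for j in range(n)] for i in range(n)]  # A or I
    for _ in range(max(1, (n - 1).bit_length())):
        R = bool_matmul(R, R)   # on an LARPBS: one constant-time squaring
    return R

adj = [[0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
for row in transitive_closure(adj):
    print([int(x) for x in row])
```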

    Work-Efficient Parallel Union-Find

    The incremental graph connectivity (IGC) problem is to maintain a data structure that can quickly answer whether two given vertices in a graph are connected, while allowing more edges to be added to the graph. IGC is a fundamental problem and can be solved efficiently in the sequential setting using a solution to the classical union-find problem. However, sequential solutions are not sufficient to handle modern-day large, rapidly changing graphs, where edge updates arrive at a very high rate. We present the first shared-memory parallel data structure for union-find (equivalently, IGC) that is both provably work-efficient (i.e., performs no more work than the best sequential counterpart) and has polylogarithmic parallel depth. We also present a simpler algorithm with slightly worse theoretical properties, but which is easier to implement and has good practical performance. Our experiments on large graph streams with various degree distributions show that the simpler algorithm can process hundreds of millions of edges per second on a 20-core machine.
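    For reference, here is the classic sequential union-find (union by rank with path halving) against which the parallel structure's work bound is measured; the concurrent data structure in the paper is considerably more involved.

```python
# Sequential union-find: union by rank plus path halving gives near-constant
# amortized time per operation (inverse Ackermann).
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False                  # already connected
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra               # attach shorter tree under taller
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True

uf = UnionFind(5)
uf.union(0, 1); uf.union(3, 4)
print(uf.find(0) == uf.find(1), uf.find(0) == uf.find(3))  # True False
```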