
    Simple parallel and distributed algorithms for spectral graph sparsification

    We describe a simple algorithm for spectral graph sparsification, based on iterative computations of weighted spanners and uniform sampling. Leveraging the algorithms of Baswana and Sen for computing spanners, we obtain the first distributed spectral sparsification algorithm. We also obtain a parallel algorithm with improved work and time guarantees. Combining this algorithm with the parallel framework of Peng and Spielman for solving symmetric diagonally dominant linear systems, we get a parallel solver which is much closer to being practical and significantly more efficient in terms of the total work. Comment: replaces "A simple parallel and distributed algorithm for spectral sparsification"; minor changes.
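
    As a rough illustration of the spanner-plus-sampling idea (not the paper's exact procedure), the sketch below keeps the edges of a low-stretch spanner, uniformly samples the remaining edges, and reweights the kept samples so that expectations are preserved. The stretch, sampling probability, number of rounds, and the use of NetworkX's spanner routine are illustrative assumptions, not the parameters analyzed in the paper.

```python
# Illustrative sketch only: keep a spanner "for free", subsample the rest
# uniformly, reweight kept samples, and iterate. The real algorithm keeps a
# bundle of spanners and uses parameters chosen for its analysis.
import random
import networkx as nx

def sparsify_once(G, stretch=3, p=0.25, seed=None):
    """One round: keep a spanner of G, uniformly sample the other edges at rate p."""
    rng = random.Random(seed)
    spanner = nx.spanner(G, stretch, weight="weight", seed=seed)
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    for u, v, data in G.edges(data=True):
        w = data.get("weight", 1.0)
        if spanner.has_edge(u, v):
            H.add_edge(u, v, weight=w)      # spanner edges are always kept
        elif rng.random() < p:
            H.add_edge(u, v, weight=w / p)  # sampled edges are reweighted by 1/p
    return H

def sparsify(G, rounds=3, **kwargs):
    """Iterate the single-round step; each round drops a constant fraction of edges."""
    H = G
    for _ in range(rounds):
        H = sparsify_once(H, **kwargs)
    return H

if __name__ == "__main__":
    G = nx.gnp_random_graph(200, 0.2, seed=1)
    nx.set_edge_attributes(G, 1.0, "weight")
    H = sparsify(G, rounds=3, stretch=3, p=0.25, seed=1)
    print(G.number_of_edges(), "->", H.number_of_edges())
```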

    Parallel and Distributed Algorithms for the Housing Allocation Problem

    We give parallel and distributed algorithms for the housing allocation problem. In this problem, there is a set of agents and a set of houses. Each agent has a strict preference list over a subset of the houses. We need to find a matching such that some criterion is optimized. One such criterion is Pareto optimality: a matching is Pareto optimal if no coalition of agents can be strictly better off by exchanging houses among themselves. We also study the housing market problem, a variant of the housing allocation problem in which each agent initially owns a house. In addition to Pareto optimality, we are also interested in finding the core of a housing market. A matching is in the core if there is no coalition of agents that can be better off by breaking away from the other agents and switching houses only among themselves. In the first part of this work, we show that computing a Pareto optimal matching of a housing allocation instance is in {\bf CC} and that computing the core of a housing market is {\bf CC}-hard. Given a matching, we also show that verifying whether it is in the core can be done in {\bf NC}. We then give an algorithm showing that computing a maximum Pareto optimal matching for the housing allocation problem is in {\bf RNC}^2 and quasi-{\bf NC}^2. In the second part of this work, we present a distributed version of the top trading cycle algorithm for finding the core of a housing market. To that end, we first present two algorithms for finding all the disjoint cycles in a functional graph: a Las Vegas algorithm that terminates in $O(\log l)$ rounds with high probability, where $l$ is the length of the longest cycle, and a deterministic algorithm that terminates in $O(\log^* n \log l)$ rounds, where $n$ is the number of nodes in the graph. Both algorithms work in the synchronous distributed model and use messages of size $O(\log n)$.
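
    For reference, the sketch below is the classic sequential top trading cycle procedure that the distributed algorithm parallelizes; it is not the paper's distributed construction. It assumes complete strict preference lists and that agent i initially owns house i.

```python
# Classic sequential top trading cycle (TTC) for the housing market.
# Assumes agents and houses are labelled 0..n-1, agent i initially owns
# house i, and prefs[i] ranks all houses in decreasing preference order.
def top_trading_cycle(prefs):
    n = len(prefs)
    owner = {h: h for h in range(n)}   # house h is initially owned by agent h
    unmatched = set(range(n))          # agents still in the market
    assignment = {}

    while unmatched:
        # Every remaining agent points at the owner of its favourite remaining house.
        points_to = {}
        for a in unmatched:
            fav = next(h for h in prefs[a] if owner[h] in unmatched)
            points_to[a] = (fav, owner[fav])

        # The pointer graph is functional, so walking it from any agent reaches a cycle.
        a = next(iter(unmatched))
        seen = set()
        while a not in seen:
            seen.add(a)
            a = points_to[a][1]

        # a now lies on a cycle; trade houses around that cycle and remove its agents.
        cycle_start = a
        while True:
            fav, successor = points_to[a]
            assignment[a] = fav
            unmatched.discard(a)
            a = successor
            if a == cycle_start:
                break
    return assignment

if __name__ == "__main__":
    # Agent 0 prefers house 1, agent 1 prefers house 0, agent 2 keeps house 2.
    prefs = [[1, 0, 2], [0, 1, 2], [1, 2, 0]]
    print(top_trading_cycle(prefs))   # {0: 1, 1: 0, 2: 2}
```

    The pointing step defines a functional graph on the remaining agents, which is exactly the structure whose disjoint cycles the paper's distributed algorithms detect.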

    Strong Scaling of Matrix Multiplication Algorithms and Memory-Independent Communication Lower Bounds

    A parallel algorithm has perfect strong scaling if its running time on P processors is linear in 1/P, including all communication costs. Distributed-memory parallel algorithms for matrix multiplication with perfect strong scaling have only recently been found. One is based on classical matrix multiplication (Solomonik and Demmel, 2011), and one is based on Strassen's fast matrix multiplication (Ballard, Demmel, Holtz, Lipshitz, and Schwartz, 2012). Both algorithms scale perfectly, but only up to some number of processors where the inter-processor communication no longer scales. We obtain a memory-independent communication cost lower bound on classical and Strassen-based distributed-memory matrix multiplication algorithms. These bounds imply that no classical or Strassen-based parallel matrix multiplication algorithm can strongly scale perfectly beyond the ranges already attained by the two parallel algorithms mentioned above. The memory-independent bounds and the strong scaling bounds generalize to other algorithms. Comment: 4 pages, 1 figure.
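
    For orientation, the bandwidth-cost lower bounds discussed above take roughly the following shape (a sketch with constants and technical assumptions omitted; the precise statements are in the paper). Here n is the matrix dimension, P the number of processors, M the local memory size, and the second term inside each max is the memory-independent part.

```latex
% Sketch of the bound shapes, constants omitted; W is the number of words
% communicated per processor, and \omega_0 = \log_2 7 is Strassen's exponent.
\[
  W_{\mathrm{classical}}
    = \Omega\!\left(\max\left\{\frac{n^3}{P\sqrt{M}},\ \frac{n^2}{P^{2/3}}\right\}\right),
  \qquad
  W_{\mathrm{Strassen}}
    = \Omega\!\left(\max\left\{\frac{n^{\omega_0}}{P\,M^{\omega_0/2-1}},\ \frac{n^2}{P^{2/\omega_0}}\right\}\right).
\]
```

    Perfect strong scaling is only possible while the memory-dependent term dominates; once the memory-independent term takes over, communication per processor no longer shrinks like 1/P, which is how these bounds limit the strong scaling range.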

    Non-Local Probes Do Not Help with Graph Problems

    This work bridges the gap between distributed and centralised models of computing in the context of sublinear-time graph algorithms. A priori, typical centralised models of computing (e.g., parallel decision trees or centralised local algorithms) seem to be much more powerful than distributed message-passing algorithms: centralised algorithms can directly probe any part of the input, while in distributed algorithms nodes can only communicate with their immediate neighbours. We show that for a large class of graph problems, this extra freedom does not help centralised algorithms at all: for example, efficient stateless deterministic centralised local algorithms can be simulated with efficient distributed message-passing algorithms. In particular, this enables us to transfer existing lower bound results from distributed algorithms to centralised local algorithms.
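
    As a very rough sketch of the setting (not the paper's simulation, which also has to handle probes to arbitrary, non-local parts of the input), the snippet below shows the easy direction: if a stateless centralised local algorithm only ever probes nodes within distance r of the query, then every node can gather its radius-r neighbourhood in r synchronous rounds and afterwards answer those probes locally. The helper names and the toy local algorithm are hypothetical.

```python
# Conceptual sketch only: gather radius-r neighbourhoods by flooding, then run
# the centralised local algorithm at each node against its gathered view.
import networkx as nx

def gather_r_neighbourhoods(G, r):
    """Simulate r synchronous rounds after which every node knows the adjacency
    lists of all nodes within distance r (message sizes are ignored here)."""
    knowledge = {v: {v: set(G[v])} for v in G}   # node -> {known node: its neighbours}
    for _ in range(r):
        new = {v: dict(knowledge[v]) for v in G}
        for v in G:
            for u in G[v]:                       # v receives u's current knowledge
                new[v].update(knowledge[u])
        knowledge = new
    return knowledge

def run_local_algorithm_everywhere(G, r, local_alg):
    """Each node answers the local algorithm's probes from its gathered view."""
    knowledge = gather_r_neighbourhoods(G, r)
    return {v: local_alg(v, knowledge[v]) for v in G}

if __name__ == "__main__":
    G = nx.cycle_graph(6)
    # Toy rule: output 1 iff the node has the largest id visible within distance 2.
    out = run_local_algorithm_everywhere(G, r=2,
                                         local_alg=lambda v, view: int(v == max(view)))
    print(out)
```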

    Parallel and distributed Gröbner bases computation in JAS

    This paper considers parallel Gröbner bases algorithms on distributed-memory parallel computers with multi-core compute nodes. We summarize three different Gröbner bases implementations: shared memory parallel, pure distributed memory parallel, and distributed memory combined with shared memory parallelism. The last algorithm, called distributed hybrid, uses only one control communication channel between the master node and the worker nodes and keeps polynomials in shared memory on each node. The polynomials are transported asynchronously to the control flow of the algorithm in a separate distributed data structure. The implementation is generic and works for all implemented (exact) fields. We present new performance measurements and discuss the performance of the algorithms. Comment: 14 pages, 8 tables, 13 figures.
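
    JAS itself is a Java library, and the abstract above concerns how the computation is parallelized rather than what a Gröbner basis is. Purely as a minimal sequential point of reference for the object being computed, here is a sympy-based example; this is a stand-in, not JAS's algorithm or its distributed-hybrid implementation.

```python
# Minimal sequential reference computation (sympy stand-in, not JAS and not
# the shared-memory / distributed-hybrid algorithms described above).
from sympy import symbols, groebner

x, y, z = symbols("x y z")
F = [x**2 + y*z - 2, x*z - 3*y, y**2 - z]
G = groebner(F, x, y, z, order="lex")   # reduced Groebner basis, lexicographic order
print(G)
```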