
    Towards Work-Efficient Parallel Parameterized Algorithms

    Parallel parameterized complexity theory studies how fixed-parameter tractable (fpt) problems can be solved in parallel. Previous theoretical work focused on parallel algorithms that are very fast in principle, but did not take into account that when we only have a small number of processors (between 2 and, say, 1024), it is more important that the parallel algorithms are work-efficient. In the present paper we investigate how work-efficient fpt algorithms can be designed. We review standard methods from fpt theory, like kernelization, search trees, and interleaving, and prove trade-offs for them between work efficiency and runtime improvements. This results in a toolbox for developing work-efficient parallel fpt algorithms.
    Comment: Prior full version of the paper that will appear in Proceedings of the 13th International Conference and Workshops on Algorithms and Computation (WALCOM 2019), February 27 - March 02, 2019, Guwahati, India. The final authenticated version is available online at https://doi.org/10.1007/978-3-030-10564-8_2
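
    One of the standard techniques the abstract mentions is the bounded search tree. As a hedged illustration (not taken from the paper), the sketch below shows a common way such a tree is parallelized on a few processors: expand the first few branching levels sequentially, then hand the resulting independent subproblems to worker processes. The k-Vertex-Cover branching rule, the function names, and the depth/worker parameters are all illustrative choices.

        # Illustrative sketch: parallelizing a k-Vertex-Cover search tree by expanding
        # the top levels sequentially and solving the open subproblems in parallel.
        from concurrent.futures import ProcessPoolExecutor

        def vc_branch(edges, k):
            """Classic O(2^k) search tree: is there a vertex cover of size <= k?"""
            if not edges:
                return True
            if k == 0:
                return False
            u, v = edges[0]
            # Branch on covering the first uncovered edge with u or with v.
            return (vc_branch([e for e in edges if u not in e], k - 1)
                    or vc_branch([e for e in edges if v not in e], k - 1))

        def expand(edges, k, depth):
            """Expand the search tree to a fixed depth, collecting open subproblems."""
            if depth == 0 or not edges or k == 0:
                return [(edges, k)]
            u, v = edges[0]
            return (expand([e for e in edges if u not in e], k - 1, depth - 1)
                    + expand([e for e in edges if v not in e], k - 1, depth - 1))

        def parallel_vc(edges, k, depth=4, workers=4):
            subproblems = expand(edges, k, depth)          # at most 2^depth tasks
            with ProcessPoolExecutor(max_workers=workers) as pool:
                futures = [pool.submit(vc_branch, e, kk) for e, kk in subproblems]
                return any(f.result() for f in futures)

        if __name__ == "__main__":
            # 5-cycle with two chords; {0, 1, 3} is a vertex cover of size 3.
            g = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2), (1, 3)]
            print(parallel_vc(g, 3))   # True

    A larger expansion depth exposes more parallelism, but the workers no longer benefit from the early termination a sequential search enjoys once a solution is found; this is the kind of work-efficiency versus runtime trade-off the abstract refers to.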

    Shared-Memory Parallel Maximal Clique Enumeration

    We present shared-memory parallel methods for Maximal Clique Enumeration (MCE) from a graph. MCE is a fundamental and well-studied graph analytics task, and is a widely used primitive for identifying dense structures in a graph. Due to its computationally intensive nature, parallel methods are imperative for dealing with large graphs. However, surprisingly, there do not yet exist scalable and parallel methods for MCE on a shared-memory parallel machine. In this work, we present efficient shared-memory parallel algorithms for MCE, with the following properties: (1) the parallel algorithms are provably work-efficient relative to a state-of-the-art sequential algorithm, (2) the algorithms have a provably small parallel depth, showing that they can scale to a large number of processors, and (3) our implementations on a multicore machine show good speedup and scaling behavior with an increasing number of cores, and are substantially faster than prior shared-memory parallel algorithms for MCE.
    Comment: 10 pages, 3 figures, proceedings of the 25th IEEE International Conference on High Performance Computing, Data, and Analytics (HiPC), 201
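
    As a rough illustration of the kind of computation involved (this is not the paper's work-efficient algorithm), the sketch below enumerates maximal cliques with the classic Bron-Kerbosch procedure with pivoting, parallelizing only the top-level calls, one per vertex under a fixed ordering. All names and the process-pool scheme are assumptions made for the example.

        # Illustrative sketch: Bron-Kerbosch maximal clique enumeration with pivoting,
        # top-level branches farmed out to worker processes.
        from concurrent.futures import ProcessPoolExecutor

        def bron_kerbosch(adj, R, P, X, out):
            if not P and not X:
                out.append(sorted(R))              # R is a maximal clique
                return
            pivot = max(P | X, key=lambda u: len(adj[u] & P))
            for v in list(P - adj[pivot]):
                bron_kerbosch(adj, R | {v}, P & adj[v], X & adj[v], out)
                P.remove(v)
                X.add(v)

        def _subtree(args):
            adj, v, later, earlier = args
            out = []
            bron_kerbosch(adj, {v}, later & adj[v], earlier & adj[v], out)
            return out

        def parallel_mce(adj, workers=4):
            order = sorted(adj)                    # a degeneracy order works better in practice
            tasks = [(adj, v, set(order[i + 1:]), set(order[:i]))
                     for i, v in enumerate(order)]
            cliques = []
            with ProcessPoolExecutor(max_workers=workers) as pool:
                for part in pool.map(_subtree, tasks):
                    cliques.extend(part)
            return cliques

        if __name__ == "__main__":
            # Two triangles sharing the edge 1-2.
            adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
            print(parallel_mce(adj))               # [[0, 1, 2], [1, 2, 3]]

    Splitting only at the top level gives no depth guarantee: a single vertex's subtree can dominate the work on real graphs, which is why a provably small parallel depth, as the abstract emphasizes, matters for scalability.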

    Space-Efficient Parallel Algorithms for Combinatorial Search Problems

    We present space-efficient parallel strategies for two fundamental combinatorial search problems, namely, backtrack search and branch-and-bound, both involving the visit of an $n$-node tree of height $h$ under the assumption that a node can be accessed only through its father or its children. For both problems we propose efficient algorithms that run on a $p$-processor distributed-memory machine. For backtrack search, we give a deterministic algorithm running in $O(n/p + h\log p)$ time, and a Las Vegas algorithm requiring optimal $O(n/p + h)$ time, with high probability. Building on the backtrack search algorithm, we also derive a Las Vegas algorithm for branch-and-bound which runs in $O((n/p + h\log p\log n)h\log^2 n)$ time, with high probability. A remarkable feature of our algorithms is the use of only constant space per processor, which constitutes a significant improvement upon previous algorithms whose space requirements per processor depend on the (possibly huge) tree to be explored.
    Comment: Extended version of the paper in the Proc. of 38th International Symposium on Mathematical Foundations of Computer Science (MFCS)
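
    The "father or children only" access model can be made concrete with a small single-processor sketch (an illustration of stackless traversal, not the paper's distributed algorithm). The backtrack search below walks an implicit binary decision tree for a toy subset-sum instance using only first-child, next-sibling, and parent moves, keeping nothing but the current node; the tuple encoding of the node stands in for the constant-size node handle the model assumes. All names and the instance are made up for the example.

        # Illustrative sketch: stackless backtrack search over an implicit binary tree.
        def first_child(node):            # take decision 0 next
            return node + (0,)

        def next_sibling(node):           # flip a trailing 0 to 1; a trailing 1 is exhausted
            return node[:-1] + (1,) if node and node[-1] == 0 else None

        def parent(node):
            return node[:-1] if node else None

        def backtrack(weights, target):
            """Yield every subset of `weights` that sums to `target`."""
            h, node, going_down = len(weights), (), True
            while node is not None:
                if going_down and len(node) < h:
                    node = first_child(node)          # descend to the leftmost child
                    continue
                if going_down:                         # at a leaf: test it
                    if sum(w for w, bit in zip(weights, node) if bit) == target:
                        yield [w for w, bit in zip(weights, node) if bit]
                nxt = next_sibling(node)
                node, going_down = (nxt, True) if nxt is not None else (parent(node), False)

        print(list(backtrack([1, 2, 3, 4], 5)))       # [[2, 3], [1, 4]]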

    A Parallel Riccati Factorization Algorithm with Applications to Model Predictive Control

    Model Predictive Control (MPC) is increasing in popularity in industry as more efficient algorithms for solving the related optimization problem are developed. The main computational bottleneck in online MPC is often the computation of the search step direction, i.e., the Newton step, which is typically done using generic sparsity-exploiting algorithms or Riccati recursions. However, as parallel hardware is becoming increasingly popular, the demand for efficient parallel algorithms for solving the Newton step is increasing. In this paper a tailored, non-iterative parallel algorithm for computing the Riccati factorization is presented. The algorithm exploits the special structure in the MPC problem, and when sufficiently many processing units are available, the complexity of the algorithm scales logarithmically in the prediction horizon. Computing the Newton step is the main computational bottleneck in many MPC algorithms, and the proposed algorithm can significantly reduce the computation cost for popular state-of-the-art MPC algorithms.
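
    For context, the quantity being parallelized is the classical backward Riccati recursion followed by a forward rollout, which together yield the Newton step of an unconstrained linear-quadratic MPC problem. The sketch below shows only this standard sequential recursion (the paper's contribution is evaluating it in parallel, in time logarithmic in the horizon); the toy double-integrator data and all names are assumptions for the example.

        # Illustrative sketch: sequential Riccati recursion for the LQ MPC Newton step,
        # minimizing sum_t 0.5*(x_t' Q x_t + u_t' R u_t) subject to x_{t+1} = A x_t + B u_t.
        import numpy as np

        def riccati_newton_step(A, B, Q, R, x0, N):
            P, gains = Q.copy(), []                    # terminal cost-to-go P_N = Q
            for _ in range(N):                         # backward pass: gains K_t, cost-to-go P_t
                G = R + B.T @ P @ B
                K = -np.linalg.solve(G, B.T @ P @ A)
                P = Q + A.T @ P @ (A + B @ K)
                gains.append(K)
            gains.reverse()
            xs, us, x = [x0], [], x0
            for K in gains:                            # forward pass: roll out the trajectory
                u = K @ x
                x = A @ x + B @ u
                us.append(u)
                xs.append(x)
            return np.array(xs), np.array(us)

        A = np.array([[1.0, 0.1], [0.0, 1.0]])         # toy double integrator
        B = np.array([[0.0], [0.1]])
        Q, R = np.eye(2), np.array([[1.0]])
        xs, us = riccati_newton_step(A, B, Q, R, np.array([1.0, 0.0]), N=20)
        print(xs[-1])                                  # state steered toward the origin

    The backward pass above is inherently sequential in the horizon length N; the paper's tailored algorithm restructures this dependence so that, with sufficiently many processing units, the factorization can be computed in a logarithmic number of parallel steps.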

    Parallel Weighted Random Sampling

    Data structures for efficient sampling from a set of weighted items are an important building block of many applications. However, few parallel solutions are known. We close many of these gaps both for shared-memory and distributed-memory machines. We give efficient, fast, and practicable algorithms for sampling single items, k items with/without replacement, permutations, subsets, and reservoirs. We also give improved sequential algorithms for alias table construction and for sampling with replacement. Experiments on shared-memory parallel machines with up to 158 threads show near-linear speedups both for construction and queries.
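
    The alias table mentioned in the abstract is the standard building block here. As a hedged, sequential illustration (not the paper's parallel construction), the sketch below builds a Walker/Vose alias table in O(n) time and draws one weighted item in O(1); the function names are made up for the example.

        # Illustrative sketch: Walker/Vose alias table for weighted sampling with replacement.
        import random

        def build_alias(weights):
            n, total = len(weights), sum(weights)
            scaled = [w * n / total for w in weights]
            prob, alias = [0.0] * n, [0] * n
            small = [i for i, s in enumerate(scaled) if s < 1.0]
            large = [i for i, s in enumerate(scaled) if s >= 1.0]
            while small and large:                     # pair an underfull with an overfull bucket
                s, l = small.pop(), large.pop()
                prob[s], alias[s] = scaled[s], l
                scaled[l] -= 1.0 - scaled[s]
                (small if scaled[l] < 1.0 else large).append(l)
            for i in small + large:                    # leftovers are numerically ~1
                prob[i] = 1.0
            return prob, alias

        def sample(prob, alias):
            i = random.randrange(len(prob))            # pick a bucket uniformly
            return i if random.random() < prob[i] else alias[i]

        prob, alias = build_alias([0.1, 0.2, 0.3, 0.4])
        counts = [0] * 4
        for _ in range(100_000):
            counts[sample(prob, alias)] += 1
        print(counts)                                  # roughly proportional to the weights

    Construction is a single linear scan that repeatedly pairs underfull and overfull buckets; those pairing dependences are part of what a parallel construction has to work around.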