
    A survey on algorithmic aspects of modular decomposition

    Modular decomposition is a technique that applies to, but is not restricted to, graphs. The notion of a module appears naturally in the proofs of many graph-theoretic theorems. Computing the modular decomposition tree is an important preprocessing step for solving a large number of combinatorial optimization problems. Since the first polynomial-time algorithm in the early 1970s, the algorithmics of modular decomposition has seen important developments. This paper surveys the ideas and techniques that arose from this line of research.
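
    To make the central notion concrete, here is a minimal Python sketch (not from the survey; the adjacency-dict representation and function name are illustrative) of the defining test: a vertex set M is a module if every vertex outside M is adjacent either to all of M or to none of it.

        # A minimal sketch, assuming an undirected graph given as an
        # adjacency-set dict {vertex: set_of_neighbours}.
        def is_module(adj, M):
            """Check whether vertex set M is a module of the graph `adj`."""
            M = set(M)
            for v in adj:
                if v in M:
                    continue
                seen = {u in adj[v] for u in M}  # v's adjacency pattern into M
                if seen == {True, False}:        # v distinguishes two vertices of M
                    return False
            return True

        # Example: in the path a-b-c-d, {b, c} is not a module
        # (a sees b but not c), while any single vertex trivially is.
        adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
        assert not is_module(adj, {'b', 'c'})
        assert is_module(adj, {'a'})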

    PT-Scotch: A tool for efficient parallel graph ordering

    Ordering large graphs in parallel is a difficult problem: on the one hand, minimum-degree algorithms do not parallelize well; on the other hand, obtaining high-quality orderings with the nested dissection algorithm requires efficient graph bipartitioning heuristics, the best sequential implementations of which are also hard to parallelize. This paper presents a set of algorithms, implemented in the PT-Scotch software package, which allow one to order large graphs in parallel, yielding orderings whose quality is only slightly worse than that of state-of-the-art sequential algorithms. Our implementation uses the classical nested dissection approach but relies on several novel features to solve the parallel graph bipartitioning problem. Thanks to these improvements, PT-Scotch produces consistently better orderings than ParMeTiS on large numbers of processors.
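
    For readers unfamiliar with nested dissection, the following Python sketch shows the recursive scheme the paper builds on; the separator heuristic is a toy placeholder, and none of this reflects PT-Scotch's actual code.

        # Nested dissection: split the graph with a vertex separator,
        # order the two halves first, and number the separator last.
        def nested_dissection(adj, vertices, find_separator):
            """Return an elimination ordering of `vertices` (a set)."""
            if len(vertices) <= 2:                 # tiny base case: any order works
                return list(vertices)
            sep, left, right = find_separator(adj, vertices)
            order = nested_dissection(adj, left, find_separator)
            order += nested_dissection(adj, right, find_separator)
            order += list(sep)                     # separator vertices come last
            return order

        def middle_vertex_separator(adj, vertices):
            """Toy heuristic, valid only for path-like graphs, for illustration."""
            vs = sorted(vertices)
            mid = len(vs) // 2
            return {vs[mid]}, set(vs[:mid]), set(vs[mid + 1:])

        path = {i: {i - 1, i + 1} & set(range(8)) for i in range(8)}
        print(nested_dissection(path, set(path), middle_vertex_separator))

    In a real solver the placeholder heuristic is where all the difficulty lives; the paper's contribution is precisely a parallel bipartitioning scheme to play that role.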

    Non-Local Probes Do Not Help with Graph Problems

    This work bridges the gap between distributed and centralised models of computing in the context of sublinear-time graph algorithms. A priori, typical centralised models of computing (e.g., parallel decision trees or centralised local algorithms) seem to be much more powerful than distributed message-passing algorithms: centralised algorithms can directly probe any part of the input, while in distributed algorithms nodes can only communicate with their immediate neighbours. We show that for a large class of graph problems, this extra freedom does not help centralised algorithms at all: for example, efficient stateless deterministic centralised local algorithms can be simulated with efficient distributed message-passing algorithms. In particular, this enables us to transfer existing lower bound results from distributed algorithms to centralised local algorithms.
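
    As a rough illustration of the connection (hypothetical names, not from the paper): a stateless centralised local algorithm whose answer at a node depends only on that node's radius-r neighbourhood uses exactly the information an r-round message-passing algorithm can gather.

        # Collect the set of nodes within distance r of v using only
        # adjacency probes; any output of the form f(radius_r_ball(...))
        # is also computable by an r-round distributed algorithm.
        def radius_r_ball(probe_neighbors, v, r):
            ball, frontier = {v}, {v}
            for _ in range(r):
                frontier = {u for w in frontier for u in probe_neighbors(w)} - ball
                ball |= frontier
            return ball

        adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
        print(radius_r_ball(adj.__getitem__, 1, 2))   # -> {0, 1, 2, 3}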

    Relaxed Schedulers Can Efficiently Parallelize Iterative Algorithms

    There has been significant progress in understanding the parallelism inherent to iterative sequential algorithms: for many classic algorithms, the depth of the dependence structure is now well understood, and scheduling techniques have been developed to exploit this shallow dependence structure for efficient parallel implementations. A related, applied research strand has studied methods by which certain iterative task-based algorithms can be efficiently parallelized via relaxed concurrent priority schedulers. These allow for high concurrency when inserting and removing tasks, at the cost of executing superfluous work due to the relaxed semantics of the scheduler. In this work, we take a step towards unifying these two research directions, by showing that there exists a family of relaxed priority schedulers that can efficiently and deterministically execute classic iterative algorithms such as greedy maximal independent set (MIS) and matching. Our primary result shows that, given a randomized scheduler with an expected relaxation factor of k in terms of the maximum allowed priority inversions on a task, and any graph on n vertices, the scheduler is able to execute greedy MIS with only an additive factor of poly(k) expected additional iterations compared to an exact (but not scalable) scheduler. This counter-intuitive result demonstrates that the overhead of relaxation when computing MIS does not depend on the size or structure of the input graph. Experimental results show that this overhead can be clearly offset by the gain in performance due to the highly scalable scheduler. In sum, we present an efficient method to deterministically parallelize iterative sequential algorithms, with provable runtime guarantees in terms of the number of executed tasks to completion.
    Comment: PODC 2018, pages 377-386 in proceedings
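
    The following Python sketch illustrates the model, not the paper's scheduler: greedy MIS processes vertices in a fixed random priority order, while a k-relaxed queue may hand back any of the k highest-priority pending vertices; popping a vertex before its higher-priority neighbours are decided is the superfluous work the relaxation pays for.

        import random

        def greedy_mis_relaxed(adj, k, seed=0):
            """Greedy MIS under a k-relaxed priority queue (model illustration)."""
            rng = random.Random(seed)
            order = rng.sample(sorted(adj), len(adj))    # random priority order
            prio = {v: i for i, v in enumerate(order)}
            pending, in_mis, decided = set(adj), set(), set()
            wasted = 0
            while pending:
                top = sorted(pending, key=prio.get)[:k]  # k highest-priority tasks
                v = rng.choice(top)                      # relaxed pop: any of them
                if all(u in decided for u in adj[v] if prio[u] < prio[v]):
                    if not any(u in in_mis for u in adj[v]):
                        in_mis.add(v)                    # same output as exact greedy
                    decided.add(v)
                    pending.remove(v)
                else:
                    wasted += 1                          # superfluous work from relaxation
            return in_mis, wasted

        cycle = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
        mis, wasted = greedy_mis_relaxed(cycle, k=3)

    Note that the output is always the MIS the exact greedy order would produce; relaxation only adds wasted iterations, which is the quantity the paper bounds by poly(k).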

    On Conceptually Simple Algorithms for Variants of Online Bipartite Matching

    We present a series of results regarding conceptually simple algorithms for bipartite matching in various online and related models. We first consider a deterministic adversarial model. The best approximation ratio possible for a one-pass deterministic online algorithm is 1/2, which is achieved by any greedy algorithm. Dürr et al. recently presented a 2-pass algorithm called Category-Advice that achieves approximation ratio 3/5. We extend their algorithm to multiple passes. We prove the exact approximation ratio for the k-pass Category-Advice algorithm for all k ≥ 1, and show that the approximation ratio converges to the inverse of the golden ratio, 2/(1+√5) ≈ 0.618, as k goes to infinity. The convergence is extremely fast: the 5-pass Category-Advice algorithm is already within 0.01% of the inverse of the golden ratio. We then consider a natural greedy algorithm, MinDegree, in the online stochastic IID model. This algorithm is an online version of the well-known and extensively studied offline algorithm MinGreedy. We show that MinDegree cannot achieve an approximation ratio better than 1 - 1/e, which is guaranteed by any consistent greedy algorithm in the known IID model. Finally, following the work of Besser and Poloczek, we depart from an adversarial or stochastic ordering and investigate a natural randomized algorithm, MinRanking, in the priority model. Although the priority model allows the algorithm to choose the input ordering in a general but well-defined way, this natural algorithm cannot obtain the approximation ratio of the Ranking algorithm in the ROM model.
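
    The 1/2 baseline is achieved by the simplest conceivable rule. A minimal Python sketch, assuming online vertices arrive together with their offline adjacency lists (representation and names illustrative): every edge of an optimal matching has at least one endpoint that greedy matches, and each greedy edge can block at most two optimal edges, which gives the 1/2 ratio.

        def greedy_online_matching(arrivals):
            """arrivals: iterable of (online_vertex, list_of_offline_neighbours)."""
            matched_offline, matching = set(), {}
            for v, neighbours in arrivals:
                for u in neighbours:
                    if u not in matched_offline:
                        matched_offline.add(u)
                        matching[v] = u
                        break              # match v to the first free neighbour
            return matching

        arrivals = [('x', ['a', 'b']), ('y', ['a']), ('z', ['b', 'c'])]
        print(greedy_online_matching(arrivals))   # -> {'x': 'a', 'z': 'b'}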

    Line-distortion, Bandwidth and Path-length of a graph

    We investigate the minimum line-distortion and the minimum bandwidth problems on unweighted graphs and their relations with the minimum length of a Robertson-Seymour path-decomposition. The length of a path-decomposition of a graph is the largest diameter of a bag in the decomposition. The path-length of a graph is the minimum length over all its path-decompositions. In particular, we show:
    - if a graph G can be embedded into the line with distortion k, then G admits a Robertson-Seymour path-decomposition with bags of diameter at most k in G;
    - for every class of graphs with path-length bounded by a constant, there exist an efficient constant-factor approximation algorithm for the minimum line-distortion problem and an efficient constant-factor approximation algorithm for the minimum bandwidth problem;
    - there is an efficient 2-approximation algorithm for computing the path-length of an arbitrary graph;
    - AT-free graphs and some intersection families of graphs have path-length at most 2;
    - for AT-free graphs, there exist a linear-time 8-approximation algorithm for the minimum line-distortion problem and a linear-time 4-approximation algorithm for the minimum bandwidth problem.
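
    The "length" measure above is easy to state in code. A minimal Python sketch (names illustrative, assuming an adjacency-set dict): the length of a path-decomposition is the largest BFS distance in G between two vertices sharing a bag.

        from collections import deque
        from itertools import combinations

        def bfs_dist(adj, s, t):
            """Shortest-path distance from s to t by breadth-first search."""
            seen, q = {s: 0}, deque([s])
            while q:
                v = q.popleft()
                if v == t:
                    return seen[v]
                for u in adj[v]:
                    if u not in seen:
                        seen[u] = seen[v] + 1
                        q.append(u)
            return float('inf')

        def decomposition_length(adj, bags):
            """Largest diameter (in G) over the bags of a path-decomposition."""
            return max(
                (bfs_dist(adj, u, v) for bag in bags for u, v in combinations(bag, 2)),
                default=0,
            )

        # The path a-b-c-d with bags {a,b}, {b,c}, {c,d} has length 1.
        path = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
        assert decomposition_length(path, [{'a', 'b'}, {'b', 'c'}, {'c', 'd'}]) == 1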