
    Faster linearizability checking via P-compositionality

    Linearizability is a well-established consistency and correctness criterion for concurrent data types. An important feature of linearizability is Herlihy and Wing's locality principle, which says that a concurrent system is linearizable if and only if all of its constituent parts (so-called objects) are linearizable. This paper presents P-compositionality, which generalizes the idea behind the locality principle to operations on the same concurrent data type. We implement P-compositionality in a novel linearizability checker. Our experiments with over nine implementations of concurrent sets, including Intel's TBB library, show that our linearizability checker is one order of magnitude faster and/or more space efficient than the state-of-the-art algorithm. Comment: 15 pages, 2 figures.
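
    The core idea is easy to illustrate. Below is a minimal Scala sketch of partition-based checking, assuming a simplified event model; Event, partKey, and the checker parameter are illustrative names rather than the paper's actual implementation. Locality splits a history per object, and P-compositionality splits the operations on one object further (for a set, e.g., by element key), so each small sub-history can be checked independently.

        // Sketch only: a simplified event model. `checkPart` stands in for
        // any single-history linearizability oracle (such as the search
        // procedure the paper accelerates).
        final case class Event(objId: Int, partKey: Int, op: String, out: Int)

        def linearizableByParts(history: Seq[Event],
                                checkPart: Seq[Event] => Boolean): Boolean =
          history
            .groupBy(e => (e.objId, e.partKey)) // locality, then P-compositionality
            .values                             // one sub-history per (object, part)
            .forall(checkPart)                  // each part is checked on its own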

    Scalable Kernelization for Maximum Independent Sets

    The most efficient algorithms for finding maximum independent sets in both theory and practice use reduction rules to obtain a much smaller problem instance called a kernel. The kernel can then be solved quickly using exact or heuristic algorithms, or by kernelizing recursively in the branch-and-reduce paradigm. It is of critical importance for these algorithms that kernelization is fast and returns a small kernel. Current algorithms are either slow but produce a small kernel, or fast but give a large kernel. We attempt to accomplish both of these goals simultaneously by giving an efficient parallel kernelization algorithm based on graph partitioning and parallel bipartite maximum matching. We combine our parallelization techniques with two further accelerations: dependency checking, which prunes reductions that cannot be applied, and reduction tracking, which allows us to stop kernelization when reductions become less fruitful. Our algorithm produces kernels that are orders of magnitude smaller than those of the fastest kernelization methods, while having a similar execution time. Furthermore, our algorithm is able to compute kernels with size comparable to the smallest known kernels, but up to two orders of magnitude faster than previously possible. Finally, we show that our kernelization algorithm can be used to accelerate existing state-of-the-art heuristic algorithms, allowing us to find larger independent sets faster on large real-world networks and synthetic instances. Comment: Extended version.
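
    To make the reduction rules concrete, here is a minimal sequential Scala sketch of two classic reductions (isolated vertex and degree-one vertex); the function name and adjacency-map encoding are illustrative, and the paper's actual contribution is a parallel kernelizer with many more rules plus the dependency checking and reduction tracking described above.

        import scala.collection.mutable

        // Repeatedly take any vertex of degree <= 1 into the independent set
        // (always safe), deleting it and its sole neighbour; whatever is left
        // when no rule applies is the kernel.
        def kernelize(adj: mutable.Map[Int, mutable.Set[Int]]): Set[Int] = {
          val taken = mutable.Set.empty[Int]
          def delete(v: Int): Unit = {
            adj(v).foreach(u => adj(u) -= v)    // unlink v from its neighbours
            adj -= v
          }
          var progress = true
          while (progress) {
            adj.keys.find(v => adj(v).size <= 1) match {
              case Some(v) =>
                taken += v                      // v joins the independent set
                adj(v).toList.foreach(delete)   // remove its neighbour, if any
                delete(v)
              case None => progress = false     // remaining graph = kernel
            }
          }
          taken.toSet
        }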

    Solving Parity Games in Scala

    Parity games are two-player games, played on directed graphs, whose nodes are labeled with priorities. Along a play, the maximal priority occurring infinitely often determines the winner. In the last two decades, a variety of algorithms and successive optimizations have been proposed. The majority of them have been implemented in PGSolver, written in OCaml, which the community has adopted as the de facto platform for solving parity games efficiently, as well as for evaluating their performance in several specific cases. PGSolver includes the Zielonka Recursive Algorithm, which has been shown to perform better than the others on randomly generated games. However, even for arenas with a few thousand nodes (especially over dense graphs), it requires minutes to solve the corresponding game. In this paper, we deeply revisit the implementation of the recursive algorithm, introducing several improvements and making use of the Scala programming language. These choices prove very successful, gaining up to two orders of magnitude in running time.
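
    For reference, here is a compact Scala sketch of Zielonka's recursive algorithm; the Arena encoding and helper names are illustrative, this naive immutable version omits the engineering (memoization, mutable arenas) behind the paper's reported speedups, and it assumes every node has at least one successor.

        // Arena: node set, owner (0 or 1) per node, priority per node, successors.
        final case class Arena(nodes: Set[Int], owner: Int => Int,
                               priority: Int => Int, succ: Int => Set[Int])

        def restrict(a: Arena, s: Set[Int]): Arena =
          Arena(s, a.owner, a.priority, v => a.succ(v).intersect(s))

        // Nodes from which `player` can force the play into `target`.
        def attractor(a: Arena, player: Int, target: Set[Int]): Set[Int] = {
          var attr = target
          var grown = true
          while (grown) {
            grown = false
            for (v <- a.nodes if !attr(v)) {
              val s = a.succ(v).intersect(a.nodes)
              val pulled =
                if (a.owner(v) == player) s.exists(attr)   // may choose to enter
                else s.nonEmpty && s.forall(attr)          // cannot avoid entering
              if (pulled) { attr += v; grown = true }
            }
          }
          attr
        }

        // Returns the winning regions of player 0 and player 1.
        def zielonka(a: Arena): (Set[Int], Set[Int]) =
          if (a.nodes.isEmpty) (Set.empty, Set.empty)
          else {
            val d = a.nodes.map(a.priority).max              // highest priority
            val p = d % 2                                    // player it favours
            val attrP = attractor(a, p, a.nodes.filter(v => a.priority(v) == d))
            val (w0, w1) = zielonka(restrict(a, a.nodes -- attrP))
            val oppWins = if (p == 0) w1 else w0
            if (oppWins.isEmpty) {                           // p wins everywhere
              if (p == 0) (a.nodes, Set.empty) else (Set.empty, a.nodes)
            } else {
              val attrOpp = attractor(a, 1 - p, oppWins)
              val (u0, u1) = zielonka(restrict(a, a.nodes -- attrOpp))
              if (p == 0) (u0, u1 ++ attrOpp) else (u0 ++ attrOpp, u1)
            }
          }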

    Proceedings of the 3rd Workshop on Domain-Specific Language Design and Implementation (DSLDI 2015)

    The goal of the DSLDI workshop is to bring together researchers and practitioners interested in sharing ideas on how DSLs should be designed, implemented, supported by tools, and applied in realistic application contexts. We are interested both in discovering how already well-known domains, such as graph processing or machine learning, can best be supported by DSLs, and in exploring new domains that could be targeted by DSLs. More generally, we are interested in building a community that can drive forward the development of modern DSLs. These informal post-proceedings contain the talk abstracts submitted to the 3rd DSLDI workshop (DSLDI'15) and a summary of the panel discussion on Language Composition.

    Implicit Simulations using Messaging Protocols

    A novel algorithm for performing parallel, distributed computer simulations on the Internet using IP control messages is introduced. The algorithm employs carefully constructed ICMP packets, which enable the required computations to be completed as part of the standard IP communication protocol. After providing a detailed description of the algorithm, experimental applications in the areas of stochastic neural networks and deterministic cellular automata are discussed. As an example of the algorithm's potential power, a simulation of a deterministic cellular automaton involving 10^5 Internet-connected devices was performed. Comment: 14 pages, 3 figures.
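
    The flavour of the approach can be hinted at, though not reproduced, in ordinary code: the paper hand-crafts raw ICMP packets, which the JVM cannot emit directly, so the Scala sketch below uses java.net.InetAddress.isReachable (which sends an ICMP echo request when the JVM has sufficient privilege, and otherwise falls back to a TCP echo-port probe) as a stand-in for one probe per "cell"; the addresses in the usage comment are placeholder TEST-NET hosts.

        import java.net.InetAddress

        // One "cell" per host: its binary state is whether an echo probe
        // came back within the timeout. Illustration only; the paper's
        // algorithm encodes the computation in the ICMP packets themselves.
        def probeStates(hosts: Seq[String], timeoutMs: Int = 1000): Seq[Boolean] =
          hosts.map(h => InetAddress.getByName(h).isReachable(timeoutMs))

        // e.g. probeStates(Seq("192.0.2.1", "192.0.2.2"))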