8 research outputs found

    Faster Sorting Networks for 17, 19 and 20 Inputs

    Get PDF
    We present new parallel sorting networks for 17 to 20 inputs. For 17, 19, and 20 inputs these new networks are faster (i.e., they require fewer computation steps) than the previously known best networks. Therefore, we improve upon the known upper bounds for minimal-depth sorting networks on 17, 19, and 20 channels. The networks were obtained using a combination of hand-crafted first layers and a SAT encoding of sorting networks.
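    The notions of comparator network and depth can be made concrete with a small sketch (Python, assumed here purely for illustration; the 4-channel network below is a classic textbook example, not one of the paper's new 17- to 20-channel networks):

    # A comparator network is a sequence of parallel layers of compare-exchange
    # operations on fixed channel pairs; its depth is the number of layers,
    # i.e. the number of parallel computation steps being minimised.

    # Classic depth-3 sorting network on 4 channels (illustrative only).
    LAYERS = [
        [(0, 1), (2, 3)],
        [(0, 2), (1, 3)],
        [(1, 2)],
    ]

    def apply_network(layers, values):
        """Run the network: comparators inside one layer touch disjoint
        channels, so they could execute in parallel."""
        v = list(values)
        for layer in layers:
            for i, j in layer:
                if v[i] > v[j]:
                    v[i], v[j] = v[j], v[i]
        return v

    print(apply_network(LAYERS, [3, 1, 4, 2]))  # [1, 2, 3, 4]
    print("depth:", len(LAYERS))                # 3 parallel steps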

    The Quest for Optimal Sorting Networks: Efficient Generation of Two-Layer Prefixes

    Full text link
    Previous work identifying depth-optimal n-channel sorting networks for 9 ≤ n ≤ 16 is based on exploiting symmetries of the first two layers. However, the naive generate-and-test approach typically applied does not scale. This paper revisits the problem of generating two-layer prefixes modulo symmetries. An improved notion of symmetry is provided and a novel technique based on regular languages and graph isomorphism is shown to generate the set of non-symmetric representations. An empirical evaluation demonstrates that the new method outperforms the generate-and-test approach by orders of magnitude and easily scales up to n = 40.
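    The generate-and-test baseline that this paper improves on can be sketched as follows (Python; the symmetry test here is a simplified stand-in that compares output sets under channel permutations, not the paper's refined notion, and it only runs for very small n):

    from itertools import combinations, permutations

    def layers(n):
        """All non-empty sets of pairwise-disjoint comparators on n channels."""
        pairs = list(combinations(range(n), 2))
        out = []
        def grow(start, used, layer):
            if layer:
                out.append(tuple(layer))
            for k in range(start, len(pairs)):
                i, j = pairs[k]
                if i not in used and j not in used:
                    grow(k + 1, used | {i, j}, layer + [(i, j)])
        grow(0, set(), [])
        return out

    def outputs(prefix, n):
        """Outputs of the prefix on all 0/1 inputs (zero-one principle)."""
        res = set()
        for x in range(2 ** n):
            v = [(x >> c) & 1 for c in range(n)]
            for layer in prefix:
                for i, j in layer:
                    if v[i] > v[j]:
                        v[i], v[j] = v[j], v[i]
            res.add(tuple(v))
        return res

    def canonical(prefix, n):
        """Smallest representation of the output set over all channel
        permutations; equal canonical forms mean symmetric prefixes."""
        outs = outputs(prefix, n)
        return min(tuple(sorted(tuple(v[p[c]] for c in range(n)) for v in outs))
                   for p in permutations(range(n)))

    n = 4  # naive generate-and-test like this is exactly what fails to scale
    prefixes = [(l1, l2) for l1 in layers(n) for l2 in layers(n)]
    print(len(prefixes), "two-layer prefixes,",
          len({canonical(p, n) for p in prefixes}), "up to this symmetry")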

    Twenty-Five Comparators is Optimal when Sorting Nine Inputs (and Twenty-Nine for Ten)

    Full text link
    This paper describes a computer-assisted non-existence proof of nine-input sorting networks consisting of 24 comparators, hence showing that the 25-comparator sorting network found by Floyd in 1964 is optimal. As a corollary, we obtain that the 29-comparator network found by Waksman in 1969 is optimal when sorting ten inputs. This closes the two smallest open instances of the optimal-size sorting network problem, which have been open since the results of Floyd and Knuth from 1966 proving optimality for sorting networks of up to eight inputs. The proof involves a combination of two methodologies: one based on exploiting the abundance of symmetries in sorting networks, and the other based on an encoding of the problem as an instance of propositional satisfiability. We illustrate that, while each of these can single-handedly solve smaller instances of the problem, it is their combination that leads to an efficient solution for nine inputs. Comment: 18 pages.
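    The underlying decision problem can be shown in miniature (Python sketch, assumed for illustration; the paper's actual proof combines symmetry exploitation with a SAT encoding rather than this brute force). By the zero-one principle, a network sorts all inputs if and only if it sorts all 0/1 inputs, and for 4 channels one can exhaustively confirm that no 4-comparator network sorts, so the known 5-comparator network is size-optimal, which is the same kind of statement the paper proves for 9 and 10 channels:

    from itertools import combinations, product

    def sorts_all_binary(network, n):
        """Zero-one principle: a comparator network sorts every input
        iff it sorts every 0/1 input, so 2**n checks suffice."""
        for bits in product((0, 1), repeat=n):
            v = list(bits)
            for i, j in network:
                if v[i] > v[j]:
                    v[i], v[j] = v[j], v[i]
            if any(v[k] > v[k + 1] for k in range(n - 1)):
                return False
        return True

    # Miniature analogue of the paper's result, for 4 channels: restricting to
    # standard-form comparators (i < j) loses no generality, and no sequence of
    # 4 such comparators sorts, hence the 5-comparator network is size-optimal.
    n, size = 4, 4
    pairs = list(combinations(range(n), 2))
    exists = any(sorts_all_binary(net, n) for net in product(pairs, repeat=size))
    print("4-comparator sorting network on 4 channels:", exists)  # False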

    Synchronous Counting and Computational Algorithm Design

    Full text link
    Consider a complete communication network on n nodes, each of which is a state machine. In synchronous 2-counting, the nodes receive a common clock pulse and they have to agree on which pulses are "odd" and which are "even". We require that the solution is self-stabilising (reaching correct operation from any initial state) and that it tolerates f Byzantine failures (nodes that send arbitrary misinformation). Prior algorithms are expensive to implement in hardware: they require a source of random bits or a large number of states. This work consists of two parts. In the first part, we use computational techniques (often known as synthesis) to construct very compact deterministic algorithms for the first non-trivial case of f = 1. While no algorithm exists for n < 4, we show that as few as 3 states per node are sufficient for all values n ≥ 4. Moreover, the problem cannot be solved with only 2 states per node for n = 4, but there is a 2-state solution for all values n ≥ 6. In the second part, we develop and compare two different approaches for synthesising synchronous counting algorithms. Both approaches are based on casting the synthesis problem as a propositional satisfiability (SAT) problem and employing modern SAT solvers. The difference lies in how the SAT problem is solved: either directly, or incrementally within a counter-example-guided abstraction refinement loop. Empirical results suggest that the former technique is more efficient when synthesising time-optimal algorithms, while the latter discovers non-optimal algorithms more quickly. Comment: 35 pages, extended and revised version.
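    To make the 2-counting task concrete, here is a toy fault-free checker (Python; the leader-based rule below is an assumed illustration with f = 0, not one of the Byzantine-tolerant algorithms synthesised in the paper):

    from itertools import product

    N, ROUNDS = 4, 6   # small system, checked exhaustively from every initial state

    def step(state):
        """One synchronous pulse of a toy rule (f = 0 only): every node,
        including node 0 itself, adopts the negation of node 0's previous bit."""
        return tuple(1 - state[0] for _ in range(N))

    def self_stabilises():
        """Self-stabilisation check: from any initial state, after one pulse
        all nodes must agree, and the agreed bit must alternate (odd/even)."""
        for init in product((0, 1), repeat=N):
            state, trace = init, []
            for _ in range(ROUNDS):
                state = step(state)
                trace.append(state)
            if any(len(set(s)) != 1 for s in trace):
                return False
            bits = [s[0] for s in trace]
            if any(bits[t] == bits[t + 1] for t in range(len(bits) - 1)):
                return False
        return True

    print("toy rule stabilises and counts mod 2 (f = 0):", self_stabilises())  # True

    The synthesis described in the abstract instead searches, via SAT solving, for compact transition tables that pass an analogous check even when one node behaves arbitrarily.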
