    Typical-case complexity and the SAT competitions

    The aim of this paper is to gather insight into the typical-case complexity of the Boolean Satisfiability (SAT) problem by mining data from the SAT competitions. Specifically, the statistical properties of the SAT benchmarks and their impact on complexity are investigated, as well as connections between different metrics of complexity. While some of the investigated properties and relationships are “folklore” in the SAT community, this study aims to show scientifically which parts of the folklore are true and which are not.
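
    As a minimal illustration of one such statistical property, the sketch below computes the clause-to-variable ratio of a DIMACS CNF benchmark, a metric long associated in the folklore with typical-case hardness (random 3-SAT instances are hardest near a ratio of roughly 4.26). The file name and the choice of metric are illustrative, not taken from the paper.

        # Minimal sketch: clause-to-variable ratio of a DIMACS CNF file.
        def clause_variable_ratio(path: str) -> float:
            with open(path) as f:
                for line in f:
                    if line.startswith("p cnf"):
                        # Header line: "p cnf <#variables> <#clauses>"
                        _, _, num_vars, num_clauses = line.split()
                        return int(num_clauses) / int(num_vars)
            raise ValueError("no DIMACS header found")

        if __name__ == "__main__":
            print(clause_variable_ratio("benchmark.cnf"))  # hypothetical file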

    Active Learning for SAT Solver Benchmarking

    Benchmarking is a crucial phase when developing algorithms. This also applies to solvers for the SAT (propositional satisfiability) problem. Benchmark selection is about choosing representative problem instances that reliably discriminate solvers based on their runtime. In this paper, we present a dynamic benchmark selection approach based on active learning. Our approach predicts the rank of a new solver among its competitors with minimal runtime and maximal rank-prediction accuracy. We evaluated this approach on the Anniversary Track dataset from the 2022 SAT Competition. Our selection approach can predict the rank of a new solver after about 10% of the time it would take to run the solver on all instances of this dataset, with a prediction accuracy of about 92%. We also discuss the importance of instance families in the selection process. Overall, our tool provides a reliable way for solver engineers to determine a new solver’s performance efficiently.
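
    A hedged sketch of the generic active-learning loop behind such dynamic selection follows: run the new solver only on the instance whose outcome is currently most informative, and stop once the rank estimate stabilizes. The callables run_solver, predict_rank, and uncertainty are hypothetical stand-ins for the paper's trained models, and the convergence check is a toy one.

        # Sketch of an active-learning benchmark selection loop.
        # run_solver, predict_rank, and uncertainty are hypothetical
        # callables standing in for the paper's actual models.
        def select_benchmarks(instances, run_solver, predict_rank,
                              uncertainty, budget):
            observed = {}            # instance -> measured runtime
            last_rank = None
            for _ in range(budget):
                pool = [i for i in instances if i not in observed]
                if not pool:
                    break
                # Query the most informative instance next.
                nxt = max(pool, key=lambda i: uncertainty(i, observed))
                observed[nxt] = run_solver(nxt)
                rank = predict_rank(observed)
                if rank == last_rank:   # toy convergence criterion
                    break
                last_rank = rank
            return predict_rank(observed), observed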

    Why CP Portfolio Solvers Are (under)Utilized? Issues and Challenges

    It is well recognized that a single, arbitrarily efficient solver can be significantly outperformed by a portfolio solver exploiting a combination of different solvers, each of which may be slower on average. Despite the success of portfolio solvers in solving competitions, they are rarely used in practice. In this paper we give an overview of the main limitations that hinder the practical adoption and development of portfolio solvers within the Constraint Programming (CP) paradigm, also discussing possible ways to overcome them and potential extensions outside the CP field.

    SUNNY-CP: a Sequential CP Portfolio Solver

    The Constraint Programming (CP) paradigm makes it possible to model and solve Constraint Satisfaction / Optimization Problems (CSPs / COPs). A CP portfolio solver is a particular constraint solver that takes advantage of a portfolio of different CP solvers in order to solve a given problem by properly exploiting Algorithm Selection techniques. In this work we present sunny-cp, a CP portfolio solver for both CSPs and COPs that has also proven competitive in the MiniZinc Challenge, the reference competition for CP solvers.
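
    The sketch below illustrates the kind of k-nearest-neighbour algorithm selection that underlies the SUNNY approach: for a new instance, pick the solver that solved most of its k feature-space neighbours, breaking ties by average runtime. The training-data layout is an assumption, and the real sunny-cp goes further, building a time-sharing schedule of solvers rather than selecting a single one.

        import math

        # Sketch of k-NN algorithm selection in the spirit of SUNNY.
        # train: list of (feature_vector, {solver: runtime or None})
        # pairs, where None marks a timeout; this layout is assumed.
        def knn_select(features, train, k=10):
            neighbours = sorted(train,
                                key=lambda t: math.dist(features, t[0]))[:k]
            solvers = {s for _, perf in neighbours for s in perf}

            def score(s):
                runtimes = [perf[s] for _, perf in neighbours
                            if perf.get(s) is not None]
                # Prefer solvers that solve more neighbours,
                # then lower average runtime.
                return (-len(runtimes),
                        sum(runtimes) / len(runtimes) if runtimes
                        else float("inf"))

            return min(solvers, key=score)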

    SAT-based preimage attacks on SHA-1

    Hash functions are important cryptographic primitives which map arbitrarily long messages to fixed-length message digests in such a way that: (1) it is easy to compute the message digest given a message, while (2) inverting the hashing process (e.g. finding a message that maps to a specific message digest) is hard. One attack against a hash function is an algorithm that nevertheless manages to invert the hashing process. Hash functions are used in authentication, digital signatures, and key exchange, among other applications. A popular hash function used in many practical application scenarios is the Secure Hash Algorithm (SHA-1). In this thesis we investigate the current state of the art in carrying out preimage attacks against SHA-1 using SAT solvers, and we attempt to find out whether there is any room for improvement in either the encoding or the solving process. We run a series of experiments using SAT solvers on encodings of reduced-difficulty versions of SHA-1. Each experiment tests one aspect of the encoding or solving process, such as determining whether there exists an optimal restart interval or determining which branching heuristic leads to the best average solving time. An important part of our work is the use of statistically sound methods, i.e. hypothesis tests which take sample size and variation into account. Our most important result is a new encoding of 32-bit modular addition which significantly reduces the time it takes the SAT solver to find a solution compared to previously known encodings. Other results include the fact that reducing the absolute size of the search space by fixing bits of the message up to a certain point actually results in an instance that is harder for the SAT solver to solve. We have also identified some slight improvements to the parameters used by the heuristics of the solver MiniSat; for example, contrary to assertions made in the literature, we find that using longer restart intervals improves the running time of the solver.
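
    For context, the sketch below gives the textbook baseline that such work improves on: encoding n-bit modular addition (z = x + y mod 2^n) in CNF as a ripple-carry chain of full adders, with each sum bit constrained by XOR clauses and each carry by majority clauses. This is a generic illustrative encoding, not the thesis's improved one.

        # Sketch: CNF encoding of n-bit modular addition (z = x + y mod 2^n)
        # as a ripple-carry adder. Variables are DIMACS-style positive ints;
        # a negative literal -v means "not v".
        def encode_adder(x, y, z, next_var, n=32):
            """x, y, z: lists of n variable ids, least significant bit first.
            next_var: first unused variable id (consumed for the carries).
            Returns (clauses, next_free_var)."""
            carry = next_var; next_var += 1
            clauses = [[-carry]]            # carry into bit 0 is constant 0
            for i in range(n):
                p, q, c, s = x[i], y[i], carry, z[i]
                # s <-> p XOR q XOR c: forbid each of the 8 bad assignments.
                for sp in (1, -1):
                    for sq in (1, -1):
                        for sc in (1, -1):
                            parity = (sp < 0) ^ (sq < 0) ^ (sc < 0)
                            clauses.append([sp * p, sq * q, sc * c,
                                            s if parity else -s])
                cout = next_var; next_var += 1
                # cout <-> majority(p, q, c): the six standard clauses.
                clauses += [[-p, -q, cout], [-p, -c, cout], [-q, -c, cout],
                            [p, q, -cout], [p, c, -cout], [q, c, -cout]]
                carry = cout
            return clauses, next_var        # final carry dropped: mod 2^n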

    The Schulze Method of Voting

    We propose a new single-winner election method ("Schulze method") and prove that it satisfies many academic criteria (e.g. monotonicity, reversal symmetry, resolvability, independence of clones, Condorcet criterion, k-consistency, polynomial runtime). We then generalize this method to proportional representation by the single transferable vote ("Schulze STV") and to methods to calculate a proportional ranking ("Schulze proportional ranking"). Furthermore, we propose a generalization of the Condorcet criterion to multi-winner elections. This paper contains a large number of examples to illustrate the proposed methods.
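
    The computational core of the single-winner method is a strongest-path calculation over the pairwise-preference matrix. The sketch below shows that core as a Floyd-Warshall-style pass (a path's strength is its weakest link) and then ranks candidates by pairwise path-strength wins; the input format is an assumption, and the full method's tie-breaking rules are omitted.

        # Sketch of the strongest-path core of the Schulze method.
        # d[i][j] = number of voters who prefer candidate i to candidate j.
        def schulze_ranking(d):
            n = len(d)
            # p[i][j]: strength of the strongest path from i to j,
            # seeded with the direct pairwise defeats.
            p = [[d[i][j] if d[i][j] > d[j][i] else 0 for j in range(n)]
                 for i in range(n)]
            for k in range(n):
                for i in range(n):
                    if i == k:
                        continue
                    for j in range(n):
                        if j == i or j == k:
                            continue
                        # Routing through k: strength = weakest link.
                        p[i][j] = max(p[i][j], min(p[i][k], p[k][j]))
            # i ranks above j when the strongest path i->j beats j->i.
            wins = [sum(p[i][j] > p[j][i] for j in range(n) if j != i)
                    for i in range(n)]
            return sorted(range(n), key=lambda i: -wins[i])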

    Towards Next Generation Sequential and Parallel SAT Solvers

    This thesis focuses on improving SAT solving technology. The improvements concern two major subjects: sequential SAT solving and parallel SAT solving. To better understand sequential SAT algorithms, the abstract reduction system Generic CDCL is introduced. With Generic CDCL, the soundness of solving techniques can be modeled. Next, the conflict-driven clause learning algorithm is extended with three techniques, local look-ahead, local probing and all-UIP learning, that allow more global reasoning during search. These techniques improve the performance of the sequential SAT solver Riss. Then, the formula simplification techniques bounded variable addition, covered literal elimination and an advanced cardinality constraint extraction are introduced. By using these techniques, the reasoning of the overall SAT solving tool chain becomes stronger than plain resolution. When these three techniques are applied in the formula simplification tool Coprocessor before Riss is used to solve a formula, the performance can be improved further. Due to the increasing number of cores in CPUs, the scalable parallel SAT solving approach iterative partitioning has been implemented in Pcasso for the multi-core architecture. Related work on parallel SAT solving has been studied to extract the main ideas that can improve Pcasso. Besides parallel formula simplification with bounded variable elimination, the major extension is extended clause sharing with level-based clause tagging, which builds the basis for conflict-driven node killing. The latter makes it possible to better identify unsatisfiable search-space partitions. Another improvement is to combine scattering and look-ahead into a superior search-space partitioning function. In combination with Coprocessor, the introduced extensions increase the performance of the parallel solver Pcasso. The implemented system turns out to scale well on the multi-core architecture; hence, iterative partitioning is interesting for future parallel SAT solvers. The implemented solvers participated in international SAT competitions. In 2013 and 2014 Pcasso showed good performance, and Riss in combination with Coprocessor won several first, second and third prizes, including two Kurt-Gödel-Medals. Hence, the introduced algorithms improved modern SAT solving technology.
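
    As an illustration of the search-space partitioning idea behind iterative partitioning, the sketch below implements plain scattering: given guiding cubes (here assumed to come from a look-ahead heuristic), it produces disjoint subformulas, the first asserting cube c1, the second negating c1 and asserting c2, and so on, plus a remainder partition negating every cube. The partitions together cover the whole search space, so the formula is unsatisfiable exactly when every partition is; the details of Pcasso's actual partitioning function differ.

        # Sketch of scattering-style search-space partitioning.
        # clauses: CNF as lists of signed ints; cubes: lists of literals,
        # assumed to be produced by a look-ahead heuristic.
        def scatter(clauses, cubes):
            partitions, negated = [], []
            for cube in cubes:
                # This partition asserts the cube (unit clauses) plus the
                # negation of every earlier cube (one clause per cube).
                partitions.append(clauses + negated +
                                  [[lit] for lit in cube])
                negated = negated + [[-lit for lit in cube]]
            partitions.append(clauses + negated)   # remainder partition
            return partitions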