
    MaxSAT Evaluation 2018: Solver and Benchmark Descriptions

    Non peer reviewed

    Certifying Correctness for Combinatorial Algorithms by Using Pseudo-Boolean Reasoning

    Over the last decades, dramatic improvements in combinatorial optimisation algorithms have significantly impacted artificial intelligence, operations research, and other areas. These advances, however, are achieved through highly sophisticated algorithms that are difficult to verify and prone to implementation errors that can cause incorrect results. A promising approach to detect wrong results is to use certifying algorithms that produce not only the desired output but also a certificate or proof of correctness of the output. An external tool can then verify the proof to determine that the given answer is valid. In the Boolean satisfiability (SAT) community, this concept is well established in the form of proof logging, which has become the standard solution for generating trustworthy outputs. The problem is that there are still some SAT solving techniques for which proof logging is challenging and not yet used in practice. Additionally, there are many formalisms more expressive than SAT, such as constraint programming, various graph problems and maximum satisfiability (MaxSAT), for which efficient proof logging is out of reach for state-of-the-art techniques. This work develops a new proof system building on the cutting planes proof system and operating on pseudo-Boolean constraints (0-1 linear inequalities). We explain how such machine-verifiable proofs can be created for various problems, including parity reasoning, symmetry and dominance breaking, constraint programming, subgraph isomorphism and maximum common subgraph problems, and pseudo-Boolean problems. We implement and evaluate the resulting algorithms and a verifier for the proof format, demonstrating that the approach is practical for a wide range of problems. We are optimistic that the proposed proof system is suitable for designing certifying variants of algorithms in pseudo-Boolean optimisation, MaxSAT and beyond.
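
    As a small worked illustration (ours, not taken from the thesis), the cutting planes proof system manipulates 0-1 linear inequalities directly, and it can simulate clausal resolution because a clause such as x OR y is the same as the pseudo-Boolean constraint x + y >= 1:

        \[
        \underbrace{x + y \ge 1}_{x \lor y} \;+\; \underbrace{(1 - x) + z \ge 1}_{\bar{x} \lor z}
        \;\;\Longrightarrow\;\; y + z + 1 \ge 2
        \;\;\Longrightarrow\;\; y + z \ge 1 \quad (\text{the resolvent } y \lor z)
        \]

    Cutting planes additionally has multiplication and division rules, which make the system strictly stronger than resolution.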

    Solving hard industrial combinatorial problems with SAT

    The topic of this thesis is the development of SAT-based techniques and tools for solving industrial combinatorial problems. First, it describes the architecture of state-of-the-art SAT and SMT solvers based on the classical DPLL procedure. These systems can be used as black boxes for solving combinatorial problems; however, we can sometimes increase their efficiency with slight modifications of the basic algorithm. Therefore, the study and development of techniques for adjusting SAT solvers to specific combinatorial problems is the first goal of this thesis. SAT solvers can only deal with propositional logic, so for solving general combinatorial problems two different approaches are possible: reducing the complex constraints to propositional clauses, or enriching the SAT solver's language. The first approach corresponds to encoding the constraint into SAT; the second corresponds to using propagators, the basis of SMT solvers. Regarding the first approach, in this document we improve the encoding of two of the most important combinatorial constraints: cardinality constraints and pseudo-Boolean constraints. After that, we present a new mixed approach, called lazy decomposition, which combines the advantages of encodings and propagators. The other part of the thesis applies these theoretical improvements to industrial combinatorial problems. We give a method for efficiently scheduling some professional sports leagues with SAT. The results are promising and show that a SAT approach is valid for these problems. However, the chaotic behavior of CDCL-based SAT solvers due to the VSIDS heuristic makes it difficult to obtain similar solutions for two similar problems. This may be inconvenient in real-world settings, since a user expects similar solutions when making slight modifications to the problem specification. In order to overcome this limitation, we have studied and solved the close-solution problem, i.e., the problem of quickly finding a close solution when a similar problem is considered.
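
    For context (a generic sketch under our own assumptions, not one of the improved encodings developed in the thesis), the simplest way to encode the cardinality constraint "at most one of x1, ..., xn is true" into SAT is the pairwise encoding, which adds one binary clause per pair of variables:

        # Minimal sketch using the DIMACS convention: variables are positive
        # integers and a negative integer denotes a negated variable.
        from itertools import combinations

        def at_most_one_pairwise(variables):
            """Return CNF clauses forbidding any two of the variables being true."""
            return [[-a, -b] for a, b in combinations(variables, 2)]

        # Example: at most one of x1, x2, x3 may be true.
        print(at_most_one_pairwise([1, 2, 3]))   # [[-1, -2], [-1, -3], [-2, -3]]

    The pairwise encoding needs a quadratic number of clauses, which is one reason more compact encodings of cardinality and pseudo-Boolean constraints matter in practice.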

    SAT-based preimage attacks on SHA-1

    Hash functions are important cryptographic primitives which map arbitrarily long messages to fixed-length message digests in such a way that: (1) it is easy to compute the message digest given a message, while (2) inverting the hashing process (e.g. finding a message that maps to a specific message digest) is hard. An attack against a hash function is an algorithm that nevertheless manages to invert the hashing process. Hash functions are used in authentication, digital signatures, and key exchange, among other applications. A popular hash function used in many practical application scenarios is the Secure Hash Algorithm (SHA-1). In this thesis we investigate the current state of the art in carrying out preimage attacks against SHA-1 using SAT solvers, and we attempt to find out whether there is any room for improvement in either the encoding or the solving process. We run a series of experiments using SAT solvers on encodings of reduced-difficulty versions of SHA-1. Each experiment tests one aspect of the encoding or solving process, such as determining whether there exists an optimal restart interval or which branching heuristic leads to the best average solving time. An important part of our work is the use of statistically sound methods, i.e. hypothesis tests that take sample size and variation into account. Our most important result is a new encoding of 32-bit modular addition which significantly reduces the time it takes the SAT solver to find a solution compared to previously known encodings. Other results include the finding that reducing the absolute size of the search space by fixing bits of the message up to a certain point actually results in an instance that is harder for the SAT solver to solve. We have also identified some slight improvements to the parameters used by the heuristics of the solver MiniSat; for example, contrary to assertions made in the literature, we find that using longer restart intervals improves the running time of the solver.
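
    To make the encoding question concrete, the sketch below shows the textbook way of turning 32-bit modular addition into CNF via a ripple-carry adder, with one full adder per bit and the final carry dropped. This is our illustration of the baseline, not the improved encoding contributed by the thesis; each inconsistent assignment of a full adder's five variables is blocked by a clause, which is correct but deliberately unoptimised.

        from itertools import product

        def full_adder_clauses(a, b, cin, s, cout):
            """Block every assignment of (a, b, cin, s, cout) that violates
            s = a XOR b XOR cin and cout = majority(a, b, cin)."""
            clauses = []
            for bits in product((0, 1), repeat=5):
                va, vb, vc, vs, vco = bits
                if vs == (va ^ vb ^ vc) and vco == int(va + vb + vc >= 2):
                    continue  # consistent with a full adder, nothing to forbid
                clauses.append([-v if bit else v
                                for v, bit in zip((a, b, cin, s, cout), bits)])
            return clauses

        def encode_add_mod_2_32(a_bits, b_bits, next_var):
            """Encode s = (a + b) mod 2^32 for two lists of 32 variable ids
            (least significant bit first); next_var is the first unused id.
            Returns (sum_bits, clauses, next_var)."""
            clauses, sum_bits = [], []
            carry = next_var; next_var += 1
            clauses.append([-carry])          # the initial carry-in is 0
            for i in range(32):
                s, cout = next_var, next_var + 1
                next_var += 2
                clauses += full_adder_clauses(a_bits[i], b_bits[i], carry, s, cout)
                sum_bits.append(s)
                carry = cout                  # dropping the last carry gives mod 2^32
            return sum_bits, clauses, next_var

        # Usage: a_bits = list(range(1, 33)); b_bits = list(range(33, 65))
        # sum_bits, clauses, nv = encode_add_mod_2_32(a_bits, b_bits, 65)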

    Towards Next Generation Sequential and Parallel SAT Solvers

    This thesis focuses on improving SAT solving technology. The improvements address two major subjects: sequential SAT solving and parallel SAT solving. To better understand sequential SAT algorithms, the abstract reduction system Generic CDCL is introduced. With Generic CDCL, the soundness of solving techniques can be modeled. Next, the conflict-driven clause learning algorithm is extended with three techniques, local look-ahead, local probing and all-UIP learning, that allow more global reasoning during search. These techniques improve the performance of the sequential SAT solver Riss. Then, the formula simplification techniques bounded variable addition, covered literal elimination and an advanced cardinality constraint extraction are introduced. By using these techniques, the reasoning of the overall SAT solving tool chain becomes stronger than plain resolution. When these three techniques are applied in the formula simplification tool Coprocessor before Riss is used to solve a formula, the performance can be improved further. Due to the increasing number of cores in CPUs, the scalable parallel SAT solving approach iterative partitioning has been implemented in Pcasso for the multi-core architecture. Related work on parallel SAT solving has been studied to extract the main ideas that can improve Pcasso. Besides parallel formula simplification with bounded variable elimination, the major extension is the extended clause sharing level based clause tagging, which builds the basis for conflict-driven node killing. The latter makes it possible to better identify unsatisfiable search space partitions. Another improvement is to combine scattering and look-ahead as a superior search space partitioning function. In combination with Coprocessor, the introduced extensions increase the performance of the parallel solver Pcasso. The implemented system turns out to be scalable for the multi-core architecture; hence, iterative partitioning is interesting for future parallel SAT solvers. The implemented solvers participated in international SAT competitions. In 2013 and 2014, Pcasso showed good performance. Riss in combination with Coprocessor won several first, second and third prizes, including two Kurt Gödel medals. Hence, the introduced algorithms improve modern SAT solving technology.
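
    As background for the techniques listed above (a toy sketch of ours, not Riss or Pcasso), the search loop that CDCL extends is the classical DPLL procedure: unit propagation to a fixpoint, followed by splitting on a literal. CDCL adds conflict analysis, clause learning and non-chronological backtracking on top of this skeleton.

        def dpll(clauses, assignment=None):
            """Toy DPLL solver. Clauses are lists of DIMACS-style literals
            (positive/negative ints). Returns a satisfying set of literals,
            or None if the formula is unsatisfiable."""
            assignment = set() if assignment is None else set(assignment)
            clauses = [list(c) for c in clauses]
            changed = True
            while changed:                      # unit propagation to fixpoint
                changed = False
                remaining = []
                for c in clauses:
                    if any(lit in assignment for lit in c):
                        continue                # clause already satisfied
                    c = [lit for lit in c if -lit not in assignment]
                    if not c:
                        return None             # conflict: all literals false
                    if len(c) == 1:
                        assignment.add(c[0])    # unit clause forces its literal
                        changed = True
                    else:
                        remaining.append(c)
                clauses = remaining
            if not clauses:
                return assignment               # every clause satisfied
            lit = clauses[0][0]                 # naive decision heuristic
            for choice in (lit, -lit):          # try both polarities
                result = dpll(clauses, assignment | {choice})
                if result is not None:
                    return result
            return None

        # Example: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
        print(dpll([[1, 2], [-1, 3], [-2, -3]]))  # e.g. {1, 3, -2}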

    Contingent planning under uncertainty via stochastic satisfiability

    We describe a new planning technique that efficiently solves probabilistic propositional contingent planning problems by converting them into instances of stochastic satisfiability (SSAT) and solving these instead. We make fundamental contributions in two areas: the solution of SSAT problems and the solution of stochastic planning problems. This is the first work extending the planning-as-satisfiability paradigm to stochastic domains. Our planner, ZANDER, can solve arbitrary, goal-oriented, finite-horizon partially observable Markov decision processes (POMDPs). An empirical study comparing ZANDER to seven other leading planners shows that its performance is competitive on a range of problems.
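
    For readers unfamiliar with the target formalism (a toy sketch of ours, not ZANDER itself), a stochastic satisfiability instance prefixes a CNF formula with existential and randomized quantifiers; its value is the maximum, over the existential choices, of the probability that the formula is satisfied, which is what makes SSAT a natural target for probabilistic planning.

        def ssat_value(prefix, clauses, assignment=None):
            """Value of an SSAT instance. prefix is a list of (kind, var, p):
            kind 'E' is an existential variable, kind 'R' is a randomized
            variable that is true with probability p. Clauses use DIMACS-style
            integer literals."""
            assignment = assignment or {}
            if not prefix:
                sat = all(any(assignment.get(abs(l)) == (l > 0) for l in c)
                          for c in clauses)
                return 1.0 if sat else 0.0
            (kind, var, p), rest = prefix[0], prefix[1:]
            v_true = ssat_value(rest, clauses, {**assignment, var: True})
            v_false = ssat_value(rest, clauses, {**assignment, var: False})
            return max(v_true, v_false) if kind == 'E' else p * v_true + (1 - p) * v_false

        # E x . R y (p = 0.5) . (x or y) and (not x or not y):
        # setting x true wins exactly when y is false, so the value is 0.5.
        print(ssat_value([('E', 1, None), ('R', 2, 0.5)], [[1, 2], [-1, -2]]))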

    CDCL(Crypto) and Machine Learning based SAT Solvers for Cryptanalysis

    Over the last two decades, we have seen a dramatic improvement in the efficiency of conflict-driven clause-learning Boolean satisfiability (CDCL SAT) solvers on industrial problems from a variety of applications such as verification, testing, security, and AI. The availability of such powerful general-purpose search tools as the SAT solver has led many researchers to propose SAT-based methods for cryptanalysis, including techniques for finding collisions in hash functions and breaking symmetric encryption schemes. A feature of all previously proposed SAT-based cryptanalysis work is that it is blackbox, in the sense that the cryptanalysis problem is encoded as a SAT instance and a CDCL SAT solver is then invoked to solve that instance. A weakness of this approach is that the encoding thus generated may be too large for any modern solver to solve efficiently. Perhaps a more important weakness is that the solver is in no way specialized or tuned to solve the given instance. Finally, very little work has been done to leverage parallelism in the context of SAT-based cryptanalysis. To address these issues, we developed a set of methods that improve on the state of the art in SAT-based cryptanalysis along three fronts. First, we describe an approach called CDCL(Crypto) (inspired by the CDCL(T) paradigm) to tailor the internal subroutines of the CDCL SAT solver with domain-specific knowledge about cryptographic primitives. Specifically, we extend the propagation and conflict analysis subroutines of CDCL solvers with specialized code that has knowledge about the cryptographic primitive being analyzed by the solver. We demonstrate the power of this framework in two cryptanalysis tasks: algebraic fault attacks and differential cryptanalysis of the SHA-1 and SHA-256 cryptographic hash functions. Second, we propose a machine-learning-based parallel SAT solver that performs well on cryptographic problems relative to many state-of-the-art parallel SAT solvers. Finally, we use a formulation of SAT as Bayesian moment matching to address the heuristic initialization problem in SAT solvers.

    Planning Graph as a (Dynamic) CSP: Exploiting EBL, DDB and other CSP Search Techniques in Graphplan

    This paper reviews the connections between Graphplan's planning graph and the dynamic constraint satisfaction problem and motivates the need for adapting CSP search techniques to the Graphplan algorithm. It then describes how explanation-based learning, dependency-directed backtracking, dynamic variable ordering, forward checking, sticky values and random-restart search strategies can be adapted to Graphplan. Empirical results are provided to demonstrate that these augmentations improve Graphplan's performance significantly (up to 1000x speedups) on several benchmark problems. Special attention is paid to the explanation-based learning and dependency-directed backtracking techniques, as they are empirically found to be the most useful in improving the performance of Graphplan; a small illustration of one of the other techniques, forward checking, follows below.
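
    The following is a generic sketch of ours, not the paper's Graphplan implementation: forward checking prunes the domains of not-yet-assigned CSP variables as soon as a variable is assigned, and triggers backtracking when a domain is wiped out. In the planning-graph encoding, mutex relations between actions become exactly such binary constraints.

        def forward_check(domains, constraints, var, value):
            """domains: {variable: set of values}; constraints: {(x, y): pred}
            where pred(vx, vy) is True iff the value pair is allowed.
            Returns pruned copies of the domains after assigning var := value,
            or None if some domain becomes empty (domain wipe-out)."""
            pruned = {v: set(vals) for v, vals in domains.items()}
            pruned[var] = {value}
            for (x, y), allowed in constraints.items():
                if x == var:
                    pruned[y] = {w for w in pruned[y] if allowed(value, w)}
                    if not pruned[y]:
                        return None  # wipe-out: backtrack immediately
                elif y == var:
                    pruned[x] = {w for w in pruned[x] if allowed(w, value)}
                    if not pruned[x]:
                        return None
            return pruned

        # Example: two mutex actions in the same planning-graph level
        # cannot both be selected (1 = selected, 0 = not selected).
        doms = {"a1": {0, 1}, "a2": {0, 1}}
        cons = {("a1", "a2"): lambda u, v: not (u == 1 and v == 1)}
        print(forward_check(doms, cons, "a1", 1))   # {'a1': {1}, 'a2': {0}}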