    Even Shorter Proofs Without New Variables

    Proof formats for SAT solvers have diversified over the last decade, enabling new features such as extended resolution-like capabilities, very general extension-free rules, inclusion of proof hints, and pseudo-Boolean reasoning. Interference-based methods have proven effective, and some theoretical work has been undertaken to better explain their limits and semantics. In this work, we combine the subsumption redundancy notion from (Buss, Thapen 2019) and the overwrite logic framework from (Rebola-Pardo, Suda 2018). Natural generalizations then become apparent, enabling even shorter proofs of the pigeonhole principle (compared to those from (Heule, Kiesl, Biere 2017)) and smaller unsatisfiable core generation.
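
    For context, the pigeonhole formulas mentioned above have a standard CNF encoding; the sketch below (our illustration, not taken from the paper; `php_clauses` and the variable numbering are illustrative) generates it in DIMACS form.

    ```python
    # Minimal sketch: DIMACS-style CNF for the pigeonhole principle PHP(n+1, n):
    # n+1 pigeons cannot fit into n holes with at most one pigeon per hole.
    # Variable v(p, h) is true iff pigeon p sits in hole h.

    def php_clauses(n):
        def v(p, h):           # map (pigeon, hole) to a DIMACS variable number
            return p * n + h + 1

        clauses = []
        # Each of the n+1 pigeons occupies at least one hole.
        for p in range(n + 1):
            clauses.append([v(p, h) for h in range(n)])
        # No two pigeons share a hole (pairwise at-most-one per hole).
        for h in range(n):
            for p in range(n + 1):
                for q in range(p + 1, n + 1):
                    clauses.append([-v(p, h), -v(q, h)])
        return clauses

    if __name__ == "__main__":
        n = 3
        cls = php_clauses(n)                    # PHP(4, 3): unsatisfiable
        print(f"p cnf {(n + 1) * n} {len(cls)}")
        for c in cls:
            print(" ".join(map(str, c)), 0)
    ```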

    Recognition and Exploitation of Gate Structure in SAT Solving

    In theoretical computer science, the SAT problem is the archetypal representative of the class of NP-complete problems, which is why efficient SAT solving is generally considered impossible. Nevertheless, practice often yields astonishing results: some applications generate problems with millions of variables that modern SAT solvers can solve in reasonable time. The practical success of SAT solving rests on current implementations of the conflict-driven clause-learning (CDCL) algorithm, whose performance depends largely on the heuristics employed, which implicitly exploit the structure of the instances arising in industrial practice. In this work, we present a new generic algorithm for efficiently recognizing gate structure in CNF encodings of SAT instances, along with three approaches that exploit this structure explicitly. Our contributions also include the implementation of these approaches in our SAT solver Candy and the development of a tool for the distributed management of benchmark instances and their attributes, the Global Benchmark Database (GBD).
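
    To make the notion of gate structure concrete, the following sketch (ours, not Candy's actual detection algorithm; `find_and_gates` is an illustrative name) shows how a Tseitin-encoded AND gate surfaces in a CNF and how a simple pattern match can recover it.

    ```python
    # Sketch: recognizing a Tseitin-encoded AND gate o <-> (a AND b) in a CNF.
    # The standard encoding contributes the clauses (-o a), (-o b), (o -a -b).

    def find_and_gates(clauses):
        clause_set = {frozenset(c) for c in clauses}
        gates = []
        for c in clauses:
            if len(c) == 3:                      # candidate "long" clause (o -a -b)
                for o in c:
                    rest = [l for l in c if l != o]
                    a, b = -rest[0], -rest[1]
                    # Check that both binary clauses of the encoding are present.
                    if {frozenset([-o, a]), frozenset([-o, b])} <= clause_set:
                        gates.append((o, a, b))  # o is defined as a AND b
        return gates

    cnf = [[-3, 1], [-3, 2], [3, -1, -2],        # gate 3 <-> (1 AND 2)
           [3], [-1, -2]]                        # arbitrary remaining clauses
    print(find_and_gates(cnf))                   # [(3, 1, 2)]
    ```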

    Verified Propagation Redundancy and Compositional UNSAT Checking in CakeML

    Modern SAT solvers can emit independently-checkable proof certificates to validate their results. The state-of-the-art proof system that allows for compact proof certificates is propagation redundancy (PR). However, the only existing method to validate proofs in this system with a formally verified tool requires a transformation to a weaker proof system, which can result in a significant blowup in the size of the proof and increased proof validation time. This article describes the first approach to formally verify PR proofs on a succinct representation. We present (i) a new Linear PR (LPR) proof format, (ii) an extension of the DPR-trim tool to efficiently convert PR proofs into LPR format, and (iii) cake_lpr, a verified LPR proof checker developed in CakeML. We also enhance these tools with (iv) a new compositional proof format designed to enable separate (parallel) proof checking. The LPR format is backwards compatible with the existing LRAT format, but extends LRAT with support for the addition of PR clauses. Moreover, cake_lpr is verified using CakeML's binary code extraction toolchain, which yields correctness guarantees for its machine code (binary) implementation. This further distinguishes our clausal proof checker from existing checkers because unverified extraction and compilation tools are removed from its trusted computing base. We experimentally show that: LPR provides efficiency gains over existing proof formats; cake_lpr's strong correctness guarantees are obtained without significant sacrifice in its performance; and the compositional proof format enables scalable parallel proof checking for large proofs.
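
    For intuition about what such checkers verify, here is a minimal sketch of the reverse unit propagation (RUP) check underlying clausal proof systems in the DRAT/LRAT/LPR family; it is an illustration under simplifying assumptions (no watched literals, no use of propagation hints), not the cake_lpr implementation.

    ```python
    # Sketch of a reverse unit propagation (RUP) check, the core step that
    # clausal proof checkers verify for each added clause: assume the
    # negation of the clause, run unit propagation to fixpoint, and
    # succeed iff a conflict (a falsified clause) arises.

    def rup_check(formula, clause):
        assignment = {-lit for lit in clause}      # assume the clause is false
        changed = True
        while changed:
            changed = False
            for c in formula:
                unassigned = [l for l in c if -l not in assignment]
                if not unassigned:
                    return True                    # conflict: clause follows
                if len(unassigned) == 1 and unassigned[0] not in assignment:
                    assignment.add(unassigned[0])  # unit clause: propagate
                    changed = True
        return False                               # no conflict: check fails

    # (1 or 2) and (-1 or 3) and (-2 or 3) implies the clause (3) by RUP.
    f = [[1, 2], [-1, 3], [-2, 3]]
    print(rup_check(f, [3]))   # True
    ```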

    Towards Next Generation Sequential and Parallel SAT Solvers

    This thesis focuses on improving SAT solving technology. The improvements target two major subjects: sequential SAT solving and parallel SAT solving. To better understand sequential SAT algorithms, the abstract reduction system Generic CDCL is introduced. With Generic CDCL, the soundness of solving techniques can be modeled. Next, the conflict-driven clause-learning algorithm is extended with three techniques, local look-ahead, local probing, and all-UIP learning, that allow more global reasoning during search. These techniques improve the performance of the sequential SAT solver Riss. Then, the formula simplification techniques bounded variable addition, covered literal elimination, and an advanced cardinality constraint extraction are introduced. By using these techniques, the reasoning of the overall SAT solving tool chain becomes stronger than plain resolution. When these three techniques are applied in the formula simplification tool Coprocessor before Riss is used to solve a formula, the performance improves further. Due to the increasing number of cores in CPUs, the scalable parallel SAT solving approach iterative partitioning has been implemented in Pcasso for the multi-core architecture. Related work on parallel SAT solving has been studied to extract the main ideas that can improve Pcasso. Besides parallel formula simplification with bounded variable elimination, the major extension is extended clause sharing with level-based clause tagging, which forms the basis for conflict-driven node killing. The latter makes it possible to better identify unsatisfiable search space partitions. Another improvement is to combine scattering and look-ahead into a superior search space partitioning function. In combination with Coprocessor, the introduced extensions increase the performance of the parallel solver Pcasso. The implemented system turns out to be scalable on the multi-core architecture; hence, iterative partitioning is interesting for future parallel SAT solvers. The implemented solvers participated in international SAT competitions. In 2013 and 2014, Pcasso showed good performance. Riss in combination with Coprocessor won several first, second, and third prizes, including two Kurt Gödel medals. Hence, the introduced algorithms improved modern SAT solving technology.
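
    As a flavor of the cardinality constraint extraction mentioned above, this small sketch (ours, with illustrative names; not Coprocessor's algorithm) recovers an at-most-one constraint from its pairwise CNF encoding.

    ```python
    # Sketch: extracting an at-most-one (AMO) cardinality constraint from its
    # pairwise encoding. AMO(x1..xk) appears in CNF as all binary clauses
    # (-xi -xj), i < j; recovering the constraint lets a solver reason with
    # the stronger "sum of xi <= 1" form instead of plain resolution.
    from itertools import combinations

    def extract_amo(clauses, candidates):
        pairs = {frozenset(c) for c in clauses if len(c) == 2}
        needed = {frozenset([-a, -b]) for a, b in combinations(candidates, 2)}
        return needed <= pairs            # True iff AMO(candidates) is encoded

    cnf = [[-1, -2], [-1, -3], [-2, -3], [1, 2, 3]]
    print(extract_amo(cnf, [1, 2, 3]))    # True: clauses encode AMO(1, 2, 3)
    ```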

    Design, implementation and evaluation of a distributed CDCL framework

    The primary subject of this dissertation is practically solving instances of the Boolean satisfiability problem (SAT) that arise from industrial applications. The invention of the conflict-driven clause-learning (CDCL) algorithm led to enormous progress in this field. CDCL has been augmented with effective pre- and inprocessing techniques that boost its effectiveness. While a considerable amount of work has been done on applying shared-memory parallelism to enhance the performance of CDCL, solving SAT on distributed architectures is studied less thoroughly. In this work, we develop a distributed, CDCL-based framework for SAT solving. This framework consists of three main components: 1. an implementation of the CDCL algorithm that we have written from scratch, 2. a novel parallel SAT algorithm that builds upon this CDCL implementation, and 3. a collection of parallel simplification techniques for SAT instances. We call the resulting framework satUZK; our parallel solving algorithm is called the distributed divide-and-conquer (DDC) algorithm. The DDC algorithm employs a parallel lookahead procedure to dynamically partition the search space. Load balancing is used to ensure that all computational resources are utilized during lookahead. This procedure results in a divide-and-conquer tree that is distributed over all processors. Individual threads are routed through this tree until they arrive at unsolved leaf vertices. Upon arrival, the lookahead procedure is invoked again or the leaf vertex is solved via CDCL. Several extensions to the DDC algorithm are proposed. These include clause sharing and a scheme to locally adjust the LBD score relative to the current search tree vertex; LBD is a measure of the usefulness of clauses that participate in a CDCL search. We evaluate our DDC algorithm empirically and benchmark it against the best distributed SAT algorithms. In this experiment, our DDC algorithm is faster than other distributed, state-of-the-art solvers and solves at least as many instances. In addition to running a parallel algorithm for SAT solving, we also consider parallel simplification. Here, we first develop a theoretical foundation that allows us to prove the correctness of parallel simplification techniques. Using this as a basis, we examine established simplification algorithms for their parallelizability. It turns out that several well-known simplification techniques can be parallelized efficiently. We provide parallel implementations of the techniques and test their effectiveness in empirical experiments. This evaluation finds several combinations of simplification techniques that can solve instances which could not be solved by the DDC algorithm alone.
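
    To illustrate the partitioning idea, the sketch below (an assumption-laden stand-in, not the satUZK lookahead; the occurrence-counting heuristic replaces a real lookahead measure) splits a search space into cubes by recursively branching on a heuristically chosen variable.

    ```python
    # Sketch: lookahead-style divide-and-conquer partitioning of a SAT search
    # space. Splitting on a variable v yields two subproblems that share the
    # clause set but carry the assumption v (resp. -v); recursing to a fixed
    # depth produces 2^depth "cubes" that can be solved in parallel.
    from collections import Counter

    def pick_variable(clauses, assumptions):
        assigned = {abs(a) for a in assumptions}
        counts = Counter(abs(l) for c in clauses for l in c
                         if abs(l) not in assigned)
        return counts.most_common(1)[0][0] if counts else None

    def make_cubes(clauses, depth, assumptions=()):
        v = pick_variable(clauses, assumptions)
        if depth == 0 or v is None:
            return [list(assumptions)]
        return (make_cubes(clauses, depth - 1, assumptions + (v,)) +
                make_cubes(clauses, depth - 1, assumptions + (-v,)))

    cnf = [[1, 2], [-1, 3], [-2, -3], [2, 3]]
    print(make_cubes(cnf, 2))  # four cubes: [2, 3], [2, -3], [-2, 3], [-2, -3]
    ```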

    Improving Model Finding for Integrated Quantitative-qualitative Spatial Reasoning With First-order Logic Ontologies

    Many spatial standards are developed to harmonize the semantics and specifications of GIS data and to enable sophisticated reasoning. All these standards include some types of simple and complex geometric features, and some of them incorporate simple mereotopological relations. But the relations, as used in these standards, only allow the extraction of qualitative information from geometric data and lack formal semantics that link geometric representations with mereotopological or other qualitative relations. This impedes integrated reasoning over qualitative data obtained from geometric sources and “native” topological information – for example, as provided by textual sources where precise locations or spatial extents are unknown or unknowable. To address this issue, the first contribution of this dissertation is a first-order logical ontology that treats geometric features (e.g. polylines, polygons) and relations between them as specializations of more general types of features (e.g. any kind of 2D or 1D features) and mereotopological relations between them. Key to this endeavor is the use of a multidimensional theory of space wherein, unlike in traditional logical theories of mereotopology (like RCC), spatial entities of different dimensions can co-exist and be related. However, terminating or tractable reasoning with such an expressive ontology and potentially large amounts of data is a challenging AI problem. Model finding tools used to verify FOL ontologies with data usually employ a SAT solver to determine the satisfiability of the propositional instantiations (SAT problems) of the ontology. These solvers often experience scalability issues as the number of objects and the size and complexity of the ontology increase, limiting their use to ontologies with small signatures and to building small models with fewer than 20 objects. To investigate how an ontology influences the size of its SAT translation and consequently the model finder's performance, we develop a formalization of FOL ontologies with data. We theoretically identify parameters of an ontology that significantly contribute to the dramatic growth in the size of the SAT problem. The search space of the SAT problem is exponential in the signature of the ontology (the number of predicates in the axiomatization and any additional predicates from skolemization) and the number of distinct objects in the model. Axiomatizations that contain many definitions lead to a large number of propositional clauses; this stems from the conversion of biconditionals to clausal form. We therefore postulate that optional definitions are ideal sentences to eliminate from an ontology to boost the model finder's performance. We then formalize optional definition elimination (ODE) as an FOL ontology preprocessing step and test the simplification on a set of spatial benchmark problems, generating smaller SAT problems (with fewer clauses and variables) without changing the satisfiability or semantic meaning of the problem. We experimentally demonstrate that the reduction in SAT problem size also leads to improved model finding with state-of-the-art model finders, with speedups of 10-99%. Altogether, this dissertation improves spatial reasoning capabilities using FOL ontologies – in terms of a formal framework for integrated qualitative-geometric reasoning, and specific ontology preprocessing steps that can be built into automated reasoners to achieve better speedups in model finding times and scalability with moderately sized datasets.
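
    To see why definitions inflate the SAT translation, consider this toy sketch (our illustration, not the dissertation's formalization; predicate names are hypothetical): grounding a definition p(x) <-> (q(x) AND r(x)) over a finite domain contributes three clauses per object, so each definition's clause count grows linearly with domain size.

    ```python
    # Toy sketch: grounding the definition p(x) <-> (q(x) AND r(x)) over a
    # finite domain. Each biconditional instance converts to three clauses,
    # so definitions multiply the SAT problem's size by the domain size;
    # eliminating optional definitions avoids exactly these clauses.

    def ground_definition(domain):
        var = {}
        def atom(pred, obj):                  # number ground atoms densely
            return var.setdefault((pred, obj), len(var) + 1)

        clauses = []
        for obj in domain:
            p, q, r = atom('p', obj), atom('q', obj), atom('r', obj)
            clauses += [[-p, q], [-p, r], [p, -q, -r]]  # CNF of p <-> (q & r)
        return clauses

    for n in (10, 100, 1000):
        print(n, "objects ->", len(ground_definition(range(n))), "clauses")
    # 10 objects -> 30 clauses; 100 -> 300; 1000 -> 3000
    ```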

    Proceedings of SAT Competition 2014 : Solver and Benchmark Descriptions

    Proceedings of the 21st Conference on Formal Methods in Computer-Aided Design – FMCAD 2021

    The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum for researchers in academia and industry to present and discuss groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design including verification, specification, synthesis, and testing.