
    Certified Roundoff Error Bounds Using Semidefinite Programming.

    Roundoff errors cannot be avoided when implementing numerical programs with finite precision. The ability to reason about rounding is especially important if one wants to explore a range of potential representations, for instance for FPGAs or custom hardware implementations. This problem becomes challenging when the program does not employ solely linear operations, as non-linearities are inherent to many interesting computational problems in real-world applications. Existing approaches to reasoning about roundoff are limited in the presence of nonlinear correlations between variables, leading to either imprecise bounds or high analysis time. Furthermore, while it is easy to implement a straightforward method such as interval arithmetic, sophisticated techniques are less straightforward to implement in a formal setting. Thus there is a need for methods which output certificates that can be formally validated inside a proof assistant. We present a framework to provide upper bounds on absolute roundoff errors. This framework is based on optimization techniques employing semidefinite programming and sums-of-squares certificates, which can be formally checked inside the Coq theorem prover. Our tool covers a wide range of nonlinear programs, including polynomials and transcendental operations as well as conditional statements. We illustrate the efficiency and precision of this tool on non-trivial programs coming from biology, optimization and space control. Our tool produces more precise error bounds for 37 percent of all programs and yields better performance on 73 percent of all programs.
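
    For readers unfamiliar with the interval-arithmetic baseline the abstract calls straightforward, a minimal roundoff bound of that kind can be sketched in a few lines of Python. This is our own illustration, not the paper's tool: the class name and the one-rounding-per-operation error model are assumptions of the sketch, and outward rounding of the interval endpoints themselves is ignored.

        U = 2.0 ** -53  # unit roundoff of IEEE-754 binary64 (sketch assumption)

        class ErrInterval:
            # An interval [lo, hi] enclosing the exact value, plus a bound
            # `err` on the accumulated absolute roundoff error.
            def __init__(self, lo, hi, err=0.0):
                self.lo, self.hi, self.err = lo, hi, err

            def _mag(self):
                return max(abs(self.lo), abs(self.hi))

            def __add__(self, other):
                lo, hi = self.lo + other.lo, self.hi + other.hi
                # propagate both errors, then account for one fresh rounding
                prop = self.err + other.err
                return ErrInterval(lo, hi, prop + U * (max(abs(lo), abs(hi)) + prop))

            def __mul__(self, other):
                corners = [self.lo * other.lo, self.lo * other.hi,
                           self.hi * other.lo, self.hi * other.hi]
                lo, hi = min(corners), max(corners)
                prop = (self._mag() * other.err + other._mag() * self.err
                        + self.err * other.err)
                return ErrInterval(lo, hi, prop + U * (max(abs(lo), abs(hi)) + prop))

        # bound the roundoff of x*y + x for x in [1, 2], y in [-1, 1]
        x, y = ErrInterval(1.0, 2.0), ErrInterval(-1.0, 1.0)
        r = x * y + x
        print(f"range [{r.lo}, {r.hi}], |roundoff| <= {r.err:.3e}")

    Such bounds ignore correlations between subexpressions, which is precisely the imprecision the SDP-based framework above is designed to overcome.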

    SAT and CP: Parallelisation and Applications

    This thesis is concerned with the parallelisation of solvers which search for either an arbitrary, or an optimum, solution to a problem stated in some formal way. We discuss the parallelisation of two solvers, and their application, in three chapters. In the first chapter, we consider SAT, the decision problem of propositional logic, and algorithms for showing the satisfiability or unsatisfiability of propositional formulas. We sketch some proof-theoretic foundations which are related to the strength of different algorithmic approaches. Furthermore, we discuss details of the implementations of SAT solvers, and show how to improve upon existing sequential solvers. Lastly, we discuss the parallelisation of these solvers with a focus on clause exchange, the communication of intermediate results within a parallel solver. The second chapter is concerned with Constraint Programming (CP) with learning. Contrary to classical Constraint Programming techniques, this incorporates learning mechanisms as they are used in the field of SAT solving. We present results from parallelising CHUFFED, a learning CP solver. As this is both a kind of CP and SAT solver, it is not clear which parallelisation approaches work best here. In the final chapter, we discuss sorting networks, which are data-oblivious sorting algorithms, i.e., the comparisons they perform do not depend on the input data. This independence of the input data lends them to parallel implementation, as sketched below. We consider the question of how many parallel sorting steps are needed to sort some inputs, and present both lower and upper bounds for several cases.
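
    The data-oblivious character of sorting networks is easy to see in code: the comparator sequence is fixed before the input is seen. Below is a minimal sketch (our own, not code from the thesis) of the standard depth-3 network on four channels; comparators within one layer touch disjoint channels, so each layer is one parallel sorting step of the kind counted in the final chapter.

        from itertools import permutations

        # A 4-input sorting network: comparators grouped into 3 parallel layers.
        LAYERS = [[(0, 1), (2, 3)],
                  [(0, 2), (1, 3)],
                  [(1, 2)]]

        def run_network(values):
            v = list(values)
            for layer in LAYERS:
                for i, j in layer:        # data-oblivious: (i, j) fixed in advance
                    if v[i] > v[j]:
                        v[i], v[j] = v[j], v[i]
            return v

        # By the 0-1 principle it suffices to test 0/1 inputs, but exhaustively
        # checking all permutations is just as easy at this size.
        assert all(run_network(p) == sorted(p) for p in permutations(range(4)))
        print("depth =", len(LAYERS))     # 3 parallel steps suffice for n = 4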

    Reasons for Hardness in QBF Proof Complexity

    Quantified Boolean Formulas (QBF) extend the canonical NP-complete satisfiability problem by including Boolean quantifiers. Determining the truth of a QBF is PSPACE-complete; this is expected to be a harder problem than satisfiability, and hence QBF solving has much wider applications in practice. QBF proof complexity forms the theoretical basis for understanding QBF solving, as well as providing insights into more general complexity theory, but is less well understood than propositional proof complexity. We begin this thesis by looking at the reasons underlying QBF hardness, and in particular at when the hardness is propositional in nature rather than arising from the quantifiers. We revisit relaxing QU-Res, a previously proposed model for identifying such propositional hardness, and construct an example where relaxing QU-Res is unsuccessful in this regard. We then provide a new model for identifying such hardness, which we prove captures this concept. Now equipped with a means of identifying ‘genuine’ QBF hardness, we prove a new lower bound technique for tree-like QBF proof systems. Lower bounds using this technique allow us to show a new separation between tree-like and dag-like systems. We give a characterisation of lower bounds for a large class of tree-like proof systems, in which such lower bounds play a prominent role. Further to the tree-like bound, we provide a new lower bound technique for QBF proof systems in general. This technique has some similarities to the technique for tree-like systems, but requires some refinement to provide bounds for dag-like systems. We give applications of this new technique by proving lower bounds across several systems. The first such lower bounds are for a very simple family of QBFs. We then provide a construction to combine false QBFs into formulas for which we can show lower bounds in this way, allowing the generation of the first random QBF proof complexity lower bounds.
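
    For orientation, the semantics of the false QBFs these proof systems refute can be stated as a textbook recursion over the quantifier prefix, taking exponential time in line with PSPACE-hardness. A minimal sketch with invented encoding conventions, unrelated to the thesis's techniques:

        def eval_qbf(prefix, matrix, assignment=None):
            # prefix: list of ('A'|'E', var); matrix: CNF as a list of
            # clauses, each a list of signed ints (negative = negated var)
            assignment = assignment or {}
            if not prefix:
                return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
                           for clause in matrix)
            (q, v), rest = prefix[0], prefix[1:]
            branches = [eval_qbf(rest, matrix, {**assignment, v: b})
                        for b in (False, True)]
            return all(branches) if q == 'A' else any(branches)

        # exists x . forall u . (x or u) and (-x or -u)  -- a false QBF
        print(eval_qbf([('E', 1), ('A', 2)], [[1, 2], [-1, -2]]))  # False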

    Quantified Boolean Formulas: Proof Complexity and Models of Solving

    Quantified Boolean formulas (QBF), which form the canonical PSPACE-complete decision problem, are a decidable fragment of first-order logic. Any problem that can be solved in polynomial space can be encoded succinctly as a QBF, including many concrete problems in computer science from domains such as verification, synthesis and planning. Automated solvers for QBF are now reaching the point of industrial applicability. In this thesis, we focus on dependency awareness, a dedicated solving paradigm for QBF. We show that dependency schemes can be envisaged in terms of dependency quantified Boolean formulas (DQBF), exposing strong connections between these two previously disparate entities. By introducing new lower-bound techniques for QBF proof systems, we study the relative strengths of models of dependency-aware solving, including the proposal of new, stronger models. Proof Complexity: Using the strategy extraction paradigm, we introduce new lower-bound techniques that apply to resolution-based QBF proof systems. In particular, we use the technique to prove exponential lower bounds for a new family of QBFs called the equality formulas. Our technique also affords considerably simpler, more intuitive proofs of some existing QBF proof-size lower bounds. Models of Solving: We apply our lower-bound techniques to show new separations for QBF proof systems parametrised by dependency schemes. We also propose new models of dynamic dependency-aware solving and prove that they are exponentially stronger than the existing static models. Finally, we introduce Merge Resolution, a proof system modelling CDCL-style solving for DQBF, which is the first of its kind.
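
    The equality formulas mentioned above admit a compact definition, which we recall here as a generator (our rendering of the family from the QBF proof-complexity literature, with invented variable-numbering conventions; consult the thesis for the authoritative definition):

        def equality_formula(n):
            # exists x1..xn . forall u1..un . exists t1..tn, with matrix
            #   AND_i (x_i v u_i v -t_i) (-x_i v -u_i v -t_i)
            # together with the long clause (t_1 v ... v t_n).
            # False for every n >= 1: the universal player wins by playing
            # u_i := x_i, which forces every t_i to 0.
            x = lambda i: i              # variables 1..n
            u = lambda i: n + i          # variables n+1..2n
            t = lambda i: 2 * n + i      # variables 2n+1..3n
            prefix = ([('E', x(i)) for i in range(1, n + 1)] +
                      [('A', u(i)) for i in range(1, n + 1)] +
                      [('E', t(i)) for i in range(1, n + 1)])
            matrix = []
            for i in range(1, n + 1):
                matrix.append([x(i), u(i), -t(i)])
                matrix.append([-x(i), -u(i), -t(i)])
            matrix.append([t(i) for i in range(1, n + 1)])
            return prefix, matrix

    Plugged into the eval_qbf sketch given two entries above, eval_qbf(*equality_formula(2)) returns False, as expected for this false family.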

    Hard QBFs for Merge Resolution

    We prove the first proof-size lower bounds for the proof system Merge Resolution (MRes [Olaf Beyersdorff et al., 2020]), a refutational proof system for prenex quantified Boolean formulas (QBF) with a CNF matrix. Unlike most QBF resolution systems in the literature, proofs in MRes consist of resolution steps together with information on countermodels, which are syntactically stored in the proofs as merge maps. As demonstrated in [Olaf Beyersdorff et al., 2020], this makes MRes quite powerful: it has strategy extraction by design and allows short proofs for formulas which are hard for classical QBF resolution systems. Here we show the first exponential lower bounds for MRes, thereby uncovering limitations of the approach. Technically, the results are either transferred from bounds in circuit complexity (for restricted versions of MRes) or obtained directly by combinatorial arguments (for full MRes). Our results imply that the MRes approach is largely orthogonal to other QBF resolution models such as the QCDCL resolution systems Q-Res and QU-Res and the expansion systems ∀Exp+Res and IR.

    Controlled and effective interpolation

    Model checking is a well-established technique to verify systems, exhaustively and automatically. The state space explosion, known as the main difficulty in model checking scalability, has been successfully approached by symbolic model checking, which represents programs using logic, usually at the level of propositional logic or first-order theories. Craig interpolation is one of the most successful abstraction techniques used in symbolic methods. Interpolants can be efficiently generated from proofs of unsatisfiability, and have been used as a means of over-approximation to generate inductive invariants, refinement predicates, and function summaries. However, interpolation is still not fully understood. For several theories it is only possible to generate one interpolant, giving the interpolation-based application no chance of further optimization via interpolation. For the theories that have interpolation systems able to generate different interpolants, it is not understood what makes one interpolant better than another, and how to generate the most suitable ones for a particular verification task. The goal of this thesis is to address the problems of how to generate multiple interpolants for theories that still lack this flexibility in their interpolation algorithms, and how to aim at good interpolants.

    This thesis extends the state of the art by introducing novel interpolation frameworks for different theories. For propositional logic, this work provides a thorough theoretical analysis showing which properties are desirable in a labeling function for the Labeled Interpolation Systems framework (LIS). The Proof-Sensitive labeling function is presented, and we prove that it generates interpolants with the smallest number of Boolean connectives in the entire LIS framework. Two variants that aim at controlling the logical strength of propositional interpolants while maintaining a small size are given. The new interpolation algorithms are compared to previous ones from the literature in different model checking settings, showing that they consistently lead to better overall verification performance.

    The Equalities and Uninterpreted Functions (EUF) interpolation system, presented in this thesis, is a duality-based interpolation framework capable of generating multiple interpolants for a single proof of unsatisfiability, and provides control over the logical strength of the interpolants it generates using labeling functions. The labeling functions can be theoretically compared with respect to their strength, and we prove that two of them generate the interpolants with the smallest number of equalities. Our experiments follow the theory, showing that the generated interpolants indeed have different logical strength. We combine propositional and EUF interpolation in a model checking setting, and show that the strength of the interpolation algorithms for different theories has to be aligned in order to generate smaller interpolants.

    This work also introduces the Linear Real Arithmetic (LRA) interpolation system, an interpolation framework for LRA. The framework is able to generate infinitely many interpolants of different logical strength using the duality of interpolants. The strength of the LRA interpolants can be controlled by a normalized strength factor, which makes it straightforward for an interpolation-based application to choose the level of strength it wants for the interpolants. Our experiments with the LRA interpolation system and a model checker show that it is very important for the application to be able to fine-tune the strength of the LRA interpolants in order to achieve optimal performance.

    The interpolation frameworks were implemented and form the interpolation module in OpenSMT2, an open-source efficient SMT solver. OpenSMT2 has been integrated into the propositional interpolation-based model checkers FunFrog and eVolCheck, and into the first-order interpolation-based model checker HiFrog. This thesis presents real-life model checking experiments using the novel interpolation frameworks and the aforementioned tools, showing the viability and strengths of the techniques.
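
    Whatever the underlying theory, a Craig interpolant I for an unsatisfiable pair (A, B) must satisfy three conditions: A implies I; I and B together are unsatisfiable; and I uses only symbols shared between A and B. For propositional formulas the three conditions can be checked by brute force, a handy oracle when experimenting with interpolation algorithms. A self-contained sketch with an invented formula representation, unrelated to OpenSMT2's internals:

        from itertools import product

        def models(formula, variables):
            # enumerate assignments (dict name -> bool) satisfying `formula`,
            # given as a Python callable over such a dict
            for bits in product([False, True], repeat=len(variables)):
                env = dict(zip(variables, bits))
                if formula(env):
                    yield env

        def is_interpolant(A, B, I, vars_A, vars_B, vars_I):
            shared = set(vars_A) & set(vars_B)
            all_vars = sorted(set(vars_A) | set(vars_B))
            return (set(vars_I) <= shared                                # vocabulary
                    and all(I(env) for env in models(A, all_vars))       # A implies I
                    and not any(B(env) for env in models(I, all_vars)))  # I,B unsat

        # A = x and s,  B = (not s) and y,  shared variable s
        A = lambda e: e['x'] and e['s']
        B = lambda e: (not e['s']) and e['y']
        I = lambda e: e['s']
        print(is_interpolant(A, B, I, ['x', 's'], ['s', 'y'], ['s']))  # True

    The thesis's concern is precisely that many formulas pass this check for a given (A, B), and that their size and logical strength matter for the verification task at hand.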

    Even shorter proofs without new variables

    Proof formats for SAT solvers have diversified over the last decade, enabling new features such as extended-resolution-like capabilities, very general extension-free rules, inclusion of proof hints, and pseudo-Boolean reasoning. Interference-based methods have proven effective, and some theoretical work has been undertaken to better explain their limits and semantics. In this work, we combine the subsumption redundancy notion from (Buss, Thapen 2019) and the overwrite logic framework from (Rebola-Pardo, Suda 2018). Natural generalizations then become apparent, enabling even shorter proofs of the pigeonhole principle (compared to those from (Heule, Kiesl, Biere 2017)) and smaller unsatisfiable core generation.
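
    Interference-based proof formats of this kind build on reverse unit propagation (RUP): a clause C may be introduced if unit propagation on F together with the negation of C yields a conflict. A generic checker for this standard notion fits in a screenful of Python (our sketch; it does not implement the subsumption-redundancy or overwrite-logic machinery of the cited papers):

        def unit_propagate(clauses):
            # Exhaustive unit propagation. Returns the set of implied
            # literals, or None if a conflict (falsified clause) is derived.
            assigned = set()
            changed = True
            while changed:
                changed = False
                for clause in clauses:
                    unknown = [l for l in clause if -l not in assigned]
                    if any(l in assigned for l in unknown):
                        continue                    # clause already satisfied
                    if not unknown:
                        return None                 # conflict: clause falsified
                    if len(unknown) == 1:
                        assigned.add(unknown[0])    # unit clause: force literal
                        changed = True
            return assigned

        def is_rup(formula, clause):
            # `clause` is RUP w.r.t. `formula` if propagating its negation
            # (added as unit clauses) produces a conflict
            return unit_propagate(formula + [[-l] for l in clause]) is None

        F = [[1, 2], [-1, 2], [1, -2]]
        print(is_rup(F, [1]))  # True: F entails the literal 1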

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers of the affiliated competition SV-COMP and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.

    Processes and continuous change in a SAT-based planner

    The TM-LPSAT planner can construct plans in domains containing atomic actions and durative actions; events and processes; discrete, real-valued, and interval-valued fluents; reusable resources, both numeric and interval-valued; and continuous linear change to quantities. It works in three stages. In the first stage, a representation of the domain and problem in an extended version of PDDL+ is compiled into a system of Boolean combinations of propositional atoms and linear constraints over numeric variables. In the second stage, a SAT-based arithmetic constraint solver, such as LPSAT or MathSAT, is used to find a solution to the system of constraints. In the third stage, a correct plan is extracted from this solution. We discuss the structure of the planner and show how planning with time and metric quantities is compiled into a system of constraints. The proofs of soundness and completeness over a substantial subset of our extended version of PDDL+ are presented.
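
    The constraint systems produced in the first stage, Boolean combinations of propositional atoms and linear constraints over reals, fall in the SMT fragment nowadays called QF_LRA. As a toy illustration of the second stage, one step of continuous linear change can be encoded and solved with z3 standing in for LPSAT or MathSAT (the tank domain and all variable names below are invented for this sketch, assuming the z3-solver package is installed):

        from z3 import Bool, Real, Solver, Implies, Not, sat

        # one plan step: if action 'fill' runs for dt seconds, the tank level
        # rises at 2 units/s; the goal asks for level 10 starting from level 3
        fill = Bool('fill')
        dt, lvl0, lvl1 = Real('dt'), Real('lvl0'), Real('lvl1')

        s = Solver()
        s.add(lvl0 == 3, dt >= 0)
        s.add(Implies(fill, lvl1 == lvl0 + 2 * dt))   # continuous linear change
        s.add(Implies(Not(fill), lvl1 == lvl0))       # frame axiom when idle
        s.add(lvl1 == 10)                             # goal condition

        if s.check() == sat:
            m = s.model()
            print(m[fill], m[dt])                     # expect: True 7/2

    The third stage then reads a plan off the model: here, run the fill action for 3.5 seconds.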