
    Symbolic reactive synthesis

    In this thesis, we develop symbolic algorithms for the synthesis of reactive systems. Synthesis, the task of deriving correct-by-construction implementations from formal specifications, has the potential to eliminate the need for manual, error-prone programming. The synthesis problem can be formulated as an infinite two-player game in which the system player must satisfy the specification against all possible actions of the environment player. Standard synthesis algorithms represent the underlying synthesis game explicitly and thus scale poorly with the size of the specification. We provide an algorithmic framework to solve the synthesis problem symbolically. In contrast to the standard approaches, we use a succinct representation of the synthesis game, which leads to improved scalability in terms of the symbolically represented parameters. Our algorithm reduces the synthesis game to the satisfiability problem of quantified Boolean formulas (QBF) and dependency quantified Boolean formulas (DQBF). In the encodings, we use propositional quantification to succinctly represent different parts of the implementation, such as the state space and the transition function. We develop highly optimized satisfiability algorithms for QBF and DQBF. Based on a counterexample-guided abstraction refinement (CEGAR) loop, our algorithms avoid an exponential blow-up by exploiting the structure of the underlying symbolic encodings. Further, we extend the solving algorithms to extract certificates in the form of Boolean functions, from which we construct implementations for the synthesis problem. Our empirical evaluation shows that our symbolic approach significantly outperforms previous explicit synthesis algorithms with respect to scalability and solution quality.
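    As a concrete illustration of the CEGAR principle described above, the following self-contained Python sketch decides a toy 2QBF instance (exists X forall Y. phi) by alternating a candidate search with a counterexample search. It is a brute-force sketch of the loop's structure only, not the thesis's optimized algorithm or its synthesis encodings; the formula and variable names are invented for the example.

        # Minimal CEGAR loop for 2QBF (exists X forall Y. phi), sketched by
        # brute force; real solvers replace both searches with SAT calls.
        from itertools import product

        def solve_2qbf(phi, xs, ys):
            """Decide exists xs forall ys. phi; phi maps a dict var->bool
            to bool. Returns a witness assignment for xs, or None."""
            counterexamples = []              # universal assignments seen so far
            while True:
                # Candidate step: find xs satisfying phi on all counterexamples.
                candidate = None
                for bits in product([False, True], repeat=len(xs)):
                    a = dict(zip(xs, bits))
                    if all(phi({**a, **cex}) for cex in counterexamples):
                        candidate = a
                        break
                if candidate is None:
                    return None               # no candidate survives: formula false
                # Check step: look for a ys assignment refuting the candidate.
                cex = None
                for bits in product([False, True], repeat=len(ys)):
                    b = dict(zip(ys, bits))
                    if not phi({**candidate, **b}):
                        cex = b
                        break
                if cex is None:
                    return candidate          # candidate works for every ys
                counterexamples.append(cex)   # refine the abstraction and retry

        # Toy instance: exists x1 x2 forall y. (x1 or y) and (x2 or not y)
        phi = lambda v: (v["x1"] or v["y"]) and (v["x2"] or not v["y"])
        print(solve_2qbf(phi, ["x1", "x2"], ["y"]))  # {'x1': True, 'x2': True}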

    Meta-ontology fault detection

    Ontology engineering is the field within knowledge representation concerned with using logic-based formalisms to represent knowledge, typically in moderately sized knowledge bases called ontologies. How best to develop, use and maintain these ontologies has produced a relatively large body of formal, theoretical and methodological research. One subfield of ontology engineering is ontology debugging, which is concerned with preventing, detecting and repairing errors (or, more generally, pitfalls, bad practices or faults) in ontologies. Due to the logical nature of ontologies and, in particular, entailment, these faults are often hard both to prevent and to detect, and they have far-reaching consequences. This makes ontology debugging one of the principal challenges to more widespread adoption of ontologies in applications. Another important subfield of ontology engineering is ontology alignment: combining multiple ontologies to produce more powerful results than the simple sum of the parts. Ontology alignment compounds the difficulties of ontology debugging by introducing, propagating and exacerbating faults in ontologies. A relevant aspect of the field of ontology debugging is that, due to these challenges, research within it is usually notably constrained in scope, focusing on particular aspects of the problem, on certain subdomains, or on specific methodologies. Similarly, the approaches are often ad hoc and related to other approaches only at a conceptual level. There are no well-established and widely used formalisms, definitions or benchmarks that form a foundation for the field of ontology debugging. In this thesis, I tackle the problem of ontology debugging from a more abstract point of view than usual, surveying the existing literature, extracting common ideas, and especially focusing on formulating them in a common language and under a common approach. Meta-ontology fault detection is a framework for detecting faults in ontologies that uses semantic fault patterns to express, in a systematic way, schematic entailments that typically indicate faults. The formalism that I developed to represent these patterns is called existential second-order query logic (abbreviated ESQ logic). I further reformulated a large proportion of the ideas present in existing research into this framework, as patterns in ESQ logic, providing a pattern catalogue. Most of the work during my PhD has been spent designing and implementing an algorithm to automatically detect arbitrary ESQ patterns in arbitrary ontologies. The result is what we call minimal commitment resolution for ESQ logic, an extension of first-order resolution that draws on important ideas from higher-order unification and implements a novel approach to unification problems using dependency graphs. I have proven important theoretical properties of this algorithm, such as its soundness, its termination (in a certain sense and under certain conditions) and its fairness, or completeness, in the enumeration of infinite spaces of solutions. Moreover, I have produced an implementation of minimal commitment resolution for ESQ logic in Haskell that passes all unit tests and produces non-trivial results on small examples. However, attempts to apply this algorithm to examples of more realistic size have proven unsuccessful, with computation times that exceed our tolerance levels.
    In this thesis, I detail the challenges faced in this regard, present other, successful forms of qualitative evaluation of the meta-ontology fault detection approach, and discuss what I believe are the main causes of the computational feasibility problems, ideas on how to overcome them, and directions of future work that could use the results of the thesis to contribute foundational formalisms, ideas and approaches to ontology debugging capable of properly combining existing constrained research. It is unclear to me whether minimal commitment resolution for ESQ logic can, in its current shape, be implemented efficiently, but I believe that, at the very least, the theoretical and conceptual underpinnings presented in this thesis will be useful for producing more foundational results in the field.
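    Minimal commitment resolution extends first-order resolution, whose core operation is syntactic unification. As background only, here is a minimal self-contained Python sketch of Robinson-style first-order unification with an occurs-check; it shows the standard building block, not the thesis's higher-order, dependency-graph machinery, and the term representation is invented for the sketch.

        # First-order unification sketch. Variables are capitalized strings,
        # e.g. "X"; compound terms and constants are tuples like
        # ("f", "X") or ("a",).

        def is_var(t):
            return isinstance(t, str) and t[:1].isupper()

        def walk(t, subst):
            # Follow variable bindings to the current representative.
            while is_var(t) and t in subst:
                t = subst[t]
            return t

        def occurs(v, t, subst):
            # Occurs-check: does variable v appear inside term t?
            t = walk(t, subst)
            if t == v:
                return True
            return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:])

        def unify(s, t, subst=None):
            """Return a most general unifier extending subst, or None."""
            subst = dict(subst or {})
            stack = [(s, t)]
            while stack:
                a, b = stack.pop()
                a, b = walk(a, subst), walk(b, subst)
                if a == b:
                    continue
                if is_var(a):
                    if occurs(a, b, subst):
                        return None           # occurs-check failure
                    subst[a] = b
                elif is_var(b):
                    stack.append((b, a))      # flip so the variable is first
                elif (isinstance(a, tuple) and isinstance(b, tuple)
                      and a[0] == b[0] and len(a) == len(b)):
                    stack.extend(zip(a[1:], b[1:]))
                else:
                    return None               # functor or arity clash
            return subst

        # unify f(X, g(Y)) with f(a, g(b)): binds X to ('a',), Y to ('b',)
        print(unify(("f", "X", ("g", "Y")), ("f", ("a",), ("g", ("b",)))))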

    Query Answering in Probabilistic Data and Knowledge Bases

    Probabilistic data and knowledge bases are becoming increasingly important in academia and industry. They are continuously extended with new data, powered by modern information extraction tools that associate probabilities with knowledge base facts. The state of the art for storing and processing such data is founded on probabilistic database systems, which are widely and successfully employed. Beyond all the success stories, however, such systems still lack the fundamental machinery to convey some of the valuable knowledge hidden in them to the end user, which limits their potential applications in practice. In particular, in their classical form, such systems are typically based on strong, unrealistic assumptions, such as the closed-world assumption, the closed-domain assumption and the tuple-independence assumption, and they lack commonsense knowledge. These limitations not only lead to unwanted consequences but also put such systems on weak footing in important tasks, query answering being a central one. In this thesis, we enhance probabilistic data and knowledge bases with more realistic data models, thereby allowing for better means of querying them. Building on the long endeavor of unifying logic and probability, we develop different rigorous semantics for probabilistic data and knowledge bases, analyze their computational properties, identify sources of (in)tractability, and design practical, scalable query answering algorithms whenever possible. To achieve this, the present work brings together recent paradigms from logic, probabilistic inference and database theory.
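    To make the tuple-independence assumption mentioned above concrete, the following self-contained Python sketch evaluates a simple existential query over a tuple-independent probabilistic table. The relation and its probabilities are invented for illustration, and the computation shows the classical model that the thesis enriches, not the thesis's own semantics.

        # R is a unary relation; each tuple holds independently with the
        # given marginal probability.
        R = {("alice",): 0.9, ("bob",): 0.5, ("carol",): 0.2}

        def prob_exists(table):
            """P(exists x. R(x)) under tuple independence:
            1 minus the product of (1 - p) over all tuples."""
            q = 1.0
            for p in table.values():
                q *= 1.0 - p
            return 1.0 - q

        print(prob_exists(R))  # 1 - 0.1 * 0.5 * 0.8 = 0.96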

    Automated Deduction – CADE 28

    This open access book constitutes the proceedings of the 28th International Conference on Automated Deduction, CADE 28, held virtually in July 2021. The 29 full papers and 7 system descriptions, presented together with 2 invited papers, were carefully reviewed and selected from 76 submissions. CADE is the major forum for the presentation of research in all aspects of automated deduction, including foundations, applications, implementations and practical experience. The papers are organized under the following topics: logical foundations; theory and principles; implementation and application; ATP and AI; and system descriptions.

    Efficient local search for Pseudo Boolean Optimization

    Algorithms and the Foundations of Software Technology

    Building Logic Toolboxes

    Proof Checking and Logic Programming

    In a world where trusting software systems is increasingly important, formal methods and formal proof can help provide trustable foundations. Proof checking can help to reduce the size of the trusted base, since we do not need to trust an entire theorem prover if we can check the proofs it produces with a trusted (and smaller) checker. Many approaches to building proof checkers require embedding within them a full programming language. In most modern proof checkers and theorem provers, that programming language is a functional programming language, often a variant of ML. In fact, parts of ML (e.g., strong typing, abstract datatypes and higher-order programming) were designed to make ML a trustworthy "metalanguage" for checking proofs. While there is considerable overlap in the foundations of logic programming and proof checking (both benefit from unification, backtracking search, efficient term structures, etc.), the discipline of logic programming has, in fact, played a minor role in the history of proof checking. I will argue that logic programming can have a major role in the future of this important topic.
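    To illustrate the small-trusted-checker idea in the abstract above, here is a minimal self-contained Python sketch of a proof checker whose only rules are premise lookup and modus ponens; it replays a proof object step by step and rejects anything it cannot verify. The representations and the rule set are invented for this sketch and do not correspond to any particular system.

        # Formulas: strings for atoms, ("->", lhs, rhs) for implications.
        # A proof is a list of ("premise", formula) or ("mp", i, j) steps,
        # where step i proved A and step j proved ("->", A, B).

        def check(premises, proof):
            """Return the final derived formula, or None if any step fails."""
            derived = []
            for step in proof:
                if step[0] == "premise":
                    if step[1] not in premises:
                        return None       # claimed premise was never given
                    derived.append(step[1])
                elif step[0] == "mp":
                    i, j = step[1], step[2]
                    if max(i, j) >= len(derived):
                        return None       # refers to a step not yet derived
                    a, imp = derived[i], derived[j]
                    if not (isinstance(imp, tuple) and imp[0] == "->"
                            and imp[1] == a):
                        return None       # modus ponens does not apply
                    derived.append(imp[2])
                else:
                    return None           # unknown inference rule
            return derived[-1] if derived else None

        premises = ["p", ("->", "p", "q")]
        proof = [("premise", "p"), ("premise", ("->", "p", "q")), ("mp", 0, 1)]
        print(check(premises, proof))  # q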

    Disproving in First-Order Logic with Definitions, Arithmetic and Finite Domains

    This thesis explores several methods which enable a first-order reasoner to conclude satisfiability of a formula modulo an arithmetic theory. The most general method requires restricting certain quantifiers to range over finite sets; such assumptions are common in the software verification setting. In addition, the use of first-order reasoning allows for an implicit representation of those finite sets, which can avoid scalability problems that affect other quantified reasoning methods. These new techniques form a useful complement to existing methods that are primarily aimed at proving validity. The Superposition calculus for hierarchic theory combinations provides a basis for reasoning modulo theories in a first-order setting. The recent account of 'weak abstraction' and related improvements make an implementation of the calculus practical. Also, for several logical theories of interest, Superposition is an effective decision procedure for the quantifier-free fragment. The first contribution is an implementation of that calculus (Beagle), including an optimized implementation of Cooper's algorithm for quantifier elimination in the theory of linear integer arithmetic. This includes a novel means of extracting values for quantified variables in satisfiable integer problems. Beagle won an efficiency award at the CADE ATP System Competition (CASC-J7), and won the arithmetic non-theorem category at CASC-25. This implementation is the starting point for solving the 'disproving with theories' problem. Some hypotheses can be disproved by showing that, together with the axioms, the hypothesis is unsatisfiable. Often this is relative to other axioms that enrich a base theory by defining new functions. In that case, the disproof is contingent on the satisfiability of the enrichment. Satisfiability in this context is undecidable. Instead, general characterizations are given of definition formulas that do not alter the satisfiability status of the main axioms. These general criteria apply to recursive definitions, definitions over lists, and arrays. This allows proving some non-theorems which are otherwise intractable, and justifies similar disproofs of non-linear arithmetic formulas. When the hypothesis is contingently true, disproof requires proving the existence of a model. If the Superposition calculus saturates a clause set, then a model exists, but only when the clause set satisfies a completeness criterion. This requires each instance of an uninterpreted, theory-sorted term to have a definition in terms of theory symbols. The second contribution is a procedure that creates such definitions, given that a subset of the quantifiers ranges over finite sets. Definitions are produced in a counterexample-driven way via a sequence of over- and under-approximations to the clause set. Two descriptions of the method are given: the first uses the component solver modularly, but has an inefficient counterexample heuristic; the second is more general, correcting many of the inefficiencies of the first, yet it requires tracking clauses through a proof. This latter method is shown to apply also to lists and to problems with unbounded quantifiers. Together, these tools give new ways of applying successful first-order reasoning methods to problems involving interpreted theories.
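    For reference, the core equivalence behind Cooper's quantifier elimination mentioned above can be stated as follows; this is the textbook form for a formula normalized so that the eliminated variable x has coefficient 1, not Beagle's optimized implementation. With delta the least common multiple of the moduli of the divisibility atoms of phi, B the set of lower-bound terms b from atoms of the form b < x, and phi_{-infinity} the result of replacing each lower-bound atom by false and each upper-bound atom by true:

        \exists x\,\varphi(x) \;\equiv\;
          \bigvee_{j=1}^{\delta} \varphi_{-\infty}(j)
          \;\vee\;
          \bigvee_{j=1}^{\delta} \bigvee_{b \in B} \varphi(b + j)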