37 research outputs found

    Eighth Biennial Report: April 2005 – March 2007

    No full text

    Seventh Biennial Report: June 2003 – March 2005

    No full text

    Automated Deduction – CADE 28

    Get PDF
    This open access book constitutes the proceedings of the 28th International Conference on Automated Deduction, CADE 28, held virtually in July 2021. The 29 full papers and 7 system descriptions, presented together with 2 invited papers, were carefully reviewed and selected from 76 submissions. CADE is the major forum for the presentation of research in all aspects of automated deduction, including foundations, applications, implementations, and practical experience. The papers are organized in the following topics: logical foundations; theory and principles; implementation and application; ATP and AI; and system descriptions.

    Predicting SMT solver performance for software verification

    Get PDF
    The approach Why3 takes to interfacing with a wide variety of interactive and automatic theorem provers works well: it is designed to overcome limitations on what can be proved by a system which relies on a single tightly-integrated solver. In common with other systems, however, the degree to which proof obligations (or “goals”) are proved depends as much on the SMT solver as on the properties of the goal itself. In this work, we present a method that uses syntactic analysis to characterise goals and predict the most appropriate solver via machine-learning techniques. Combining solvers in this way (a portfolio-solving approach) maximises the number of goals which can be proved. The driver-based architecture of Why3 presents a unique opportunity to use a portfolio of SMT solvers for software verification. The intelligent scheduling of solvers minimises the time it takes to prove these goals by avoiding solvers which return Timeout and Unknown responses. We assess the suitability of a number of machine-learning algorithms for this scheduling task. The performance of our tool Where4 is evaluated on a dataset of proof obligations. We compare Where4 to a range of SMT solvers and theoretical scheduling strategies. We find that Where4 can outperform individual solvers by proving a greater number of goals in a shorter average time. Furthermore, Where4 can integrate into a Why3 user’s normal workflow, simplifying and automating the non-expert use of SMT solvers for software verification.
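    The portfolio idea sketched in this abstract can be illustrated with a small, hedged example (this is not the Where4 implementation itself): extract syntactic feature counts from each goal, learn a per-solver runtime predictor from past results, and schedule solvers cheapest-first. The solver names are real Why3 back-ends, but the feature set, the toy training data, and the choice of random-forest regressors are assumptions made purely for illustration.

```python
# Illustrative portfolio-scheduling sketch (not the Where4 implementation).
from sklearn.ensemble import RandomForestRegressor

SOLVERS = ["Alt-Ergo", "CVC4", "Z3"]                                 # assumed portfolio members
FEATURES = ["quantifiers", "conjunctions", "int_ops", "array_ops"]   # assumed syntactic features

def featurise(goal_counts):
    """Turn a dict of syntactic counts into a fixed-length feature vector."""
    return [goal_counts.get(f, 0) for f in FEATURES]

# Toy training data: feature vectors for past goals, plus the observed running
# time (seconds) of each solver on them; a timeout is encoded as a large penalty.
TIMEOUT = 60.0
X = [
    featurise({"quantifiers": 2, "conjunctions": 5, "int_ops": 3}),
    featurise({"quantifiers": 0, "conjunctions": 1, "array_ops": 4}),
    featurise({"quantifiers": 7, "conjunctions": 9, "int_ops": 1}),
]
y = [  # one runtime per solver, in SOLVERS order; TIMEOUT means it failed
    [0.4, 1.2, TIMEOUT],
    [TIMEOUT, 0.1, 0.3],
    [2.5, TIMEOUT, TIMEOUT],
]

# One regressor per solver predicts that solver's runtime on an unseen goal.
models = {}
for i, solver in enumerate(SOLVERS):
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(X, [row[i] for row in y])
    models[solver] = model

def schedule(goal_counts):
    """Rank the portfolio by predicted runtime, cheapest solver first."""
    x = [featurise(goal_counts)]
    predicted = {s: models[s].predict(x)[0] for s in SOLVERS}
    return sorted(SOLVERS, key=predicted.get)

print(schedule({"quantifiers": 1, "conjunctions": 3, "int_ops": 2}))
```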

    Logical and deep learning methods for temporal reasoning

    Get PDF
    In this thesis, we study logical and deep learning methods for the temporal reasoning of reactive systems. In Part I, we determine decidability borders for the satisfiability and realizability problems of temporal hyperproperties. Temporal hyperproperties relate multiple computation traces to each other and are expressed in a temporal hyperlogic. In particular, we identify decidable fragments of the highly expressive hyperlogics HyperQPTL and HyperCTL*. As an application, we elaborate on an enforcement mechanism for temporal hyperproperties. We study explicit enforcement algorithms for specifications given as formulas in universally quantified HyperLTL. In Part II, we train a (deep) neural network on the trace generation and realizability problems of linear-time temporal logic (LTL). We consider a method to generate large amounts of additional training data from practical specification patterns. The training data is generated with classical solvers, which provide one of many possible solutions to each formula. We demonstrate that it is sufficient to train on those particular solutions for the neural network to generalize to the semantics of the logic. The neural network can predict solutions even for formulas from benchmarks in the literature on which the classical solver timed out. Additionally, we show that it solves a significant portion of problems from the annual synthesis competition (SYNTCOMP) and even out-of-distribution examples from a recent case study.
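    The data-generation step described in Part II can be sketched as follows, under assumptions: the pattern templates and the classical_solver stub below are placeholders, whereas the thesis calls real LTL tools on practical specification patterns; the resulting (formula, trace) pairs are what a sequence-to-sequence network would then be trained on.

```python
# Illustrative sketch of generating (formula, trace) training pairs.
import random

# A few LTL specification-pattern templates (placeholders for the practical
# specification patterns used in the thesis).
PATTERNS = [
    "G ({p} -> F {q})",   # response
    "G !{p}",             # absence
    "F {p}",              # existence
    "{p} U {q}",          # until
]
PROPS = ["a", "b", "c", "d"]

def sample_formula(rng):
    """Instantiate a random pattern with randomly chosen propositions."""
    p, q = rng.sample(PROPS, 2)
    return rng.choice(PATTERNS).format(p=p, q=q)

def classical_solver(formula, rng):
    """Stub standing in for a classical LTL tool. A real pipeline would call
    an external solver and parse the single satisfying (lasso-shaped) trace it
    returns; here we emit a placeholder string and occasionally simulate a
    timeout."""
    if rng.random() < 0.1:
        return None                           # simulated solver timeout
    return "true ; { " + formula + " }"       # placeholder trace

def build_dataset(n, seed=0):
    """Label n sampled formulas with the one trace the solver happens to return."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        formula = sample_formula(rng)
        trace = classical_solver(formula, rng)
        if trace is not None:                 # skip timeouts
            pairs.append((formula, trace))
    return pairs

# Each (formula, trace) pair becomes an input/target sequence for a
# sequence-to-sequence model (e.g. a Transformer) in the actual setup.
for formula, trace in build_dataset(3):
    print(formula, "->", trace)
```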

    Pseudo-contractions as Gentle Repairs

    Get PDF
    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed over the past decades with numerous ideas in common but with little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
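    A toy sketch of the justification-based idea follows, under strong assumptions: it works over a tiny propositional fragment with a brute-force entailment check, and its "weakening" simply deletes the chosen sentence, whereas the paper's pseudo-contractions and gentle repairs keep genuinely weaker replacements and are formulated for ontology languages.

```python
# Toy justification-based repair sketch (illustrative only).
from itertools import product

def atoms(formulas):
    """Collect the propositional variables occurring in the formulas."""
    return sorted({a for f in formulas for a in f if a.isalpha()})

def holds(formula, valuation):
    """Evaluate a tiny syntax: 'p', '!p', or 'p>q' (implication)."""
    if ">" in formula:
        left, right = formula.split(">")
        return (not holds(left, valuation)) or holds(right, valuation)
    if formula.startswith("!"):
        return not valuation[formula[1:]]
    return valuation[formula]

def entails(kb, goal):
    """Brute-force entailment: every model of kb must satisfy goal."""
    vs = atoms(list(kb) + [goal])
    for bits in product([False, True], repeat=len(vs)):
        val = dict(zip(vs, bits))
        if all(holds(f, val) for f in kb) and not holds(goal, val):
            return False
    return True

def one_justification(kb, goal):
    """Shrink kb to a minimal subset that still entails goal."""
    just = list(kb)
    for f in list(just):
        if entails(set(just) - {f}, goal):
            just.remove(f)
    return set(just)

def gentle_repair(kb, unwanted, weaken):
    """Repeatedly weaken one sentence of some justification until the
    unwanted consequence is no longer entailed."""
    kb = set(kb)
    while entails(kb, unwanted):
        victim = next(iter(one_justification(kb, unwanted)))
        kb.remove(victim)
        weakened = weaken(victim)
        if weakened is not None:
            kb.add(weakened)
    return kb

# Example: the base entails 'q' via p and p>q; here 'weakening' just deletes
# the chosen sentence (the crudest choice; pseudo-contractions keep a weaker one).
kb = {"p", "p>q", "r"}
print(gentle_repair(kb, "q", weaken=lambda f: None))
```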

    Tools and Algorithms for the Construction and Analysis of Systems

    Get PDF
    This open access two-volume set constitutes the proceedings of the 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2021, which was held during March 27 – April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg and changed to an online format due to the COVID-19 pandemic. The 41 full papers presented in the proceedings were carefully reviewed and selected from 141 submissions. The volumes also contain 7 tool papers, 6 tool demo papers, and 9 SV-COMP competition papers. The papers are organized in topical sections as follows: Part I: Game Theory; SMT Verification; Probabilities; Timed Systems; Neural Networks; Analysis of Network Communication. Part II: Verification Techniques (not SMT); Case Studies; Proof Generation/Validation; Tool Papers; Tool Demo Papers; SV-COMP Tool Competition Papers.