28 research outputs found

    Defining the meaning of TPTP formatted proofs

    Get PDF
    The TPTP library is one of the leading problem libraries in the automated theorem proving community. Over time, support was added for problems beyond those in first-order clausal form. TPTP has also been augmented with support for various proof formats output by theorem provers. Such proofs can also be maintained in the TSTP proof library. In this paper we propose an extension of this framework to support the semantic specification of the inference rules used in proofs.
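
    For orientation, a derivation step in a TSTP proof is an annotated formula whose source slot records the inference rule, its status, and its parent formulae. A schematic example (formula and names invented for illustration, not taken from the paper):

        cnf(c_12, plain,
            ( p(X) | r(X) ),
            inference(resolution, [status(thm)], [c_7, c_9])).

    The proposed extension would attach a semantics to rule names such as resolution, so that each recorded step can be checked against the meaning of its rule.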

    Towards the automated modelling and formal verification of analog designs

    Get PDF
    The verification of analog circuits remains a very time consuming and expensive part of the design process. Complete simulation of the state space is not possible; a line is drawn by the designer when it is deemed that enough sets of inputs and outputs have been covered and therefore the circuit is "verified". Unfortunately, bugs could still exist, and for safety critical applications this is not acceptable. Moreover, a bug in the design could lead to costly recalls and a loss of revenue. Formal methods, which use mathematical logic to prove correctness of a design, have been developed. However, available techniques for the formal verification of analog circuits are plagued by inaccuracies and a high level of user effort and interaction. We propose in this thesis a complete methodology for the modelling and formal verification of analog circuits. Bond graphs, which are based on the flow of power, are used to automatically extract the circuit's system of Ordinary Differential Equations. Subsequently, two formal verification methods, one based on automated theorem proving with MetiTarski, the other on predicate abstraction based model checking with HybridSal, are used to verify functional properties on the extracted models. The methodology proposed is mechanical in nature and can be made completely automated. We apply this modelling and verification methodology on a set of analog designs that exhibit complex non-linear behaviour.
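
    To make the kind of extracted model concrete: an RC low-pass filter yields the ODE dv/dt = (V_in - v)/(RC), and a functional property might state that the capacitor voltage never overshoots the supply. The sketch below only simulates and numerically sanity-checks such a property (the thesis instead verifies it formally with MetiTarski or HybridSal); all component values and names are assumptions:

        # Hypothetical sketch: simulate the ODE a bond-graph extraction would
        # yield for an RC circuit and numerically sanity-check a bound property
        # (the thesis verifies such properties formally, not by simulation).
        import numpy as np
        from scipy.integrate import solve_ivp

        R, C, V_IN = 1e3, 1e-6, 5.0          # ohms, farads, volts (assumed)

        def rc_ode(t, v):
            # dv/dt = (V_in - v) / (R*C)
            return (V_IN - v) / (R * C)

        sol = solve_ivp(rc_ode, (0.0, 0.01), [0.0], max_step=1e-5)
        # Property: the capacitor voltage never overshoots the supply voltage.
        assert np.all(sol.y[0] <= V_IN + 1e-9), "bound property violated"
        print(f"max v(t) = {sol.y[0].max():.4f} V")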

    A global workspace framework for combined reasoning

    No full text
    Artificial Intelligence research has produced many effective techniques for solving a wide range of problems. Practitioners tend to concentrate their efforts in one particular problem solving paradigm and, in the main, AI research describes new methods for solving particular types of problems or improvements in existing approaches. By contrast, much less research has considered how to fruitfully combine different problem solving techniques. Numerous studies have demonstrated how a combination of reasoning approaches can improve the effectiveness of one of those methods. Others have demonstrated how, by using several different reasoning techniques, a system or method can be developed to accomplish a novel task, that none of the individual techniques could perform. Combined reasoning systems, i.e., systems which apply disparate reasoning techniques in concert, can be more than the sum of their parts. In addition, they gain leverage from advances in the individual methods they encompass. However, the benefits of combined reasoning systems are not easily accessible, and systems have been hand-crafted to very specific tasks in certain domains. This approach means those systems often suffer from a lack of clarity of design and are inflexible to extension. In order for the field of combined reasoning to advance, we need to determine best practice and identify effective general approaches. By developing useful frameworks, we can empower researchers to explore the potential of combined reasoning, and AI in general. We present here a framework for developing combined reasoning systems, based upon Baars’ Global Workspace Theory. The architecture describes a collection of processes, embodying individual reasoning techniques, which communicate via a global workspace. We present, also, a software toolkit which allows users to implement systems according to the framework. We describe how, despite the restrictions of the framework, we have used it to create systems to perform a number of combined reasoning tasks. As well as being as effective as previous implementations, the simplicity of the underlying framework means they are structured in a straightforward and comprehensible manner. It also makes the systems easy to extend to new capabilities, which we demonstrate in a number of case studies. Furthermore, the framework and toolkit we describe allow developers to harness the parallel nature of the underlying theory by enabling them to readily convert their implementations into distributed systems. We have experimented with the framework in a number of application domains and, through these applications, we have contributed to constraint satisfaction problem solving and automated theory formation

    Exploring the IC3 Algorithm to Improve the Siemens-Swansea Ladder Logic Verification Tool

    Get PDF
    Programmable logic controllers (PLCs) [16] are widely used to control processes, whether in home appliances such as dishwashers and washing machines or in industrial applications where they control components in a production line or railway interlockings [10]. PLCs are often used to control safety-critical systems: a malfunctioning dishwasher or washing machine can flood a home, a malfunctioning robotic arm may lead to human injury or damage to the product, and a malfunctioning interlocking computer could end in trains colliding. There is therefore a practical demand to verify PLCs as safe, more specifically, to verify that their programs remain within a set of states which one considers safe. In the context of railway control systems, Siemens Mobility has been working on verifying the control programs of interlocking computers [51]. These programs are written in Ladder Logic, one of the three specialised programming languages for PLCs introduced in the IEC standard 61131 [21]. Siemens Mobility has been working alongside the Swansea Railway Verification Group to verify Ladder Logic programs for these interlocking computers, and together they developed the Ladder Logic Verifier [87, 51] based on the Inductive Verification and Bounded Model Checking techniques. The issue with Inductive Verification, and therefore with the Ladder Logic Verifier, is that it can return false positives [51] due to a method-inherent over-approximation of the state space. This leads to an ambiguity when the verification process reports that a safety property is not fulfilled: the report can be due either to a false positive or to a genuine mistake in the analysed program. Experience in verification practice suggests that this is the case for 30% to 40% of the around 240 groups of Abstract Safety Properties checked by Siemens. An invariant can be used to rule out these over-approximation false positives and allow the verification to pass, but invariants are labour-intensive to find manually. This problem can be addressed by applying the IC3 algorithm [43, 44, 34], which discovers such invariants automatically. In its current form the algorithm has been successful on a test bed of small Ladder Logic programs; however, it did not scale well when tested on a Siemens industrial interlocking. This thesis presents developments in IC3 implementations that improve its efficiency so that it can discover the invariants needed to let industrial railway interlockings pass under the Ladder Logic Verifier.
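
    To make the over-approximation concrete: plain inductive verification checks only that the property holds initially and is preserved by one unrestricted transition step, so a property true of all reachable states can still fail the step check. A minimal sketch of that check with z3 on a toy transition system (not Siemens' verifier):

        # Minimal z3 sketch of the one-step inductive check whose
        # over-approximation causes the false positives described above.
        from z3 import Int, And, Implies, Not, Solver, unsat

        x, xp = Int('x'), Int('xp')     # current and next-state variable
        init = x == 0
        trans = xp == x + 1             # toy transition relation T
        prop, prop_next = x >= 0, xp >= 0

        def holds(claim):
            s = Solver()
            s.add(Not(claim))           # claim is valid iff its negation is unsat
            return s.check() == unsat

        base = holds(Implies(init, prop))                    # Init |= P
        step = holds(Implies(And(prop, trans), prop_next))   # P & T |= P'
        print("inductive" if base and step
              else "needs a strengthening invariant (what IC3 finds)")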

    Inductive analysis of security protocols in Isabelle/HOL with applications to electronic voting

    Get PDF
    Security protocols are predefined sequences of message exchanges. Their use over computer networks aims to provide certain guarantees to protocol participants. The sensitive nature of many applications resting on protocols encourages the use of formal methods to provide rigorous correctness proofs. This dissertation presents extensions to the Inductive Method for protocol verification in the Isabelle/HOL interactive theorem prover. The current state of the Inductive Method and of other protocol analysis techniques is reviewed. Protocol composition modelling in the Inductive Method is introduced and put into practice by holistically verifying the composition of a certification protocol with an authentication protocol. Unlike some existing approaches, we are not constrained by independence requirements or search space limitations. A special kind of identity-based signature, the auditable kind, is specified in the Inductive Method and integrated in an analysis of a recent ISO/IEC 9798-3 protocol. A side-by-side verification features both a version of the protocol with auditable identity-based signatures and a version with plain ones. The largest part of the thesis presents extensions for the verification of electronic voting protocols. Innovative specification and verification strategies are described. The crucial property of voter privacy, namely the impossibility of knowing how a specific voter voted, is modelled as an unlinkability property between pieces of information. Unlinkability is then specified in the Inductive Method using novel message operators. An electronic voting protocol by Fujioka, Okamoto and Ohta is modelled in the Inductive Method. Its classic confidentiality properties are verified, followed by voter privacy. The approach is shown to be generic enough to be reusable on other protocols while maintaining a coherent line of reasoning. We compare our work with the widespread process equivalence model and examine their respective strengths.
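
    For orientation, the Inductive Method models a protocol as an inductively defined set of event traces and proves properties over an attacker's knowledge closure. A loose Python analogue (hypothetical names, a crude stand-in for Paulson's analz/synth operators in Isabelle/HOL):

        # Loose, hypothetical analogue of Inductive Method traces: events
        # extend traces, and the spy's knowledge is closed under decryption
        # with compromised keys (a stand-in for the analz operator).
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Crypt:
            key: str
            body: str

        def says(trace, sender, receiver, msg):
            return trace + [(sender, receiver, msg)]

        def spy_knows(trace, known_keys):
            knowledge = set()
            for _, _, msg in trace:                 # the spy sees all traffic
                if isinstance(msg, Crypt):
                    if msg.key in known_keys:       # decrypt only with a known key
                        knowledge.add(msg.body)
                else:
                    knowledge.add(msg)
            return knowledge

        t = says([], "A", "B", Crypt("Kab", "ballot-42"))
        assert "ballot-42" not in spy_knows(t, known_keys=set())  # confidentiality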

    Efficient Machine Learning Methods for Document Image Analysis

    Get PDF
    With the exponential growth in the volume of multimedia content on the internet, there has been increasing interest in developing more efficient and scalable algorithms to learn directly from data without excessive restrictions on the nature of the content. In the context of document images, many large-scale digitization projects have called for reliable and scalable triage methods for the enhancement, segmentation, grouping and categorization of captured images. Current approaches, however, are typically limited to a specific class of documents such as scanned books, newspapers, journal articles or forms, and the analysis and processing of more unconstrained and noisy heterogeneous document collections has not been as widely addressed. Additionally, existing machine-learning based approaches for document processing need to be carefully applied to handle the challenges associated with large and imbalanced training data. In this thesis, we address these challenges in three primary applications of document image analysis: low-level document enhancement, mid-level handwritten line segmentation, and high-level classification and retrieval. We first present a data selection method for training Support Vector Machines (SVMs) on large-scale data sets. We apply the proposed approach to pixel-level document image enhancement, and show promising results with a relatively small number of training samples. Second, we present a graph-based method for segmentation of handwritten document images into text-lines which is more efficient and adaptive than previous approaches. Our approach demonstrates that combining results from local and global methods enhances the final performance of text-line segmentation. Third, we present an approach to compute structural similarities between images for classification and retrieval. Results on real-world data sets show that the approach is more effective than earlier approaches when the labeled data is limited. We extend our classification approach to a completely unsupervised setting, where both the number of classes and the representative samples from each class are assumed to be unknown. We present a method for computing similarities based on learned structural patterns and correlations from the given data. Experiments with four different data sets show that our approach can estimate the number of classes in large document collections and group structurally similar images with high accuracy.
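
    One simple instance of such data selection for SVMs (a hedged sketch, not the thesis's actual method): fit a cheap model on a random subset, then keep only the samples closest to its decision boundary for the full training run:

        # Hedged sketch of boundary-focused training-data selection for SVMs
        # (illustrative only; the thesis presents its own selection method).
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.svm import LinearSVC

        X, y = make_classification(n_samples=20000, n_features=20, random_state=0)

        # 1. Fit a cheap probe model on a small random subset.
        rng = np.random.default_rng(0)
        idx = rng.choice(len(X), size=2000, replace=False)
        probe = LinearSVC(dual=False).fit(X[idx], y[idx])

        # 2. Keep only points near the probe's decision boundary.
        margin = np.abs(probe.decision_function(X))
        near = np.argsort(margin)[:4000]          # 20% closest to the boundary

        # 3. Train the final SVM on the selected subset only.
        final = LinearSVC(dual=False).fit(X[near], y[near])
        print(f"accuracy on all data: {final.score(X, y):.3f}")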

    Proof-checking mathematical texts in controlled natural language

    Get PDF
    The research conducted for this thesis has been guided by the vision of a computer program that could check the correctness of mathematical proofs written in the language found in mathematical textbooks. Given that reliable processing of unrestricted natural language input is out of the reach of current technology, we focused on the attainable goal of using a controlled natural language (a subset of a natural language defined through a formal grammar) as the input language for such a program. We have developed a prototype of such a computer program, the Naproche system. This thesis is centered on the novel logical and linguistic theory needed for defining and motivating the controlled natural language and the proof checking algorithm of the Naproche system. This theory provides means for bridging the wide gap between natural and formal mathematical proofs. We explain how our system makes use of and extends existing linguistic formalisms in order to analyse the peculiarities of the language of mathematics. In this regard, we describe a phenomenon of this language not previously described by logicians or linguists, the implicit dynamic function introduction, exemplified by constructs of the form "for every x there is an f(x) such that ...". We show how this function introduction can lead to a paradox analogous to Russell's paradox. To tackle this problem, we developed a novel foundational theory of functions called Ackermann-like Function Theory, which is equiconsistent with ZFC (Zermelo-Fraenkel set theory with the Axiom of Choice) and can be used for imposing limitations on implicit dynamic function introduction in order to avoid this paradox. We give a formal account of implicit dynamic function introduction by extending Dynamic Predicate Logic, a formalism developed by linguists to account for the dynamic nature of natural language quantification, to a novel formalism called Higher-Order Dynamic Predicate Logic, whose semantics is based on Ackermann-like Function Theory. Higher-Order Dynamic Predicate Logic also includes a formal account of the linguistic theory of presuppositions, which we use for clarifying and formally modelling the usage of potentially undefined terms (e.g. 1/x, which is undefined for x=0) and of definite descriptions (e.g. "the even prime number") in the language of mathematics. The semantics of the controlled natural language is defined through a translation from the controlled natural language into an extension of Higher-Order Dynamic Predicate Logic called Proof Text Logic. Proof Text Logic extends Higher-Order Dynamic Predicate Logic in two respects which make it suitable for representing the content of mathematical texts: it contains features for representing complete texts rather than single assertions, and instead of being based on Ackermann-like Function Theory, it is based on a richer foundational theory called Class-Map-Tuple-Number Theory, which has not only maps/functions but also classes/sets, tuples, numbers and Booleans as primitives. The proof checking algorithm checks the deductive correctness of proof texts written in the controlled natural language of the Naproche system. Since the semantics of the controlled natural language is defined through a translation into the Proof Text Logic formalism, the proof checking algorithm is defined on Proof Text Logic input. The algorithm makes use of automated theorem provers for checking the correctness of single proof steps.
In this way, the proof steps in the input text do not need to be as fine-grained as in formal proof calculi, but may contain several reasoning steps at once, just as is usual in natural mathematical texts. The proof checking algorithm has to recognize implicit dynamic function introductions in the input text and has to take care of presuppositions of mathematical statements according to the principles of the formal account of presuppositions mentioned above. We prove two soundness and two completeness theorems for the proof checking algorithm: in each case, one theorem compares the algorithm to the semantics of Proof Text Logic and one theorem compares it to the semantics of standard first-order predicate logic. As a case study for the theory developed in the thesis, we illustrate the working of the Naproche system on a controlled natural language adaptation of the beginning of Edmund Landau's Grundlagen der Analysis.
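
    The per-step checking described above can be pictured as follows (a hypothetical sketch: premises and the negated goal are shipped to an off-the-shelf ATP such as E, and the SZS status line is inspected; the file handling and prover invocation are assumptions, and Naproche's actual pipeline is more elaborate):

        # Hypothetical sketch of discharging one proof step with an external
        # ATP: write premises + conjecture as a TPTP problem, run E, and read
        # the SZS status from its output.
        import subprocess, tempfile

        def step_is_valid(premises, conclusion, timeout=5):
            lines = [f"fof(p{i}, axiom, {p})." for i, p in enumerate(premises)]
            lines.append(f"fof(goal, conjecture, {conclusion}).")
            with tempfile.NamedTemporaryFile("w", suffix=".p", delete=False) as f:
                f.write("\n".join(lines))
                path = f.name
            out = subprocess.run(["eprover", "--auto", path],
                                 capture_output=True, text=True,
                                 timeout=timeout).stdout
            return "SZS status Theorem" in out

        # e.g. step_is_valid(["![X]: (man(X) => mortal(X))", "man(socrates)"],
        #                    "mortal(socrates)")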

    Proceedings of the 22nd Conference on Formal Methods in Computer-Aided Design – FMCAD 2022

    Get PDF
    The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum for researchers in academia and industry to present and discuss groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design, including verification, specification, synthesis, and testing.

    Tools and Algorithms for the Construction and Analysis of Systems

    Get PDF
    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers from the affiliated competition SV-COMP and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.