83 research outputs found

    Propagating Qualitative Values Through Quantitative Equations

    In most practical problems where traditional numeric simulation is not adequate, one needs to reason about a system with both qualitative and quantitative equations. In this paper, we address the problem of propagating qualitative values, represented as intervals, through quantitative equations. Previous research has produced exponential-time algorithms that solve the problem approximately; these may not meet the stringent requirements of many real-time applications. This paper advances the state of the art with a linear-time algorithm that propagates a qualitative value through a class of complex quantitative equations exactly, and through arbitrary algebraic expressions approximately. The algorithm was found applicable to a Space Shuttle Reaction Control System model.
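The core operation, pushing an interval-valued quantity through a quantitative expression, can be sketched with ordinary interval arithmetic. The class below is illustrative and is not the paper's linear-time algorithm; note that naive interval evaluation is generally approximate (over-wide) when a variable occurs more than once in the expression, which is precisely the gap between exact and approximate propagation that the abstract mentions.

```python
# Minimal interval-arithmetic sketch (not the paper's algorithm):
# propagate an interval-valued quantity through a quantitative expression.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Take all endpoint products, since signs may flip the bounds.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# A qualitative value such as "positive and small" encoded as an interval,
# pushed through f(x, y) = x * y + x in a single left-to-right pass.
x = Interval(0.0, 1.0)
y = Interval(-2.0, 3.0)
result = x * y + x
print(result)  # [-2.0, 4.0]
```

Each operator application is constant-time, so one pass over an expression of n operators costs O(n), which is the kind of budget a real-time application can tolerate.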

    Transfer Function Synthesis without Quantifier Elimination

    Traditionally, transfer functions have been designed manually for each operation in a program, instruction by instruction. In such a setting, a transfer function describes the semantics of a single instruction, detailing how a given abstract input state is mapped to an abstract output state. The net effect of a sequence of instructions, a basic block, can then be calculated by composing the transfer functions of the constituent instructions. However, precision can be improved by applying a single transfer function that captures the semantics of the block as a whole. Since blocks are program-dependent, this approach necessitates automation. There has thus been growing interest in computing transfer functions automatically, most notably using techniques based on quantifier elimination. Although conceptually elegant, quantifier elimination inevitably induces a computational bottleneck, which limits the applicability of these methods to small blocks. This paper contributes a method for calculating transfer functions that finesses quantifier elimination altogether, and can thus be seen as a response to this problem. The practicality of the method is demonstrated by generating transfer functions for input and output states that are described by linear template constraints, which include intervals and octagons. (Comment: 37 pages, extended version of an ESOP 2011 paper.)
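The instruction-by-instruction setting described above can be sketched on a toy interval domain. The transfer functions below are hypothetical examples, not ones generated by the paper's method: each maps an abstract interval state to another, and a basic block's net effect is their composition.

```python
# Sketch: per-instruction transfer functions on a one-variable interval
# abstract domain, composed to obtain the effect of a basic block.
# The instruction set and names are illustrative, not from the paper.

def tf_add_const(c):
    # x := x + c on interval states (lo, hi)
    return lambda s: (s[0] + c, s[1] + c)

def tf_abs():
    # x := abs(x), non-relational interval semantics
    def f(s):
        lo, hi = s
        if lo >= 0:
            return (lo, hi)
        if hi <= 0:
            return (-hi, -lo)
        return (0, max(-lo, hi))
    return f

def compose(*tfs):
    # The block's transfer function is the composition of its
    # instructions' transfer functions, applied in program order.
    def block(s):
        for tf in tfs:
            s = tf(s)
        return s
    return block

# Basic block: x := x + 3; x := abs(x); x := x - 1
block = compose(tf_add_const(3), tf_abs(), tf_add_const(-1))
print(block((-5, -4)))  # (0, 1)
```

A block-level transfer function computed directly (rather than by composition) can only be at least as precise as this chained application, which is the precision argument the abstract makes.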

    On Differentiable Interpreters

    Neural networks have transformed the fields of Machine Learning and Artificial Intelligence with the ability to model complex features and behaviours from raw data. They quickly became instrumental, achieving state-of-the-art performance across many tasks and domains. Yet the successes of these models often rely on large amounts of data, and when data is scarce, resourceful use of background knowledge often helps. However, though different types of background knowledge can be used to bias a model, it is not clear how algorithmic knowledge can be used to that end. In this thesis, we present differentiable interpreters as an effective framework for utilising algorithmic background knowledge as an architectural inductive bias of neural networks. By continuously approximating the discrete elements of traditional program interpreters, we create differentiable interpreters that, owing to the continuous nature of their execution, are amenable to optimisation with gradient-descent methods. This enables us to write code mixed with parametric functions, where the code strongly biases the behaviour of the model while the parameters and/or input representations are trained from data. We investigate two such differentiable interpreters and their use cases. First, we present a detailed construction of ∂4, a differentiable interpreter for the programming language FORTH. We demonstrate the ability of ∂4 to strongly bias neural models with incomplete programs of variable complexity while learning the missing pieces of the program with parametrised neural networks. Such models can learn to solve tasks and generalise strongly to out-of-distribution data from small datasets. Second, we present greedy Neural Theorem Provers (gNTPs), a significant improvement of the differentiable Datalog interpreter NTP. gNTPs ameliorate the large computational cost of recursive differentiable interpretation, achieving drastic time and memory speedups while introducing soft reasoning over logic knowledge and natural language.
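The key move, continuously approximating a discrete element of an interpreter, can be illustrated with a soft conditional. This is a minimal sketch of the general idea, not the actual ∂4 machine model: a hard branch is replaced by a sigmoid-gated blend, so gradients can flow through both branches.

```python
import math

# Sketch of the core idea behind differentiable interpretation:
# replace a hard branch with a smooth blend so that the "program"
# becomes differentiable end to end. (Illustrative, not d4 itself.)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def soft_if(condition_score, then_value, else_value):
    # A hard `if` picks exactly one branch; the soft version mixes
    # both, weighted by a differentiable gate g in (0, 1).
    g = sigmoid(condition_score)
    return g * then_value + (1.0 - g) * else_value

# With a strongly positive score the gate saturates towards `then`:
print(soft_if(10.0, 1.0, -1.0))   # close to 1.0
# At zero, both branches contribute equally, so gradients reach both:
print(soft_if(0.0, 1.0, -1.0))    # 0.0
```

Because `soft_if` is a smooth function of `condition_score`, a parametrised network producing that score can be trained with gradient descent, which is exactly what a hard branch would prevent.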

    Formal proofs in real algebraic geometry: from ordered fields to quantifier elimination

    This paper describes a formalization of discrete real closed fields in the Coq proof assistant. This abstract structure captures, for instance, the theory of real algebraic numbers, a decidable subset of real numbers with good algorithmic properties. The theory of real algebraic numbers, and more generally of semi-algebraic varieties, is at the core of a number of effective methods in real analysis, including decision procedures for non-linear arithmetic and optimization methods for real-valued functions. After defining an abstract structure of discrete real closed fields and the elementary theory of real roots of polynomials, we describe the formalization of an algebraic proof of quantifier elimination based on pseudo-remainder sequences, following the standard computer algebra literature on the topic. This formalization covers a large part of the theory which underlies the efficient algorithms implemented in practice in computer algebra. The success of this work paves the way for formal certification of these efficient methods. (Comment: 40 pages, 4 figures.)
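The remainder-sequence machinery behind this line of work can be illustrated with the classical Sturm chain, which counts real roots of a polynomial in an interval by sign variations. This is a simplified floating-point sketch; actual implementations (and the formalization above) work with pseudo-remainders to stay in exact ring arithmetic.

```python
# Sturm-sequence sketch for counting real roots in an interval.
# Simplified floats on a small example; real algorithms use
# pseudo-remainder sequences to avoid fractions.

def poly_eval(p, x):            # p: coefficients, highest degree first
    v = 0.0
    for c in p:
        v = v * x + c
    return v

def poly_deriv(p):
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])]

def poly_rem(a, b):
    # Remainder of polynomial division of a by b (deg b >= 1).
    a = a[:]
    while len(a) >= len(b) and any(a):
        f = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= f * b[i]
        a.pop(0)  # leading term cancelled
    while a and abs(a[0]) < 1e-12:
        a.pop(0)
    return a

def sturm_chain(p):
    # p0 = p, p1 = p', then p_{k+1} = -rem(p_{k-1}, p_k).
    chain = [p, poly_deriv(p)]
    while len(chain[-1]) > 1:
        r = poly_rem(chain[-2], chain[-1])
        if not r:
            break
        chain.append([-c for c in r])
    return chain

def sign_changes(chain, x):
    signs = [poly_eval(q, x) for q in chain]
    signs = [s for s in signs if abs(s) > 1e-12]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

def count_real_roots(p, lo, hi):
    # Sturm's theorem: number of distinct real roots in (lo, hi].
    ch = sturm_chain(p)
    return sign_changes(ch, lo) - sign_changes(ch, hi)

# x^2 - 2 has two real roots (plus/minus sqrt(2)), both inside (-2, 2):
print(count_real_roots([1.0, 0.0, -2.0], -2.0, 2.0))  # 2
```

The decision procedures mentioned in the abstract reduce to repeated root counting and sign determination of exactly this kind.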

    Foundations of Software Science and Computation Structures

    This open access book constitutes the proceedings of the 25th International Conference on Foundations of Software Science and Computation Structures, FOSSACS 2022, which was held during April 4-6, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 23 regular papers presented in this volume were carefully reviewed and selected from 77 submissions. They deal with research on theories and methods to support the analysis, integration, synthesis, transformation, and verification of programs and software systems.

    Specification and Model-driven Trace Checking of Complex Temporal Properties

    Offline trace checking is a procedure used to evaluate requirement properties over a trace of recorded events. System properties verified in the context of trace checking can be specified using different specification languages and formalisms; in this thesis, we consider two classes of complex temporal properties: 1) properties defined using aggregation operators; 2) signal-based temporal properties from the Cyber Physical System (CPS) domain. The overall goal of this dissertation is to develop methods and tools for the specification and trace checking of the aforementioned classes of temporal properties, focusing on the development of scalable trace checking procedures for such properties. The main contributions of this thesis are: i) the TEMPSY-CHECK-AG model-driven approach for trace checking of temporal properties with aggregation operators, defined in the TemPsy-AG language; ii) a taxonomy covering the most common types of Signal-based Temporal Properties (SBTPs) in the CPS domain; iii) SB-TemPsy, a trace-checking approach for SBTPs that strikes a good balance in industrial contexts in terms of efficiency of the trace checking procedure and coverage of the most important types of properties in CPS domains. SB-TemPsy includes: 1) SB-TemPsy-DSL, a DSL that allows the specification of the types of SBTPs identified in the aforementioned taxonomy, and 2) an efficient trace-checking procedure, implemented in a prototype tool called SB-TemPsy-Check; iv) TD-SB-TemPsy-Report, a model-driven trace diagnostics approach for SBTPs expressed in SB-TemPsy-DSL. TD-SB-TemPsy-Report relies on a set of diagnostics patterns, i.e., undesired signal behaviors that might lead to property violations. To provide relevant and detailed information about the cause of a property violation, TD-SB-TemPsy-Report determines the diagnostics information specific to each type of diagnostics pattern. 
    Our technological contributions rely on model-driven approaches for trace checking and trace diagnostics. Such approaches consist of reducing the problem of checking (respectively, determining the diagnostics information of) a property over an execution trace to the problem of evaluating an OCL (Object Constraint Language) constraint, semantically equivalent to the property, on an instance, equivalent to the trace, of a meta-model of the trace. The results presented in this thesis, in terms of the efficiency of our model-driven tools, are in line with those of previous work, and confirm that model-driven technologies can lead to tools that exhibit good performance from a practical standpoint, even when applied in industrial contexts.
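An offline check of an aggregation property reduces to a pass over the recorded trace. The snippet below is a toy illustration in the spirit of such properties, not TemPsy-AG syntax: it checks that the sum of an attribute over every sliding window stays under a bound, and collects diagnostics for each violation.

```python
# Toy offline trace checker for an aggregation property (illustrative,
# not the TemPsy-AG language): "in every window of 3 consecutive events,
# the sum of the `load` attribute stays below 10".

def check_windowed_sum(trace, key, window, bound):
    violations = []
    for i in range(len(trace) - window + 1):
        total = sum(e[key] for e in trace[i:i + window])
        if total >= bound:
            # Diagnostics: where the violation starts and the offending sum.
            violations.append((i, total))
    return violations

trace = [{"load": 2}, {"load": 3}, {"load": 4}, {"load": 1}, {"load": 6}]
print(check_windowed_sum(trace, "load", 3, 10))  # [(2, 11)]
```

Returning the violation position and value, rather than a bare true/false verdict, mirrors the distinction the thesis draws between trace checking and trace diagnostics.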

    Stepwise functional refoundation of relational concept analysis

    Relational concept analysis (RCA) is an extension of formal concept analysis that makes it possible to deal with several related contexts simultaneously. It has been designed for learning description logic theories from data and has been used within various applications. A puzzling observation about RCA is that it returns a single family of concept lattices although, when the data feature circular dependencies, other solutions may be considered acceptable. The semantics of RCA, provided in an operational way, does not shed light on this issue. In this report, we define these acceptable solutions as those families of concept lattices which belong to the space determined by the initial contexts (well-formed), cannot scale new attributes (saturated), and refer only to concepts of the family (self-supported). We adopt a functional view of the RCA process by defining the space of well-formed solutions and two functions on that space: one expansive and the other contractive. We show that the acceptable solutions are the common fixed points of both functions. This is achieved step by step, starting from a minimal version of RCA that considers only a single context defined on a space of contexts and a space of lattices. These spaces are then joined into a single space of context-lattice pairs, which is further extended to a space of indexed families of context-lattice pairs representing the objects manipulated. (Comment: euzenat2023.)
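The fixed-point view can be made concrete with a Kleene-style iteration: start from a bottom element and apply an expansive operator until nothing changes, much as RCA iterates relational scaling and lattice construction until the family of lattices stabilises. The operator below is a stand-in for illustration, not actual relational scaling.

```python
# Sketch of the functional, fixed-point view: iterate an expansive
# operator from the bottom element until it stabilises. The dependency
# map is a toy stand-in for RCA's relational scaling step.

def least_fixed_point(f, bottom):
    current = bottom
    while True:
        nxt = f(current)
        if nxt == current:      # stable: nothing new can be added
            return current
        current = nxt

# Toy expansive operator on sets: close a set of "concepts" under a
# dependency map (each element pulls in the concepts it refers to).
deps = {"a": {"b"}, "b": {"c"}, "c": set()}

def expand(s):
    return s | {d for x in s for d in deps[x]}

print(least_fixed_point(expand, {"a"}))  # the closure {'a', 'b', 'c'}
```

A fixed point of `expand` is "saturated" in the report's sense: applying the operator scales nothing new. The report's acceptable solutions are characterised analogously, as common fixed points of an expansive and a contractive function.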

    Koza and Prolog

    This thesis introduces the artificial-intelligence background of genetic programming and some properties of the logic programming paradigm. The main task of the work, however, is to implement a genetic programming algorithm that operates on logic programs. SWI-Prolog is used for the implementation, which is described in detail together with the results of its testing; the testing suggests several possible directions for future work. (Department of Logic, Faculty of Arts)
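The genetic-programming loop the thesis implements can be sketched generically: maintain a population of programs, score them with a fitness function, and evolve them by selection and mutation. The sketch below works on arithmetic expression trees in Python purely for illustration; the thesis evolves logic programs in SWI-Prolog.

```python
import random

# Minimal genetic-programming loop over expression trees. This is a
# generic Python sketch of the idea, not the thesis's Prolog code,
# which operates on logic programs rather than arithmetic expressions.

random.seed(0)
OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def random_tree(depth=2):
    # Grow a random expression over the variable x and the constant 1.0.
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", 1.0])
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(t, x):
    if t == "x":
        return x
    if isinstance(t, float):
        return t
    op, a, b = t
    return OPS[op](evaluate(a, x), evaluate(b, x))

def fitness(t):
    # Squared error against the target f(x) = x*x; lower is better.
    return sum((evaluate(t, x) - x * x) ** 2 for x in range(-3, 4))

def mutate(t):
    # Either replace the whole subtree or recurse into the left child.
    if random.random() < 0.5 or not isinstance(t, tuple):
        return random_tree(2)
    return (t[0], mutate(t[1]), t[2])

population = [random_tree() for _ in range(20)]
for _ in range(30):
    population.sort(key=fitness)
    survivors = population[:10]          # elitism: keep the best half
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]

best = min(population, key=fitness)
print(fitness(best))  # error of the best evolved expression
```

In the Prolog setting, the same loop applies with clauses in place of expression trees and a test suite of goals in place of the numeric fitness function.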