
    Towards Correctness of Program Transformations Through Unification and Critical Pair Computation

    Correctness of program transformations in extended lambda calculi with a contextual semantics is usually based on reasoning about the operational semantics, which is a rewrite semantics. A successful approach to proving correctness combines a context lemma with the computation of overlaps between program transformations and the reduction rules, and then the computation of so-called complete sets of diagrams. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We explore cases where the computation of these overlaps can be done in a first-order way by variants of critical pair computation that use unification algorithms. As a case study we apply the method to a lambda calculus with recursive let-expressions and describe an effective unification algorithm to determine all overlaps of a set of transformations with all reduction rules. The unification algorithm employs many-sorted terms, the equational theory of left-commutativity modelling multi-sets, context variables of different kinds, and a mechanism for compactly representing binding chains in recursive let-expressions.
    Comment: In Proceedings UNIF 2010, arXiv:1012.455
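
    The paper's algorithm works over many-sorted terms with context variables and the theory of left-commutativity; as a rough, purely illustrative companion, the sketch below computes critical pairs for a plain first-order term rewriting system by unifying one rule's left-hand side with the non-variable subterms of another's. The term encoding and the example rules are made up.

```python
# Illustrative first-order critical pair computation (the paper's algorithm
# additionally handles sorts, left-commutativity, context variables and
# binding chains, none of which appear here).
# Terms: variables are lowercase strings, applications are tuples
# (symbol, arg1, ..., argN); constants are 1-tuples like ('a',).

def is_var(t):
    return isinstance(t, str)

def walk(t, s):
    """Resolve a variable through the substitution s."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    t = walk(t, s)
    if is_var(t):
        return v == t
    return any(occurs(v, a, s) for a in t[1:])

def unify(a, b, s):
    """Syntactic unification; returns an extended substitution or None."""
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return None if occurs(a, b, s) else {**s, a: b}
    if is_var(b):
        return unify(b, a, s)
    if a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def apply_subst(t, s):
    t = walk(t, s)
    if is_var(t):
        return t
    return (t[0],) + tuple(apply_subst(a, s) for a in t[1:])

def subterms(t, pos=()):
    yield pos, t
    if not is_var(t):
        for i, a in enumerate(t[1:], 1):
            yield from subterms(a, pos + (i,))

def replace(t, pos, new):
    if not pos:
        return new
    i = pos[0]
    return t[:i] + (replace(t[i], pos[1:], new),) + t[i + 1:]

def critical_pairs(rule1, rule2):
    """Overlap rule1's left-hand side into the non-variable positions of
    rule2's left-hand side (variables of the two rules assumed disjoint)."""
    l1, r1 = rule1
    l2, r2 = rule2
    for pos, sub in subterms(l2):
        if is_var(sub):
            continue
        s = unify(l1, sub, {})
        if s is not None:
            yield apply_subst(replace(l2, pos, r1), s), apply_subst(r2, s)

# Overlapping g(a) -> b into f(g(x)) -> x yields the critical pair <f(b), a>.
R1 = (("g", ("a",)), ("b",))
R2 = (("f", ("g", "x")), "x")
for left, right in critical_pairs(R1, R2):
    print(left, "<-->", right)
```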

    Algorithm for Adapting Cases Represented in a Tractable Description Logic

    Case-based reasoning (CBR) based on description logics (DLs) has gained a lot of attention lately. Adaptation is a basic task in CBR inference that can be modeled as the knowledge base revision problem and solved in propositional logic. However, in DLs it is still a challenging problem, since existing revision operators only work well for strictly restricted DLs of the DL-Lite family, and it is difficult to design a revision algorithm that is syntax-independent and fine-grained. In this paper, we present a new method for adaptation based on the DL EL⊥. Following the idea of adaptation as revision, we first extend the logical basis for describing cases from propositional logic to EL⊥ and present a formalism for adaptation based on EL⊥. We then present an adaptation algorithm for this formalism and demonstrate that it is syntax-independent and fine-grained. Our work provides a logical basis for adaptation in CBR systems where cases and domain knowledge are described by the tractable DL EL⊥.
    Comment: 21 pages. ICCBR 201
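
    As a toy illustration of the "adaptation as revision" idea (and not of the paper's syntax-independent, fine-grained operator for EL⊥), the sketch below treats a case as a set of concept assertions, domain knowledge as a set of disjointness axioms, and adaptation as keeping the retrieved assertions that do not clash with the target description. All concept and individual names are invented.

```python
# Naive "adaptation as revision" sketch (illustrative only; the paper's
# operator for EL_bot is syntax-independent and fine-grained, which this
# set-based toy is not). A case is a set of (individual, concept) assertions;
# domain knowledge is a set of concept pairs declared disjoint (C ⊓ D ⊑ ⊥).

DISJOINT = {frozenset({"Vegetarian", "MeatDish"}),
            frozenset({"NutFree", "ContainsNuts"})}

def conflicts(assertion, new_info):
    """True if `assertion` clashes with a target assertion about the same
    individual under a disjointness axiom."""
    ind, concept = assertion
    return any(ind == i2 and frozenset({concept, c2}) in DISJOINT
               for (i2, c2) in new_info)

def adapt(source_case, target_problem):
    """Revise the retrieved case by the target description: keep every
    source assertion consistent with the target, then add the target."""
    retained = {a for a in source_case if not conflicts(a, target_problem)}
    return retained | set(target_problem)

# Retrieved case: a meat-based recipe containing nuts.
source = {("recipe1", "MeatDish"), ("recipe1", "ContainsNuts"),
          ("recipe1", "MainCourse")}
# Target problem: the query asks for a vegetarian, nut-free dish.
target = {("recipe1", "Vegetarian"), ("recipe1", "NutFree")}

print(adapt(source, target))
# keeps MainCourse, drops MeatDish and ContainsNuts, adds the target facts
```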

    Insulin Glargine in the Intensive Care Unit: A Model-Based Clinical Trial Design

    Introduction: Current successful AGC (Accurate Glycemic Control) protocols require extra clinical effort and are impractical in less acute wards, where patients are still susceptible to stress-induced hyperglycemia. The long-acting insulin Glargine has the potential to be used in a low-effort controller. However, potential variability in its efficacy and length of action prevents direct in-hospital use in an AGC framework for less acute wards. Method: Clinically validated virtual trials, based on data from stable ICU patients in the SPRINT cohort who would be transferred to such an approach, are used to develop a 24-hour AGC protocol robust to different Glargine potencies (1.0x, 1.5x and 2.0x regular insulin) and initial dose sizes (dose = total insulin over the prior 12, 18 or 24 hours). Glycemic control in this period is provided only by varying nutritional inputs. Performance is assessed as %BG in the 4.0-8.0 mmol/L band and safety as %BG < 4.0 mmol/L. Results: The final protocol sets the Glargine bolus size equal to the insulin given over the previous 18 hours. Compared to SPRINT there was a 6.9%-9.5% absolute decrease in mild hypoglycemia (%BG < 4.0 mmol/L) and up to a 6.2% increase in %BG between 4.0 and 8.0 mmol/L. When the efficacy is known (1.5x assumed) there were reductions of 27% in BG measurements, 59% in insulin boluses and 67% in nutrition changes, and a 6.3% absolute reduction in mild hypoglycemia. Conclusion: A robust 24-48 hour clinical trial has been designed to safely investigate the efficacy and kinetics of Glargine as a first step towards developing a Glargine-based protocol for less acute wards. Ensuring robustness to variability in Glargine efficacy significantly affects the performance and safety that can be obtained.
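
    A small sketch of the two quantities the abstract relies on: the initial Glargine bolus computed as the total insulin delivered over the prior 18 hours, and the %BG performance and safety metrics. The thresholds come from the abstract; the data and function names are illustrative, and nothing here is the clinical controller itself.

```python
# Illustrative computation of the protocol's dose rule and reported metrics
# (not the clinical controller; the data below are made up).

def glargine_dose(insulin_log, window_hours=18):
    """Initial Glargine bolus = total insulin given over the prior window.
    insulin_log: list of (hours_before_now, units) tuples."""
    return sum(units for hours_ago, units in insulin_log
               if hours_ago <= window_hours)

def glycemic_metrics(bg_readings):
    """Performance: %BG in 4.0-8.0 mmol/L; safety: %BG below 4.0 mmol/L."""
    n = len(bg_readings)
    in_band = sum(4.0 <= bg <= 8.0 for bg in bg_readings) / n * 100
    hypo = sum(bg < 4.0 for bg in bg_readings) / n * 100
    return in_band, hypo

boluses = [(2, 3.0), (6, 2.0), (11, 3.0), (17, 2.0), (23, 3.0)]  # (h ago, U)
bg = [5.2, 6.1, 7.8, 8.4, 4.9, 3.8, 5.5]                         # mmol/L

print("Glargine bolus (U):", glargine_dose(boluses))  # sums the first four
in_band, hypo = glycemic_metrics(bg)
print(f"%BG in 4.0-8.0: {in_band:.1f}  %BG < 4.0: {hypo:.1f}")
```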

    Backward Reachability of Array-based Systems by SMT solving: Termination and Invariant Synthesis

    The safety of infinite-state systems can be checked by a backward reachability procedure. For certain classes of systems, it is possible to prove the termination of the procedure and hence conclude the decidability of the safety problem. Although backward reachability is property-directed, it can unnecessarily explore (large) portions of the state space of a system that are not required to verify the safety property under consideration. To avoid this, invariants can be used to dramatically prune the search space; the problem is then to guess such appropriate invariants. In this paper, we present a fully declarative and symbolic approach to the mechanization of backward reachability of infinite-state systems manipulating arrays by Satisfiability Modulo Theories solving. Theories are used to specify the topology and the data manipulated by the system. We identify sufficient conditions on the theories to ensure the termination of backward reachability, and we show the completeness of a method for invariant synthesis (obtained as the dual of backward reachability), again under suitable hypotheses on the theories. We also present a pragmatic approach to interleaving invariant synthesis and backward reachability so that a fix-point for the set of backward reachable states is more easily obtained. Finally, we discuss heuristics that allow us to derive an implementation of the techniques in the model checker MCMT, showing remarkable speed-ups on a significant set of safety problems extracted from a variety of sources.
    Comment: Accepted for publication in Logical Methods in Computer Science
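
    MCMT works over array-based transition systems with quantified index variables; as a minimal illustration of the backward reachability loop itself (pre-image computation, intersection with the initial states, fix-point check), the sketch below uses the Z3 Python bindings (an assumed dependency, pip install z3-solver) on a toy integer counter system.

```python
# Minimal symbolic backward reachability with Z3 on a toy counter system
# (a sketch of the loop only; the array-based setting of MCMT is not modelled).
from z3 import Int, And, Or, Not, Solver, substitute, sat, unsat

def is_sat(f):
    s = Solver()
    s.add(f)
    return s.check() == sat

def entails(a, b):
    """Check a => b by asking Z3 whether a /\\ not(b) is unsatisfiable."""
    s = Solver()
    s.add(And(a, Not(b)))
    return s.check() == unsat

def backward_reachability(init, bad, pre, max_iters=50):
    reached, frontier = bad, bad
    for _ in range(max_iters):
        if is_sat(And(frontier, init)):
            return "UNSAFE: a bad state is backward-reachable from init"
        new = pre(frontier)
        if entails(new, reached):        # fix-point: nothing new to explore
            return "SAFE: backward fix-point reached without meeting init"
        reached, frontier = Or(reached, new), new
    return "UNKNOWN: iteration bound hit"

# Toy system: counter starts at 0 and only increments.
x = Int("x")
pre = lambda phi: substitute(phi, (x, x + 1))   # pre-image under x' = x + 1
print(backward_reachability(init=(x == 0), bad=(x < 0), pre=pre))   # SAFE
print(backward_reachability(init=(x == 0), bad=(x == 3), pre=pre))  # UNSAFE
```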

    A logical study of local and global graded similarities

    In this work we study the relationship between global and local similarities in the graded framework of fuzzy class theory (FCT), in which there already exists a graded notion of similarity. In FCT we can express the fact that a fuzzy relation is reflexive, symmetric, or transitive up to a certain degree, and similarity is defined as a first-order sentence, which is the fusion of three sentences corresponding to the graded notions of reflexivity, symmetry, and transitivity. This allows us to speak in a natural way of the degree of similarity of a relation. We consider global similarities defined from local similarities using t-norms as aggregation operators, and we obtain some results in the framework of FCT that, adequately interpreted, allow us to say that when we take a t-norm as an aggregation operator, the properties of reflexivity, symmetry, and transitivity of fuzzy binary relations are inherited from the local to the global level, and that the global similarity is a congruence if some of the local similarities are congruences.
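
    The paper's results are proved inside fuzzy class theory, a formal logic; the sketch below only gives the usual [0,1]-valued reading of the graded notions involved: degrees of reflexivity, symmetry and transitivity of a fuzzy relation under the Łukasiewicz t-norm and its residuum, and a global relation obtained by aggregating local similarities pointwise with the t-norm. The relations and values are made up.

```python
# [0,1]-valued reading of the graded similarity notions (illustration only;
# the paper's theorems live inside fuzzy class theory).
from functools import reduce

T = lambda a, b: max(0.0, a + b - 1.0)      # Lukasiewicz t-norm
IMP = lambda a, b: min(1.0, 1.0 - a + b)    # its residuated implication

def refl_degree(R, X):
    return min(R[x, x] for x in X)

def sym_degree(R, X):
    return min(IMP(R[x, y], R[y, x]) for x in X for y in X)

def trans_degree(R, X):
    return min(IMP(T(R[x, y], R[y, z]), R[x, z])
               for x in X for y in X for z in X)

def similarity_degree(R, X):
    """Fusion (t-norm) of the three graded properties."""
    return T(T(refl_degree(R, X), sym_degree(R, X)), trans_degree(R, X))

def global_relation(local_relations, X):
    """Aggregate local similarities pointwise with the t-norm."""
    return {(x, y): reduce(T, (R[x, y] for R in local_relations))
            for x in X for y in X}

X = ["a", "b", "c"]
R1 = {("a", "a"): 1.0, ("b", "b"): 1.0, ("c", "c"): 1.0,
      ("a", "b"): 0.8, ("b", "a"): 0.8, ("a", "c"): 0.4, ("c", "a"): 0.4,
      ("b", "c"): 0.5, ("c", "b"): 0.5}
R2 = {("a", "a"): 1.0, ("b", "b"): 1.0, ("c", "c"): 1.0,
      ("a", "b"): 0.9, ("b", "a"): 0.9, ("a", "c"): 0.6, ("c", "a"): 0.6,
      ("b", "c"): 0.7, ("c", "b"): 0.7}
G = global_relation([R1, R2], X)
for name, R in [("local R1", R1), ("local R2", R2), ("global", G)]:
    print(name, "similarity degree:", round(similarity_degree(R, X), 3))
# All three come out 1.0 here: the aggregated relation inherits reflexivity,
# symmetry and transitivity from the local relations.
```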

    Separation of Test-Free Propositional Dynamic Logics over Context-Free Languages

    For a class L of languages, let PDL[L] be an extension of Propositional Dynamic Logic which allows programs to be in a language of L rather than just regular. If L contains a non-regular language, PDL[L] can express non-regular properties, in contrast to pure PDL. For regular, visibly pushdown and deterministic context-free languages, the separation of the respective PDLs can be proven by automata-theoretic techniques. However, these techniques introduce non-determinism on the automata side. As non-determinism is also the difference between DCFL and CFL, these techniques seem inappropriate for separating PDL[DCFL] from PDL[CFL]. Nevertheless, this separation is shown here, albeit for programs without test operators.
    Comment: In Proceedings GandALF 2011, arXiv:1106.081
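
    The separation result itself is an expressiveness argument with no executable content, but the meaning of the PDL[L] diamond modality can be illustrated: the sketch below evaluates <pi>p on a small labelled transition system by bounded path enumeration, with pi given by the non-regular context-free language { a^n b^n : n >= 1 }. The structure and the bound are invented for the example.

```python
# Bounded-depth evaluation of the PDL[L] diamond <pi>p on a small labelled
# transition system, with pi the non-regular language { a^n b^n : n >= 1 }.
# This only illustrates the semantics; it is unrelated to the separation proof.

def anbn(word):
    """Membership in the context-free language a^n b^n with n >= 1."""
    n = len(word) // 2
    return len(word) == 2 * n and n >= 1 and word == "a" * n + "b" * n

def diamond(edges, prop, lang, state, bound):
    """True iff some path of length <= bound from `state` spells a word of
    `lang` and ends in a state satisfying the proposition `prop`."""
    stack = [(state, "")]
    while stack:
        s, word = stack.pop()
        if lang(word) and s in prop:
            return True
        if len(word) < bound:
            stack.extend((v, word + a) for (u, a, v) in edges if u == s)
    return False

# States 0,1,2; reading "aabb" from state 0 can end in state 1, where p holds.
edges = [(0, "a", 0), (0, "a", 1), (1, "b", 2), (2, "b", 1)]
print(diamond(edges, prop={1}, lang=anbn, state=0, bound=6))   # True
```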

    A Formalization of the Theorem of Existence of First-Order Most General Unifiers

    This work presents a formalization of the theorem of existence of most general unifiers in first-order signatures in the higher-order proof assistant PVS. The distinguishing feature of this formalization is that it remains close to the textbook proofs, which are based on proving the correctness of the well-known Robinson first-order unification algorithm. The formalization was applied inside a PVS development for term rewriting systems that provides a complete formalization of the Knuth-Bendix Critical Pair theorem, among other relevant theorems of the theory of rewriting. In addition, the formalization methodology has proved to be of practical use for verifying the correctness of unification algorithms in the style of Robinson's original unification algorithm.
    Comment: In Proceedings LSFA 2011, arXiv:1203.542
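
    As an executable reference point (not the PVS development itself, and using the rule-based presentation rather than Robinson's exact formulation), the sketch below repeatedly deletes, decomposes, orients or eliminates equations to produce a most general unifier, with the occurs check ruling out a unifier in the failing case. The term encoding and examples are made up.

```python
# Rule-based first-order unification producing a most general unifier
# (illustrative sketch only, not the formalized PVS algorithm).
# Terms: variables are lowercase strings, applications are tuples
# (symbol, arg1, ..., argN).

def is_var(t):
    return isinstance(t, str)

def occurs(v, t):
    return t == v if is_var(t) else any(occurs(v, a) for a in t[1:])

def subst_term(t, v, s):
    """Replace variable v by term s everywhere in t."""
    if is_var(t):
        return s if t == v else t
    return (t[0],) + tuple(subst_term(a, v, s) for a in t[1:])

def mgu(eqs):
    """Return a most general unifier as a dict, or None if none exists."""
    sigma = {}
    eqs = list(eqs)
    while eqs:
        lhs, rhs = eqs.pop()
        if lhs == rhs:                                   # delete
            continue
        if is_var(rhs) and not is_var(lhs):              # orient
            lhs, rhs = rhs, lhs
        if is_var(lhs):                                  # eliminate
            if occurs(lhs, rhs):
                return None                              # occurs check fails
            eqs = [(subst_term(a, lhs, rhs), subst_term(b, lhs, rhs))
                   for a, b in eqs]
            sigma = {v: subst_term(t, lhs, rhs) for v, t in sigma.items()}
            sigma[lhs] = rhs
        elif lhs[0] == rhs[0] and len(lhs) == len(rhs):  # decompose
            eqs.extend(zip(lhs[1:], rhs[1:]))
        else:
            return None                                  # symbol clash
    return sigma

# mgu of f(x, g(y)) and f(g(z), x) exists: {x -> g(z), y -> z}
print(mgu([(("f", "x", ("g", "y")), ("f", ("g", "z"), "x"))]))
# x and f(x) have no unifier: the occurs check fails
print(mgu([("x", ("f", "x"))]))
```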

    Quantifier-Free Interpolation of a Theory of Arrays

    The use of interpolants in model checking is becoming an enabling technology for fast and robust verification of hardware and software. The application of encodings based on the theory of arrays, however, is limited by the impossibility of deriving quantifier-free interpolants in general. In this paper, we show that it is possible to obtain quantifier-free interpolants for a Skolemized version of the extensional theory of arrays. We prove this in two ways: (1) non-constructively, by using the model-theoretic notion of amalgamation, which is known to be equivalent to admitting quantifier-free interpolation for universal theories; and (2) constructively, by designing an interpolating procedure based on solving equations between array updates. (Interestingly, rewriting techniques are used in the key steps of the solver and its proof of correctness.) To the best of our knowledge, this is the first successful attempt at computing quantifier-free interpolants for a variant of the theory of arrays with extensionality.
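
    Computing interpolants is the paper's contribution and is not attempted here; the sketch below only checks, with the Z3 Python bindings (an assumed dependency), that a hand-written quantifier-free formula satisfies the two defining conditions of an interpolant for a small example over the extensional theory of arrays. The formulas are invented for the illustration.

```python
# Checking (not computing) a quantifier-free interpolant for an array example
# with Z3: verify A |= I and that I /\ B is unsatisfiable, where I is written
# over the symbols shared by A and B.
from z3 import Array, IntSort, Int, Store, Select, And, Not, Solver, unsat

a = Array("a", IntSort(), IntSort())
b = Array("b", IntSort(), IntSort())
i, x = Int("i"), Int("x")

A = And(b == Store(a, i, x), x > 0)   # A-part: b is a updated at i with x > 0
B = Select(b, i) <= 0                 # B-part: the updated cell is non-positive
I = Select(b, i) > 0                  # candidate interpolant over shared b, i

def is_unsat(f):
    s = Solver()
    s.add(f)
    return s.check() == unsat

print("A entails I:", is_unsat(And(A, Not(I))))    # expected: True
print("I, B inconsistent:", is_unsat(And(I, B)))   # expected: True
```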