
    Guided Unfoldings for Finding Loops in Standard Term Rewriting

    In this paper, we reconsider the unfolding-based technique that we introduced previously for detecting loops in standard term rewriting. We improve it by guiding the unfolding process using distinguished positions in the rewrite rules. This results in a depth-first computation of the unfoldings, whereas the original technique was breadth-first. We have implemented this new approach in our tool NTI and compared it to the previous one on a set of rewrite systems. The results are promising (better times, more successful proofs).

    Comment: Pre-proceedings paper presented at the 28th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2018), Frankfurt am Main, Germany, 4-6 September 2018 (arXiv:1808.03326)
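    As a rough illustration of the kind of loop the paper's unfoldings detect, here is a minimal depth-first search for loops in a ground (variable-free) TRS. The term encoding, the rules, and the depth bound are made up for this sketch; NTI's actual technique unfolds rules with variables and guides the process by distinguished positions, which this toy version does not attempt.

```python
# Minimal sketch of depth-first loop search in a ground term rewriting
# system. A loop exists when some term t rewrites in one or more steps
# to a term containing t as a subterm. This toy handles only ground
# rules; the paper's guided unfoldings are substantially more general.

def subterms(t):
    """Yield every subterm of t, including t itself."""
    yield t
    for child in t[1:]:
        yield from subterms(child)

def rewrite_once(t, rules):
    """Yield every term reachable from t by one rewrite step."""
    for lhs, rhs in rules:
        if t == lhs:
            yield rhs
    for i, child in enumerate(t[1:], start=1):
        for new_child in rewrite_once(child, rules):
            yield t[:i] + (new_child,) + t[i + 1:]

def find_loop(start, rules, depth):
    """Depth-first search for a rewrite sequence start ->+ C[start]."""
    def dfs(t, steps):
        if steps > 0 and any(s == start for s in subterms(t)):
            return True
        if steps == depth:
            return False
        return any(dfs(u, steps + 1) for u in rewrite_once(t, rules))
    return dfs(start, 0)

# f(a) -> g(f(b)) and b -> a: f(a) -> g(f(b)) -> g(f(a)), which embeds
# the start term f(a) again, so the system loops.
rules = [(("f", ("a",)), ("g", ("f", ("b",)))), (("b",), ("a",))]
print(find_loop(("f", ("a",)), rules, depth=4))  # True
```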

    Stream Productivity by Outermost Termination

    Streams are infinite sequences over a given data type. A stream specification is a set of equations intended to define a stream. A core property is productivity: unfolding the equations produces the intended stream in the limit. In this paper we show that productivity is equivalent to termination with respect to the balanced outermost strategy of a TRS obtained by adding one additional rule. For specifications not involving branching symbols, balancedness is obtained for free, so that tools for proving outermost termination can be used to prove productivity fully automatically.
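    To make the productivity notion concrete, here is a small Python sketch (not the paper's TRS construction): a specification is productive exactly when forcing any finite prefix of the defined stream terminates. The cons/take encoding below is an illustrative assumption.

```python
# Toy illustration of productivity. A stream specification is
# productive when outermost unfolding of its equations yields each next
# element in finitely many steps, so taking any finite prefix halts.

def cons(head, tail_thunk):
    """A stream cell: an element plus a delayed rest-of-stream."""
    return (head, tail_thunk)

def tail(stream):
    return stream[1]()

# Productive specification: alt = 0 : 1 : alt
def alt():
    return cons(0, lambda: cons(1, alt))

# Non-productive specification: bad = tail(bad); unfolding the equation
# never produces a first element.
def bad():
    return tail(bad())

def take(n, stream_thunk):
    """Force the first n elements; terminates iff the spec is productive."""
    out, cell = [], stream_thunk()
    for _ in range(n):
        head, tail_thunk = cell
        out.append(head)
        cell = tail_thunk()
    return out

print(take(5, alt))   # [0, 1, 0, 1, 0]
# take(1, bad) would recurse forever: bad is not productive.
```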

    Towards Correctness of Program Transformations Through Unification and Critical Pair Computation

    Correctness of program transformations in extended lambda calculi with a contextual semantics is usually based on reasoning about the operational semantics, which is a rewrite semantics. A successful approach to proving correctness is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules, and then of so-called complete sets of diagrams. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We explore cases where the computation of these overlaps can be done in a first-order way by variants of critical pair computation that use unification algorithms. As a case study we apply the method to a lambda calculus with recursive let-expressions and describe an effective unification algorithm to determine all overlaps of a set of transformations with all reduction rules. The unification algorithm employs many-sorted terms, the equational theory of left-commutativity modelling multi-sets, context variables of different kinds, and a mechanism for compactly representing binding chains in recursive let-expressions.

    Comment: In Proceedings UNIF 2010, arXiv:1012.455
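    For readers unfamiliar with the first-order baseline the paper builds on, the following sketch computes critical pairs of two rewrite rules by unifying one left-hand side into the non-variable subterms of the other. The term encoding is an assumption of this sketch, and the rules are assumed to be renamed apart; the paper's algorithm additionally handles sorts, left-commutativity, context variables, and binding chains, none of which appear here.

```python
# Critical pairs of two first-order rewrite rules via syntactic
# unification. Variables are lowercase strings; compound terms are
# ("f", arg, ...) tuples.

def unify(s, t):
    """Return a most general unifier of s and t, or None."""
    sub = {}
    def walk(u):
        while isinstance(u, str) and u in sub:
            u = sub[u]
        return u
    def occurs(v, u):
        u = walk(u)
        return u == v or (isinstance(u, tuple)
                          and any(occurs(v, a) for a in u[1:]))
    stack = [(s, t)]
    while stack:
        a, b = (walk(x) for x in stack.pop())
        if a == b:
            continue
        if isinstance(a, str):
            if occurs(a, b):
                return None
            sub[a] = b
        elif isinstance(b, str):
            stack.append((b, a))
        elif a[0] == b[0] and len(a) == len(b):
            stack.extend(zip(a[1:], b[1:]))
        else:
            return None
    return sub

def apply(sub, t):
    if isinstance(t, str):
        return apply(sub, sub[t]) if t in sub else t
    return (t[0],) + tuple(apply(sub, a) for a in t[1:])

def critical_pairs(l1, r1, l2, r2):
    """Overlap rule 2 into each non-variable subterm of l1."""
    def positions(t, path=()):
        if isinstance(t, tuple):
            yield path, t
            for i, a in enumerate(t[1:], start=1):
                yield from positions(a, path + (i,))
    def replace(t, path, s):
        if not path:
            return s
        i = path[0]
        return t[:i] + (replace(t[i], path[1:], s),) + t[i + 1:]
    for path, s in positions(l1):
        mgu = unify(s, l2)
        if mgu is not None:
            yield (apply(mgu, r1), apply(mgu, replace(l1, path, r2)))

# Overlapping f(g(x)) -> x with g(a) -> b yields the critical pair
# (a, f(b)), the two results of rewriting the overlapped term f(g(a)).
for pair in critical_pairs(("f", ("g", "x")), "x", ("g", ("a",)), ("b",)):
    print(pair)
```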

    Algorithm for Adapting Cases Represented in a Tractable Description Logic

    Case-based reasoning (CBR) based on description logics (DLs) has gained a lot of attention lately. Adaptation is a basic task in CBR inference that can be modeled as the knowledge base revision problem and solved in propositional logic. However, in DLs it remains a challenging problem, since existing revision operators only work well for strictly restricted DLs of the \emph{DL-Lite} family, and it is difficult to design a revision algorithm that is both syntax-independent and fine-grained. In this paper, we present a new method for adaptation based on the DL $\mathcal{EL}_{\bot}$. Following the idea of adaptation as revision, we first extend the logical basis for describing cases from propositional logic to $\mathcal{EL}_{\bot}$, and present a formalism for adaptation based on $\mathcal{EL}_{\bot}$. We then present an adaptation algorithm for this formalism and demonstrate that it is syntax-independent and fine-grained. Our work provides a logical basis for adaptation in CBR systems where cases and domain knowledge are described by the tractable DL $\mathcal{EL}_{\bot}$.

    Comment: 21 pages. ICCBR 201
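    The following sketch illustrates only the general "adaptation as revision" idea on a toy fragment, not the paper's $\mathcal{EL}_{\bot}$ algorithm: cases are sets of atomic assertions, and the only domain axioms are disjointness facts standing in for $A \sqcap B \sqsubseteq \bot$. Fine-grained revision then keeps the maximal subsets of the old case that remain consistent with the new information. All names and data below are hypothetical.

```python
# Schematic "adaptation as revision" over a toy fragment. Real EL-bot
# revision, as in the paper, must also handle conjunctions and
# existential restrictions; this sketch only shows the fine-grained,
# syntax-independent flavour of the operation.

from itertools import combinations

def consistent(assertions, disjoint):
    return not any({a, b} <= assertions for a, b in disjoint)

def revise(case, new_info, disjoint):
    """Keep maximal subsets of the old case consistent with new_info,
    then add new_info (so the new information always survives)."""
    if consistent(case | new_info, disjoint):
        return [case | new_info]
    for k in range(len(case), -1, -1):
        results = [set(kept) | new_info
                   for kept in combinations(case, k)
                   if consistent(set(kept) | new_info, disjoint)]
        if results:          # all maximal consistent subsets found
            return results
    return [set(new_info)]

# Hypothetical cooking case: the retrieved dish is vegetarian and
# contains cheese; the new query demands vegan, and Vegan is disjoint
# with Cheese. Revision retracts Cheese but keeps Vegetarian.
disjoint = [("Vegan", "Cheese")]
print(revise({"Vegetarian", "Cheese"}, {"Vegan"}, disjoint))
```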

    Insulin Glargine in the Intensive Care Unit: A Model-Based Clinical Trial Design

    Introduction: Current successful AGC (Accurate Glycemic Control) protocols require extra clinical effort and are impractical in less acute wards, where patients are still susceptible to stress-induced hyperglycemia. The long-acting insulin Glargine has the potential to be used in a low-effort controller. However, potential variability in efficacy and length of action prevents direct in-hospital use in an AGC framework for less acute wards. Method: Clinically validated virtual trials, based on data from stable ICU patients in the SPRINT cohort who would be transferred to such an approach, are used to develop a 24-hour AGC protocol robust to different Glargine potencies (1.0x, 1.5x and 2.0x regular insulin) and initial dose sizes (dose = total insulin over the prior 12, 18 or 24 hours). Glycemic control in this period is provided only by varying nutritional inputs. Performance is assessed as %BG in the 4.0-8.0 mmol/L band and safety as %BG < 4.0 mmol/L. Results: The final protocol used a Glargine bolus size equal to the insulin given over the previous 18 hours. Compared to SPRINT, there was a 6.9%-9.5% absolute decrease in mild hypoglycemia (%BG < 4.0 mmol/L) and up to a 6.2% increase in %BG between 4.0 and 8.0 mmol/L. When the efficacy is known (1.5x assumed), there were reductions of 27% in BG measurements, 59% in insulin boluses, 67% in nutrition changes, and 6.3% absolute in mild hypoglycemia. Conclusion: A robust 24-48 hour clinical trial has been designed to safely investigate the efficacy and kinetics of Glargine as a first step towards developing a Glargine-based protocol for less acute wards. Ensuring robustness to variability in Glargine efficacy significantly affects the performance and safety that can be obtained.
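    A small sketch of the dose rule and the two summary statistics the abstract reports. The thresholds and the 18-hour rule follow the abstract; the function names and the BG trace are illustrative assumptions, not study data.

```python
# Dose rule and glycemic summary statistics, over a sequence of
# blood-glucose (BG) measurements in mmol/L. The BG values below are
# made-up illustrative data.

def glargine_bolus(insulin_doses_last_18h):
    """Protocol rule: Glargine bolus = total insulin over prior 18 h."""
    return sum(insulin_doses_last_18h)

def glycemic_stats(bg_trace):
    """Performance: %BG in the 4.0-8.0 band; safety: %BG below 4.0."""
    n = len(bg_trace)
    in_band = sum(4.0 <= bg <= 8.0 for bg in bg_trace) / n
    hypo = sum(bg < 4.0 for bg in bg_trace) / n
    return {"%BG in 4.0-8.0": 100 * in_band, "%BG < 4.0": 100 * hypo}

print(glargine_bolus([1.5, 2.0, 1.0]))            # units of insulin
print(glycemic_stats([5.2, 7.9, 8.4, 3.8, 6.1]))  # toy BG trace
```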

    Effect of the Output of the System in Signal Detection

    We analyze the consequences that the choice of the output of the system has for the efficiency of signal detection. It is shown that the signal and the signal-to-noise ratio (SNR), used to characterize the phenomenon of stochastic resonance, strongly depend on the form of the output. In particular, the SNR may be enhanced for an adequate output.

    Comment: 4 pages, RevTeX, 6 PostScript figures
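    To see how the choice of output can change the measured SNR, here is an illustrative numerical experiment (all parameter values are assumptions of this sketch, not the paper's model): an overdamped bistable system driven by noise plus a weak periodic signal, with the SNR compared for the raw output x(t) and the two-state output sign(x(t)).

```python
# Compare the SNR of two different outputs of the same noisy bistable
# system, a standard stochastic-resonance setting.

import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 1e-3, 2**17
f0, amp, noise = 0.5, 0.15, 0.8      # drive frequency (Hz), amplitude, noise

t = np.arange(n_steps) * dt
x = np.empty(n_steps)
x[0] = 1.0
for i in range(1, n_steps):
    # Euler-Maruyama for dx = (x - x^3 + amp*cos(2*pi*f0*t)) dt + noise dW
    drift = x[i-1] - x[i-1]**3 + amp * np.cos(2 * np.pi * f0 * t[i-1])
    x[i] = x[i-1] + drift * dt + noise * np.sqrt(dt) * rng.standard_normal()

def snr(signal):
    """Spectral power at the drive frequency over the local noise floor."""
    power = np.abs(np.fft.rfft(signal - signal.mean()))**2
    freqs = np.fft.rfftfreq(len(signal), dt)
    k = np.argmin(np.abs(freqs - f0))
    floor = np.median(power[max(k - 50, 1):k + 50])
    return power[k] / floor

print("SNR of x(t):   ", snr(x))
print("SNR of sign(x):", snr(np.sign(x)))   # a different output, different SNR
```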

    Verifying Temporal Regular Properties of Abstractions of Term Rewriting Systems

    Tree automaton completion is an algorithm used for proving safety properties of systems that can be modeled by a term rewriting system. This representation and verification technique works well for proving properties of infinite systems like cryptographic protocols or, more recently, Java Bytecode programs. The algorithm computes a tree automaton which represents a (regular) over-approximation of the set of terms reachable by rewriting initial terms. This approach is limited by the lack of information about the rewriting relation between terms: terms in relation by rewriting are in the same equivalence class, i.e. they are recognized by the same state of the tree automaton. Our objective is to produce an automaton embedding an abstraction of the rewriting relation sufficient to prove temporal properties of the term rewriting system. We propose to extend the algorithm to produce an automaton having more equivalence classes, distinguishing a term or a subterm from its successors w.r.t. rewriting. While ground transitions are used to recognize equivalence classes of terms, epsilon-transitions represent the rewriting relation between terms. From the completed automaton, it is possible to automatically build a Kripke structure abstracting the rewriting sequence. States of the Kripke structure are states of the tree automaton, and the transition relation is given by the set of epsilon-transitions. States of the Kripke structure are labelled by the sets of terms recognized using ground transitions. On this Kripke structure, we define Regular Linear Temporal Logic (R-LTL) for expressing properties. Such properties can then be checked using standard model-checking algorithms. The only difference between LTL and R-LTL is that predicates are replaced by regular sets of acceptable terms.
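    The extraction step from completed automaton to Kripke structure can be pictured with the following sketch; the automaton encoding and the example transitions are assumptions of this sketch, not taken from a particular tool.

```python
# Build a Kripke structure from a completed tree automaton, as the
# abstract describes: automaton states become Kripke states, each
# epsilon-transition q -> q' (recording a rewrite step) becomes a
# Kripke transition, and each state is labelled with the terms its
# ground transitions recognize.

from collections import defaultdict

def to_kripke(ground_transitions, epsilon_transitions):
    """ground_transitions: (term, state) pairs; epsilon: (state, state)."""
    labels = defaultdict(set)
    for term, state in ground_transitions:
        labels[state].add(term)
    succ = defaultdict(set)
    for q, q2 in epsilon_transitions:
        succ[q].add(q2)
    for q in labels:                 # totalize: deadlocked states self-loop
        succ[q] = succ[q] or {q}
    return dict(labels), dict(succ)

# Toy abstraction of the rewrite step f(a) -> f(b): state q0 recognizes
# f(a), state q1 recognizes f(b), and an epsilon-transition links them.
labels, succ = to_kripke([("f(a)", "q0"), ("f(b)", "q1")], [("q0", "q1")])
print(labels)   # {'q0': {'f(a)'}, 'q1': {'f(b)'}}
print(succ)     # {'q0': {'q1'}, 'q1': {'q1'}}
```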

    Nominal Unification of Higher Order Expressions with Recursive Let

    A sound and complete algorithm for nominal unification of higher-order expressions with a recursive let is described and shown to run in non-deterministic polynomial time. We also explore specializations like nominal letrec-matching for plain expressions and for DAGs, and determine the complexity of the corresponding unification problems.

    Comment: Pre-proceedings paper presented at the 26th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2016), Edinburgh, Scotland UK, 6-8 September 2016 (arXiv:1608.02534)
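    One ingredient of letrec matching can be illustrated concretely: the bindings of a recursive let form a multiset, so a pattern must be matched against the target up to any permutation of the bindings. The sketch below ignores the nominal side (atom permutations and freshness constraints), which the paper's algorithm handles and which drives the complexity results; the encoding and names are assumptions.

```python
# Toy letrec matching: bindings are matched as a multiset, i.e. up to
# permutation. Pattern variables start with '?'; everything else must
# match syntactically.

from itertools import permutations

def match(pattern, expr, sub):
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in sub and sub[pattern] != expr:
            return None                      # conflicting binding
        return {**sub, pattern: expr}
    if isinstance(pattern, str) or isinstance(expr, str):
        return sub if pattern == expr else None
    if pattern[0] == "letrec" and expr[0] == "letrec":
        p_binds, p_body = pattern[1], pattern[2]
        e_binds, e_body = expr[1], expr[2]
        if len(p_binds) != len(e_binds):
            return None
        for perm in permutations(e_binds):   # multiset: try every order
            s = sub
            for pb, eb in zip(p_binds, perm):
                s = s if s is None else match(pb, eb, s)
            s = s if s is None else match(p_body, e_body, s)
            if s is not None:
                return s
        return None
    if len(pattern) != len(expr):
        return None
    s = sub
    for p, e in zip(pattern, expr):
        s = s if s is None else match(p, e, s)
    return s

# letrec x=?A; y=?B in x  matches  letrec y=2; x=1 in x, reordering the
# bindings, with ?A=1 and ?B=2.
pat = ("letrec", [("bind", "x", "?A"), ("bind", "y", "?B")], "x")
tgt = ("letrec", [("bind", "y", "2"), ("bind", "x", "1")], "x")
print(match(pat, tgt, {}))   # {'?A': '1', '?B': '2'}
```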