
    Expression Refinement

    This thesis presents a refinement calculus for expressions. The aim of refinement calculi is to make programming a mathematical activity and thereby improve the correctness of programs. To achieve this, a refinement calculus provides a formal language and a set of rules for transforming terms of that language. To produce a correct program using a refinement calculus, the programmer writes a possibly non-algorithmic or inefficient term that nevertheless obviously describes the intended program. This term is the specification; it is transformed into an efficient program by syntactic transformation, using the rules of the calculus. This transformation is refinement.
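    A minimal Haskell sketch of the spec-versus-program picture behind any refinement calculus (not the calculus developed in this thesis; Spec, refines, sqrtSpec, and isqrt are illustrative names):

        -- A specification relates each input to the set of acceptable outputs.
        type Spec a b = a -> b -> Bool

        -- A deterministic program refines a spec if, on every input,
        -- the output it produces is one the spec accepts.
        refines :: Spec a b -> (a -> b) -> a -> Bool
        refines spec prog x = spec x (prog x)

        -- Example spec: y is the integer square root of n.
        sqrtSpec :: Spec Int Int
        sqrtSpec n y = y * y <= n && n < (y + 1) * (y + 1)

        -- A first refinement: obviously correct but inefficient linear search.
        -- Further refinement steps could turn this into binary search while
        -- preserving `refines sqrtSpec`.
        isqrt :: Int -> Int
        isqrt n = last (takeWhile (\y -> y * y <= n) [0 .. n])

    The calculus proper works by rewriting the specification term itself under refinement laws; the sketch only shows the correctness relation those laws are meant to preserve.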

    Deconstructing Datalog

    The deductive query language Datalog has found a wide array of uses, including static analysis (Smaragdakis and Bravenboer, 2010), business analytics (Aref et al., 2015), and distributed programming (Alvaro et al., 2010, 2011). Datalog is high-level and declarative, but simple and well-studied enough to admit efficient implementation strategies. For example, Whaley et al. found they could replace a hand-tuned C implementation of context-sensitive pointer analysis with a comparably-performing Datalog program that was 100x smaller (Whaley and Lam, 2004; Whaley et al., 2005). However, Datalog's semantics are not stable under extensions. For instance, adding arithmetic operations breaks Datalog's termination guarantee. Despite this, nearly all practical implementations extend Datalog beyond its theoretical core to add niceties such as arithmetic, datatypes, aggregations, and so on. Moreover, pure Datalog cannot abstract over repeated code: one may express a static analysis over a particular program, but to express the same analysis over multiple programs, one must duplicate the analysis code for each program analyzed.
    This thesis deconstructs Datalog from a categorical and type-theoretic perspective to determine what makes it tick. Datalog's semantic guarantees are provided by brute syntactic restrictions, such as stratification and the absence of function symbols. In place of these, we find compositional semantic properties such as monotonicity, which we capture using types. We show that this permits integrating Datalog's features with those of typed functional languages, such as algebraic data types and higher-order functions. In particular, this thesis makes the following contributions:
    1. We define and expound the semantics and metatheory of Datafun, a pure and total higher-order typed functional language capturing the essence of Datalog. Where Datalog has predicates defined by a restricted class of Horn clauses, Datafun has finite sets and set comprehensions; Datalog's bottom-up recursive queries become iterative fixed points; and Datalog's stratification condition becomes a matter of tracking monotonicity with types.
    2. We show how to generalize seminaïve evaluation to handle higher-order functions. Seminaïve evaluation is a technique from the Datalog literature which improves the performance of Datalog's most distinctive feature: recursive queries. These are computed iteratively, and under a naïve evaluation strategy, each iteration recomputes all previous values. Seminaïve evaluation computes a safe approximation of the difference between iterations. This can asymptotically improve the performance of Datalog queries (as sketched after this list). Seminaïve evaluation is defined partly as a program transformation and partly as a modified iteration strategy, and takes advantage of the first-order nature of Datalog. We extend this transformation to handle higher-order programs written in Datafun.
    3. In the process of generalizing seminaïve evaluation, we uncover a theory of incremental, monotone, higher-order computation, in which values change over time by growing larger, and programs respond incrementally to these increases.
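    A small Haskell sketch of naïve versus seminaïve evaluation on the classic recursive Datalog query, transitive closure (illustrative names; the thesis generalizes the transformation to higher-order Datafun programs):

        import           Data.Set (Set)
        import qualified Data.Set as Set

        -- Transitive closure as Datalog rules:
        --   path(x,z) :- edge(x,z).
        --   path(x,z) :- path(x,y), edge(y,z).
        type Rel a = Set (a, a)

        compose :: Ord a => Rel a -> Rel a -> Rel a
        compose r s = Set.fromList
          [ (x, z) | (x, y) <- Set.toList r, (y', z) <- Set.toList s, y == y' ]

        -- Naïve evaluation: every iteration re-derives all paths found so far.
        naivePaths :: Ord a => Rel a -> Rel a
        naivePaths edge = go edge
          where
            go paths =
              let paths' = paths `Set.union` compose paths edge
              in  if paths' == paths then paths else go paths'

        -- Seminaïve evaluation: only facts new in the previous iteration
        -- (the "delta") can generate new facts, since the rule is linear in
        -- path; so each round joins just the delta against edge.
        seminaivePaths :: Ord a => Rel a -> Rel a
        seminaivePaths edge = go edge edge
          where
            go paths delta
              | Set.null delta = paths
              | otherwise =
                  let new = compose delta edge `Set.difference` paths
                  in  go (paths `Set.union` new) new

    In a Datalog engine the delta-based join is derived by a transformation over the rules; the thesis's contribution is extending this idea to higher-order functions, where the "difference" between iterations must itself be computed compositionally.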

    ONTIC: A Knowledge Representation System for Mathematics

    Ontic is an interactive system for developing and verifying mathematics. Ontic's verification mechanism is capable of automatically finding and applying information from a library containing hundreds of mathematical facts. Starting with only the axioms of Zermelo-Fraenkel set theory, the Ontic system has been used to build a database of definitions and lemmas leading to a proof of the Stone representation theorem for Boolean lattices. The Ontic system has also been used to explore issues in knowledge representation, automated deduction, and the automatic use of large databases.

    Proceedings of the Workshop on the lambda-Prolog Programming Language

    The expressiveness of logic programs can be greatly increased over first-order Horn clauses through a stronger emphasis on logical connectives and by admitting various forms of higher-order quantification. The logic of hereditary Harrop formulas and the notion of uniform proof have been developed to provide a foundation for more expressive logic programming languages. The λ-Prolog language is actively being developed on top of these foundational considerations. The rich logical foundations of λ-Prolog provide it with declarative approaches to modular programming, hypothetical reasoning, higher-order programming, polymorphic typing, and meta-programming. These aspects of λ-Prolog have made it valuable as a higher-level language for the specification and implementation of programs in numerous areas, including natural language, automated reasoning, program transformation, and databases.
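    A toy Haskell sketch of uniform proof search over a propositional fragment of hereditary Harrop formulas (the real logic adds quantifiers and λ-terms; Clause, Goal, and solve are illustrative names, not λ-Prolog's implementation):

        -- Program clauses: atomic facts and rules (head :- body).
        data Clause = Fact String
                    | Rule String Goal

        -- Goals: in hereditary Harrop logic, implication may appear in goals.
        data Goal = Atom String
                  | And Goal Goal
                  | Impl Clause Goal   -- D => G: assume D while proving G

        -- Uniform proof search: always decompose the goal's top connective
        -- first; only at an atomic goal do we backchain on program clauses.
        solve :: [Clause] -> Goal -> Bool
        solve prog (And g1 g2) = solve prog g1 && solve prog g2
        solve prog (Impl d g)  = solve (d : prog) g  -- hypothetical reasoning
        solve prog (Atom a)    = any match prog
          where
            match (Fact h)   = h == a
            match (Rule h b) = h == a && solve prog b

        -- Example: "q holds under the hypothesis q" succeeds even with an
        -- empty program, something first-order Horn clauses cannot express.
        example :: Bool
        example = solve [] (Impl (Fact "q") (Atom "q"))

    The implication-goal case is what gives λ-Prolog its declarative account of modularity and hypothetical reasoning: assumptions are added to the program for exactly the scope of the subgoal.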

    Stream Processing using Grammars and Regular Expressions

    In this dissertation we study regular-expression-based parsing and the use of grammatical specifications for the synthesis of fast, streaming string-processing programs. In the first part we develop two linear-time algorithms for regular-expression-based parsing with Perl-style greedy disambiguation. The first algorithm operates in two passes in a semi-streaming fashion, using a constant amount of working memory and an auxiliary tape storage which is written in the first pass and consumed by the second. The second algorithm is a single-pass, optimally streaming algorithm which outputs as much of the parse tree as is semantically possible based on the input prefix read so far, and resorts to buffering only as many symbols as are required to resolve the next choice. Optimality is obtained by performing a PSPACE-complete pre-analysis on the regular expression. In the second part we present Kleenex, a language for expressing high-performance streaming string-processing programs as regular grammars with embedded semantic actions, and its compilation to streaming string transducers with worst-case linear-time performance. Its underlying theory is based on transducer decomposition into oracle and action machines, and on a finite-state specialization of the streaming parsing algorithm presented in the first part. In the second part we also develop a new linear-time streaming parsing algorithm for parsing expression grammars (PEGs), which generalizes the regular grammars of Kleenex. The algorithm is based on a bottom-up tabulation algorithm reformulated using least fixed points and evaluated using an instance of the chaotic iteration scheme of Cousot and Cousot.
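    As a reference point for the disambiguation policy, here is a minimal Haskell sketch of Perl-style greedy regular-expression parsing via backtracking (worst-case exponential, unlike the linear-time algorithms of the dissertation; Re, Tree, and greedyParse are illustrative names):

        -- Regular expressions and their parse trees.
        data Re = Chr Char | Seq Re Re | Alt Re Re | Star Re

        data Tree = TChr Char | TSeq Tree Tree | TL Tree | TR Tree | TStar [Tree]
          deriving Show

        -- All parses paired with leftover input, most greedy first:
        -- Alt prefers its left branch, Star prefers more iterations.
        parses :: Re -> String -> [(Tree, String)]
        parses (Chr c) (x:xs) | x == c = [(TChr c, xs)]
        parses (Chr _) _               = []
        parses (Seq r s) inp =
          [ (TSeq t1 t2, rest2) | (t1, rest1) <- parses r inp
                                , (t2, rest2) <- parses s rest1 ]
        parses (Alt r s) inp =
          [ (TL t, rest) | (t, rest) <- parses r inp ] ++
          [ (TR t, rest) | (t, rest) <- parses s inp ]
        parses (Star r) inp =
          [ (TStar (t : ts), rest') | (t, rest) <- parses r inp
                                    , length rest < length inp  -- skip empty iterations
                                    , (TStar ts, rest') <- parses (Star r) rest ]
          ++ [(TStar [], inp)]

        -- Perl-style disambiguation: the first parse, in greedy order,
        -- that consumes the entire input.
        greedyParse :: Re -> String -> Maybe Tree
        greedyParse r s = case [ t | (t, "") <- parses r s ] of
          (t : _) -> Just t
          _       -> Nothing

    The dissertation's algorithms compute exactly this greedy parse, but in linear time and in a streaming fashion, emitting tree fragments as soon as the input prefix determines them.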