
    Transformation of logic programs: Foundations and techniques

    We present an overview of some techniques which have been proposed for the transformation of logic programs. We consider the so-called “rules + strategies” approach, and we address the following two issues: the correctness of some basic transformation rules w.r.t. a given semantics and the use of strategies for guiding the application of the rules and improving efficiency. We will also show through some examples the use and the power of the transformational approach, and we will briefly illustrate its relationship to other methodologies for program development.
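
    As an illustration of the rules + strategies approach (a standard textbook example, not one drawn from this paper), the unfold/fold rules can fuse two list traversals into one and eliminate the intermediate list:

        % Original program: the sum of the squares of a list,
        % built via an intermediate list of squares.
        sumsq(Xs, S) :- squares(Xs, Ys), sum(Ys, S).

        squares([], []).
        squares([X|Xs], [Y|Ys]) :- Y is X*X, squares(Xs, Ys).

        sum([], 0).
        sum([Y|Ys], S) :- sum(Ys, S1), S is Y + S1.

        % After unfolding squares/2 and sum/2 in the body of sumsq/2
        % and folding the recursive calls back, one traversal suffices:
        sumsq_opt([], 0).
        sumsq_opt([X|Xs], S) :- sumsq_opt(Xs, S1), S is X*X + S1.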

    Recursive Program Optimization Through Inductive Synthesis Proof Transformation

    The research described in this paper involved developing transformation techniques which increase the efficiency of the original program, the source, by transforming its synthesis proof into one, the target, which yields a computationally more efficient algorithm. We describe a working proof transformation system which, by exploiting the duality between mathematical induction and recursion, employs the novel strategy of optimizing recursive programs by transforming inductive proofs. We compare and contrast this approach with the more traditional approaches to program transformation, and highlight the benefits of proof transformation with regard to search, correctness, automatability and generality.
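
    A sketch of the kind of optimization such a system targets (the standard reverse example, not necessarily the paper's own): the quadratic source corresponds to a proof by plain structural induction, while the linear target corresponds to a proof whose induction hypothesis has been generalized with an accumulator:

        % Source: naive reverse, quadratic because of append/3.
        nrev([], []).
        nrev([X|Xs], Rs) :- nrev(Xs, Ts), append(Ts, [X], Rs).

        append([], Ys, Ys).
        append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).

        % Target: linear-time reverse with an accumulator, the shape of
        % program a generalized induction proof gives rise to.
        rev(Xs, Rs) :- rev_acc(Xs, [], Rs).

        rev_acc([], Acc, Acc).
        rev_acc([X|Xs], Acc, Rs) :- rev_acc(Xs, [X|Acc], Rs).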

    Logic programming in the context of multiparadigm programming: the Oz experience

    Oz is a multiparadigm language that supports logic programming as one of its major paradigms. A multiparadigm language is designed to support different programming paradigms (logic, functional, constraint, object-oriented, sequential, concurrent, etc.) with equal ease. This article has two goals: to give a tutorial of logic programming in Oz and to show how logic programming fits naturally into the wider context of multiparadigm programming. Our experience shows that there are two classes of problems, which we call algorithmic and search problems, for which logic programming can help formulate practical solutions. Algorithmic problems have known efficient algorithms. Search problems do not have known efficient algorithms but can be solved with search. The Oz support for logic programming targets these two problem classes specifically, using the concepts needed for each. This is in contrast to the Prolog approach, which targets both classes with one set of concepts, which results in less than optimal support for each class. To explain the essential difference between algorithmic and search programs, we define the Oz execution model. This model subsumes both concurrent logic programming (committed-choice-style) and search-based logic programming (Prolog-style). Instead of Horn clause syntax, Oz has a simple, fully compositional, higher-order syntax that accommodates the abilities of the language. We conclude with lessons learned from this work, a brief history of Oz, and many entry points into the Oz literature.
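
    The distinction can be sketched in Prolog terms (our illustration, not the article's): an algorithmic problem runs deterministically with no choice points, whereas a search problem is naturally expressed as nondeterministic choice explored by the engine:

        % Algorithmic: a known efficient procedure, no search involved.
        len([], 0).
        len([_|Xs], N) :- len(Xs, N0), N is N0 + 1.

        % Search: subset-sum (assuming positive integers) has no known
        % efficient algorithm; the last two clauses create choice points
        % that the search engine explores on backtracking.
        subset_sum([], 0, []).
        subset_sum([X|Xs], T, [X|Ss]) :-
            T >= X, T1 is T - X,
            subset_sum(Xs, T1, Ss).
        subset_sum([_|Xs], T, Ss) :- subset_sum(Xs, T, Ss).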

    Efficient Groundness Analysis in Prolog

    Boolean functions can be used to express the groundness of, and trace grounding dependencies between, program variables in (constraint) logic programs. In this paper, a variety of issues pertaining to the efficient Prolog implementation of groundness analysis are investigated, focusing on the domain of definite Boolean functions, Def. The systematic design of the representation of an abstract domain is discussed in relation to its impact on the algorithmic complexity of the domain operations; the most frequently called operations should be the most lightweight. This methodology is applied to Def, resulting in a new representation, together with new algorithms for its domain operations utilising previously unexploited properties of Def -- for instance, quadratic-time entailment checking. The iteration strategy driving the analysis is also discussed and a simple, but very effective, optimisation of induced magic is described. The analysis can be implemented straightforwardly in Prolog and the use of a non-ground representation results in an efficient, scalable tool which does not require widening to be invoked, even on the largest benchmarks. An extensive experimental evaluation is given.
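
    For instance (our illustration, not the paper's representation), the groundness behaviour of append/3 is captured by the definite Boolean function Zs <-> (Xs /\ Ys): the output is ground exactly when both inputs are. A tiny executable rendering, where true means "ground":

        % Enumerate the models of the dependency Zs <-> (Xs /\ Ys).
        append_groundness(Xs, Ys, Zs) :-
            member(Xs, [true, false]),
            member(Ys, [true, false]),
            ( Xs = true, Ys = true -> Zs = true ; Zs = false ).

        % ?- append_groundness(true, false, Zs).
        % Zs = false: a non-ground Ys keeps Zs non-ground.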

    Analysis of a benchmark suite to evaluate mixed numeric and symbolic processing

    The suite of programs that formed the benchmark for a proposed advanced computer is described and analyzed. The features of the processor and its operating system that are tested by the benchmark are discussed. The computer codes and the supporting data for the analysis are given as appendices.

    Detecting Prolog programming techniques using abstract interpretation

    There have been a number of attempts at developing intelligent tutoring systems (ITSs) for teaching students various programming languages. An important component of such an ITS is a debugger capable of recognizing errors in the code the student writes and possibly suggesting ways of correcting such errors. The debugging process involves a wealth of knowledge about the programming language, the student and the individual problem at hand, and an automated debugging component makes use of a number of tools which apply this knowledge. Successive ITSs have incorporated a wider range of knowledge and more powerful tools. The research described in this thesis should be seen as continuing this succession. Specifically, we attempt to enhance an existing Prolog ITS (PITS) debugger called APROPOS2 developed by Looi. The enhancements take the form of a richer language with which to describe Prolog code and more powerful tools with which constructs in this language may be detected in Prolog code. The richer language is based on the notion of programming techniques -- common patterns in code which capture in some sense an expert's understanding of Prolog. The tools are based on Prolog abstract interpretation -- a program analysis method for inferring dynamic properties of code. Our research makes contributions to both these areas. We develop a language for describing classes of Prolog programming techniques that manipulate data structures. We define classes in this language for common Prolog techniques such as accumulator pairs and difference structures. We use abstract interpretation to infer the dynamic features with which techniques are described. We develop a general framework for abstract interpretation which is described in Prolog, so leading directly to an implementation. We develop two abstract domains -- one which infers general data-flow information about the code and one which infers particularly detailed type information -- and describe the implementation of the former.
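
    The two techniques named above are the standard fragments shown below (our examples): an accumulator pair threads a growing partial result through the recursion, and a difference structure represents a list as a Front-Hole pair so that concatenation becomes constant-time unification:

        % Accumulator pair: Acc0/Acc thread the partial count.
        len(Xs, N) :- len_acc(Xs, 0, N).
        len_acc([], Acc, Acc).
        len_acc([_|Xs], Acc0, N) :- Acc1 is Acc0 + 1, len_acc(Xs, Acc1, N).

        % Difference structure: flattening a binary tree in linear time;
        % the hole of the left flattening is the front of the right one.
        flatten_dl(leaf(X), [X|Hole]-Hole).
        flatten_dl(node(L, R), Front-Hole) :-
            flatten_dl(L, Front-Mid),
            flatten_dl(R, Mid-Hole).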

    Language Interoperability and Logic Programming Languages

    We discuss P#, our implementation of a tool which allows interoperation between a concurrent superset of the Prolog programming language and C#. This enables Prolog to be used as a native implementation language for Microsoft's .NET platform. P# compiles a linear logic extension of Prolog to C# source code. We can thus create C# objects from Prolog and use C#'s graphical, networking and other libraries. P# was developed from a modified port of the Prolog to Java translator, Prolog Cafe. We add language constructs on the Prolog side which allow concurrent Prolog code to be written. We add a primitive predicate which evaluates a Prolog structure on a newly forked thread. Communication between threads is based on the unification of variables contained in such a structure. It is also possible for threads to communicate through a globally accessible table. All of the new features are available to the programmer through new built-in Prolog predicates. We present three case studies. The first is an application which allows several users to modify a database. The users are able to disconnect from the database and to modify their own copies of the data before reconnecting. On reconnecting, conflicts must be resolved. The second is an object-oriented assistant, which allows the user to query the contents of a C# namespace or Java package. The third is a tool which allows a user to interact with a graphical display of the inheritance tree. Finally, we optimize P#'s runtime speed by translating some Prolog predicates into more idiomatic C# code than is produced by a naive port of Prolog Cafe. This is achieved by observing that semi-deterministic predicates (being those which always either fail or succeed with exactly one solution) that only call other semi-deterministic predicates enjoy relatively simple control flow. We make use of the fact that Prolog programs often contain predicates which operate as functions, and that such predicates are usually semi-deterministic.
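
    A hedged sketch of the programming model this describes; fork/1, wait_bound/1 and the other predicate names are illustrative stand-ins, not necessarily P#'s actual built-ins:

        % The forked goal shares the unbound variable R with the parent;
        % once the child thread binds R, the binding becomes visible to
        % the parent, so unification doubles as the communication channel.
        main(R) :-
            fork(expensive_query(R)),  % run the goal on a new thread
            do_other_work,             % parent continues concurrently
            wait_bound(R).             % block until R is instantiated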

    Region-based memory management for Mercury programs

    Region-based memory management (RBMM) is a form of compile time memory management, well-known from the functional programming world. In this paper we describe our work on implementing RBMM for the logic programming language Mercury. One interesting point about Mercury is that it is designed with strong type, mode, and determinism systems. These systems not only provide Mercury programmers with several direct software engineering benefits, such as self-documenting code and clear program logic, but also give language implementors a large amount of information that is useful for program analyses. In this work, we make use of this information to develop program analyses that determine the distribution of data into regions and transform Mercury programs by inserting into them the necessary region operations. We prove the correctness of our program analyses and transformation. To execute the annotated programs, we have implemented runtime support that tackles the two main challenges posed by backtracking. First, backtracking can require regions removed during forward execution to be "resurrected"; and second, any memory allocated during a computation that has been backtracked over must be recovered promptly and without waiting for the regions involved to come to the end of their life. We describe in detail our solution of both these problems. We study in detail how our RBMM system performs on a selection of benchmark programs, including some well-known difficult cases for RBMM. Even with these difficult cases, our RBMM-enabled Mercury system obtains clearly faster runtimes for 15 out of 18 benchmarks compared to the base Mercury system with its Boehm runtime garbage collector, with an average runtime speedup of 24%, and an average reduction in memory requirements of 95%. In fact, our system achieves optimal memory consumption in some programs.
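
    A schematic view of the kind of code the transformation produces (create_region/1 and remove_region/1 are illustrative names, not the actual Mercury runtime operations):

        % The region analysis decides that the intermediate list Ts
        % never outlives this call, so Ts is placed in its own region
        % and the whole region is reclaimed in one cheap operation.
        main(Xs, Ys) :-
            create_region(R),            % inserted by the transformation
            build_intermediate(Xs, Ts),  % Ts allocated in region R
            consume(Ts, Ys),
            remove_region(R).            % frees all of Ts at once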

    First-Class Functions for First-Order Database Engines

    We describe Query Defunctionalization which enables off-the-shelf first-order database engines to process queries over first-class functions. Support for first-class functions is characterized by the ability to treat functions like regular data items that can be constructed at query runtime, passed to or returned from other (higher-order) functions, assigned to variables, and stored in persistent data structures. Query defunctionalization is a non-invasive approach that transforms such function-centric queries into the data-centric operations implemented by common query processors. Experiments with XQuery and PL/SQL database systems demonstrate that first-order database engines can faithfully and efficiently support the expressive "functions as data" paradigm.
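
    The core idea of defunctionalization can be sketched in a few lines of first-order Prolog (our illustration; the paper targets XQuery and PL/SQL): every function value becomes a plain data term carrying its captured environment, and a single dispatcher interprets those terms:

        % Function values as first-order data:
        %   inc     stands for  \X -> X + 1
        %   add(N)  stands for  \X -> X + N   (N is the captured environment)
        apply_fn(inc,    X, Y) :- Y is X + 1.
        apply_fn(add(N), X, Y) :- Y is X + N.

        % A higher-order map, expressed entirely first-order:
        map_fo(_, [], []).
        map_fo(F, [X|Xs], [Y|Ys]) :- apply_fn(F, X, Y), map_fo(F, Xs, Ys).

        % ?- map_fo(add(10), [1,2,3], Ys).   % Ys = [11, 12, 13]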