
    A Unification Algorithm for GP 2

    The graph programming language GP 2 allows sets of rule schemata (or "attributed" rules) to be applied non-deterministically. To analyse conflicts of programs statically, graphs labelled with expressions are overlaid to construct critical pairs of rule applications. Each overlay induces a system of equations whose solutions represent different conflicts. We present a rule-based unification algorithm for GP expressions that is terminating, sound and complete. For every input equation, the algorithm generates a finite set of substitutions. Soundness means that each of these substitutions solves the input equation. Since GP labels are lists constructed by concatenation, unification modulo associativity and unit law is required. This problem, also known as word unification, is infinitary in general but becomes finitary due to GP's rule schema syntax and the assumption that rule schemata are left-linear. Our unification algorithm is complete in that every solution of an input equation is an instance of some substitution in the generated set.
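Word unification in full generality admits infinitely many unifiers; restricting to left-linear patterns is what makes the solution set finite. The following sketch (not the paper's algorithm; the names `match`, `var`, and `lvar` are invented for illustration) matches a left-linear list pattern, with element variables and list variables, against a ground list and returns the finite set of solving substitutions:

```python
# Hypothetical sketch: a left-linear list pattern contains each variable
# at most once. "var" matches exactly one element, "lvar" matches any
# sublist, "const" matches a fixed element. Because patterns are
# left-linear, enumerating the splits of the ground list yields a
# finite, complete set of substitutions.
def match(pattern, ground):
    """Return the finite list of substitutions solving pattern = ground."""
    def go(pat, g, subst):
        if not pat:
            return [subst] if not g else []
        kind, name = pat[0]
        if kind == "const":
            return go(pat[1:], g[1:], subst) if g and g[0] == name else []
        if kind == "var":  # matches exactly one element
            return go(pat[1:], g[1:], {**subst, name: [g[0]]}) if g else []
        # list variable: try every split of the remaining ground list
        sols = []
        for i in range(len(g) + 1):
            sols += go(pat[1:], g[i:], {**subst, name: g[:i]})
        return sols
    return go(pattern, ground, {})

# x followed by list variable l, against [1, 2, 3]: one solution
print(match([("var", "x"), ("lvar", "l")], [1, 2, 3]))
```

Two adjacent list variables still give only finitely many solutions (one per split point), which is the finitary behaviour the abstract describes.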

    Region-based memory management for Mercury programs

    Region-based memory management (RBMM) is a form of compile-time memory management, well known from the functional programming world. In this paper we describe our work on implementing RBMM for the logic programming language Mercury. One interesting point about Mercury is that it is designed with strong type, mode, and determinism systems. These systems not only provide Mercury programmers with several direct software engineering benefits, such as self-documenting code and clear program logic, but also give language implementors a large amount of information that is useful for program analyses. In this work, we make use of this information to develop program analyses that determine the distribution of data into regions and transform Mercury programs by inserting the necessary region operations into them. We prove the correctness of our program analyses and transformation. To execute the annotated programs, we have implemented runtime support that tackles the two main challenges posed by backtracking. First, backtracking can require regions removed during forward execution to be "resurrected"; and second, any memory allocated during a computation that has been backtracked over must be recovered promptly, without waiting for the regions involved to come to the end of their life. We describe our solution to both of these problems in detail. We study in detail how our RBMM system performs on a selection of benchmark programs, including some well-known difficult cases for RBMM. Even with these difficult cases, our RBMM-enabled Mercury system obtains clearly faster runtimes for 15 out of 18 benchmarks compared to the base Mercury system with its Boehm runtime garbage collector, with an average runtime speedup of 24% and an average reduction in memory requirements of 95%. In fact, our system achieves optimal memory consumption in some programs.
    Comment: 74 pages, 23 figures, 11 tables. A shorter version of this paper, without proofs, is to appear in the journal Theory and Practice of Logic Programming (TPLP).
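The core idea of RBMM can be caricatured in a few lines: every allocation is placed in a named region, and removing a region reclaims all of its objects in one step, with no per-object tracing. The `RegionManager` class below is a hypothetical illustration of that idea, not Mercury's runtime:

```python
# Hypothetical sketch of the region idea (not Mercury's runtime system):
# objects are allocated into named regions, and removing a region
# reclaims everything in it at once -- unlike a tracing collector such
# as Boehm, which must discover reachable objects individually.
class RegionManager:
    def __init__(self):
        self.regions = {}

    def create(self, name):
        self.regions[name] = []

    def alloc(self, name, obj):
        # Allocation is just an append into the region's backing store.
        self.regions[name].append(obj)
        return obj

    def remove(self, name):
        # Reclaim the whole region in one constant-shape operation;
        # return how many objects were freed.
        return len(self.regions.pop(name))

rm = RegionManager()
rm.create("solver")
for i in range(5):
    rm.alloc("solver", {"node": i})
print(rm.remove("solver"))  # reclaims all 5 objects at once
```

The compile-time analyses the abstract describes decide which `create`, `alloc`, and `remove` calls to insert where; the backtracking challenges arise because a `remove` performed during forward execution may need to be undone.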

    Polymonadic Programming

    Monads are a popular tool for the working functional programmer to structure effectful computations. This paper presents polymonads, a generalization of monads. Polymonads give the familiar monadic bind the more general type forall a,b. L a -> (a -> M b) -> N b, composing computations with three different kinds of effects rather than just one. Polymonads subsume monads and parameterized monads, and can express other constructions, including precise type-and-effect systems and information flow tracking; more generally, polymonads correspond to Tate's productoid semantic model. We show how to equip a core language (called lambda-PM) with syntactic support for programming with polymonads. Type inference and elaboration in lambda-PM allow programmers to write polymonadic code directly in an ML-like syntax: our algorithms compute principal types and produce elaborated programs in which the binds appear explicitly. Furthermore, we prove that the elaboration is coherent: no matter which (type-correct) binds are chosen, the elaborated program's semantics will be the same. Pleasingly, the inferred types are easy to read: the polymonad laws justify (sometimes dramatic) simplifications, but with no effect on a type's generality.
    Comment: In Proceedings MSFP 2014, arXiv:1406.153
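At the value level, one instance of the polymonadic bind type can be rendered in Python (a hypothetical illustration, with `Optional` playing the role of L and `list` playing both M and N), showing a bind that mixes two different effect constructors in a single composition:

```python
# Hypothetical rendering of one polymonadic bind instance:
# L = Optional, M = list, N = list, so this bind has the shape
# Optional[a] -> (a -> list[b]) -> list[b] -- an instance of the
# general type forall a,b. L a -> (a -> M b) -> N b from the abstract.
from typing import Callable, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def bind_opt_list(m: Optional[A], k: Callable[[A], list[B]]) -> list[B]:
    # An absent value yields the empty list of results;
    # a present value is fed to the list-producing continuation.
    return [] if m is None else k(m)

print(bind_opt_list(3, lambda x: [x, x + 1]))  # [3, 4]
print(bind_opt_list(None, lambda x: [x]))      # []
```

With L = M = N this degenerates to an ordinary monadic bind, which is the sense in which polymonads subsume monads.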

    Confluence Analysis for a Graph Programming Language

    GP 2 is a high-level domain-specific language for programming with graphs. Users write a set of graph transformation rules and organise them with imperative-style control constructs to perform a desired computation on an input graph. As rule selection and matching are non-deterministic, program execution may result in different graphs. Confluence is a property that establishes the global determinism of a computation despite possible local non-determinism. Conventional confluence analysis is done via so-called critical pairs, which are conflicts in minimal context. A key challenge is extending critical pairs to the setting of GP 2. This thesis concerns the development of confluence analysis for GP 2. First, we extend the notion of conflict to GP 2 rules, and prove that non-conflicting rule applications commute. Second, we define symbolic critical pairs and establish their properties, namely that there are only finitely many of them and that they represent all possible conflicts. We give an effective procedure for their construction. Third, we solve the problem of unifying GP 2 list expressions, which arises during the construction of critical pairs, by giving a unification procedure which terminates with a finite and complete set of unifiers (under certain restrictions). Last but not least, we specify a confluence analysis algorithm based on symbolic critical pairs, and show its soundness by proving the Local Confluence Theorem. Several existing programs are analysed for confluence to demonstrate how the analysis handles several GP 2 features at the same time, and to demonstrate the merit of the techniques used.
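The critical-pair method is easiest to see in string rewriting, a far simpler setting than GP 2 graphs. The sketch below (all names invented for illustration; `normalize` assumes the rewrite system terminates) builds the critical pairs arising from overlapping left-hand sides and tests whether both sides of each pair rewrite to the same normal form:

```python
# Hypothetical sketch of critical-pair confluence checking for string
# rewriting systems, not GP 2 graphs. Rules are (lhs, rhs) pairs.

def normalize(s, rules):
    """Rewrite s until no rule applies (assumes termination)."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in s:
                s = s.replace(lhs, rhs, 1)
                changed = True
                break
    return s

def critical_pairs(rules):
    """Critical pairs from proper overlaps: a suffix of one lhs
    coincides with a prefix of another, so the overlap word can be
    rewritten in two conflicting ways."""
    pairs = []
    for l1, r1 in rules:
        for l2, r2 in rules:
            for k in range(1, min(len(l1), len(l2))):
                if l1[-k:] == l2[:k]:
                    pairs.append((r1 + l2[k:], l1[:-k] + r2))
    return pairs

def locally_confluent(rules):
    """Local confluence holds if every critical pair is joinable."""
    return all(normalize(p, rules) == normalize(q, rules)
               for p, q in critical_pairs(rules))

print(locally_confluent([("aa", "a")]))               # True: "aaa" joins
print(locally_confluent([("ab", "c"), ("ba", "c")]))  # False: "aba" -> "ca" vs "ac"
```

By Newman's Lemma, local confluence plus termination gives confluence, which is why a thesis-level analysis can focus on a finite set of (symbolic) critical pairs.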

    A General, Sound and Efficient Natural Language Parsing Algorithm based on Syntactic Constraints Propagation

    This paper presents a new context-free parsing algorithm based on a bidirectional, strictly horizontal strategy which incorporates strong top-down predictions (derivations and adjacencies). From a functional point of view, the parser is able to propagate syntactic constraints, reducing parsing ambiguity. From a computational perspective, the algorithm includes different techniques aimed at improving the manipulation and representation of the structures used.
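The paper's bidirectional, constraint-propagating algorithm is not reproduced here. As generic background only, a standard CYK recognizer for a grammar in Chomsky normal form shows the kind of exhaustive bottom-up tabulation that top-down predictions and constraint propagation aim to prune (the grammar and all names below are invented for illustration):

```python
# Standard CYK recognizer (textbook algorithm, not the paper's parser).
# unary:  list of (nonterminal, terminal) lexical rules A -> t
# binary: list of (nonterminal, (B, C)) rules A -> B C
def cyk(tokens, unary, binary, start):
    n = len(tokens)
    # table[i][span] = set of nonterminals deriving tokens[i:i+span]
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, tok in enumerate(tokens):
        table[i][1] = {a for a, t in unary if t == tok}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for split in range(1, span):
                for a, (b, c) in binary:
                    if (b in table[i][split]
                            and c in table[i + split][span - split]):
                        table[i][span].add(a)
    return start in table[0][n]

# Toy grammar: S -> NP V, NP -> Det N
unary = [("Det", "the"), ("N", "dog"), ("V", "barks")]
binary = [("NP", ("Det", "N")), ("S", ("NP", "V"))]
print(cyk(["the", "dog", "barks"], unary, binary, "S"))  # True
```

CYK fills every cell regardless of context; a predictive, bidirectional strategy of the kind the abstract describes avoids building constituents that no surrounding derivation could use.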

    Unifying Nominal Unification

    Nominal unification has been proven to be quadratic in time and space. This was shown by two different approaches, both inspired by the Paterson-Wegman linear unification algorithm, but dramatically different in the way nominal and first-order constraints are dealt with. To handle nominal constraints, Levy and Villaret introduced the notion of replacings, while Calves and Fernandez use permutations and sets of atoms. To deal with structural constraints, the former use multi-equations in a way similar to the Martelli-Montanari algorithm, while the latter mimic Paterson-Wegman. In this paper we abstract over these two approaches and generalize them into the notion of modality, highlighting the general ideas behind nominal unification. We show that replacings and environments are in fact isomorphic. This isomorphism is of prime importance for proving intricate properties on both sides, and a further step towards the real complexity of nominal unification.
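The permutation machinery underlying the Calves-Fernandez style can be sketched for ground terms (a generic illustration of nominal syntax, not either paper's algorithm; the encodings `("abs", a, t)` for an abstraction binding atom `a` and `("pair", t, u)` are invented here):

```python
# Hedged sketch of swappings and alpha-equivalence on ground nominal
# terms: atoms are strings, ("abs", a, t) binds atom a in t, and
# ("pair", t, u) is a pair. No unification variables or suspended
# permutations -- just the swapping action the abstract alludes to.
def swap(a, b, t):
    """Apply the swapping (a b) to a ground nominal term."""
    if isinstance(t, str):
        return b if t == a else a if t == b else t
    if t[0] == "abs":
        return ("abs", swap(a, b, t[1]), swap(a, b, t[2]))
    return ("pair", swap(a, b, t[1]), swap(a, b, t[2]))

def fresh(a, t):
    """Atom a is fresh for ground t iff a has no free occurrence in t."""
    if isinstance(t, str):
        return t != a
    if t[0] == "abs":
        return t[1] == a or fresh(a, t[2])
    return fresh(a, t[1]) and fresh(a, t[2])

def alpha_eq(t, u):
    """Alpha-equivalence via swappings: [a]t ~ [b]u iff a = b and t ~ u,
    or a # u and t ~ (a b).u."""
    if isinstance(t, str) or isinstance(u, str):
        return t == u
    if t[0] == "abs" and u[0] == "abs":
        a, b = t[1], u[1]
        if a == b:
            return alpha_eq(t[2], u[2])
        return fresh(a, u[2]) and alpha_eq(t[2], swap(a, b, u[2]))
    if t[0] == "pair" and u[0] == "pair":
        return alpha_eq(t[1], u[1]) and alpha_eq(t[2], u[2])
    return False

print(alpha_eq(("abs", "a", "a"), ("abs", "b", "b")))  # True:  [a]a ~ [b]b
print(alpha_eq(("abs", "a", "b"), ("abs", "b", "a")))  # False: [a]b is not [b]a
```

Full nominal unification additionally handles variables under suspended permutations, which is exactly where the replacing-based and permutation-based treatments diverge.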