    A Narrowing-based Instantiation Rule for Rewriting-based Fold/Unfold Transformations

    In this paper we show how to transfer some developments from the field of functional logic programming (FLP) to a pure functional setting (FP). More precisely, we propose a complete fold/unfold-based transformation system for optimizing lazy functional programs. Our main contribution is the definition of a safe instantiation rule which is used to enable effective unfolding steps based on rewriting. Since instantiation has traditionally been considered problematic in FP, we take advantage of previous experience in the more general setting of FLP, where instantiation is naturally embedded into an unfolding rule based on narrowing. Inspired by the so-called needed narrowing strategy, our instantiation rule inherits the best properties of this refinement of narrowing. Our proposal improves on previous approaches (which require more transformation effort) in the specialized literature on pure FP by anticipating bindings on the unifiers used to instantiate a given program rule and by generating redexes at different positions of the instantiated rules in order to enable subsequent unfolding steps. As a consequence, our correct and complete technique avoids redundant rules and preserves the natural structure of programs.
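
    To make the idea concrete, here is a minimal Haskell sketch of a classic fold/unfold optimization of the kind such systems automate: a rule variable is instantiated with constructor patterns so that (++) can be unfolded by rewriting, after which the recursive call is folded back. The function names are illustrative and not taken from the paper.

```haskell
-- Original rule: two traversals of the first list.
appApp :: [a] -> [a] -> [a] -> [a]
appApp xs ys zs = (xs ++ ys) ++ zs

-- After instantiating xs with the patterns [] and (x : xs'),
-- unfolding (++) on the instantiated rules, and folding the
-- recursive call, we obtain a single-traversal definition.
appApp' :: [a] -> [a] -> [a] -> [a]
appApp' []        ys zs = ys ++ zs
appApp' (x : xs') ys zs = x : appApp' xs' ys zs
```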

    The Sigma-Semantics: A Comprehensive Semantics for Functional Programs

    A comprehensive semantics for functional programs is presented, which generalizes the well-known call-by-value and call-by-name semantics. By permitting a separate choice between call-by-value and call-by-name for every argument position of every function, and by parameterizing the semantics by this choice, we abstract from the parameter-passing mechanism. Thus common and distinguishing features of all instances of the sigma-semantics, especially the call-by-value and call-by-name semantics, are highlighted. Furthermore, a property can be validated for all instances of the sigma-semantics by a single proof. This is employed for proving the equivalence of the given denotational (fixed-point based) and two operational (reduction based) definitions of the sigma-semantics. We present and apply means for very simple proofs of equivalence with the denotational sigma-semantics for a large class of reduction-based sigma-semantics. Our basis is simple first-order constructor-based functional programs with patterns.
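
    As a rough illustration of the per-position choice (a sketch under simplifying assumptions, not the paper's formal definitions), the following Haskell evaluator for a tiny first-order language carries a strategy for each argument position of a single projection function; divergence is modelled by Nothing so the difference between the instances stays observable.

```haskell
data Strategy = ByValue | ByName           -- choice per argument position
data Expr     = Lit Int
              | Bottom                     -- stands in for a diverging term
              | Fst2 Expr Expr             -- fst2 x y = x

-- The pair of strategies is the "sigma" for the two argument
-- positions of fst2; Nothing models non-termination.
eval :: (Strategy, Strategy) -> Expr -> Maybe Int
eval _   (Lit n) = Just n
eval _   Bottom  = Nothing
eval sig@(_, s2) (Fst2 a b) =
  case s2 of
    ByValue -> eval sig b >> eval sig a    -- argument 2 is evaluated even
                                           -- though the body ignores it
    ByName  -> eval sig a                  -- argument 2 is never demanded

-- eval (ByName, ByName)  (Fst2 (Lit 1) Bottom)  ==  Just 1
-- eval (ByName, ByValue) (Fst2 (Lit 1) Bottom)  ==  Nothing
```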

    Term rewriting systems from Church-Rosser to Knuth-Bendix and beyond

    Term rewriting systems are important for the computability theory of abstract data types, for automatic theorem proving, and for the foundations of functional programming. In this short survey we present, starting from first principles, several of the basic notions and facts in the area of term rewriting. Our treatment, which will often be informal, covers abstract rewriting, Combinatory Logic, orthogonal systems, strategies, critical pair completion, and some extended rewriting formats.
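
    As a small, self-contained taste of one of the topics surveyed (an illustrative sketch, not material from the survey itself), here is Combinatory Logic as a term rewriting system in Haskell, with a leftmost-outermost step function and normalization by repeated rewriting.

```haskell
data Term = S | K | I | App Term Term deriving Show

-- Contract the leftmost-outermost redex; Nothing means the term is a
-- normal form with respect to the three combinator rules.
step :: Term -> Maybe Term
step (App I x)                 = Just x                      -- I x     -> x
step (App (App K x) _)         = Just x                      -- K x y   -> x
step (App (App (App S f) g) x) = Just (App (App f x) (App g x))
                                                             -- S f g x -> f x (g x)
step (App f x)                 = case step f of              -- otherwise recurse,
  Just f' -> Just (App f' x)                                 -- left subterm first
  Nothing -> App f <$> step x
step _                         = Nothing

normalize :: Term -> Term
normalize t = maybe t normalize (step t)

-- normalize (App (App (App S K) K) I)  yields  I,  since S K K x ->* x
```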

    Term rewriting systems


    Data linkage algebra, data linkage dynamics, and priority rewriting

    We introduce an algebra of data linkages. Data linkages are intended for modelling the states of computations in which dynamic data structures are involved. We present a simple model of computation in which states of computations are modelled as data linkages and state changes take place by means of certain actions. We describe the state changes and replies that result from performing those actions by means of a term rewriting system with rule priorities. The model in question is an upgrade of molecular dynamics. The upgrading is mainly concerned with features to deal with values and features to reclaim garbage.
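
    The key mechanism, rule priorities, can be sketched independently of the data linkage calculus: a step tries the rules in a fixed order, and a lower-priority rule may fire only when every higher-priority rule fails to match. The Haskell fragment below is a toy of my own devising, not the paper's rewriting system.

```haskell
import Data.Maybe (listToMaybe, mapMaybe)

-- A rule either rewrites a state or does not apply.
type Rule a = a -> Maybe a

-- Apply the first applicable rule; earlier rules have higher priority.
priorityStep :: [Rule a] -> a -> Maybe a
priorityStep rules x = listToMaybe (mapMaybe ($ x) rules)

-- Toy example: "reset at 10" outranks the always-applicable "increment".
resetRule, incRule :: Rule Int
resetRule n = if n >= 10 then Just 0 else Nothing
incRule   n = Just (n + 1)

-- priorityStep [resetRule, incRule] 10  ==  Just 0
-- priorityStep [resetRule, incRule] 3   ==  Just 4
```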

    Interactive Model-Based Compilation: A Modeller-Driven Development Approach

    There is a growing tendency to use domain-specific languages, which help domain experts to stay focussed on abstract problem solutions. It is important to carefully design these languages and tools, which fundamentally perform model-to-model transformations. The quality of both usually decides the effectiveness of the subsequent development and therefore the quality of the final applications. However, as the complexity and safety requirements of modern systems grow, it becomes increasingly burdensome to create highly customized languages and difficult to provide reasonable overviews within these tools. This thesis introduces a new interactive model-based compilation methodology. Compilations for arbitrary model-to-model transformations are themselves described as models. They can be instantiated for particular inputs, e.g., a program, to create concrete compilation runs, which return the result of that compilation. The compilation instance is interactively observable. Intermediate results serve as new inputs and as documentation. They can be used to create highly customized views and facilitate understandability. This methodology guides modellers from the start of the compilation to the final result so that they can interactively refine their models. The methodology has been implemented and validated as the KIELER Compiler (KiCo) and is available as part of the KIELER open-source project. It is used to implement the current reference compiler for the SCCharts language, a statecharts dialect designed for specifying safety-critical reactive systems based on a synchronous model of computation. The interactive model-based compilation approach was key to the rapid prototyping of three different compilation strategies, as well as new language extensions, variations and closely related languages. The results are verified with benchmarks, which are again modelled using the same approach and technology. The usability of the SCCharts language and the KiCo tooling is documented with long-term surveys and real-life industrial, academic and teaching examples.
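
    The core idea, a compilation whose intermediate models remain observable, can be sketched in a few lines. The Haskell toy below is hypothetical and is not KiCo's actual API: a compilation is a list of named model-to-model transformations, and running it keeps every intermediate result for inspection.

```haskell
-- A named model-to-model transformation over some model type m.
type Transformation m = (String, m -> m)

-- Run a compilation and keep every intermediate model, labelled by
-- the step that produced it, so it can be inspected afterwards.
compileObservably :: m -> [Transformation m] -> [(String, m)]
compileObservably _ []                = []
compileObservably m ((name, t) : ts)  = (name, m') : compileObservably m' ts
  where m' = t m

-- Toy usage with strings as "models":
-- compileObservably "prog" [ ("expand", (++ " [expanded]"))
--                          , ("strip",  filter (/= ' ')) ]
--   ==  [("expand", "prog [expanded]"), ("strip", "prog[expanded]")]
```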

    Compiling a Functional Logic Language: The Fair Scheme

    We present a compilation scheme for a functional logic programming language. The input program to our compiler is a constructor-based graph rewriting system in a non-confluent, but well-behaved class. This input is an intermediate representation of a functional logic program in a language such as Curry or TOY. The output program from our compiler consists of three procedures that make recursive calls and execute both rewrite and pull-tab steps. This output is an intermediate representation that is easy to encode in any number of programming languages. Our design evolves the Basic Scheme of Antoy and Peters by removing the "left bias" that prevents obtaining results of some computations, a behavior related to the order of evaluation, which is counter to declarative programming. The benefits of this evolution are not only the strong completeness of computations, but also the provability of non-trivial properties of these computations. We rigorously describe the compiler design and prove some of its properties. To state and prove these properties, we introduce novel definitions of "need" and "failure". For non-confluent constructor-based rewriting systems these concepts are more appropriate than the classic definition of need of Huet and Levy.
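
    For readers unfamiliar with pull-tabbing, here is a minimal Haskell sketch (the representation and names are illustrative, not the compiler's): when a non-deterministic choice appears as an argument, a pull-tab step lifts the choice above the application so that both alternatives remain available to the computation.

```haskell
-- Expression graphs: applications of a symbol to arguments, plus
-- non-deterministic choices tagged with an identifier.
data Graph = Node String [Graph]
           | Choice Int Graph Graph
           deriving Show

-- One pull-tab step at argument position i:
--   f(..., e1 ? e2, ...)  becomes  f(..., e1, ...) ? f(..., e2, ...)
pullTab :: Int -> Graph -> Maybe Graph
pullTab i (Node f args)
  | 0 <= i, i < length args
  , Choice cid l r <- args !! i
  = Just (Choice cid (Node f (replaceAt l)) (Node f (replaceAt r)))
  where replaceAt x = take i args ++ [x] ++ drop (i + 1) args
pullTab _ _ = Nothing

-- pullTab 0 (Node "not" [Choice 1 (Node "True" []) (Node "False" [])])
--   yields Just (Choice 1 (Node "not" [Node "True" []])
--                         (Node "not" [Node "False" []]))
```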