
    Towards Correctness of Program Transformations Through Unification and Critical Pair Computation

    Correctness of program transformations in extended lambda calculi with a contextual semantics is usually based on reasoning about the operational semantics, which is a rewrite semantics. A successful approach to proving correctness is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules, and then the computation of so-called complete sets of diagrams. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We explore cases where the computation of these overlaps can be done in a first-order way by variants of critical pair computation that use unification algorithms. As a case study we apply the method to a lambda calculus with recursive let-expressions and describe an effective unification algorithm to determine all overlaps of a set of transformations with all reduction rules. The unification algorithm employs many-sorted terms, the equational theory of left-commutativity modelling multisets, context variables of different kinds, and a mechanism for compactly representing binding chains in recursive let-expressions. Comment: In Proceedings UNIF 2010, arXiv:1012.455
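    To give a concrete feel for the first-order flavor of the method, here is a minimal Haskell sketch of plain syntactic first-order unification, the primitive underlying critical-pair computation. It omits everything that makes the paper's algorithm interesting (many-sorted terms, left-commutativity, context variables, binding chains); all names are illustrative.

        import qualified Data.Map as M

        data Term = Var String | Fun String [Term]
          deriving (Eq, Show)

        type Subst = M.Map String Term

        -- Apply a substitution, leaving unbound variables in place.
        apply :: Subst -> Term -> Term
        apply s (Var v)    = M.findWithDefault (Var v) v s
        apply s (Fun f ts) = Fun f (map (apply s) ts)

        -- Solve a list of equations, threading the substitution.
        unify :: Term -> Term -> Maybe Subst
        unify t u = go [(t, u)] M.empty
          where
            go [] s = Just s
            go ((a, b) : rest) s = case (apply s a, apply s b) of
              (Var v, Var w) | v == w -> go rest s
              (Var v, t') | occurs v t' -> Nothing          -- occurs check
                          | otherwise   -> go rest (bind v t' s)
              (t', Var v) | occurs v t' -> Nothing
                          | otherwise   -> go rest (bind v t' s)
              (Fun f as, Fun g bs)
                | f == g && length as == length bs -> go (zip as bs ++ rest) s
                | otherwise                        -> Nothing
            bind v t' s = M.insert v t' (M.map (apply (M.singleton v t')) s)
            occurs v (Var w)    = v == w
            occurs v (Fun _ ts) = any (occurs v) ts

    Overlaps, and hence critical pairs, arise from unifying one rule's left-hand side with subterms of another's.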

    Four Lessons in Versatility or How Query Languages Adapt to the Web

    Exposing not only human-centered information, but machine-processable data on the Web is one of the commonalities of recent Web trends. It has enabled a new kind of applications and businesses where the data is used in ways not foreseen by the data providers. Yet this exposition has fractured the Web into islands of data, each in different Web formats: some providers choose XML, others RDF, again others JSON or OWL, for their data, even in similar domains. This fracturing stifles innovation, as application builders have to cope not only with one Web stack (e.g., XML technology) but with several ones, each of considerable complexity. With Xcerpt we have developed a rule- and pattern-based query language that aims to shield application builders from much of this complexity: in a single query language, XML and RDF data can be accessed, processed, combined, and re-published. Though the need for combined access to XML and RDF data has been recognized in previous work (including the W3C’s GRDDL), our approach differs in four main aspects: (1) We provide a single language (rather than two separate or embedded languages), thus minimizing the conceptual overhead of dealing with disparate data formats. (2) Both the declarative (logic-based) and the operational semantics are unified in that they apply for querying XML and RDF in the same way. (3) We show that the resulting query language can be implemented reusing traditional database technology, if desirable. Nevertheless, we also give a unified evaluation approach based on interval labelings of graphs that is at least as fast as existing approaches for tree-shaped XML data, yet provides linear time and space querying also for many RDF graphs. We believe that Web query languages are the right tool for declarative data access in Web applications and that Xcerpt is a significant step towards a more convenient, yet highly efficient data access in a “Web of Data”.
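    The core design idea, one data model and one pattern language serving several Web formats, can be sketched roughly in Haskell as follows; the types and functions are illustrative stand-ins, not Xcerpt's actual constructs.

        -- Both XML trees and RDF triples embed into one labelled-term
        -- type, so a single traversal serves both formats.
        data Item = Item String [Item]

        xmlExample :: Item
        xmlExample = Item "book" [Item "title" [Item "Xcerpt" []]]

        -- An RDF triple (subject, predicate, object) as a two-level term.
        rdfItem :: (String, String, String) -> Item
        rdfItem (s, p, o) = Item s [Item p [Item o []]]

        -- One query over both embeddings: the labels found directly
        -- under a child with the given label.
        select :: String -> Item -> [String]
        select k (Item _ cs) = [ g | Item l gs <- cs, l == k, Item g _ <- gs ]

    Here select "title" xmlExample yields ["Xcerpt"], and the same function queries the triple embedding without change.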

    Nominal Unification of Higher Order Expressions with Recursive Let

    A sound and complete algorithm for nominal unification of higher-order expressions with a recursive let is described, and shown to run in non-deterministic polynomial time. We also explore specializations like nominal letrec-matching for plain expressions and for DAGs, and determine the complexity of the corresponding unification problems. Comment: Pre-proceedings paper presented at the 26th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2016), Edinburgh, Scotland, UK, 6-8 September 2016 (arXiv:1608.02534).
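    As a rough illustration of the objects involved, here is a Haskell datatype for such expressions together with atom swapping, the basic operation of nominal techniques. This only fixes the syntax; the unification algorithm itself, which must treat letrec environments as multisets, is the paper's contribution and is not reproduced here.

        type Atom = String

        data Expr
          = AtomE Atom                  -- object-level variable (atom)
          | App Expr Expr
          | Lam Atom Expr
          | Letrec [(Atom, Expr)] Expr  -- bindings form a multiset
          deriving Show

        -- Swap two atoms everywhere, including binding positions.
        swap :: Atom -> Atom -> Expr -> Expr
        swap a b e = case e of
          AtomE c     -> AtomE (sw c)
          App f x     -> App (swap a b f) (swap a b x)
          Lam c t     -> Lam (sw c) (swap a b t)
          Letrec bs t -> Letrec [ (sw c, swap a b u) | (c, u) <- bs ]
                                (swap a b t)
          where
            sw c | c == a    = b
                 | c == b    = a
                 | otherwise = c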

    Koka: Programming with Row Polymorphic Effect Types

    We propose a programming model where effects are treated in a disciplined way, and where the potential side-effects of a function are apparent in its type signature. The type and effect of expressions can also be inferred automatically, and we describe a polymorphic type inference system based on Hindley-Milner style inference. A novel feature is that we support polymorphic effects through row-polymorphism using duplicate labels. Moreover, we show that our effects are not just syntactic labels but have a deep semantic connection to the program. For example, if an expression can be typed without an exn effect, then it will never throw an unhandled exception. Similar to Haskell's `runST`, we show how we can safely encapsulate stateful operations. Through the state effect, we can also safely combine state with let-polymorphism without needing either imperative type variables or a syntactic value restriction. Finally, our system is implemented fully in a new language called Koka and has been used successfully on various small to medium-sized sample programs, ranging from a Markdown processor to a tier-split chat application. You can try out Koka live at www.rise4fun.com/koka/tutorial. Comment: In Proceedings MSFP 2014, arXiv:1406.153
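    The `runST` comparison refers to a well-known Haskell idiom: the rank-2 type of `runST` prevents local mutable state from escaping, so a function that updates an STRef internally is still observably pure. A small standard example:

        import Control.Monad.ST (runST)
        import Data.STRef (newSTRef, modifySTRef', readSTRef)

        -- Pure from the outside, imperative inside.
        sumST :: [Int] -> Int
        sumST xs = runST $ do
          acc <- newSTRef 0
          mapM_ (\x -> modifySTRef' acc (+ x)) xs
          readSTRef acc

    Koka's state effect plays an analogous role, with the effect row recording, and then discharging, the use of state.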

    Initial Draft of a Possible Declarative Semantics for the Language Xcerpt

    This article introduces a preliminary declarative semantics for a subset of the language Xcerpt (so-called grouping-stratifiable programs) in the form of a classical (Tarski-style) model theory, adapted to the specific requirements of Xcerpt’s constructs (e.g. the various aspects of incompleteness in query terms, grouping constructs in rule heads, etc.). Most importantly, the model theory uses term simulation as a replacement for term equality to handle incomplete term specifications, and an extended notion of substitutions in order to properly convey the semantics of grouping constructs. Based upon this model theory, a fixpoint semantics is also described, leading to a first notion of forward chaining evaluation of Xcerpt programs.
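    The fixpoint construction can be pictured generically as iterating an immediate-consequence step until no new facts are derived. The following Haskell sketch uses a placeholder fact type and rule; nothing here is Xcerpt-specific.

        import qualified Data.Set as S

        -- Least fixpoint of a monotone step over finite fact sets.
        lfp :: Ord a => (S.Set a -> S.Set a) -> S.Set a
        lfp step = go S.empty
          where
            go facts
              | next == facts = facts
              | otherwise     = go next
              where next = S.union facts (step facts)

        -- Example: forward chaining edges plus a transitivity rule.
        closure :: S.Set (Int, Int)
        closure = lfp step
          where
            edges = S.fromList [(1, 2), (2, 3)]
            step fs =
              let known = S.union edges fs
              in S.union edges (S.fromList
                   [ (a, c) | (a, b)  <- S.toList known
                            , (b', c) <- S.toList known
                            , b == b' ])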

    Clone Detection and Elimination for Haskell

    Duplicated code is a well-known problem in software maintenance and refactoring. Code clones tend to increase program size, and several studies have shown that duplicated code makes maintenance and code understanding more complex and time-consuming. This paper presents a new technique for the detection and removal of duplicated Haskell code. The system is implemented within the refactoring framework of the Haskell Refactorer (HaRe) and uses an Abstract Syntax Tree (AST) based approach. Detection of duplicate code is automatic, while elimination is semi-automatic, with the user managing the clone removal. After presenting the system, an example is given to show how it works in practice.
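    One standard AST-based way to find candidate clones, sketched here in Haskell, is to group structurally equal subtrees above a size threshold. This illustrates the general technique only, not HaRe's actual implementation.

        import qualified Data.Map as M

        data Ast = Node String [Ast] deriving (Eq, Ord, Show)

        size :: Ast -> Int
        size (Node _ cs) = 1 + sum (map size cs)

        subtrees :: Ast -> [Ast]
        subtrees t@(Node _ cs) = t : concatMap subtrees cs

        -- Groups of structurally identical subtrees with at least
        -- 'threshold' nodes; each group of size > 1 is a clone class.
        clones :: Int -> Ast -> [[Ast]]
        clones threshold t =
          [ grp | grp <- M.elems groups, length grp > 1 ]
          where
            groups = M.fromListWith (++)
              [ (show s, [s]) | s <- subtrees t, size s >= threshold ]

    Real detectors additionally abstract over variable names (parameterized matching) so that renamed copies land in the same clone class.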

    The XML Query Language Xcerpt: Design Principles, Examples, and Semantics

    Most query and transformation languages developed since the mid-1990s for XML and semistructured data—e.g. XQuery [1], the precursors of XQuery [2], and XSLT [3]—build upon a path-oriented node selection: a node in a data item is specified in terms of a root-to-node path in the manner of the file selection languages of operating systems. Constructs inspired by the regular expression constructs *, +, and ?, as well as “wildcards”, give rise to flexible node retrieval from incompletely specified data items. This paper gives a further introduction to Xcerpt, a query and transformation language that further develops an alternative approach to querying XML and semistructured data first introduced with the language UnQL [4]. A metaphor for this approach views queries as patterns and answers as data items matching the queries. Formally, an answer to a query is defined as a simulation [5] of an instance of the query in a data item.
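    The simulation metaphor can be sketched in a few lines of Haskell: a query pattern matches a data term if the labels agree and every subpattern is simulated by some child of the data term, regardless of order and of extra children. This illustrative check captures only the incompleteness aspect; Xcerpt's actual query terms are much richer.

        data Tree = Tree String [Tree]

        -- Does the pattern simulate into the data term?
        simulates :: Tree -> Tree -> Bool
        simulates (Tree pl pcs) (Tree dl dcs) =
          pl == dl && all (\p -> any (simulates p) dcs) pcs

    For example, the pattern Tree "book" [Tree "title" []] matches the data item Tree "book" [Tree "author" [], Tree "title" []] even though the author element is not mentioned in the query.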

    Transforming specifications of observable behaviour into programs

    A methodology for deriving programs from specifications of observable behaviour is described. The class of processes to which this methodology is applicable includes those whose state changes are fully definable by labelled transition systems, for example communicating processes without internal state changes. A logic program representation of such labelled transition systems is proposed, interpreters based on path-searching techniques are defined, and the use of partial evaluation techniques to derive the executable programs is described.
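    The representation at the heart of the methodology is easy to picture: a labelled transition system becomes a set of (state, label, state) facts, and an interpreter answers questions by path search. A tiny Haskell sketch with illustrative names:

        type State = String
        type Label = String

        -- An LTS as transition facts (here: a one-slot drinks machine).
        trans :: [(State, Label, State)]
        trans = [("idle", "coin", "ready"), ("ready", "tea", "idle")]

        -- Path search: can the system perform this observable trace?
        accepts :: State -> [Label] -> Bool
        accepts _ []       = True
        accepts s (l : ls) =
          or [ accepts s' ls | (s0, l0, s') <- trans, s0 == s, l0 == l ]

    Here accepts "idle" ["coin", "tea"] evaluates to True.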

    Non-Reformist Reform for Haskell Modularity

    Module systems like Haskell's permit only a weak form of modularity, in which module implementations depend directly on other implementations and must be processed in that dependency order. Module systems like ML's, on the other hand, permit a strong form of modularity, in which explicit interfaces express assumptions about dependencies and every module can be typechecked and reasoned about independently. In this thesis, I present Backpack, a new language for building separately-typecheckable packages on top of a weak module system like Haskell's. The design of Backpack is the first to bring the rich world of type systems to the practical world of packages via mixin modules. It is inspired by the MixML module calculus of Rossberg and Dreyer, but, by choosing practicality over expressivity, Backpack both simplifies that semantics and supports a flexible notion of applicative instantiation. Moreover, this design is motivated less by foundational concerns and more by the practical concern of integration into Haskell. The semantics of Backpack is defined by elaboration into sets of Haskell modules and binary interface files, which shows how Backpack retains interoperability with Haskell while retrofitting it with interfaces. In my formalization of Backpack, I present a novel type system for Haskell modules and verify a crucial soundness theorem to validate Backpack's semantics. The result is a new approach to writing modular software at the scale of packages. (Max Planck Institute for Software Systems, MPI-SWS)
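    As a hedged sketch of the kind of interface Backpack retrofits onto Haskell (following the signature files of GHC's later implementation; the module and function names are invented for illustration): a signature declares types and operations a package needs without fixing an implementation, and other packages can instantiate it later.

        -- Str.hsig: a signature, i.e. an interface with no implementation.
        signature Str where

        data Str
        empty  :: Str
        append :: Str -> Str -> Str

        -- A client module typechecks against the signature alone:
        --
        --   module Concat where
        --   import Str
        --   concatAll :: [Str] -> Str
        --   concatAll = foldr append empty
        --
        -- Later, Str can be instantiated by any module matching the
        -- signature (e.g. one based on String, another on Text).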