
    Semantic Bidirectionalization Revisited

    A bidirectional transformation is a pair of mappings between source and view data objects, one in each direction. When the view is modified, the source is updated accordingly, subject to certain laws. Over the years, much effort has been made to offer better language support for programming such transformations, essentially allowing the programmer to construct one mapping of the pair and have the other generated automatically. As an alternative to creating specialized new languages, one can try to analyse and transform programs written in general-purpose languages, and "bidirectionalize" them. Among others, a technique termed semantic bidirectionalization stands out in terms of user-friendliness. The unidirectional program can be written using arbitrary language constructs, as long as the function is polymorphic and the language constructs respect parametricity. The free theorem that follows from the polymorphic type of the program allows a kind of forensic examination of the transformation, determining its effect without examining its implementation. This is convenient, in the sense that the programmer is not restricted to a particular syntax; but it does require the transformation to be polymorphic. In this paper, we revisit the idea of semantic bidirectionalization and reveal the elegant principles behind the current state-of-the-art techniques. Guided by these findings, we derive much simpler implementations that scale easily.
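    The core idea, deriving a backward mapping from a polymorphic forward function by observing only how it rearranges positions, can be sketched in a few lines. The following Scala transcription is a simplified illustration in the spirit of Voigtländer's bff, not the paper's implementation: it handles only lists, skips the consistency checks a full implementation performs on duplicated positions, and all names are ours.

```scala
// A simplified sketch of semantic bidirectionalization for lists.
// Assumption: the updated view has the shape that `get` produces on the source.

// The polymorphic "get" function (forall a. [a] -> [a]), encoded as a trait.
trait PolyListFn {
  def apply[A](xs: List[A]): List[A]
}

// bff: derive a "put" from the polymorphic "get" alone.
def bff[C](get: PolyListFn)(source: List[C], updatedView: List[C]): List[C] = {
  // Run `get` on the source's positions to learn, without inspecting get's
  // code, which source positions the view exposes and in what order.
  val positions     = source.indices.toList
  val viewPositions = get(positions)
  // Associate each exposed position with its updated value...
  val updates: Map[Int, C] = viewPositions.zip(updatedView).toMap
  // ...and rebuild the source, leaving unexposed positions untouched.
  positions.map(i => updates.getOrElse(i, source(i)))
}

// Example: get = take the first two elements.
val firstTwo = new PolyListFn { def apply[A](xs: List[A]): List[A] = xs.take(2) }
val updated  = bff(firstTwo)(List("a", "b", "c"), List("x", "y"))
// updated == List("x", "y", "c")
```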

    Bidirectionalization for Free with Runtime Recording: Or, a Light-Weight Approach to the View-Update Problem

    A bidirectional transformation is a pair of mappings between source and view data objects, one in each direction. When the view is modified, the source is updated accordingly, subject to certain laws. Over the years, much effort has been made to offer better language support for programming such transformations. In particular, a technique known as bidirectionalization can analyze and transform unidirectional programs written in general-purpose languages, and "bidirectionalize" them. Among others, the technique of semantic bidirectionalization proposed by Voigtländer stands out in terms of user-friendliness. The unidirectional program can be written using arbitrary language constructs, as long as the function is polymorphic and the language constructs respect parametricity. The free theorems that follow from the polymorphic type of the program allow a kind of forensic examination of the transformation, determining its effect without examining its implementation. This is convenient, in the sense that the programmer is not restricted to a particular syntax; but it does require the transformation to be polymorphic. In this paper, we lift this polymorphism requirement to improve the applicability of semantic bidirectionalization. Concretely, we provide a type class PackM γ α μ, which intuitively reads "a concrete datatype γ is abstracted to a type α, and the 'observations' made by a transformation on values of type γ are recorded by a monad μ". With PackM, we turn monomorphic transformations into polymorphic ones that are ready to be bidirectionalized. We demonstrate our technique with a case study of standard XML queries, which were previously considered beyond the reach of semantic bidirectionalization because of their monomorphic nature.
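    To make the description of PackM concrete, here is one way such a type class might be rendered in Scala. This is our own illustrative reading of the abstract, not the paper's Haskell interface; the method names and the restriction to list-shaped observations are assumptions.

```scala
// An illustrative PackM-style type class: a concrete type C is abstracted to A,
// and observations made on concrete values are recorded by a monad M.
trait PackM[C, A, M[_]] {
  // Abstract a concrete value so the transformation can only treat it opaquely.
  def pack(c: C): A
  // Perform an observation on the concrete values behind some abstract ones,
  // recording it in M so it can be re-checked when the view update is pushed
  // back to the source.
  def observe[R](obs: List[C] => R)(args: List[A]): M[R]
}
```

    A monomorphic query over, say, XML values of type C is rewritten against an interface of this kind so that its type quantifies over A and M, restoring the polymorphism that semantic bidirectionalization needs.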

    Linguistic Reflection in Java

    Reflective systems allow their own structures to be altered from within. Here we are concerned with a style of reflection, called linguistic reflection, which is the ability of a running program to generate new program fragments and to integrate these into its own execution. In particular we describe how this kind of reflection may be provided in the compiler-based, strongly typed object-oriented programming language Java. The advantages of the programming technique include attaining high levels of genericity and accommodating system evolution. These advantages are illustrated by an example taken from persistent programming which shows how linguistic reflection allows functionality (program code) to be generated on demand (Just-In-Time) from a generic specification and integrated into the evolving running program. The technique is evaluated against alternative implementation approaches with respect to efficiency, safety and ease of use. Comment: 25 pages. Source code for examples at http://www-ppg.dcs.st-and.ac.uk/Java/ReflectionExample/ Dynamic compilation package at http://www-ppg.dcs.st-and.ac.uk/Java/DynamicCompilation
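    The paper targets Java; the following Scala sketch (using the same standard JVM APIs a Java program would use) illustrates the basic linguistic-reflection cycle of generating source text, compiling it at run time, and integrating the result into the running program. The generated class and its contents are made-up placeholders, and this is not the paper's dynamic compilation package.

```scala
import java.io.File
import java.net.URLClassLoader
import java.nio.file.Files
import javax.tools.ToolProvider

object LinguisticReflectionSketch {
  def main(args: Array[String]): Unit = {
    // 1. Generate a program fragment as source text (a placeholder example).
    val source =
      """public class Generated {
        |  public static String greet() { return "generated on demand"; }
        |}""".stripMargin
    val dir  = Files.createTempDirectory("gensrc").toFile
    val file = new File(dir, "Generated.java")
    Files.write(file.toPath, source.getBytes("UTF-8"))

    // 2. Compile the fragment with the platform's Java compiler (requires a JDK).
    val compiler = ToolProvider.getSystemJavaCompiler
    compiler.run(null, null, null, file.getPath)

    // 3. Load the freshly compiled class and call into it, integrating the
    //    generated code into the program's own execution.
    val loader = new URLClassLoader(Array(dir.toURI.toURL))
    val cls    = loader.loadClass("Generated")
    println(cls.getMethod("greet").invoke(null)) // prints "generated on demand"
  }
}
```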

    FORMAL SPECIFICATIONS AND COMMAND MODELING IN SOFTWARE SYSTEMS WITH A COMPLEX COMMAND STRUCTURE

    Commands are an important part of large-scale industrial software specifications, especially where the specification is separated from its implementation, as in open software standards. Commands can be complex because of large numbers of parameters, dependencies among parameters, subtle side effects, and lack of abstraction. We present a formal approach for command modeling and apply it to IBM's Distributed Data Management Architecture (DDM), a complex, large-scale specification of data access on remote and heterogeneous IBM systems. Our approach consists of three parts: a declarative, executable command specification language; an incremental specification technique; and automated reasoning tools. The command specification language provides a formal interpretation of the structural (input-output) and behavioral (state constraint/change) properties of commands. To manage the details of complex commands with numerous interdependent arguments, a novel incremental specification technique and several tools for incremental definition and browsing are presented. Two forms of automated reasoning are also demonstrated: type checking to ensure well-typed expressions, and target-system tracing to simulate command execution. Lessons learned from our experience with the DDM are also discussed.

    Automatic Verification of Transactions on an Object-Oriented Database

    In the context of the object-oriented data model, a compile-time approach is given that provides a significant reduction in the amount of run-time transaction overhead due to integrity constraint checking. The Isabelle higher-order logic theorem prover is used to automatically prove which constraints might, or might not, be violated by a given transaction, in a manner analogous to the one used by Sheard and Stemple (1989) for the relational data model. A prototype transaction verification tool has been implemented, which automates the semantic mappings and generates proof goals for Isabelle. Test results are discussed to illustrate the effectiveness of our approach.

    Modern techniques for constraint solving: the CASPER experience

    Dissertation presented to obtain the degree of Doctor in Informatics Engineering from the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa. Constraint programming is a well-known paradigm for addressing combinatorial problems which has enjoyed considerable success in solving many relevant industrial and academic problems. At the heart of constraint programming lies the constraint solver, a computer program which attempts to find a solution to the problem, i.e. an assignment of all the variables in the problem such that all the constraints are satisfied. This dissertation describes a set of techniques to be used in the implementation of a constraint solver. These techniques aim at making a constraint solver more extensible and efficient, two properties which are hard to integrate in general, and in particular within a constraint solver. Specifically, this dissertation addresses two major problems: generic incremental propagation and propagation of arbitrary decomposable constraints. For both problems we present a set of techniques which are novel, correct, and directly concerned with extensibility and efficiency. All the material in this dissertation emerged from our work in designing and implementing a generic constraint solver. The CASPER (Constraint Solving Platform for Engineering and Research) solver not only acts as a proof of concept for the presented techniques, but also served as the common test platform for the many theoretical models discussed. Besides the work related to the design and implementation of a constraint solver, this dissertation also presents the first successful application of the resulting platform to an open research problem, namely finding good heuristics for efficiently directing search towards a solution.

    Reify Your Collection Queries for Modularity and Speed!

    Modularity and efficiency are often contradictory requirements, such that programmers have to trade one for the other. We analyze this dilemma in the context of programs operating on collections. Performance-critical code using collections often needs to be hand-optimized, leading to non-modular, brittle, and redundant code. In principle, this dilemma could be avoided by automatic collection-specific optimizations, such as fusion of collection traversals, usage of indexing, or reordering of filters. Unfortunately, it is not obvious how to encode such optimizations in terms of ordinary collection APIs, because the program operating on the collections is not reified and hence cannot be analyzed. We propose SQuOpt, the Scala Query Optimizer, a deep embedding of the Scala collections API that allows such analyses and optimizations to be defined and executed within Scala, without relying on external tools or compiler extensions. SQuOpt provides the same "look and feel" (syntax and static typing guarantees) as the standard collections API. We evaluate SQuOpt by re-implementing several code analyses of the FindBugs tool using SQuOpt, showing average speedups of 12x with a maximum of 12800x, and hence demonstrate that SQuOpt can reconcile modularity and efficiency in real-world applications. Comment: 20 pages
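    The key idea, a reified query tree that an optimizer can inspect and rewrite before the query runs, can be sketched compactly. The following is a toy deep embedding in the spirit of SQuOpt, not its actual API; all type and function names are illustrative.

```scala
// A toy deep embedding of collection queries: queries are data (a tree),
// so an optimizer can rewrite them before execution.
sealed trait Query[T]
case class Source[T](data: List[T])                  extends Query[T]
case class Filter[T](src: Query[T], p: T => Boolean) extends Query[T]

// Interpret a query tree against ordinary Scala collections.
def run[T](q: Query[T]): List[T] = q match {
  case Source(data)   => data
  case Filter(src, p) => run(src).filter(p)
}

// Example optimization: fuse adjacent filters into a single traversal.
def fuseFilters[T](q: Query[T]): Query[T] = q match {
  case Filter(inner, p) =>
    fuseFilters(inner) match {
      case Filter(src, q2) => Filter(src, (x: T) => q2(x) && p(x))
      case other           => Filter(other, p)
    }
  case other => other
}

// The query is written once, modularly, and optimized automatically.
val query     = Filter(Filter(Source(List(1, 2, 3, 4, 5, 6)), (x: Int) => x % 2 == 0), (x: Int) => x > 2)
val optimized = fuseFilters(query) // a single Filter over the Source
val result    = run(optimized)     // List(4, 6)
```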