
    Language integrated relational lenses

    Relational databases are ubiquitous. Such monolithic databases accumulate large amounts of data, yet applications typically only work on small portions of the data at a time. A subset of the database defined as a computation on the underlying tables is called a view. Querying views is helpful, but it is also desirable to update them and have these changes be applied to the underlying database. This view update problem has been the subject of much previous work, but support by database servers is limited and only rarely available. Lenses are a popular approach to bidirectional transformations, a generalization of the view update problem in databases to arbitrary data. However, perhaps surprisingly, lenses have seldom been used to implement updatable views in databases. Bohannon, Pierce and Vaughan propose an approach to updatable views called relational lenses; to the best of our knowledge, this proposal had not been implemented or evaluated prior to the work reported in this thesis.

    This thesis proposes programming language support for relational lenses. Language-integrated relational lenses support expressive and efficient view updates without relying on updatable view support from the database server. By integrating relational lenses into the programming language, application development becomes easier and less error-prone, avoiding the impedance mismatch of having two programming languages. Integrating relational lenses into the language poses additional challenges. As defined by Bohannon et al., relational lenses completely recompute the database, making them inefficient as the database scales. The other challenge is that some parts of the well-formedness conditions are too general for implementation: Bohannon et al. specify predicates using possibly infinite abstract sets and define the type-checking rules using relational algebra.

    Incremental relational lenses equip relational lenses with change-propagating semantics that map small changes to the view into (potentially) small changes to the source tables. We prove that our incremental semantics are functionally equivalent to the non-incremental semantics, and our experimental results show orders-of-magnitude improvements over the non-incremental approach. This thesis introduces a concrete predicate syntax, shows how the required checks are performed on these predicates, and shows that they satisfy the abstract predicate specifications. We discuss trade-offs between static predicates that are fully known at compile time and dynamic predicates that are only known during execution, and we introduce hybrid predicates taking inspiration from both approaches. This thesis also adapts the typing rules for relational lenses from sequential composition to a functional style of sub-expressions. We prove that a well-typed sequential lens can be derived from any well-typed functional relational lens expression. We use these additions to relational lenses as the foundation for two practical implementations: an extension of the Links functional language and a library written in Haskell. The second implementation demonstrates how type-level computation can be used to implement relational lenses without changes to the compiler. Together, the two implementations attest to the possibility of turning relational lenses into a practical language feature.
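    To make the lens idea concrete, here is a minimal sketch of the interface in Haskell (names assumed for illustration only, not the thesis's library): 'get' computes the view from the source, and 'put' translates an updated view back into an updated source.

        -- A lens between a source type s and a view type v.
        data Lens s v = Lens
          { get :: s -> v        -- compute the view from the source
          , put :: s -> v -> s   -- merge an updated view back into the source
          }

        -- Well-behaved lenses satisfy the usual round-tripping laws:
        --   get l (put l s v) == v   (PutGet)
        --   put l s (get l s) == s   (GetPut)

        -- Example: a "select" lens restricting a table to rows satisfying p.
        -- Treating tables as relations (order-insensitive), the laws hold
        -- provided every updated row satisfies p; this is exactly the kind
        -- of predicate condition the thesis checks syntactically.
        select :: (row -> Bool) -> Lens [row] [row]
        select p = Lens
          { get = filter p
          , put = \s v -> filter (not . p) s ++ v
          }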

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Pushdown Normal-Form Bisimulation: A Nominal Context-Free Approach to Program Equivalence

    We propose Pushdown Normal Form (PDNF) Bisimulation to verify contextual equivalence in higher-order functional programming languages with local state. Like previous work on Normal Form (NF) bisimulation, PDNF Bisimulation is sound and complete with respect to contextual equivalence. Unlike traditional NF bisimulation, however, PDNF Bisimulation is also decidable for a class of program terms that reach bounded configurations but can potentially have unbounded call stacks and input an unbounded number of unknown functions from their context. Our approach relies on the principle that, in model checking for reachability, pushdown systems can be simulated by finite-state automata designed to accept their initial/final stack content. We embody this in a stackless Labelled Transition System (LTS), together with an on-the-fly saturation procedure for call stacks, upon which bisimulation is defined. To enhance the effectiveness of our bisimulation, we develop up-to techniques and confirm their soundness for PDNF Bisimulation. We develop a prototype implementation of our technique, which is able to verify equivalence in examples from practice and the literature that were out of reach for previous work.
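    As an illustration of the kind of equivalence at stake, here is a classic local-state example sketched in Haskell (not taken from the paper): two counters whose internal states always differ, yet no context can tell them apart.

        import Data.IORef

        -- Both create a stateful "step" action returning 1, 2, 3, ...
        -- One counts up; the other counts down and negates on read.
        mkCounterUp :: IO (IO Int)
        mkCounterUp = do
          r <- newIORef 0
          pure (modifyIORef' r (+ 1) >> readIORef r)

        mkCounterDown :: IO (IO Int)
        mkCounterDown = do
          r <- newIORef 0
          pure (modifyIORef' r (subtract 1) >> (negate <$> readIORef r))

    Verifying such equivalences is hard precisely because the counters may be invoked any number of times, by contexts that can supply unboundedly many unknown functions.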

    Staged Compilation with Two-Level Type Theory

    The aim of staged compilation is to enable metaprogramming in such a way that we have guarantees about the well-formedness of code output, and we can also mix object-level and meta-level code in a concise and convenient manner. In this work, we observe that two-level type theory (2LTT), a system originally devised for the purpose of developing synthetic homotopy theory, also serves as a system for staged compilation with dependent types. 2LTT has numerous good properties for this use case: it has a concise specification and a well-behaved model theory, and it supports a wide range of language features both at the object level and the meta level. First, we give an overview of 2LTT's features and applications in staging. Then, we present a staging algorithm and prove its correctness. Our algorithm is "staging-by-evaluation", analogous to the technique of normalization-by-evaluation, in that staging is given by the evaluation of 2LTT syntax in a semantic domain. The staging algorithm, together with its correctness proof, constitutes a proof of strong conservativity of 2LTT over the object theory. To our knowledge, this is the first description of staged compilation which supports full dependent types and unrestricted staging for types.
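    The flavor of staging can be seen in the classic power-function example, sketched here in typed Template Haskell (my own illustration; the paper's system is 2LTT, which is dependently typed): the meta-level recursion runs at compile time, leaving only object-level multiplications in the generated code.

        {-# LANGUAGE TemplateHaskell #-}
        module Power where

        -- Requires template-haskell >= 2.17 for the 'Code' type.
        import Language.Haskell.TH.Syntax (Code, Q)

        -- Meta-level function: the exponent is known at compile time
        -- (assumed n >= 0), while the base is object-level code.
        power :: Int -> Code Q Int -> Code Q Int
        power 0 _ = [|| 1 ||]
        power n x = [|| $$x * $$(power (n - 1) x) ||]

        -- In a client module (GHC's stage restriction requires the splice
        -- to live in a different module than 'power'):
        --
        --   power5 :: Int -> Int
        --   power5 x = $$(power 5 [|| x ||])
        --
        -- which elaborates to x * (x * (x * (x * (x * 1)))).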

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers from the affiliated competition SV-COMP and 1 competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.

    Formal Foundations for Information-Preserving Model Synchronization Processes Based on Triple Graph Grammars

    Restoring consistency between different information-sharing artifacts after one of them has been changed is an important problem that arises in several areas of computer science. In this thesis, we provide a solution to the basic model synchronization problem, in which a pair of such artifacts (models), one of which has been changed, is given and consistency shall be restored. Triple graph grammars (TGGs) are an established and suitable formalism to address this and related problems. Being based on the algebraic theory of graph transformation and (double-)pushout rewriting, they are especially suited to developing solutions whose properties can be formally proven. Despite being established, many TGG-based solutions do not satisfactorily deal with the problem of information loss: when one model is changed, the synchronization process may lose information that is only present in the second model, because it resorts to restoring consistency by re-translating (large parts of) the updated model. We introduce a TGG-based approach that supports advanced features of TGGs (attributes and negative constraints), is comprehensively formalized, implemented, and incremental in the sense that it drastically reduces the amount of information loss compared to former approaches. Up to now, no TGG-based approach with these characteristics has been available. The central contribution of this thesis is to formally develop that approach and to prove its essential properties, namely correctness, completeness, and termination. The crucial new idea in our approach is the use of repair rules, which are special rules that allow one to directly propagate changes from one model to the other instead of resorting to re-translation. To be able to construct and apply these repair rules, we contribute more fundamentally to the theory of algebraic graph transformation. First, we develop a new kind of sequential rule composition: whereas the conventional composition of rules leads to rules that delete and re-create elements, we can compute rules that preserve such elements instead. Furthermore, the synchronization process we develop technically takes place in the category of partial triple graphs rather than that of ordinary triple graphs. Hence, we have to ensure that the elaborate theory of double-pushout rewriting still applies. Therefore, we develop a (category-theoretic) construction of new categories from given ones and show that (i) this construction preserves the axioms that are necessary to develop the theory of double-pushout rewriting and (ii) partial triple graphs can be constructed as such a category. Together, these two fundamental contributions enable us to develop our solution to the basic model synchronization problem in a fully formal manner and to prove its central properties.
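    The shape of the approach can be sketched as follows (my own simplified rendering in Haskell, not the thesis's categorical formalism): a triple graph links a source and a target model through explicit correspondences, and a repair rule propagates a change directly rather than re-translating the whole model.

        -- A triple graph: source model, correspondence structure, target model.
        data Triple s c t = Triple { source :: s, corr :: c, target :: t }

        -- A change to the source model, e.g. a single edit operation.
        type Change s = s -> s

        -- A repair rule uses the correspondences to locate the affected
        -- target elements and updates only those, preserving information
        -- in the target that the source does not determine.
        type RepairRule s c t = Change s -> Triple s c t -> Triple s c t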

    Language-Based Techniques for Secure Programming

    Secure Computation (SC) encompasses many different cryptographic techniques for computing over encrypted data. In particular, Secure Multiparty Computation (MPC) enables multiple parties to jointly compute a function over their secret inputs. MPC languages offer programmers a familiar environment in which to express their programs, but fall short when confronted with problems that require flexible coordination. More broadly, SC languages do not protect non-expert programmers from violating obliviousness or expected bounds on information leakage. We aim to show that secure programming can be made safer through language-based techniques for expressive, coordinated MPC; probabilistically oblivious execution; and quantitative analysis of information flow. We begin by presenting Symphony, an expressive MPC language that provides flexible coordination of many parties and has been used to implement the secure shuffle of Laur, Willemson, and Zhang. Next, we present λObliv, a core language guaranteeing that well-typed programs are probabilistically oblivious, which has been used to type check tree-based, nonrecursive ORAM (NORAM). Finally, we present a novel application of dynamic analysis techniques to an existing system for enforcing bounds on information leakage, providing a better balance of precision and performance.
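    A small Haskell sketch conveys what obliviousness demands (my own illustration, not Symphony's or λObliv's API): code must not branch on secret data, so selection by a secret bit is done arithmetically, keeping the instruction trace independent of the secret.

        -- Select a if c == 1, b if c == 0, without branching on c.
        -- The same instructions execute whichever value is chosen, so
        -- timing and memory access patterns reveal nothing about c.
        obliviousSelect :: Int -> Int -> Int -> Int
        obliviousSelect c a b = c * a + (1 - c) * b

    A direct 'if c == 1 then a else b' would take different control-flow paths and could leak the secret bit; λObliv's type system rules out such leaks at the language level.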

    On Pitts' Relational Properties of Domains

    Andrew Pitts' framework of relational properties of domains is a powerful method for defining predicates or relations on domains, with applications ranging from reasoning principles for program equivalence to proofs of adequacy connecting denotational and operational semantics. Its main appeal is handling recursive definitions that are not obviously well-founded: as long as the corresponding domain is also defined recursively, and its recursion pattern lines up appropriately with the definition of the relations, the framework can guarantee their existence. Pitts' original development used the Knaster-Tarski fixed-point theorem as a key ingredient. In these notes, I show how his construction can be seen as an instance of other key fixed-point theorems: the inverse limit construction, the Banach fixed-point theorem, and the Kleene fixed-point theorem. This connection underscores how Pitts' construction is intimately tied to the methods for constructing the base recursive domains themselves, and also to techniques based on guarded recursion, or step-indexing, that have become popular in the last two decades.
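    For the simplest of these connections, here is the finite version of the Kleene construction in Haskell (a minimal sketch of my own; Pitts' setting is domains, where the ascending chain may be infinite): the least fixed point of a monotone function is the limit of the chain starting from the bottom element.

        import Data.Set (Set)
        import qualified Data.Set as Set

        -- Iterate f from a bottom element until the chain stabilizes.
        lfp :: Eq a => (a -> a) -> a -> a
        lfp f bottom = go bottom
          where
            go x = let x' = f x in if x' == x then x else go x'

        -- Example: nodes reachable from 0 along edges n -> (n + 1) mod 5,
        -- computed as the least fixed point of one-step expansion.
        reachable :: Set Int
        reachable = lfp step (Set.singleton 0)
          where
            step s = Set.union s (Set.map (\n -> (n + 1) `mod` 5) s)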

    Ernst Denert Award for Software Engineering 2020

    This open access book provides an overview of the dissertations of the eleven nominees for the Ernst Denert Award for Software Engineering in 2020. The prize, kindly sponsored by the Gerlind & Ernst Denert Stiftung, is awarded for excellent work within the discipline of Software Engineering, which includes methods, tools and procedures for better and more efficient development of high-quality software. An essential requirement for the nominated work is its applicability and usability in industrial practice. The book contains eleven papers that describe the works by Jonathan Brachthäuser (EPFL Lausanne) entitled What You See Is What You Get: Practical Effect Handlers in Capability-Passing Style, Mojdeh Golagha's (Fortiss, Munich) thesis How to Effectively Reduce Failure Analysis Time?, Nikolay Harutyunyan's (FAU Erlangen-Nürnberg) work on Open Source Software Governance, Dominic Henze's (TU Munich) research about Dynamically Scalable Fog Architectures, Anne Hess's (Fraunhofer IESE, Kaiserslautern) work on Crossing Disciplinary Borders to Improve Requirements Communication, Istvan Koren's (RWTH Aachen U) thesis DevOpsUse: A Community-Oriented Methodology for Societal Software Engineering, Yannic Noller's (NU Singapore) work on Hybrid Differential Software Testing, Dominic Steinhöfel's (TU Darmstadt) thesis entitled Ever Change a Running System: Structured Software Reengineering Using Automatically Proven-Correct Transformation Rules, Peter Wägemann's (FAU Erlangen-Nürnberg) work Static Worst-Case Analyses and Their Validation Techniques for Safety-Critical Systems, Michael von Wenckstern's (RWTH Aachen U) research on Improving the Model-Based Systems Engineering Process, and Franz Zieris's (FU Berlin) thesis on Understanding How Pair Programming Actually Works in Industry: Mechanisms, Patterns, and Dynamics, which actually won the award. The chapters describe key findings of the respective works, show their relevance and applicability to practice and industrial software engineering projects, and provide additional information and findings that have only been discovered afterwards, e.g. when applying the results in industry. This way, the book is not only interesting to other researchers, but also to industrial software professionals who would like to learn about the application of state-of-the-art methods in their daily work.