519 research outputs found

    A Unifying Theory for Graph Transformation

    The field of graph transformation studies the rule-based transformation of graphs. An important branch is the algebraic graph transformation tradition, in which approaches are defined and studied using the language of category theory. Most algebraic graph transformation approaches (such as DPO, SPO, SqPO, and AGREE) are opinionated about the local contexts that are allowed around matches for rules, and about how replacement in context should work exactly. The approaches also differ considerably in their underlying formal theories and their general expressiveness (e.g., not all frameworks allow duplication). This dissertation proposes an expressive algebraic graph transformation approach, called PBPO+, which is an adaptation of PBPO by Corradini et al. The central contribution is a proof that PBPO+ subsumes (under mild restrictions) DPO, SqPO, AGREE, and PBPO in the important categorical setting of quasitoposes. This result allows for a more unified study of graph transformation metatheory, methods, and tools. A concrete example of this is found in the second major contribution of this dissertation: a graph transformation termination method for PBPO+, based on decreasing interpretations, and defined for general categories. By applying the proposed encodings into PBPO+, this method can also be applied to DPO, SqPO, AGREE, and PBPO.
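    The style of rewriting the abstract describes can be sketched concretely. The following is a hypothetical plain-Python illustration of a rule-based graph transformation step in the DPO spirit: a rule L &lt;- K -> R deletes the matched image of L \ K and glues in fresh copies of R \ K. Graphs are (nodes, edges) pairs with directed edges as tuples; all names are illustrative, not the dissertation's formalism.

```python
# Sketch of one rewriting step. A rule is given by three graphs L, K, R,
# each a (nodes, edges) pair, and a match maps L's nodes into the host graph.

def apply_rule(graph, match, L, K, R):
    nodes, edges = set(graph[0]), set(graph[1])
    # Delete the images of edges and nodes in L \ K.
    for (a, b) in L[1] - K[1]:
        edges.discard((match[a], match[b]))
    for n in L[0] - K[0]:
        nodes.discard(match[n])
        edges = {e for e in edges if match[n] not in e}
    # Add fresh copies of the nodes in R \ K, then the new edges.
    fresh = {n: f"new_{n}" for n in R[0] - K[0]}
    nodes |= set(fresh.values())
    def emb(n):
        return match.get(n, fresh.get(n, n))
    for (a, b) in R[1] - K[1]:
        edges.add((emb(a), emb(b)))
    return nodes, edges

# Rule: subdivide an edge x -> y into a path x -> z -> y.
L = ({"x", "y"}, {("x", "y")})
K = ({"x", "y"}, set())
R = ({"x", "y", "z"}, {("x", "z"), ("z", "y")})
G = ({"a", "b"}, {("a", "b")})
H = apply_rule(G, {"x": "a", "y": "b"}, L, K, R)
print(H[0], H[1])   # nodes {'a', 'b', 'new_z'} and the two edges of the path
```

    Categorical approaches such as PBPO+ differ precisely in how such deletion and gluing are controlled around the match, which this set-based sketch glosses over.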

    Semitopology: a new topological model of heterogeneous consensus

    A distributed system is permissionless when participants can join and leave the network without permission from a central authority. Many modern distributed systems are naturally permissionless, in the sense that a central permissioning authority would defeat their design purpose: this includes blockchains, filesharing protocols, some voting systems, and more. By their permissionless nature, such systems are heterogeneous: participants may only have a partial view of the system, and they may also have different goals and beliefs. Thus, the traditional notion of consensus -- i.e. system-wide agreement -- may not be adequate, and we may need to generalise it. This is a challenge: how should we understand what heterogeneous consensus is; what mathematical framework might this require; and how can we use this to build understanding and mathematical models of robust, effective, and secure permissionless systems in practice? We analyse heterogeneous consensus using semitopology as a framework. This is like topology, but without the restriction that intersections of opens be open. Semitopologies have a rich theory which is related to topology, but with its own distinct character and mathematics. We introduce novel well-behavedness conditions, including an anti-Hausdorff property and a new notion of 'topen set', and we show how these structures relate to consensus. We give a restriction of semitopologies to witness semitopologies, which are an algorithmically tractable subclass corresponding to Horn clause theories, having particularly good mathematical properties. We introduce and study several other basic notions that are specific and novel to semitopologies, and study how known quantities in topology, such as dense subsets and closures, display interesting and useful new behaviour in this new semitopological context.
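    The defining relaxation, as stated in the abstract, is that intersections of opens need not be open. A minimal finite sketch, assuming the open sets are still closed under unions and include the empty set and the whole carrier (an assumption of this sketch, not a claim about the dissertation's exact axioms):

```python
from itertools import combinations

# Check the semitopology axioms used in this sketch on a finite point set:
# the empty set and the whole carrier are open, and unions of opens are open.
# Closure under intersection is deliberately NOT required.

def is_semitopology(points, opens):
    opens = {frozenset(o) for o in opens}
    if frozenset() not in opens or frozenset(points) not in opens:
        return False
    for r in range(2, len(opens) + 1):       # finite carrier: finite unions suffice
        for combo in combinations(opens, r):
            if frozenset().union(*combo) not in opens:
                return False
    return True

pts = {1, 2, 3}
opens = [set(), {1, 2}, {2, 3}, {1, 2, 3}]
print(is_semitopology(pts, opens))   # True, although {1,2} ∩ {2,3} = {2} is not open
```

    The example family would fail the usual topology axioms, since the intersection {2} of two opens is not open; as a semitopology it is fine.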

    Improving Object-Oriented Programming by Integrating Language Features to Support Immutability

    Nowadays developers consider Object-Oriented Programming (OOP) the de facto general programming paradigm. While successful, OOP is not without problems. In 1994, Gamma et al. published a book with a set of 23 design patterns addressing recurring problems found in OOP software. These patterns are well-known in the industry and are taught in universities as part of software engineering curricula. Despite their usefulness in solving recurring problems, these design patterns bring a certain complexity in their implementation. That complexity is influenced by the features available in the implementation language. In this thesis, we want to decrease this complexity by focusing on the problems that design patterns attempt to solve and the language features that can be used to solve them. Thus, we aim to investigate the impact of specific language features on OOP and contribute guidelines to improve OOP language design. We first perform a mapping study to catalogue the language features that have been proposed in the literature to improve design pattern implementations. From those features, we focus on investigating the impact of immutability-related features on OOP. We then perform an exploratory study measuring the impact of introducing immutability in OOP software with the objective of establishing the advantages and drawbacks of using immutability in the context of OOP. Results indicate that immutability may produce more granular and easier-to-understand programs. We also perform an experiment to measure the impact of new language features added into the C# language for better immutability support. Results show that these specific language features facilitate developers' tasks when aiming to implement immutability in OOP. We finally present a new design pattern aimed at solving a problem with method overriding in the context of immutable hierarchies of objects. We discuss the impact of language features on the implementations of this pattern by comparing these implementations in different programming languages, including Clojure, Java, and Kotlin. Finally, we implement these language features as a language extension to Common Lisp and discuss their usage.
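    The thesis studies language features for immutability in C# and other languages; as an analogous sketch (in Python, purely for illustration), frozen dataclasses show the two ingredients such features typically provide: rejecting field reassignment, and supporting non-destructive updates that return a fresh object.

```python
from dataclasses import dataclass, replace

# A frozen dataclass: fields cannot be reassigned after construction,
# and "modification" is expressed as constructing a new instance.

@dataclass(frozen=True)
class Point:
    x: int
    y: int

p = Point(1, 2)
moved = replace(p, x=10)      # non-destructive update: a fresh instance
print(moved)                  # Point(x=10, y=2)

try:
    p.x = 5                   # reassignment is rejected at runtime
except AttributeError as e:   # dataclasses raise FrozenInstanceError
    print("immutable:", type(e).__name__)
```

    Language-level support like this is exactly the kind of feature whose impact on pattern implementations the thesis measures; the Python spelling here is only a stand-in for the C# features studied.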

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume.

    Modular Collaborative Program Analysis

    With our world increasingly relying on computers, it is important to ensure the quality, correctness, security, and performance of software systems. Static analysis, which computes properties of computer programs without executing them, has been an important method to achieve this for decades. However, static analysis faces major challenges in increasingly complex programming languages and software systems, and increasing and sometimes conflicting demands for soundness, precision, and scalability. In order to cope with these challenges, it is necessary to build static analyses for complex problems from small, independent, yet collaborating modules that can be developed in isolation and combined in a plug-and-play manner. So far, no generic architecture to implement and combine a broad range of dissimilar static analyses exists. The goal of this thesis is thus to design such an architecture and implement it as a generic framework for developing modular, collaborative static analyses. We use several diverse case-study analyses from which we systematically derive requirements to guide the design of the framework. Based on this, we propose a blackboard-architecture style of collaboration between analyses that we implement in the OPAL framework. We also develop a formal model of our architecture's core concepts and show how it enables freely composing analyses while retaining their soundness guarantees. We showcase and evaluate our architecture using the case-study analyses, each of which shows how important and complex problems of static analysis can be addressed using a modular, collaborative implementation style. In particular, we show how a modular architecture for the construction of call graphs ensures consistent soundness of different algorithms. We show how modular analyses for different aspects of immutability mutually benefit each other. Finally, we show how the analysis of method purity can benefit from the use of other complex analyses in a collaborative manner and from exchanging different analysis implementations that exhibit different characteristics. Each of these case studies improves over the respective state of the art in terms of soundness, precision, and/or scalability and shows how our architecture enables experimenting with and fine-tuning trade-offs between these qualities.
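    OPAL itself is a Scala framework; the following hypothetical Python sketch only illustrates the blackboard style the abstract describes: analyses read and post partial results to a shared store, and collaborating analyses are re-run when a result they depend on changes. All keys and analysis names are invented for the example.

```python
# A minimal blackboard: facts maps keys to current (partial) results, and
# deps maps keys to the analyses that must be re-run when that key changes.

class Blackboard:
    def __init__(self):
        self.facts = {}
        self.deps = {}

    def get(self, key, default=None):
        return self.facts.get(key, default)

    def post(self, key, value):
        if self.facts.get(key) != value:
            self.facts[key] = value
            for analysis in self.deps.get(key, []):
                analysis(self)           # collaborating analysis reacts

    def on_update(self, key, analysis):
        self.deps.setdefault(key, []).append(analysis)

# Two toy analyses: purity of f depends on a separately computed call graph.
def purity_analysis(bb):
    callees = bb.get("callgraph:f", set())
    bb.post("purity:f", "pure" if not callees else "impure")

bb = Blackboard()
bb.on_update("callgraph:f", purity_analysis)
purity_analysis(bb)              # initial run: no call-graph info yet
bb.post("callgraph:f", {"g"})    # call-graph analysis refines its result
print(bb.get("purity:f"))        # impure
```

    The point of the style is that the purity analysis never calls the call-graph analysis directly; each module only interacts with the shared store, which is what makes analyses exchangeable.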

    A Term-based Approach for Generating Finite Automata from Interaction Diagrams

    Non-deterministic Finite Automata (NFA) represent regular languages concisely, increasing their appeal for applications such as word recognition. This paper proposes a new approach to generate NFA from an interaction language such as UML Sequence Diagrams or Message Sequence Charts. Via an operational semantics, we generate an NFA from the set of interactions reachable using the associated execution relation. In addition, by applying simplifications on reachable interactions to merge them, it is possible to obtain reduced NFA without relying on costly NFA reduction techniques. Experimental results regarding NFA generation and their application in trace analysis are also presented.
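    The core construction can be sketched generically: starting from an initial interaction term, explore everything reachable via an execution relation (term -action-> term'), and let each reachable term become an NFA state. The step function and terms below are invented stand-ins, not the paper's interaction language.

```python
from collections import deque

# Breadth-first exploration of the execution relation. step(term) yields
# (action, successor_term) pairs; reachable terms become NFA states.

def generate_nfa(initial, step, is_accepting):
    states, transitions = {initial}, set()
    queue = deque([initial])
    while queue:
        t = queue.popleft()
        for action, t2 in step(t):
            transitions.add((t, action, t2))
            if t2 not in states:
                states.add(t2)
                queue.append(t2)
    accepting = {s for s in states if is_accepting(s)}
    return states, transitions, accepting

# Toy interaction: a loop of message a followed by message b.
def step(term):
    return {"s0": [("a", "s1")], "s1": [("b", "s0")]}[term]

states, transitions, accepting = generate_nfa("s0", step, lambda t: t == "s0")
print(len(states), len(transitions))   # 2 states, 2 transitions
```

    The paper's merging of simplified reachable interactions would correspond to identifying terms before they are added to `states`, which is how the construction avoids a separate NFA-reduction pass.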

    Realisability of Global Models of Interaction (Extended Version)

    We consider global models of communicating agents specified as transition systems labelled by interactions in which multiple senders and receivers can participate. A realisation of such a model is a set of local transition systems—one per agent—which are executed concurrently using synchronous communication. Our core challenge is how to check whether a global model is realisable and, if it is, how to synthesise a realisation. We identify and compare two variants to realise global interaction models, both relying on bisimulation equivalence. Then we investigate, for both variants, realisability conditions to be checked on global models. We propose a synthesis method for the construction of realisations by grouping locally indistinguishable states. The paper is accompanied by a tool that implements realisability checks and synthesises realisations. This document extends a publication accepted at the International Colloquium on Theoretical Aspects of Computing 2023 (ICTAC 2023), including the proofs of all results, more examples, and a more detailed explanation of the companion prototype tool.
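    The synthesis idea of grouping locally indistinguishable states can be sketched as a projection: states connected by interactions an agent does not take part in are merged for that agent, and only the transitions it participates in are kept. This is a hedged illustration of the general idea, not the paper's exact construction.

```python
# Project a global labelled transition system onto one agent. Transitions are
# (src, participants, label, dst); a union-find groups states the agent
# cannot tell apart because only invisible interactions connect them.

def project(states, transitions, agent):
    parent = {s: s for s in states}
    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s
    # Merge endpoints of interactions invisible to this agent.
    for (src, participants, label, dst) in transitions:
        if agent not in participants:
            parent[find(src)] = find(dst)
    local = set()
    for (src, participants, label, dst) in transitions:
        if agent in participants:
            local.add((find(src), label, find(dst)))
    return {find(s) for s in states}, local

# Global model: a tells b (m1), then b tells c (m2); c only sees the second step.
states = {0, 1, 2}
trans = [(0, {"a", "b"}, "m1", 1), (1, {"b", "c"}, "m2", 2)]
print(project(states, trans, "c"))   # states 0 and 1 collapse for agent c
```

    Whether running such projections concurrently reproduces the global model up to bisimulation is exactly the realisability question the paper checks.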

    On problematic practice of using normalization in Self-modeling/Multivariate Curve Resolution (S/MCR)

    The paper briefly examines the widespread misuse of normalization in self-modeling/multivariate curve resolution (S/MCR) practice. The importance of the correct use of ODE solvers is elucidated with apt kinetic illustrations. The new terms 'external normalization' and 'internal normalization' are defined and interpreted. The problem of the reducibility of a matrix is touched upon. Improper generalizations and developments of normalization-based methods are cited as examples. The position of the extreme values of the signal contribution function is clarified. An executable notebook created with the MATLAB Live Editor provides algorithmic explanations and depictions.
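    For context on why normalization is delicate here: the bilinear MCR model factors the data as D = C Sᵀ, and rescaling one factor only pushes the scale into the other, so the data alone cannot pin the normalization down. A plain-Python sketch of this standard scaling ambiguity (the abstract's specific arguments about external vs. internal normalization go beyond this):

```python
# D = C @ S.T is unchanged when C is multiplied by k and S.T divided by k,
# which is the scaling ambiguity that normalization conventions try to fix.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

C  = [[1.0], [2.0]]          # concentration profiles (2 samples, 1 species)
St = [[3.0, 4.0]]            # spectra (1 species, 2 channels)
D  = matmul(C, St)

k = 4.0                      # exactly representable scale, to avoid rounding
C2  = [[c[0] * k] for c in C]
St2 = [[s / k for s in St[0]]]
print(matmul(C2, St2) == D)  # True: the rescaled pair gives the same data
```

    Because infinitely many (C, Sᵀ) pairs reproduce D, any normalization is a convention imposed on top of the data, which is why its misuse can silently distort resolved profiles.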

    Quantitative Verification and Synthesis of Resilient Networks
