
    Well-Formed and Scalable Invasive Software Composition

    Software components provide essential means to structure and organize software effectively. However, frequently, required component abstractions are not available in a programming language or system, or cannot be adequately combined with each other. Invasive software composition (ISC) is a general approach to software composition that unifies component-like abstractions such as templates, aspects and macros. ISC is based on fragment composition and composes programs and other software artifacts at the level of syntax trees. To this end, a unifying fragment component model is related to the context-free grammar of a language to identify extension and variation points in syntax trees as well as valid component types. Fragment components can then be composed by transformations at these extension and variation points, so that the composition always yields results that are valid with respect to the underlying context-free grammar. However, given a language’s context-free grammar, the composition result may still be incorrect: context-sensitive constraints such as type constraints may be violated, so that the program cannot be compiled or interpreted correctly. While a compiler can detect such errors after composition, it is difficult to relate them back to the original transformation step in the composition system, especially in the case of complex compositions with several hundred such steps. To tackle this problem, this thesis proposes well-formed ISC, an extension to ISC that uses reference attribute grammars (RAGs) to specify fragment component models and fragment contracts to guard compositions with context-sensitive constraints. Additionally, well-formed ISC provides composition strategies as a means to configure composition algorithms and handle interferences between composition steps. Developing ISC systems for rich languages such as programming languages is a complex undertaking. Composition-system developers need to supply or develop adequate language and parser specifications that can be processed by an ISC composition engine. Moreover, the specifications may need to be extended with rules for the intended composition abstractions. Current approaches to ISC require complete grammars to be able to compose fragments in the respective languages; hence, the specifications need to be developed exhaustively before any component model can be supplied. To tackle this problem, this thesis introduces scalable ISC, a variant of ISC that uses island component models to define component models for partially specified languages while still supporting the whole language. Additionally, a scalable workflow for agile composition-system development is proposed which supports developing ISC systems in small increments using modular extensions. All theoretical concepts introduced in this thesis are implemented in the Skeletons and Application Templates framework SkAT. It supports “classic”, well-formed and scalable ISC by leveraging RAGs as its main specification and implementation language. Moreover, several composition systems based on SkAT are discussed, e.g., a well-formed composition system for Java and a C preprocessor-like macro language. In turn, those composition systems are used as composers in several example applications, such as a library of parallel algorithmic skeletons.
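
    To make the role of fragment contracts concrete, the following sketch shows a single guarded composition step: a fragment is bound to a variation point only if a context-sensitive constraint (here, a simple type check) holds, so that a violation is reported at the composition step that caused it rather than by the compiler afterwards. The Java classes and names (Fragment, Slot, bind) are purely illustrative assumptions and do not reflect SkAT's actual API; real ISC operates on syntax trees rather than on strings.

        // Minimal sketch of a guarded composition step. The names (Fragment, Slot,
        // bind) are illustrative assumptions, not SkAT's API, and the "program" is a
        // string here although real ISC composes syntax trees.
        final class Fragment {
            final String syntax;       // concrete syntax of the fragment
            final String staticType;   // context-sensitive information, e.g. an inferred type
            Fragment(String syntax, String staticType) { this.syntax = syntax; this.staticType = staticType; }
        }

        final class Slot {
            final String name;           // name of the variation point
            final String expectedType;   // the contract: what may be bound here
            Slot(String name, String expectedType) { this.name = name; this.expectedType = expectedType; }
        }

        final class CompositionEngine {
            // The fragment contract is checked *before* the transformation is applied,
            // so a violation is reported at the composition step that caused it.
            static void bind(StringBuilder program, Slot slot, Fragment f) {
                if (!slot.expectedType.equals(f.staticType))
                    throw new IllegalStateException("contract violated at slot '" + slot.name
                        + "': expected " + slot.expectedType + ", got " + f.staticType);
                int at = program.indexOf("<<" + slot.name + ">>");
                program.replace(at, at + slot.name.length() + 4, f.syntax);
            }

            public static void main(String[] args) {
                StringBuilder template = new StringBuilder("int result = <<init>> + 1;");
                Slot init = new Slot("init", "int");
                bind(template, init, new Fragment("41", "int"));   // accepted, slot is filled
                System.out.println(template);                      // prints: int result = 41 + 1;
                // bind(template, init, new Fragment("\"x\"", "String"));  // would be rejected up front
            }
        }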

    Verification and Application of Program Transformations

    Program transformation and refactoring are fundamental elements of the software development process. From the very beginning, there have been efforts to support refactoring with software tools that reliably and efficiently carry out behaviour-preserving program transformations which improve software quality. Static-analysis-based bug detection and refactoring transformations attract considerable interest in academia and in research and development, and their role is even more important at companies building highly complex software. Tools supporting software development are becoming ever more precise and reliable, but there is still plenty of room for improvement. This dissertation discusses definition and verification methods that allow us to build more reliable and more widely applicable program transformation tools. The thesis covers both static and dynamic verification. First, it presents a novel, concise description language for L-attributed grammars, which is mapped onto a random data generator used for property-based testing. This is accompanied by a case study that presents a grammar of the Erlang programming language and its use in testing. Besides testing, we also investigate formal correctness proofs: we introduce a language for describing refactorings in which executable and automatically provable specifications can be given. The language is based on context-sensitive and conditional term rewriting, strategies, and so-called refactoring schemes. Last but not least, a special application of program transformations is presented, in which a refactoring framework is used as a preprocessor to extend the programming language being processed. With the latter approach, code migration can easily be implemented in the Erlang language.
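
    As a rough illustration of the grammar-to-generator idea described above, the sketch below derives a random generator from a tiny expression grammar and tests that a behaviour-preserving transformation, here simple constant folding, does not change evaluation results. It is written in plain Java with hypothetical names; the thesis itself targets Erlang and a dedicated description language for L-attributed grammars.

        // Grammar-driven random testing of a transformation: random expressions are
        // generated from a tiny grammar and a rewrite (constant folding) is checked
        // to preserve evaluation results. Illustration of the idea only; not the
        // thesis' Erlang tooling.
        import java.util.Random;

        abstract class Expr { abstract int eval(); }
        final class Num extends Expr {
            final int n; Num(int n) { this.n = n; }
            int eval() { return n; }
            public String toString() { return Integer.toString(n); }
        }
        final class Add extends Expr {
            final Expr l, r; Add(Expr l, Expr r) { this.l = l; this.r = r; }
            int eval() { return l.eval() + r.eval(); }
            public String toString() { return "(" + l + " + " + r + ")"; }
        }

        final class PropertyTest {
            static final Random RNG = new Random(42);

            // Random generator derived from the grammar  Expr ::= Num | Expr '+' Expr
            static Expr gen(int depth) {
                if (depth == 0 || RNG.nextBoolean()) return new Num(RNG.nextInt(100));
                return new Add(gen(depth - 1), gen(depth - 1));
            }

            // The transformation under test: fold additions of two literals.
            static Expr fold(Expr e) {
                if (e instanceof Add) {
                    Expr l = fold(((Add) e).l), r = fold(((Add) e).r);
                    if (l instanceof Num && r instanceof Num)
                        return new Num(((Num) l).n + ((Num) r).n);
                    return new Add(l, r);
                }
                return e;
            }

            public static void main(String[] args) {
                for (int i = 0; i < 1000; i++) {       // property: folding preserves the value
                    Expr e = gen(5);
                    if (e.eval() != fold(e).eval())
                        throw new AssertionError("counterexample: " + e);
                }
                System.out.println("1000 random expressions checked");
            }
        }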

    Lambda-calculus and formal language theory

    Formal and symbolic approaches have opened up many application fields for computer science. The rich and fruitful connection between logic, automata and algebra is one such approach. It has been used to model natural languages as well as in program verification. In the mathematics of language it can model phenomena ranging from syntax to phonology, while in verification it yields model-checking algorithms for a wide family of programs. This thesis extends this approach to the simply typed lambda-calculus by providing a natural extension of recognizability to programs that are representable by simply typed terms. This notion is then applied to both the mathematics of language and program verification. In the case of the mathematics of language, it is used to generalize parsing algorithms and to propose high-level methods for describing languages. Concerning program verification, it is used to describe methods for verifying behavioral properties of higher-order programs. In both cases, the link drawn between finite-state methods and denotational semantics provides the means to combine powerful tools from the two worlds.
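
    One classical instance of the link between finite-state methods and denotational semantics is that interpreting the Church encoding of a word over a finite domain decides a regular property of that word. The Java sketch below is an assumed illustration, not the thesis' construction: it checks whether a word contains an even number of 'a's by interpreting each letter as a function on a two-element set and composing these interpretations, exactly as the simply typed term encoding the word would apply them.

        // A word over {a, b} can be viewed as a simply typed term (its Church
        // encoding), and interpreting the letter constants by functions over a
        // *finite* set decides a regular property of the word. Illustration of the
        // recognizability idea only, not the thesis' formal construction.
        import java.util.function.UnaryOperator;

        final class FiniteInterpretation {
            // Finite domain {0, 1}: the current parity of 'a'-occurrences.
            static final UnaryOperator<Integer> A = p -> 1 - p;  // 'a' flips the parity
            static final UnaryOperator<Integer> B = p -> p;      // 'b' leaves it unchanged

            // Interpret the word by composing the interpretations of its letters,
            // exactly as the Church encoding of the word would apply them.
            static boolean evenAs(String word) {
                int state = 0;
                for (char c : word.toCharArray())
                    state = (c == 'a' ? A : B).apply(state);
                return state == 0;
            }

            public static void main(String[] args) {
                System.out.println(evenAs("abba")); // true  (two a's)
                System.out.println(evenAs("ab"));   // false (one a)
            }
        }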

    Evaluation of XPath Queries against XML Streams

    XML is nowadays the de facto standard for electronic data interchange on the Web. Available XML data ranges from small Web pages to ever-growing repositories of, e.g., biological and astronomical data, and even to rapidly changing and possibly unbounded streams, as used in Web data integration and publish-subscribe systems. Animated by the ubiquity of XML data, the basic task of XML querying is becoming of great theoretical and practical importance. The last years have witnessed efforts from both practitioners and theoreticians towards defining an appropriate XML query language. At the core of this common effort lies a navigational approach to locating information in XML data, embodied in a practical and simple query language called XPath. This work brings together the two aforementioned "worlds", i.e., XPath query evaluation and XML data streams, and shows both the theoretical and the practical relevance of this fusion. This relevance is not subsumed by traditional database management systems, because the latter are not designed for rapid and continuous loading of individual data items and do not directly support the continuous queries that are typical for stream applications. The first central contribution of this work is the definition and theoretical investigation of three term rewriting systems that rewrite queries with reverse predicates, like parent or ancestor, into equivalent forward queries, i.e., queries without reverse predicates. This rewriting approach is vital to evaluating queries with reverse predicates against unbounded XML streams, because neither storing past fragments of the stream nor traversing the stream several times, as the evaluation of reverse predicates would require, is affordable. Beyond their declared main purpose of providing equivalences between queries with reverse predicates and forward queries, the applications of these rewriting systems shed light on other query language properties, such as the expressivity of some of its fragments, query minimization, and even the complexity of query evaluation. For example, using these systems, one can rewrite any graph query into an equivalent forward forest query. The second main contribution is a streamed and progressive evaluation strategy for forward queries against XML streams. The evaluation is specified using compositions of so-called stream processing functions and is implemented using networks of deterministic pushdown transducers. The complexity of this evaluation strategy is polynomial in both the query and the data size for forward forest queries and even for a large fragment of graph queries. The third central contribution consists of two real monitoring applications that directly use the results of this work: the monitoring of processes running on UNIX computers, and a system that graphically presents real-time traffic and travel information broadcast within ubiquitous radio signals.
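
    The streamed, progressive evaluation of forward queries can be illustrated on a very small scale: for a forward query such as //a/b, a single pass over the stream of start and end element events suffices if a stack of open elements (the pushdown component) is maintained, and each answer is reported the moment its start tag arrives. The Java sketch below is an assumed, simplified illustration of this principle; it is not the thesis' rewriting systems or its networks of deterministic pushdown transducers, and reverse predicates would first have to be rewritten away as described above.

        // Progressive, single-pass evaluation of the forward query //a/b over a
        // stream of start/end element events. A stack of open elements (the
        // "pushdown" part) is enough to report each match the moment its start tag
        // arrives. Simplified illustration only; not the thesis' transducer networks.
        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.List;

        final class StreamedXPath {
            record Event(boolean start, String name) {}   // simplified SAX-like event

            static void evaluate(List<Event> stream) {
                Deque<String> open = new ArrayDeque<>();  // stack of currently open elements
                for (Event e : stream) {
                    if (e.start()) {
                        if (e.name().equals("b") && "a".equals(open.peek()))
                            System.out.println("//a/b matched a <b> at depth " + (open.size() + 1));
                        open.push(e.name());
                    } else {
                        open.pop();
                    }
                }
            }

            public static void main(String[] args) {
                // Document <r><a><b/><c><b/></c></a><b/></r>: only the first <b> matches //a/b.
                List<Event> events = List.of(
                    new Event(true, "r"), new Event(true, "a"),
                    new Event(true, "b"), new Event(false, "b"),
                    new Event(true, "c"), new Event(true, "b"), new Event(false, "b"),
                    new Event(false, "c"), new Event(false, "a"),
                    new Event(true, "b"), new Event(false, "b"), new Event(false, "r"));
                evaluate(events);
            }
        }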

    Generation of interactive programming environments: GIPE


    GP 2: Efficient Implementation of a Graph Programming Language

    The graph programming language GP 2 (GP for Graph Programs) and its implementation are the subject of this thesis. The language allows programmers to write visual graph programs at a high level of abstraction, bringing the task of solving graph-based problems into an environment in which the user feels comfortable and secure. Implementing graph programs presents two main challenges. The first challenge is translating programs from a high-level source code representation to executable code, which involves bridging the gap from a non-deterministic program to deterministic machine code. The second challenge is overcoming the theoretically impractical complexity of applying graph transformation rules, the basic computation step of a graph program. The work presented in this thesis addresses both of these challenges. We tackle the first challenge by implementing a compiler that translates GP 2 graph programs directly to C code. Implementation strategies concerning the storage and access of internal data structures are empirically compared to determine the most efficient approach for executing practical graph programs. The second challenge is met by extending the double-pushout approach to graph transformation with root nodes, supporting fast execution of graph transformation rules by restricting the search to the local neighbourhood of the root nodes in the host graph. We add this theoretical construct to the GP 2 language in order to support rooted graph transformation rules, and we identify a class of rooted rules that are applicable in constant time on certain classes of graphs. Finally, we combine theory and practice by writing rooted graph programs for two common graph algorithms, and demonstrate that their execution times can match those of tailored C solutions.
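
    The intuition behind rooted rules can be sketched in a few lines: because matching starts at a distinguished root node and only inspects its bounded neighbourhood, a rule application does not require a search over the whole host graph. The following Java sketch, with assumed names and data structures rather than GP 2 syntax or its C implementation, applies a DFS-like rooted rule that marks an unvisited neighbour of the root and moves the root there.

        // Rooted rule application: the match is found by inspecting only the
        // neighbourhood of the root node, not by searching the whole host graph,
        // which is what makes the rule applicable in constant time on graphs of
        // bounded degree. Illustrative Java, not GP 2 syntax or its implementation.
        import java.util.*;

        final class RootedRules {
            static final Map<Integer, List<Integer>> adj = new HashMap<>(); // host graph
            static final Set<Integer> marked = new HashSet<>();
            static int root;

            static void edge(int u, int v) {
                adj.computeIfAbsent(u, k -> new ArrayList<>()).add(v);
                adj.computeIfAbsent(v, k -> new ArrayList<>()).add(u);
            }

            // One rooted rule application; returns false if the rule is not applicable.
            static boolean step() {
                for (int n : adj.getOrDefault(root, List.of())) { // only the root's neighbourhood
                    if (!marked.contains(n)) {
                        marked.add(n);   // relabel the matched node
                        root = n;        // move the root
                        return true;
                    }
                }
                return false;            // no match in the root's neighbourhood
            }

            public static void main(String[] args) {
                edge(0, 1); edge(1, 2); edge(2, 3); edge(0, 4);
                root = 0; marked.add(0);
                while (step()) System.out.println("root moved to " + root);
            }
        }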

    Entwurf und Implementation einer auf Graph-Grammatiken beruhenden Sprache zur Funktions-Struktur-Modellierung von Pflanzen [Design and Implementation of a Graph-Grammar-Based Language for Functional-Structural Modelling of Plants]

    Increasing biological knowledge requires more and more elaborate methods to translate that knowledge into executable model descriptions, and increasing computational power makes it possible to actually execute these descriptions. Such simulations help to validate, extend and question the knowledge. For plant modelling, the well-established formal description language of Lindenmayer systems reaches its limits as a method to concisely represent current knowledge and to conveniently assist in current research. On the one hand, it is well suited to representing structural and geometric aspects of plant models (which units a plant is composed of, how these are connected, where they are located in 3D space); on the other hand, using it to describe functional aspects (which internal processes take place in the plant structure, and how these interact with the structure) is not as convenient as desirable. This can be traced back to the underlying representation of structure as a linear chain of units, while the intrinsic nature of the structure is a tree or even a graph. Therefore, we propose to use graphs and graph grammars as a basis for plant modelling that combines structural and functional aspects. In the first part of this thesis, we develop the necessary theoretical framework. Starting with a presentation of the state of the art concerning Lindenmayer systems and graph grammars, we develop the formalism of relational growth grammars as a variant of graph grammars. We show that this formalism naturally embeds Lindenmayer systems, keeping all relevant properties while representing branched structures directly as axial trees rather than as linear chains with an indirect encoding of branches. In the second part, we develop the main practical result, the XL programming language, an extension of the Java programming language with very general rule-based features. Short examples illustrate the application of the new language features. We describe the built-in pattern-matching algorithm of the implemented run-time system for the XL programming language, and we sketch a possible implementation of an XL compiler. The third part is an application of relational growth grammars and the XL programming language. We show how the general XL interfaces can be customized for relational growth grammars. On top of this customization, several examples from a variety of disciplines demonstrate the usefulness of the developed formalism and language for describing plant growth, especially functional-structural plant models, but also artificial life, architecture and interactive games. Some examples operate on custom graphs like XML DOM trees or scene graphs of commercial 3D modellers, while the majority builds on the 3D modelling platform GroIMP, a software package developed in conjunction with this thesis. The appendix gives an overview of the GroIMP software, and the practical usage of its plug-in for relational growth grammars is also illustrated.
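
    The shift from strings to axial trees mentioned above can be illustrated with one parallel derivation step of the bracketed L-system rule A -> I[A]A applied directly to a tree of nodes: every current A node is replaced by an I node that carries a lateral branch and a successor. The Java sketch below is an assumed illustration of this embedding only; it is not the XL language, relational growth grammars in full, or the GroIMP API.

        // One parallel derivation step of the bracketed L-system rule A -> I[A]A,
        // applied directly on an axial tree of nodes instead of on a string.
        // Illustration only; not the XL language or GroIMP.
        import java.util.ArrayList;
        import java.util.List;

        final class AxialTree {
            static final class Node {
                String symbol;
                final List<Node> children = new ArrayList<>(); // successor and lateral branches
                Node(String symbol) { this.symbol = symbol; }
            }

            // Apply the rule A -> I[A]A to every current 'A' node in parallel.
            // ('A' nodes are always leaves in this system, so no children are carried over.)
            static Node rewrite(Node n) {
                if (n.symbol.equals("A")) {
                    Node i = new Node("I");
                    i.children.add(new Node("A"));   // [A]  lateral branch
                    i.children.add(new Node("A"));   // A    main axis continues
                    return i;
                }
                Node copy = new Node(n.symbol);
                for (Node c : n.children) copy.children.add(rewrite(c));
                return copy;
            }

            static void print(Node n, String indent) {
                System.out.println(indent + n.symbol);
                for (Node c : n.children) print(c, indent + "  ");
            }

            public static void main(String[] args) {
                Node axiom = new Node("A");
                Node step2 = rewrite(rewrite(axiom));  // two derivation steps
                print(step2, "");
            }
        }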

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access two-volume set constitutes the proceedings of the 26th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2020, which took place in Dublin, Ireland, in April 2020, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The total of 60 regular papers presented in these volumes was carefully reviewed and selected from 155 submissions. The papers are organized in topical sections as follows: Part I: Program Verification; SAT and SMT; Timed and Dynamical Systems; Verifying Concurrent Systems; Probabilistic Systems; Model Checking and Reachability; and Timed and Probabilistic Systems. Part II: Bisimulation; Verification and Efficiency; Logic and Proof; Tools and Case Studies; Games and Automata; and SV-COMP 2020.