15 research outputs found

    Modal Interface Theories for Specifying Component-based Systems

    Get PDF
    Large software systems frequently manifest as complex, concurrent, reactive systems, and their correctness is often crucial for the safety of the application. Hence, modern techniques of software engineering employ incremental, component-based approaches to systems design. These are supported by interface theories, which may serve as specification languages and as semantic foundations for software product lines, web services, the Internet of Things, software contracts and conformance testing. Interface theories enable a systems designer to express the communication requirements that components place on their environments and to reason about the mutual compatibility of these requirements in order to guarantee the communication safety of the system. Further, interface theories enrich traditional operational specification theories with declarative aspects such as conjunction and disjunction, which allow one to specify systems heterogeneously. However, substantial practical aspects of software verification are not supported by current interface theories, e.g., reusing components, adapting components to changed operational environments, reasoning about the compatibility of more than two components, modelling software product lines, or tracking erroneous behaviour in safety-critical systems. The goal of this thesis is to investigate the theoretical foundations for making interface theories more practical by solving the above issues. Although partial solutions to some of these issues have been presented in the literature, none of them succeeds without sacrificing other desired features. The particular challenge of this thesis is to solve these problems simultaneously within a single interface theory. To this end, the arguably most general interface theory, Modal Interface Automata (MIA), is extended, yielding the interface theory Error-preserving Modal Interface Automata (EMIA). The above problems are addressed as follows. Quotient operators are adjoint to composition and therefore support component reuse. Such a quotient operator is introduced for both MIA and EMIA; it is the first to consider nondeterministic dividends and compatibility. Alphabet extension operators for MIA and EMIA support changes of the operational environment by permitting one to adapt system components to new interactions without breaking previously satisfied requirements. Erroneous behaviour is identified as a common source of the problems concerning the compatibility of more than two components, the modelling of software product lines and the tracking of errors in safety-critical systems. EMIA improves on previous interface theories by providing a more precise semantics of erroneous behaviour, based on error-preservation. The relation between error-preservation and the usual error-abstraction employed in previous interface theories is investigated, establishing a Galois insertion from MIA into EMIA that is relevant at the levels of specifications, composition operations and proofs. The practical utility of interface theories is demonstrated by providing a software implementation of MIA and EMIA that is applied to two case studies. Further, an outlook is given on the relation between type checking and refinement checking.
    As a proof of concept, the simple interface theory Interface Automata is extended to a behavioural type theory where type checking is a syntactic approximation of refinement checking.
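    The key algebraic property behind the reuse argument can be stated as follows (a standard formulation assumed here for exposition, writing \parallel for parallel composition, \sqsubseteq for refinement and / for the quotient; the thesis's exact definitions may differ):

        P \parallel Q \sqsubseteq R \quad\Longleftrightarrow\quad Q \sqsubseteq R / P

    Read as a reuse principle: R / P is the weakest specification that a new component Q must refine so that, composed with the already existing component P, the overall system still refines the requirement R.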

    A Generalised Theory of Interface Automata, Component Compatibility and Error

    Get PDF
    Interface theories allow systems designers to reason about the composability and compatibility of concurrent system components. Such theories often extend both de Alfaro and Henzinger’s Interface Automata and Larsen’s Modal Transition Systems, which leads, however, to several issues that are undesirable in practice: an unintuitive treatment of specified unwanted behaviour, a binary compatibility concept that does not scale to multi-component assemblies, and compatibility guarantees that are insufficient for software product lines. In this paper we show that communication mismatches are central to all these problems and, thus, the ability to represent such errors semantically is an important feature of an interface theory. Accordingly, we present the error-aware interface theory EMIA, where the above shortcomings are remedied by introducing explicit fatal error states. In addition, we prove via a Galois insertion that EMIA is a conservative generalisation of the established MIA (Modal Interface Automata) theory
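    As a purely illustrative toy model (the encoding, names and Python representation are assumptions, not EMIA's formal definitions), the idea of keeping communication mismatches explicit during composition can be sketched as follows: when one component sends an output that its partner cannot receive, the product moves to an explicit fatal error state instead of the mismatch being pruned away.

```python
# Toy synchronous product with an explicit fatal error state (illustration only;
# the encoding and names are assumptions, not EMIA's actual definitions).

from itertools import product

# A component maps each state to its transitions: "a!" is an output, "a?" an input.
P = {"p0": {"msg!": "p1"}, "p1": {}}
Q = {"q0": {"msg?": "q1"}, "q1": {"ack!": "q2"}, "q2": {}}

ERROR = ("ERROR", "ERROR")  # explicit fatal error state of the composition

def compose(A, B):
    """Pairwise product; an output without a matching input leads to ERROR."""
    trans = {}
    for sa, sb in product(A, B):
        moves = trans[(sa, sb)] = {}
        for act, succ in A[sa].items():          # outputs of A
            if act.endswith("!"):
                base = act[:-1]
                if base + "?" in B[sb]:          # B can receive: synchronise
                    moves[base] = (succ, B[sb][base + "?"])
                else:                            # communication mismatch
                    moves[base] = ERROR
        for act, succ in B[sb].items():          # outputs of B
            if act.endswith("!"):
                base = act[:-1]
                if base + "?" in A[sa]:
                    moves[base] = (A[sa][base + "?"], succ)
                else:
                    moves[base] = ERROR
    return trans

system = compose(P, Q)
# After the handshake on "msg" the composition is in ("p1", "q1"); there Q's output
# "ack!" has no matching input in P, so the mismatch is recorded as a transition to
# the explicit ERROR state rather than being silently discarded.
print(system[("p1", "q1")])   # {'ack': ('ERROR', 'ERROR')}
```

    Keeping the error state explicit is what allows compatibility to remain meaningful when further components are composed later, since the mismatch is still visible in the intermediate result.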

    OSHDB: a framework for spatio-temporal analysis of OpenStreetMap history data

    Get PDF
    OpenStreetMap (OSM) is a collaborative project collecting geographical data of the entire world. The level of detail and the quality of OSM data vary considerably across regions and domains. In order to analyse such variations, it is often necessary to examine the history and evolution of the OSM data. The OpenStreetMap History Database (OSHDB) is a new data analysis tool for spatio-temporal geographical vector data. It is specifically optimized for working with OSM history data on a global scale and allows one to investigate the data evolution and user contributions in a flexible way. Among the benefits of the OSHDB are easier access to OSM history data as a research subject and the ability to assess the quality of OSM data using intrinsic measures. This article describes the requirements for such a system and the resulting technical implementation of the OSHDB: the OSHDB data model and its application programming interface
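    The OSHDB itself is implemented as a Java framework, so the following is only a schematic Python sketch of the kind of intrinsic, spatio-temporal analysis it is built for (reconstructing snapshots of OSM history data and counting features over time); all data structures and names below are assumptions made for illustration and do not reflect the OSHDB API.

```python
# Schematic example of an intrinsic spatio-temporal OSM analysis (illustration only;
# this is NOT the OSHDB API, which is a Java framework - data and names are made up).

from datetime import datetime

# Hypothetical OSM element versions: (element_id, valid_from, tags)
history = [
    (1, datetime(2012, 3, 1), {"building": "yes"}),
    (2, datetime(2015, 6, 1), {"building": "yes"}),
    (2, datetime(2017, 1, 1), {"building": "house"}),   # later version of element 2
    (3, datetime(2018, 9, 1), {"highway": "residential"}),
]

def snapshot(history, timestamp):
    """Reconstruct the latest version of each element valid at the given timestamp."""
    latest = {}
    for eid, valid_from, tags in history:
        if valid_from <= timestamp and (eid not in latest or valid_from > latest[eid][0]):
            latest[eid] = (valid_from, tags)
    return {eid: tags for eid, (_, tags) in latest.items()}

def count_buildings(history, timestamps):
    """Count elements tagged as buildings at each timestamp (an intrinsic quality measure)."""
    return {
        ts: sum("building" in tags for tags in snapshot(history, ts).values())
        for ts in timestamps
    }

print(count_buildings(history, [datetime(y, 1, 1) for y in (2013, 2016, 2019)]))
# counts 1 building in 2013, 2 in 2016 and 2 in 2019
```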

    A generalised theory of Interface Automata, component compatibility and error

    No full text
    Interface theories allow system designers to reason about the composability and compatibility of concurrent system components. Such theories often extend both de Alfaro and Henzinger's Interface Automata and Larsen's Modal Transition Systems, which leads, however, to several issues that are undesirable in practice: an unintuitive treatment of specified unwanted behaviour, a binary compatibility concept that does not scale to multi-component assemblies, and compatibility guarantees that are insufficient for software product lines. In this article we show that communication mismatches are central to all these problems and, thus, the ability to represent such errors semantically is an important feature of an interface theory. Accordingly, we present the error-aware interface theory EMIA, where the above shortcomings are remedied by introducing explicit fatal error states. In addition, we prove via a Galois insertion that EMIA is a conservative generalisation of the established Modal Interface Automata theory
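    As a rough summary in standard abstract-interpretation notation (assumed here for exposition, not quoted from the article), the Galois insertion relates an error-abstraction \alpha from EMIA to MIA and a concretisation \gamma embedding MIA into EMIA such that

        \alpha(E) \sqsubseteq_{\mathrm{MIA}} M \;\Longleftrightarrow\; E \sqsubseteq_{\mathrm{EMIA}} \gamma(M) \qquad\text{and}\qquad \alpha \circ \gamma = \mathrm{id}.

    The equivalence makes \alpha and \gamma a Galois connection between the two refinement preorders, and \alpha \circ \gamma = \mathrm{id} strengthens it to an insertion: embedding a MIA specification into EMIA and abstracting it back loses no information, which is one way to read the claim that EMIA conservatively generalises MIA.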

    Regional variations of context‐based association rules in OpenStreetMap

    No full text
    As a user-generated map of the whole world, OpenStreetMap (OSM) provides valuable information about the natural and built environment. However, the spatial heterogeneity of the data, due to cultural differences and the spatially varying mapping process, makes the extraction of reliable information difficult. This study investigates the variability of association rules extracted from OSM across different geographic regions and in dependence on context variables, such as the number of OSM mappers. The focus of this study is the spatial co-occurrence of OSM tags mapped inside parks within eight different cities. When no context variables were considered, most association rules were highly region-specific, and no rule was valid across all cities. Limiting the association rule analysis to parks selected by specific context variables increased the number of rules that are applicable across multiple cities. Furthermore, additional region-specific association rules emerged. The most important context variables were found to be the number of features mapped inside a park, the number of tags, and the park size. These results suggest that the mapping process has a significant influence on the emergence of association rules within user-generated data. This subject therefore needs further investigation to enable effective use of OSM data across different cultural realms
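    To make the rule mining concrete, here is a minimal sketch with toy data (the tag sets, thresholds and the restriction to single-antecedent rules are assumptions, not the study's actual data or parameters) that derives 'tag A => tag B' rules from tags observed inside parks, using the standard support and confidence measures:

```python
# Minimal association-rule sketch over OSM-style tag sets (illustrative toy example;
# data, thresholds and the single-antecedent rule form are assumptions).

from itertools import permutations

# Hypothetical tag sets observed inside individual parks of one city.
parks = [
    {"leisure=park", "natural=tree", "amenity=bench"},
    {"leisure=park", "natural=tree", "highway=footway"},
    {"leisure=park", "amenity=bench", "highway=footway"},
    {"leisure=park", "natural=tree", "amenity=bench", "highway=footway"},
]

def rules(transactions, min_support=0.5, min_confidence=0.8):
    """Return single-antecedent rules A => B with their support and confidence."""
    n = len(transactions)
    items = set().union(*transactions)
    found = []
    for a, b in permutations(items, 2):
        both = sum(1 for t in transactions if a in t and b in t)
        only_a = sum(1 for t in transactions if a in t)
        support = both / n
        confidence = both / only_a if only_a else 0.0
        if support >= min_support and confidence >= min_confidence:
            found.append((a, b, round(support, 2), round(confidence, 2)))
    return found

for a, b, sup, conf in rules(parks):
    print(f"{a} => {b}  (support={sup}, confidence={conf})")
```

    Comparing the rule sets mined per city, with and without conditioning on context variables such as park size, is the kind of analysis that exposes the regional variation discussed above.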

    Richer interface automata with optimistic and pessimistic compatibility

    No full text
    Modal transition systems are a popular semantic underpinning of interface theories, such as Nyman et al.'s IOMTS and Bauer et al.'s MIO, which facilitate component-based reasoning for concurrent systems. Our interface theory MIA repaired a compositional flaw of IOMTS refinement and introduced a conjunction operator. In this paper, we first modify MIA to properly deal with internal computations, including internal must-transitions, which were largely ignored already in IOMTS. We then study a MIA variant that adopts MIO's pessimistic, rather than IOMTS' optimistic, view on component compatibility and define, for the first time in a pessimistic, non-deterministic setting, conjunction and disjunction on interfaces. For both the optimistic and the pessimistic MIA variant, we also discuss mechanisms for extending alphabets when refining interfaces, which is a desired feature for perspective-based specification. We illustrate our advancements via a small example
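    Roughly paraphrased (a simplified rendering, not the paper's precise definitions), the two compatibility views differ in how they quantify over the environment E of a composition:

        \text{optimistic: } \exists E.\ (P \parallel Q) \parallel E \text{ is error-free} \qquad \text{pessimistic: } \forall E.\ (P \parallel Q) \parallel E \text{ is error-free}

    Under the optimistic view, two interfaces are compatible if some helpful environment can steer the composition away from all communication errors, whereas the pessimistic view requires that no environment can drive it into an error.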

    Mapping Human Settlements with Higher Accuracy and Less Volunteer Efforts by Combining Crowdsourcing and Deep Learning

    No full text
    Reliable techniques to generate accurate data sets of human built-up areas at national, regional, and global scales are a key factor in monitoring the implementation progress of the Sustainable Development Goals as defined by the United Nations. However, the scarce availability of accurate and up-to-date human settlement data remains a major challenge, e.g., for humanitarian organizations. In this paper, we investigated the complementary value of crowdsourcing and deep learning for filling the data gaps of existing earth observation-based (EO) products. To this end, we propose a novel workflow combining deep learning (DeepVGI) and crowdsourcing (MapSwipe). Our strategy for allocating classification tasks to deep learning or crowdsourcing is based on the confidence of the derived binary classification. We conducted case studies in three sites located in Guatemala, Laos, and Malawi to evaluate the proposed workflow. Our study reveals that crowdsourcing and deep learning outperform existing EO-based approaches and products such as the Global Urban Footprint. Compared to a crowdsourcing-only approach, the combination increased the quality (measured by the Matthews correlation coefficient) of the generated human settlement maps by 3 to 5 percentage points. At the same time, it reduced the volunteer effort needed by at least 80 percentage points for all study sites. The study suggests that, for the efficient creation of human settlement maps, we should rely on human skills when needed and on automated approaches when possible
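    The allocation strategy can be pictured as a simple confidence-threshold rule (an illustrative sketch; the threshold value, field names and the way confidence is derived are assumptions, not the paper's implementation):

```python
# Confidence-based allocation of settlement-detection tasks to deep learning or crowdsourcing
# (illustrative sketch; threshold and data are assumptions, not the study's implementation).

def allocate(tasks, confidence_threshold=0.9):
    """Accept the model's label when it is confident enough, otherwise route to volunteers."""
    automated, crowdsourced = [], []
    for task in tasks:
        # task["p_settlement"] is the model's probability that the image tile contains buildings
        p = task["p_settlement"]
        confidence = max(p, 1.0 - p)          # confidence of the binary decision
        if confidence >= confidence_threshold:
            task["label"] = p >= 0.5          # keep the deep-learning classification
            automated.append(task)
        else:
            crowdsourced.append(task)         # ask MapSwipe volunteers to classify the tile
    return automated, crowdsourced

tiles = [
    {"tile_id": "a", "p_settlement": 0.97},
    {"tile_id": "b", "p_settlement": 0.55},   # uncertain: goes to the crowd
    {"tile_id": "c", "p_settlement": 0.03},
]
auto, crowd = allocate(tiles)
print([t["tile_id"] for t in auto], [t["tile_id"] for t in crowd])   # ['a', 'c'] ['b']
```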