7 research outputs found

    A Generalised Theory of Interface Automata, Component Compatibility and Error

    Interface theories allow system designers to reason about the composability and compatibility of concurrent system components. Such theories often extend both de Alfaro and Henzinger’s Interface Automata and Larsen’s Modal Transition Systems, which leads, however, to several issues that are undesirable in practice: an unintuitive treatment of specified unwanted behaviour, a binary compatibility concept that does not scale to multi-component assemblies, and compatibility guarantees that are insufficient for software product lines. In this paper we show that communication mismatches are central to all these problems and, thus, that the ability to represent such errors semantically is an important feature of an interface theory. Accordingly, we present the error-aware interface theory EMIA, where the above shortcomings are remedied by introducing explicit fatal error states. In addition, we prove via a Galois insertion that EMIA is a conservative generalisation of the established MIA (Modal Interface Automata) theory.
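    The following minimal Python sketch illustrates the general idea of error-aware composition described above, not the paper's formal definitions: two interface automata synchronise on shared actions, and a communication mismatch, where one component outputs an action the other is not ready to receive, leads to an explicit fatal error state. All class and function names are hypothetical.

```python
# Illustrative sketch only: synchronous product of two interface automata in
# which communication mismatches become an explicit, fatal ERROR state.

ERROR = "ERROR"

class InterfaceAutomaton:
    def __init__(self, initial, inputs, outputs, transitions):
        self.initial = initial          # initial state
        self.inputs = set(inputs)       # actions received from the environment
        self.outputs = set(outputs)     # actions sent to the environment
        self.transitions = transitions  # dict: (state, action) -> next state

    def step(self, state, action):
        return self.transitions.get((state, action))

def compose(a, b):
    """Synchronous product; mismatches on shared actions lead to ERROR."""
    shared = (a.outputs & b.inputs) | (b.outputs & a.inputs)
    product = {}
    todo = [(a.initial, b.initial)]
    seen = set(todo)
    while todo:
        sa, sb = todo.pop()
        for act in a.inputs | a.outputs | b.inputs | b.outputs:
            na, nb = a.step(sa, act), b.step(sb, act)
            if act in shared:
                # one side can send, but the other cannot receive -> fatal error
                if (act in a.outputs and na is not None and nb is None) or \
                   (act in b.outputs and nb is not None and na is None):
                    product[((sa, sb), act)] = ERROR
                    continue
                if na is None or nb is None:
                    continue
                nxt = (na, nb)
            else:
                # non-shared actions interleave (simplified for this sketch)
                if na is not None and nb is None:
                    nxt = (na, sb)
                elif nb is not None and na is None:
                    nxt = (sa, nb)
                else:
                    continue
            product[((sa, sb), act)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return product
```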

    The COCOMO-Models in the Light of the Agile Software Development

    Effort estimates are important for making economic and strategic decisions in software development. Various publications propose the Constructive Cost Model (COCOMO), an algorithmic cost model based on formulas with objective variables, for estimation in classical software development. Work on agile software development, in contrast, points to experience-based estimation methods and subjective variables. Because operationalization is weak in the agile context, statements about concrete cause-and-effect relationships are difficult to make. In addition, classical and agile studies each focus one-sidedly on their own field, so it remains unclear whether variables from COCOMO can be used in agile development. If such details were known, operationalized COCOMO variables could also be applied in agile settings; this would make it possible to conceptualize concrete causal dependencies in a scientific study, and those findings would in turn allow the development process to be optimized. To identify candidate variables, we conduct a qualitative, descriptive study based on a literature review and an evaluation of the sources. First results show both differences and commonalities between the two worlds: a large number of COCOMO variables can be used in agile development, and the extent to which this is possible depends on the objective and subjective components of each variable. Experience-based drivers such as Analyst Capability (ACAP) and Programmer Capability (PCAP) transfer well to agile development because they correspond to person-related characteristics. Variables from the process and tool environment, by contrast, are less transferable, since agile development explicitly rejects a focus on such project characteristics. Reuse of COCOMO variables is therefore possible in principle, provided the given conditions are taken into account.
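    For readers unfamiliar with COCOMO, the basic model estimates effort in person-months from program size with mode-specific constants, and the intermediate model scales this nominal effort with cost drivers such as ACAP and PCAP. A minimal sketch follows; the multiplier values in the example are placeholders, not the official COCOMO rating tables.

```python
# Basic COCOMO: effort (person-months) = a * KLOC^b, with constants per project mode.
# Capability drivers such as ACAP/PCAP then scale the nominal effort multiplicatively.
BASIC_COCOMO = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def estimate_effort(kloc, mode="organic", effort_multipliers=()):
    a, b = BASIC_COCOMO[mode]
    effort = a * kloc ** b            # nominal effort in person-months
    for em in effort_multipliers:     # e.g. ACAP, PCAP ratings as factors
        effort *= em
    return effort

# Example: 40 KLOC organic project with highly capable analysts and programmers
# (illustrative multiplier values, not taken from the official rating tables).
print(estimate_effort(40, "organic", effort_multipliers=[0.85, 0.88]))
```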

    Static Analysis Rules of the BPEL Specification: Tagging, Formalization and Tests

    In 2007, OASIS finalized its Business Process Execution Language 2.0 (BPEL) specification, which defines an XML-based language for the orchestration of Web Services. Because validating BPEL processes against the official BPEL XML schema leaves room for a plethora of static errors, the specification contains 94 static analysis rules to cover all static errors. According to the specification, violations of these rules are to be checked by a standard-conformant engine at deployment time. When a violation is not detected during deployment, such an error is only detectable at runtime, making it expensive to find and fix. Due to the large number of rules, we have created a tag system to categorize them, allowing easier reasoning about these rules. Next, we formalized the static analysis rules and derived test cases from these formalizations with the aim of evaluating the degree to which BPEL engines support static analysis. Hence, this work lays the foundation for assessing the static analysis capabilities of BPEL engines.
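    As a hypothetical illustration of the tagging and test derivation described above (the tags and data structures shown are not the paper's actual taxonomy), each static analysis rule could be recorded with its tags and paired with processes that a conformant engine must accept or reject at deployment time:

```python
# Hypothetical sketch of tagging BPEL static analysis (SA) rules and pairing
# each rule with test processes that an engine should accept or reject.
from dataclasses import dataclass, field

@dataclass
class StaticAnalysisRule:
    rule_id: str                  # e.g. "SA00001", following the spec's SAxxxxx scheme
    description: str
    tags: set = field(default_factory=set)   # e.g. {"variables", "links", "scopes"}

@dataclass
class RuleTestCase:
    rule: StaticAnalysisRule
    process_xml: str              # a BPEL process exercising the rule
    should_deploy: bool           # False if a conformant engine must reject it

def rules_with_tag(rules, tag):
    """Select all rules carrying a given tag, e.g. to group related checks."""
    return [r for r in rules if tag in r.tags]
```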

    A generalised theory of Interface Automata, component compatibility and error

    Interface theories allow system designers to reason about the composability and compatibility of concurrent system components. Such theories often extend both de Alfaro and Henzinger's Interface Automata and Larsen's Modal Transition Systems, which leads, however, to several issues that are undesirable in practice: an unintuitive treatment of specified unwanted behaviour, a binary compatibility concept that does not scale to multi-component assemblies, and compatibility guarantees that are insufficient for software product lines. In this article we show that communication mismatches are central to all these problems and, thus, that the ability to represent such errors semantically is an important feature of an interface theory. Accordingly, we present the error-aware interface theory EMIA, where the above shortcomings are remedied by introducing explicit fatal error states. In addition, we prove via a Galois insertion that EMIA is a conservative generalisation of the established Modal Interface Automata theory.
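    The Galois insertion mentioned here is the standard order-theoretic notion; stated in general terms (not the specific MIA/EMIA construction, in which EMIA plays the more concrete role and MIA the more abstract one), an abstraction map α and a concretisation map γ between preordered sets of specifications form a Galois insertion when:

```latex
% Galois insertion between a concrete domain (C, \sqsubseteq_C) and an
% abstract domain (A, \sqsubseteq_A), stated in its standard general form.
\[
  \alpha : C \to A, \qquad \gamma : A \to C
\]
\[
  \forall c \in C,\ a \in A:\quad
  \alpha(c) \sqsubseteq_A a \iff c \sqsubseteq_C \gamma(a)
  \qquad\text{(Galois connection)}
\]
\[
  \alpha \circ \gamma = \mathrm{id}_A
  \qquad\text{(insertion: abstracting a concretised element loses nothing)}
\]
```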

    Optimized Buffering of Time-Triggered Automotive Software

    The development of an automotive system involves the integration of many real-time software functionalities, and it is of utmost importance to guarantee strict timing requirements. However, the recent trend towards multi-core architectures poses significant challenges for the timely transfer of signals between processor cores without violating data consistency. We have studied an existing buffering mechanism, the static buffering protocol, and adapted it to work specifically for statically scheduled time-triggered systems. We have also developed buffering optimisation algorithms and heuristics to reduce the memory consumption, processor utilisation, and end-to-end response times of time-triggered AUTOSAR designs on multi-core platforms. Our contributions are important because they enable deterministic time-triggered implementations to become competitive alternatives to their inherently non-deterministic event-triggered counterparts. We have prototyped a selection of these optimisations in an industrial tool and evaluated them on realistic industrial automotive benchmarks.
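    As a rough, generic illustration of the underlying buffering problem (a plain double-buffer sketch, not the static buffering protocol or the optimisations developed in this work), a signal crossing cores can be kept consistent by letting the producer fill a back buffer that readers switch to only at statically scheduled points:

```python
# Generic double-buffer sketch for a signal crossing cores in a time-triggered
# schedule: the producer writes into the back buffer, and the buffers are
# swapped only at statically scheduled points, so readers always see a
# consistent value.
class DoubleBuffer:
    def __init__(self, initial):
        self._buffers = [initial, initial]
        self._read_index = 0            # readers use this buffer

    def write(self, value):
        # producer task fills the buffer not currently being read
        self._buffers[1 - self._read_index] = value

    def swap(self):
        # executed only at a statically scheduled point in the schedule,
        # e.g. between the producer's and the consumer's time slots
        self._read_index = 1 - self._read_index

    def read(self):
        return self._buffers[self._read_index]

# Illustrative time-triggered frame: producer slot, swap point, consumer slot.
signal = DoubleBuffer(initial=0)
signal.write(42)              # producer slot on core 0
signal.swap()                 # scheduled synchronisation point
assert signal.read() == 42    # consumer slot on core 1 sees a consistent value
```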

    Nucleus - Unified Deployment and Management for Platform as a Service

    Cloud computing promises several advantages over classic IT models and has undoubtedly been one of the most hyped topics in the industry over the last couple of years. Besides the established delivery models Infrastructure as a Service (IaaS) and Software as a Service (SaaS), Platform as a Service (PaaS) in particular has recently attracted significant attention. PaaS facilitates the hosting of scalable applications in the cloud by providing managed and highly automated application environments. Although most offerings are conceptually comparable to each other, the interfaces for application deployment and management vary greatly between vendors. Even though the platforms provide similar functionality, their technically different workflows and commands provoke vendor lock-in and hinder portability as well as interoperability. In this study, we present the tool Nucleus, which realizes a unified interface for application deployment and management across cloud platforms. With its help, we aim to increase the portability of PaaS applications and thus to avoid critical vendor lock-in effects.
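    To make the idea of a unified deployment and management interface concrete, here is a hypothetical sketch (not Nucleus' actual API) in which each vendor-specific workflow is hidden behind a common adapter interface:

```python
# Hypothetical sketch of a vendor-neutral PaaS adapter interface: each provider
# implements the same operations, so applications can be deployed and managed
# through one API regardless of the underlying platform.
from abc import ABC, abstractmethod

class PaaSAdapter(ABC):
    @abstractmethod
    def deploy(self, app_name: str, archive_path: str) -> str:
        """Deploy an application archive and return a provider-side identifier."""

    @abstractmethod
    def scale(self, app_name: str, instances: int) -> None:
        """Set the number of running application instances."""

    @abstractmethod
    def logs(self, app_name: str) -> str:
        """Fetch recent application logs."""

class ExampleProviderAdapter(PaaSAdapter):
    def deploy(self, app_name, archive_path):
        # translate the unified call into this provider's own workflow here
        return f"{app_name}-deployment-id"

    def scale(self, app_name, instances):
        pass

    def logs(self, app_name):
        return ""
```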

    Modal Interface Theories for Specifying Component-based Systems

    Large software systems frequently manifest as complex, concurrent, reactive systems, and their correctness is often crucial for the safety of the application. Hence, modern techniques of software engineering employ incremental, component-based approaches to systems design. These are supported by interface theories, which may serve as specification languages and as semantic foundations for software product lines, web services, the internet of things, software contracts and conformance testing. Interface theories enable a systems designer to express communication requirements of components on their environments and to reason about the mutual compatibility of these requirements in order to guarantee the communication safety of the system. Further, interface theories enrich traditional operational specification theories by declarative aspects such as conjunction and disjunction, which allow one to specify systems heterogeneously. However, substantial practical aspects of software verification are not supported by current interface theories, e.g., reusing components, adapting components to changed operational environments, reasoning about the compatibility of more than two components, modelling software product lines or tracking erroneous behaviour in safety-critical systems. The goal of this thesis is to investigate the theoretical foundations for making interface theories more practical by solving the above issues. Although partial solutions to some of these issues have been presented in the literature, none of them succeeds without sacrificing other desired features. The particular challenge of this thesis is to solve these problems simultaneously within a single interface theory. To this end, Modal Interface Automata (MIA), arguably the most general interface theory, is extended, yielding the interface theory Error-preserving Modal Interface Automata (EMIA). The above problems are addressed as follows. Quotient operators are adjoint to composition and therefore support component reuse. Such a quotient operator is introduced to both MIA and EMIA; it is the first one that considers nondeterministic dividends and compatibility. Alphabet extension operators for MIA and EMIA allow for changes of the operational environment by permitting system components to be adapted to new interactions without breaking previously satisfied requirements. Erroneous behaviour is identified as a common source of problems with respect to the compatibility of more than two components, the modelling of software product lines and erroneous behaviour in safety-critical systems. EMIA improves on previous interface theories by providing a more precise semantics of erroneous behaviour, based on error preservation. The relation between error preservation and the usual error abstraction employed in previous interface theories is investigated, establishing a Galois insertion from MIA into EMIA that is relevant at the levels of specifications, composition operations and proofs. The practical utility of interface theories is demonstrated by providing a software implementation of MIA and EMIA that is applied to two case studies. Further, an outlook is given on the relation between type checking and refinement checking. As a proof of concept, the simple interface theory Interface Automata is extended to a behavioural type theory in which type checking is a syntactic approximation of refinement checking.
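    To give a flavour of the modal machinery underlying MIA and EMIA, the following textbook-style sketch (not the thesis' exact definitions) checks modal refinement between modal transition systems: every may-transition of the refining system must be allowed by the specification, and every must-transition of the specification must be realised by the refining system.

```python
# Textbook-style sketch of modal refinement between modal transition systems.
# may_X / must_X: dicts mapping every state to a set of (action, successor)
# pairs, with must_X[q] a subset of may_X[q] for each state q.
def modal_refinement(s0, t0, may_S, must_S, may_T, must_T):
    rel = {(s, t) for s in may_S for t in may_T}    # start from all state pairs
    changed = True
    while changed:                                   # greatest fixed point
        changed = False
        for (s, t) in list(rel):
            # every may-step of S must be allowed by a may-step of T
            may_ok = all(any(a == b and (s2, t2) in rel
                             for (b, t2) in may_T[t])
                         for (a, s2) in may_S[s])
            # every must-step of T must be realised by a must-step of S
            must_ok = all(any(a == b and (s2, t2) in rel
                              for (b, s2) in must_S[s])
                          for (a, t2) in must_T[t])
            if not (may_ok and must_ok):
                rel.discard((s, t))
                changed = True
    return (s0, t0) in rel

# Example: the specification T merely allows action "a" (may, not must);
# an implementation S that omits "a" entirely still refines T.
may_T, must_T = {"t0": {("a", "t0")}}, {"t0": set()}
may_S, must_S = {"s0": set()}, {"s0": set()}
assert modal_refinement("s0", "t0", may_S, must_S, may_T, must_T)
```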