14 research outputs found
Domain Globalization: Using Languages to Support Technical and Social Coordination
When a project is realized in a globalized environment, multiple stakeholders from different organizations work on the same system. Depending on the stakeholders and their organizations, various (possibly overlapping) concerns are raised in the development of the system. In this context, a Domain-Specific Language (DSL) supports the work of a group of stakeholders who are responsible for addressing a specific set of concerns. This chapter identifies the open challenges arising from the coordination of globalized domain-specific languages. We identify two types of coordination: technical coordination and social coordination. After presenting an overview of the current state of the art, we first discuss the open challenges arising from the composition of multiple DSLs, and then the open challenges associated with collaboration in a globalized environment.
Contributions to multi-view modeling and the multi-view consistency problem for infinitary languages and discrete systems
The modeling of most large and complex systems, such as embedded, cyber-physical, or distributed systems, necessarily involves many designers. The multiple stakeholders bring their own perspectives on the system under development in order to meet a variety of objectives, and hence they derive their own models of the same system. This practice is known as multi-view modeling, where the distinct models of a system are called views. Inevitably, the separate views are related, and possible overlaps may give rise to inconsistencies. Checking for multi-view consistency is key to multi-view modeling approaches, especially when a global model for the system is absent and can only be synthesized from the views.
The present thesis provides an overview of representative related work in multi-view modeling, and contributes to the formal study of multi-view modeling and the multi-view consistency problem for views and systems described as sets of behaviors. In particular, two distinct settings are investigated, namely infinitary languages and discrete systems. In the former setting, a system and its views are described by mixed automata, which accept both finite and infinite words, and the corresponding infinitary languages. The views are obtained from the system by projecting an alphabet of events (system domain) onto a subalphabet (view domain), while inverse projections are used in the other direction. A systematic study of mixed automata is provided, and their languages are proved to be closed under union, intersection, complementation, projection, and inverse projection. These results are then used to solve the multi-view consistency problem in the infinitary-language setting.
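To give a flavour of the projection operations involved, the following Python sketch (illustrative names only, not the mixed-automata constructions of the thesis) erases events outside a view's domain and performs a bounded search for a system word whose projections are accepted by every view:

```python
from itertools import product

def project(word, view_alphabet):
    """Project a word (a tuple of events) onto a view domain: events
    outside the view's sub-alphabet are simply erased."""
    return tuple(e for e in word if e in view_alphabet)

def consistent(views, system_alphabet, max_len=4):
    """Bounded multi-view consistency check: search for a system word
    (up to max_len) whose projection onto every view domain lies in that
    view's language.  Only conclusive up to the length bound."""
    for n in range(max_len + 1):
        for word in product(system_alphabet, repeat=n):
            if all(project(word, dom) in lang for dom, lang in views):
                return True, word            # witness of consistency
    return False, None

# Hypothetical example: two views over the event alphabet {a, b, c}.
views = [
    ({"a", "b"}, {("a", "b")}),              # view 1 sees only a and b
    ({"b", "c"}, {("b", "c")}),              # view 2 sees only b and c
]
print(consistent(views, ["a", "b", "c"]))    # e.g. (True, ('a', 'b', 'c'))
```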
The second setting introduces the notion of periodic sampling abstraction functions and investigates the multi-view consistency problem for symbolic discrete systems with respect to these functions. Apart from periodic samplings, inverse periodic samplings are also introduced, and the closure of discrete systems under these operations is investigated. Then, three variations of the multi-view consistency problem are considered and their relations are discussed. Moreover, an algorithm is provided for detecting view inconsistencies. The algorithm is sound, but it may fail to detect all inconsistencies, as it relies on state-based reachability, and inconsistencies may also involve the transition structure of the system.
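A minimal sketch of the periodic sampling idea, under the simplifying assumption that a behavior is just a finite sequence of states (the thesis works with symbolic discrete systems):

```python
from itertools import product

def periodic_sample(behavior, period, offset=0):
    """Periodic sampling abstraction (simplified): keep every `period`-th
    state of a discrete behavior, starting at `offset`."""
    return behavior[offset::period]

def inverse_periodic_sample(view_behavior, period, fillers):
    """One possible inverse image: all behaviors whose periodic sample equals
    `view_behavior`, with the skipped positions drawn from `fillers`."""
    gaps = len(view_behavior) - 1
    for gap_states in product(product(fillers, repeat=period - 1), repeat=gaps):
        behavior = [view_behavior[0]]
        for i, gap in enumerate(gap_states):
            behavior.extend(gap)
            behavior.append(view_behavior[i + 1])
        yield behavior

print(periodic_sample([0, 1, 2, 3, 4, 5, 6], period=3))          # [0, 3, 6]
print(next(inverse_periodic_sample([0, 3, 6], 3, fillers=[9])))  # [0, 9, 9, 3, 9, 9, 6]
```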
Engineering Resilient Collective Adaptive Systems by Self-Stabilisation
Collective adaptive systems are an emerging class of networked computational systems, particularly suited to application domains such as smart cities, complex sensor networks, and the Internet of Things. These systems tend to feature large scale, heterogeneity of communication model (including opportunistic peer-to-peer wireless interaction), and require inherent self-adaptiveness properties to address unforeseen changes in operating conditions. In this context, it is extremely difficult (if not seemingly intractable) to engineer reusable pieces of distributed behaviour so as to make them provably correct and smoothly composable.
Building on the field calculus, a computational model (and associated toolchain) capturing the notion of aggregate network-level computation, we address this problem with an engineering methodology coupling formal theory and computer simulation. On the one hand, functional properties are addressed by identifying the largest-to-date field calculus fragment generating self-stabilising behaviour, guaranteed to eventually attain a correct and stable final state despite any transient perturbation in state or topology, and including highly reusable building blocks for information spreading, aggregation, and time evolution. On the other hand, dynamical properties are addressed by simulation, empirically evaluating the different performances that can be obtained by switching between implementations of building blocks with provably equivalent functional properties. Overall, our methodology sheds light on how to identify core building blocks of collective behaviour, and how to select implementations that improve system performance while leaving overall system function and resiliency properties unchanged. (To appear in ACM Transactions on Modeling and Computer Simulation.)
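For intuition, an information-spreading building block of the kind mentioned above can be approximated by the classic self-stabilising gradient computation; the Python sketch below simulates it with synchronous rounds on a fixed graph, a simplifying assumption rather than the field calculus toolchain itself:

```python
import math

def gradient_round(graph, dist, sources, edge_weight=1.0):
    """One synchronous round of the self-stabilising gradient: sources clamp
    their value to 0, every other node takes the minimum of its neighbours'
    values plus the edge weight."""
    new = {}
    for node, neighbours in graph.items():
        if node in sources:
            new[node] = 0.0
        else:
            new[node] = min((dist[n] + edge_weight for n in neighbours),
                            default=math.inf)
    return new

def run_until_stable(graph, sources, max_rounds=100):
    dist = {node: math.inf for node in graph}   # arbitrary initial state
    for _ in range(max_rounds):
        new = gradient_round(graph, dist, sources)
        if new == dist:                         # fixed point: self-stabilised
            return new
        dist = new
    return dist

# Hypothetical 4-node line topology with node 'a' as the source.
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(run_until_stable(graph, sources={"a"}))
# {'a': 0.0, 'b': 1.0, 'c': 2.0, 'd': 3.0}
```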
Design Space Exploration and Resource Management of Multi/Many-Core Systems
The increasing demand for processing ever more applications and related data on computing platforms has resulted in a reliance on multi-/many-core chips, as they facilitate parallel processing. However, these platforms also need to be energy-efficient and reliable, and to perform secure computations in the interest of the whole community. This book provides perspectives on the aforementioned aspects from leading researchers in terms of state-of-the-art contributions and upcoming trends.
Dynamic analysis of Cyber-Physical Systems
With the recent advances in communication and computation technologies, integration of software into sensing, actuation, and control is common. This has led to a new branch of study called Cyber-Physical Systems (CPS). Avionics, automotive systems, power grids, medical devices, and robotics are a few examples of such systems. As these systems are part of critical infrastructure, it is very important to ensure that they function reliably without failures. While testing improves confidence in these systems, it does not establish the absence of scenarios in which the system fails. The focus of this thesis is on formal verification techniques for cyber-physical systems that prove the absence of errors in a given system. In particular, this thesis focuses on dynamic analysis techniques that bridge the gap between testing and verification.
This thesis uses the framework of hybrid input/output automata for modeling CPS. Formal verification of hybrid automata is undecidable in general; because of this undecidability result, no algorithm is guaranteed to terminate for all models. This thesis therefore focuses on developing heuristics for verification that exploit sample executions of the system. Moreover, the goal of the dynamic analysis techniques proposed in this thesis is to ensure that the techniques are sound, i.e., they always return the right answer, and relatively complete, i.e., they terminate when the system satisfies certain special conditions. For undecidable problems, such theoretical guarantees are the strongest that can be expected of any automatic procedure. This thesis focuses on safety properties, which require that nothing bad happens. In particular, we consider invariant and temporal precedence properties; temporal precedence properties ensure that the temporal ordering of certain events in every execution satisfies a given specification.
This thesis introduces the notion of a discrepancy function that aids in the dynamic analysis of CPS. Informally, discrepancy functions capture the convergence or divergence of continuous behaviors in CPS. In control theory, several proof certificates, such as contraction metrics and incremental stability, have been proposed to capture the convergence and divergence of solutions of ordinary differential equations. This thesis establishes that discrepancy functions generalize such proof certificates. Further, this thesis also proposes a new technique to compute discrepancy functions for continuous systems with linear ODEs from sample executions.
One of the main contributions of this thesis is a technique to compute an over-approximation of the set of reachable states using sample executions and discrepancy functions. Using this reachability computation technique, the thesis proposes a safety verification algorithm that is proved to be sound and relatively complete. This technique is implemented in a tool called Compare-Execute-Check-Engine (C2E2), and experimental results show that it is scalable.
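The following Python sketch illustrates the bloating idea in a much simplified form (it is not the C2E2 implementation): a sample execution of a linear ODE is inflated by the exponential discrepancy bound ||x1(t) - x2(t)|| <= ||x1(0) - x2(0)|| * exp(L*t), with L taken as a Lipschitz constant of the dynamics, and each bloated ball is checked against a hypothetical unsafe set:

```python
import numpy as np

def simulate_linear(A, x0, dt, steps):
    """Forward-Euler sample execution of x' = A x (deliberately coarse)."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        xs.append(xs[-1] + dt * A @ xs[-1])
    return xs

def reach_tube(A, x0, init_radius, dt, steps):
    """Over-approximate states reachable from a ball of radius init_radius
    around x0 by bloating one sample execution with the exponential
    discrepancy bound init_radius * exp(L * t)."""
    L = np.linalg.norm(A, 2)   # induced 2-norm bounds the Lipschitz constant
    xs = simulate_linear(A, x0, dt, steps)
    return [(x, init_radius * np.exp(L * k * dt)) for k, x in enumerate(xs)]

def safe(tube, unsafe_center, unsafe_radius):
    """Safety check: no bloated ball intersects the unsafe ball."""
    return all(np.linalg.norm(x - unsafe_center) > r + unsafe_radius
               for x, r in tube)

# Hypothetical stable 2-D system, initial ball of radius 0.1 around (1, 0).
A = np.array([[-1.0, 2.0], [-2.0, -1.0]])
tube = reach_tube(A, [1.0, 0.0], init_radius=0.1, dt=0.01, steps=50)
print(safe(tube, unsafe_center=np.array([5.0, 5.0]), unsafe_radius=0.5))
# expect True: the bloated tube stays clear of the unsafe ball
```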
To demonstrate the applicability of the algorithms presented, two challenging case studies are analyzed as part of this thesis. The first case study concerns an alerting mechanism for parallel aircraft landing; to perform it, the dynamic analysis presented for invariant verification is extended to handle temporal properties. The second case study concerns verifying key specifications of a powertrain control system; new algorithms for computing discrepancy functions were implemented in C2E2 to perform it. Both case studies demonstrate that the dynamic analysis techniques give promising results and can be applied to realistic CPS.
For distributed CPS implementations, where message passing and clock skews between agents make formal verification difficult to scale, this thesis presents a dynamic analysis algorithm for inferring global predicates. Such global predicates include assertions about the physical state and the software state of all the agents involved in the distributed CPS. This algorithm is applied to coordinated robotic maneuvers for inferring safety and detecting deadlocks.
RitHM: A Modular Software Framework for Runtime Monitoring Supporting Complete and Lossy Traces
Runtime verification (RV) is an effective and automated method for specification-based offline testing as well as online monitoring of complex real-world systems. Firstly, a software framework for RV needs to exhibit certain design features to support usability, modifiability, and efficiency. While usability and modifiability are important for providing support for expressive logical formalisms, efficiency is required to reduce the extra overhead at run time. Secondly, most existing techniques assume the existence of a complete execution trace for RV. However, real-world systems often produce incomplete execution traces due to reasons such as network issues, logging failures, etc. A few techniques have recently emerged for verifying incomplete execution traces. While some of these techniques sacrifice soundness, others are too restrictive in their tolerance for incompleteness.
To address the first problem, we introduce RitHM, a comprehensive framework that enables the development and integration of efficient verification techniques. RitHM's design takes into account various state-of-the-art techniques that have been developed to optimize RV with respect to the efficiency of monitors and the expressivity of logical formalisms. RitHM's design supports modifiability by allowing the reuse of efficient monitoring algorithms in the form of plugins, which can utilize heterogeneous back-ends. RitHM also supports extensions of logical formalisms through logic plugins. It further facilitates interoperability between implementations of monitoring algorithms, which allows different efficient algorithms to be used for monitoring different sub-parts of a specification.
We evaluate RitHM's architecture, and the architectures of a few other tools, using the Architecture Tradeoff Analysis Method (ATAM). We also report empirical results where RitHM is used for monitoring real-world systems. The results underscore the importance of RitHM's various design features.
To address the second problem, we identify a fragment of LTL specifications that can be soundly monitored in the presence of transient loss events in an execution trace. We present an offline algorithm that determines whether an LTL formula is monitorable in the presence of a transient loss of events and constructs a loss-tolerant monitor depending on the monitorability of the formula.
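For intuition only, the sketch below (not the RitHM implementation) monitors the simple safety formula G p over a trace with lost events: lost positions are marked explicitly, and the monitor remains sound by reporting an inconclusive verdict instead of guessing what was lost:

```python
LOSS = object()   # marker for a position where the event was lost

def monitor_always(trace, prop):
    """Sound monitor for "globally prop" over a possibly lossy trace.
    Returns 'violated' only on observed evidence, 'unknown' if the only
    potential violations fall inside lost segments, else 'ok so far'."""
    saw_loss = False
    for event in trace:
        if event is LOSS:
            saw_loss = True          # cannot rule out a violation here
        elif not prop(event):
            return "violated"        # definite, loss-independent verdict
    return "unknown" if saw_loss else "ok so far"

# Hypothetical trace of request latencies; property: latency below 100 ms.
trace = [42, 57, LOSS, 63, 71]
print(monitor_always(trace, lambda latency: latency < 100))   # 'unknown'
```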
Our experimental results demonstrate that our method increases the applicability of RV for monitoring various real-world applications that produce lossy traces. The extra overhead caused by the constructed monitors is minimal, as demonstrated by applying our method to commonly used patterns of LTL formulas.
Time for Reactive System Modeling
Reactive systems interact with their environment by reading inputs and computing and feeding back outputs in reactive cycles that are also called ticks. They are often safety-critical systems and are increasingly modeled with high-level modeling tools. The concepts of the corresponding modeling languages are typically aimed at facilitating formal reasoning about program constructiveness to guarantee deterministic outputs, and are explicitly abstracted from execution-time aspects. Nevertheless, the worst-case execution time of a tick can be a crucial value, and exceeding it can lead to lost inputs or a tardy reaction to critical events. This thesis proposes a general approach to interactive timing analysis, which feeds detailed timing values back directly into the model representation to support timing-aware modeling. The concept is based on a generic timing interface that makes both the modeling tool and the timing analysis tool exchangeable, allowing flexible implementation of varying tool chains. The proposed timing analysis approach includes visual highlighting and modeling pragmatics features to guide the user to timing hotspots for timing-related model revisions.
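As a rough illustration of what such a generic timing interface might look like (all names here are hypothetical, not the interface defined in the thesis), a modeling tool could request per-element WCET contributions from a timing back-end and map them back onto model elements for highlighting:

```python
from dataclasses import dataclass

@dataclass
class TimingValue:
    model_element: str     # id of the state/transition in the model
    wcet_cycles: int       # worst-case execution time contribution

class TimingBackend:
    """Stand-in for an external WCET analysis tool behind a generic interface."""
    def analyze(self, generated_code: dict[str, str]) -> list[TimingValue]:
        # A real back-end would analyze the compiled binary; here we simply
        # assume a fixed cost per generated statement for illustration.
        return [TimingValue(elem, 10 * len(code.splitlines()))
                for elem, code in generated_code.items()]

def highlight_hotspots(timings: list[TimingValue], budget_cycles: int):
    """Feed timing values back into the model view: flag the elements that
    dominate the tick's worst-case reaction time."""
    total = sum(t.wcet_cycles for t in timings)
    over_budget = total > budget_cycles
    hotspots = sorted(timings, key=lambda t: t.wcet_cycles, reverse=True)[:3]
    return over_budget, [t.model_element for t in hotspots]

# Hypothetical generated code for three model elements.
code = {"state_idle": "x = 0",
        "state_run": "x += 1\ny = f(x)\nsend(y)",
        "t_done": "reset()"}
print(highlight_hotspots(TimingBackend().analyze(code), budget_cycles=100))
```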
OSS architecture for mixed-criticality systems – a dual view from a software and system engineering perspective
Computer-based automation in industrial appliances has led to a growing number of logically dependent, but physically separated, embedded control units per appliance. Many of those components are safety-critical systems and require adherence to safety standards, which is at odds with the relentless demand for features in those appliances. Features lead to a growing number of control units per appliance and to an increasing complexity of the overall software stack, which is unfavourable for safety certifications. Modern CPUs provide means to revise the traditional separation-of-concerns design primitive through the consolidation of systems, which yields new engineering challenges that concern the entire software and system stack.
Multi-core CPUs favour the economic consolidation of formerly separated systems onto one efficient hardware unit. Nonetheless, the system architecture must provide means to guarantee freedom from interference between domains of different criticality. System consolidation demands architectural and engineering strategies to fulfil requirements (e.g., real-time or certifiability criteria) in safety-critical environments.
In parallel, there is an ongoing trend to substitute ordinary proprietary base-platform software components with mature OSS variants for economic and engineering reasons. However, the development processes of OSS and proprietary software differ fundamentally in their processual properties. Using OSS in safety-critical systems therefore requires development-process assessment techniques that build an evidence-based foundation for certification efforts upon empirical software engineering methods.
In this thesis, I approach the problem from both sides: the software and the system engineering perspective. In the first part of this thesis, I focus on the assessment of OSS components: I develop software engineering techniques that allow quantifying characteristics of distributed OSS development processes. I show that ex-post analyses of software development processes can serve as a foundation for certification efforts, as is required for safety-critical systems.
In the second part of this thesis, I present a system architecture based on OSS components that allows for the consolidation of mixed-criticality systems on a single platform. To this end, I exploit virtualisation extensions of modern CPUs to strictly isolate domains of different criticality. The proposed architecture is designed to eradicate any remaining hypervisor activity in order to preserve the real-time capabilities of the hardware by design, while guaranteeing strict isolation across domains.
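As a small, purely illustrative example of the kind of ex-post process analysis mentioned in the first part (the metric and data layout are assumptions, not the techniques developed in the thesis), one could quantify how large a share of merged changes passed through independent review, a typical input for certification arguments:

```python
import csv
from collections import Counter

def review_coverage(commit_log_csv):
    """Share of commits whose author differs from the committer, used here
    as a crude proxy for 'changes that passed through independent review'."""
    reviewed, total = 0, 0
    authors = Counter()
    with open(commit_log_csv, newline="") as f:
        for row in csv.DictReader(f):        # expected columns: author, committer
            total += 1
            authors[row["author"]] += 1
            if row["author"] != row["committer"]:
                reviewed += 1
    return reviewed / total if total else 0.0, authors.most_common(5)

# Hypothetical export, e.g. from `git log --pretty=format:"%an,%cn"` plus a header row.
# coverage, top_contributors = review_coverage("subsystem_commits.csv")
```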
Fundamental Approaches to Software Engineering
This open access book constitutes the proceedings of the 23rd International Conference on Fundamental Approaches to Software Engineering, FASE 2020, which took place in Dublin, Ireland, in April 2020, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The 23 full papers, 1 tool paper, and 6 testing competition papers presented in this volume were carefully reviewed and selected from 81 submissions. The papers cover topics such as requirements engineering, software architectures, specification, software quality, validation, verification of functional and non-functional properties, model-driven development and model transformation, software processes, security, and software evolution.
Design and implementation of WCET analyses: including a case study on multi-core processors with shared buses
For safety-critical real-time embedded systems, worst-case execution time (WCET) analysis, which determines an upper bound on the possible execution times of a program, is an important part of system verification. Multi-core processors share resources (e.g., buses and caches) between multiple processor cores and thus complicate the WCET analysis, as the execution times of a program executed on one processor core significantly depend on the programs executed in parallel on the concurrent cores. We refer to this phenomenon as shared-resource interference. This thesis proposes a novel way of modeling shared-resource interference during WCET analysis. It enables an efficient analysis, as it only considers one processor core at a time, and it is sound for hardware platforms exhibiting timing anomalies. Moreover, this thesis demonstrates how to realize a timing-compositional verification on top of the proposed modeling scheme. In this way, the thesis closes the gap between modern hardware platforms, which exhibit timing anomalies, and existing schedulability analyses, which rely on timing compositionality. In addition, this thesis proposes a novel method for calculating an upper bound on the amount of interference that a given processor core can generate in any time interval of at most a given length. Our experiments demonstrate that the novel method is more precise than existing methods. Funding: Deutsche Forschungsgemeinschaft (DFG) as part of the Transregional Collaborative Research Centre SFB/TR 14 (AVACS).
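As a purely illustrative sketch (hypothetical parameters, not the method developed in the thesis), a coarse upper bound on the shared-bus interference a core can generate in a time window can be derived from a classical request-bound argument: each periodic task issues at most a known number of bus accesses per job, and the number of jobs that can execute within a window of length Delta is bounded by ceil(Delta / period) + 1:

```python
import math

def max_bus_accesses(tasks, window):
    """Coarse upper bound on the number of shared-bus accesses a core can
    issue in any time window of the given length: each periodic task
    contributes at most ceil(window / period) + 1 jobs (the +1 accounts
    for a carry-in job), each issuing at most `accesses` bus requests."""
    return sum((math.ceil(window / t["period"]) + 1) * t["accesses"]
               for t in tasks)

def interference_bound(tasks, window, bus_access_latency):
    """Upper bound on the bus time (interference) generated in the window."""
    return max_bus_accesses(tasks, window) * bus_access_latency

# Hypothetical task set on one core: period and max bus accesses per job.
tasks = [{"period": 10, "accesses": 4}, {"period": 25, "accesses": 9}]
print(interference_bound(tasks, window=100, bus_access_latency=2))  # 178
```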