Efficiency Improvements in the Quality Assurance Process for Data Races
As the use of concurrency in software has grown in importance over recent years, and continues to grow, new types of defects have increasingly appeared in software. One of the most prominent and critical of these new defect types is the data race. Although research has increased the effectiveness of dynamic quality assurance for data races, the efficiency of the quality assurance process is still a factor preventing widespread practical application. First, the dynamic quality assurance techniques used for detecting data races are inefficient: too much effort is needed to conduct dynamic quality assurance. Second, the techniques used for analyzing reported data races are inefficient: too much effort is needed to analyze reported data races and to identify the underlying issues in the source code.
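For orientation (this example is not taken from the thesis): a data race occurs when two threads access the same memory location without synchronization and at least one access is a write. A minimal Java sketch:

```java
// Minimal illustration of a data race (not from the thesis):
// two threads update a shared counter without synchronization.
public class DataRaceExample {
    static int counter = 0; // shared, unsynchronized

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // read-modify-write; races with the other thread
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Expected 200000, but lost updates make smaller values possible.
        System.out.println("counter = " + counter);
    }
}
```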
The goal of this thesis is to enable efficiency improvements in the quality assurance process for data races by: (1) analyzing a representation of the dynamic behavior of the system under test, and using the results to focus the instrumentation of that system, which lowers the runtime overhead during test execution compared to full instrumentation; and (2) analyzing the characteristics of reported data races and preprocessing them, with the results of the preprocessing provided to developers and quality assurance personnel to enable an analysis and debugging process that is more efficient than the traditional analysis of data race reports. Apart from dynamic data race detection itself, which the solution complements, all steps in the process of dynamic quality assurance for data races are discussed in this thesis.
The solution for analyzing UML Activities for nodes that may execute in parallel with other nodes or with themselves rests on a formal foundation from graph theory. A major problem solved in this thesis is the handling of cycles within UML Activities: the thesis introduces a dynamic limit on the number of cycle traversals, based on the elements of each UML Activity to be analyzed and their semantics. Formal proofs are provided for the construction of directed acyclic graphs and for their analysis, i.e., the identification of elements that may execute in parallel with other elements. Based on an examination of the characteristics of data races and data race reports, the results of dynamic data race detection are preprocessed, and the outcome of this preprocessing is presented to users for further analysis.
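To make the graph-based analysis concrete, here is a hedged sketch (not the thesis's algorithm; class and method names are assumptions): once an Activity has been unrolled into a directed acyclic graph, two nodes may execute in parallel if neither can reach the other, which can be checked with plain reachability.

```java
import java.util.*;

// Illustrative sketch only: identifies node pairs that may run in parallel
// in a directed acyclic graph by checking mutual unreachability.
// The actual analysis in the thesis also accounts for UML Activity semantics.
public class MayHappenInParallel {
    private final Map<String, List<String>> successors = new HashMap<>();

    public void addEdge(String from, String to) {
        successors.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
        successors.computeIfAbsent(to, k -> new ArrayList<>());
    }

    private boolean reaches(String from, String to) {
        Deque<String> work = new ArrayDeque<>(List.of(from));
        Set<String> seen = new HashSet<>();
        while (!work.isEmpty()) {
            String n = work.pop();
            if (n.equals(to)) return true;
            if (seen.add(n)) work.addAll(successors.getOrDefault(n, List.of()));
        }
        return false;
    }

    /** Two distinct nodes may execute in parallel if neither reaches the other. */
    public boolean mayHappenInParallel(String a, String b) {
        return !a.equals(b) && !reaches(a, b) && !reaches(b, a);
    }

    public static void main(String[] args) {
        MayHappenInParallel g = new MayHappenInParallel();
        g.addEdge("fork", "taskA");
        g.addEdge("fork", "taskB");
        g.addEdge("taskA", "join");
        g.addEdge("taskB", "join");
        System.out.println(g.mayHappenInParallel("taskA", "taskB")); // true
        System.out.println(g.mayHappenInParallel("fork", "taskA"));  // false
    }
}
```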
The thesis further provides an exemplary application of the solution idea and of the results of analyzing UML Activities, as well as an exemplary examination of the efficiency improvement of dynamic data race detection, which showed a 44% reduction in runtime overhead when using focused instrumentation compared to full instrumentation. Finally, a controlled experiment was set up and conducted to examine the effects of preprocessing reported data races on the efficiency of analyzing data race reports. The results show that the solution presented in this thesis enables efficiency improvements in the analysis of data race reports of between 190% and 660% compared to traditional approaches.
Finally, opportunities for future work are outlined, which may enable broader use of the results of this thesis and further improvements in the efficiency of quality assurance for data races.
Embedded System Design
A unique feature of this open access textbook is its comprehensive introduction to the fundamentals of embedded systems, with applications in cyber-physical systems and the Internet of Things. It starts with an introduction to the field and a survey of specification models and languages for embedded and cyber-physical systems. It provides a brief overview of hardware devices used for such systems and presents the essentials of system software for embedded systems, including real-time operating systems. The author also discusses evaluation and validation techniques for embedded systems and provides an overview of techniques for mapping applications to execution platforms, including multi-core platforms. Embedded systems have to operate under tight constraints and, hence, the book also contains a selected set of optimization techniques, including software optimization techniques. The book closes with a brief survey on testing. This fourth edition has been updated and revised to reflect new trends and technologies, such as the importance of cyber-physical systems (CPS) and the Internet of Things (IoT), the evolution from single-core to multi-core processors, and the increased importance of energy efficiency and thermal issues.
Transparent resilience for Chapel
High-performance systems pose a number of challenges to traditional fault tolerance approaches. The exponential increase of core numbers in large-scale distributed systems exposes the growth of permanent, intermittent, and transient faults. The redundancy schemes in use increase the number of system resources dedicated to recovery, while the extensive use of silent-failure mode inhibits systems' capability to detect faults that hinder application progress. As parallel computation strives to survive the high failure rates, software shifts focus towards the support of resilience.
The thesis proposes a mechanism for resilience support in Chapel, the high-performance language developed by Cray. We investigate the potential for embedded transparent resilience to assist uninterrupted program completion on distributed hardware in the event of component failures. Our goal is to achieve graceful degradation: continued application execution when nodes in the system suffer fatal failures. We aim to provide a resilience-enabled version of the language without application code modifications. We focus on Chapel's task- and data-parallel constructs and enhance their functionality with mechanisms to support resilience.
In particular, we build on existing language constructs that facilitate parallel execution in Chapel. We focus on constructs that introduce unstructured and structured parallelism and constructs that introduce locality, as derived from the Partitioned Global Address Space programming model. Furthermore, we expand the resilient support to cover data distributions at the library level.
The core implementation is at the runtime level, primarily in Chapel's tasking and communication layers; we introduce mechanisms to support automatic task adoption and recovery by guiding the control flow to perform task re-execution. On the data-parallel track, we propose a resilience-enabled version of the Block data distribution module. We develop an in-memory data redundancy mechanism exploiting Chapel's concept of locales. We apply the concept of buddy locales as the primary means to store data redundantly and to adopt remote workload from failed locales.
We evaluate our resilient task-parallel mechanism with respect to the overheads introduced by embedded resilience. We use a set of constructed micro-benchmarks to evaluate the resilient task-parallel implementation, while for the evaluation of resilient data parallelism we demonstrate results on the STREAM Triad benchmark and the N-body all-pairs algorithm on a 32-node Beowulf cluster. To assist the evaluation, we develop an error injection interface to simulate node failures.
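As an illustration of the buddy-locale idea (a hedged sketch in Java rather than Chapel; the names and structure are assumptions, not the thesis's implementation): each node keeps a replica of its data block on a designated buddy, and when a node fails its buddy adopts the corresponding workload.

```java
import java.util.*;

// Hedged sketch of buddy-based redundancy (illustrative only, not Chapel code):
// node i replicates its data block on node (i + 1) % N; on failure, the buddy
// adopts the failed node's block and processes it in the failed node's place.
public class BuddyRedundancySketch {
    static final int NODES = 4;
    static int[][] primary = new int[NODES][];  // data owned by each node
    static int[][] replica = new int[NODES][];  // copy held by the owner's buddy

    static int buddyOf(int node) { return (node + 1) % NODES; }

    public static void main(String[] args) {
        // Distribute blocks and replicate each block on its owner's buddy.
        for (int n = 0; n < NODES; n++) {
            primary[n] = new int[]{n, n + 10, n + 20};
            replica[buddyOf(n)] = primary[n].clone();
        }
        Set<Integer> failed = Set.of(2); // simulate node 2 failing

        long sum = 0;
        for (int n = 0; n < NODES; n++) {
            if (!failed.contains(n)) {
                sum += Arrays.stream(primary[n]).sum();          // normal work
            } else {
                sum += Arrays.stream(replica[buddyOf(n)]).sum(); // buddy adopts it
            }
        }
        System.out.println("sum over all blocks despite failure = " + sum);
    }
}
```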
Mechanising an algebraic rely-guarantee refinement calculus
Despite rely-guarantee (RG) being a well-studied program logic established in the 1980s, it was not until recently that researchers realised that rely and guarantee conditions could be treated as independent programming constructs. This recent reformulation of RG paved the way to algebraic characterisations which have helped to better understand the difficulties that arise in the practical application of this development approach.
The primary focus of this thesis is to provide automated tool support for a rely-guarantee refinement calculus proposed by Hayes et al., where rely and guarantee are defined as independent commands. Our motivation is to investigate the application of an algebraic approach to derive concrete examples using this calculus. In the course of this thesis, we locate and fix a few issues involving the refinement language, its operational semantics and pre-existing proofs. Moreover, we extend the refinement calculus of Hayes et al. to cover indexed parallel composition, non-atomic evaluation of expressions within specifications, and assignment to indexed arrays. These extensions are illustrated via concrete examples.
Special attention is given to design decisions that simplify the application of the mechanised theory. For example, we leave part of the design of the expression language in the hands of the user, at the cost of requiring the user to define the notion of undefinedness for unary and binary operators; and we also formalise a notion of indexed parallelism that is parametric in the type of the indexes, which is done deliberately to simplify the formalisation of algorithms. Additionally, we use stratification to reduce the number of cases in simulation proofs involving the operational semantics. Finally, we also use the algebra to discuss the role of types in program derivation.
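For readers unfamiliar with the logic, a hedged reminder of the classical Jones-style rely-guarantee judgement; the notation below is the traditional quintuple presentation with a simplified parallel rule, not necessarily the command-based algebraic formulation of Hayes et al. that the thesis mechanises.

```latex
% Classical rely-guarantee quintuple (Jones-style), shown for orientation only.
% If the precondition p holds and every environment step satisfies the rely r,
% then every step of c satisfies the guarantee g and q holds on termination.
\[
  \{p,\ r\} \;\; c \;\; \{g,\ q\}
\]
% Simplified parallel composition rule: each thread's guarantee must be
% tolerated by the other thread's rely.
\[
  \frac{\{p,\ r \lor g_2\}\ c_1\ \{g_1,\ q_1\} \qquad
        \{p,\ r \lor g_1\}\ c_2\ \{g_2,\ q_2\}}
       {\{p,\ r\}\ c_1 \parallel c_2\ \{g_1 \lor g_2,\ q_1 \land q_2\}}
\]
```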
IST Austria Thesis
In this thesis we present a computer-aided programming approach to concurrency. Our approach helps the programmer by automatically fixing concurrency-related bugs, i.e. bugs that occur when the program is executed using an aggressive preemptive scheduler but not when using a non-preemptive (cooperative) scheduler. Bugs are program behaviours that are incorrect w.r.t. a specification. We consider both user-provided explicit specifications in the form of assertion statements in the code and an implicit specification. The implicit specification is inferred from the non-preemptive behaviour: considering the sequences of calls that the program makes to an external interface, it requires that any such sequence produced under a preemptive scheduler is included in the set of sequences produced under a non-preemptive scheduler.
We consider several semantics-preserving fixes that go beyond the atomic sections typically explored in the synchronisation synthesis literature. Our synthesis is able to place locks, barriers and wait-signal statements and, last but not least, to reorder independent statements. The latter may be useful if a thread is released too early, e.g., before some initialisation is completed. We guarantee that our synthesis does not introduce deadlocks and that the synchronisation inserted is optimal w.r.t. a given objective function. We dub our solution trace-based synchronisation synthesis; it is loosely based on counterexample-guided inductive synthesis (CEGIS). The synthesis works by discovering a trace that is incorrect w.r.t. the specification and identifying the ordering constraints crucial to trigger the specification violation. Synchronisation may be placed immediately (greedy approach) or delayed until all incorrect traces are found (non-greedy approach). For the non-greedy approach we construct a set of global constraints over synchronisation placements; each model of this constraint set corresponds to a correctness-ensuring synchronisation placement, and the placement that is optimal w.r.t. the given objective function is chosen as the synchronisation solution.
We evaluate our approach on a number of realistic (albeit simplified) Linux device-driver benchmarks. The benchmarks are versions of the drivers with known concurrency-related bugs. For the experiments with an explicit specification we added assertions that detect these bugs. Device drivers lend themselves to implicit specification, where the device and the operating system are the external interfaces. Our experiments demonstrate that our synthesis method is precise and efficient. We implemented objective functions for coarse-grained and fine-grained locking and observed that different synchronisation placements are produced for our experiments, favouring e.g. a minimal number of synchronisation operations or maximum concurrency.
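A hedged Java sketch of the bug class mentioned above (a thread released before initialisation completes); the names are illustrative, and the fix shown is the kind of statement reordering such a synthesis could choose, not output of the actual tool.

```java
import java.util.concurrent.CountDownLatch;

// Illustrative only: a worker is released before initialisation is complete.
// One possible fix of the kind the synthesis could insert is to move the
// release (countDown) after the initialisation, or to add wait/signal.
public class EarlyReleaseExample {
    static int config = 0;                       // set during initialisation
    static final CountDownLatch ready = new CountDownLatch(1);

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                ready.await();                   // wait until released
                // Bug variant: if countDown() runs before config = 42,
                // the worker may observe config == 0.
                System.out.println("worker sees config = " + config);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        // Buggy order (commented out): ready.countDown(); config = 42;
        // Fixed order: initialise first, then release the worker.
        config = 42;
        ready.countDown();

        worker.join();
    }
}
```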
Model checking mutual exclusion algorithms using Uppaal
This paper proposes an approach to the modelling and exhaustive verification of mutual exclusion algorithms based on Timed Automata in the context of the popular Uppaal toolbox. The approach makes it possible to study the properties of a mutual exclusion algorithm also in the presence of the time dimension. For demonstration purposes, some historical algorithms are modelled and thoroughly analyzed, going beyond the informal reasoning reported in the literature. The paper also proposes a mutual exclusion algorithm for processes whose model checking confirms that it satisfies all the required properties.
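As an example of the kind of historical algorithm such a study typically models (the paper's exact selection is not stated here), a Java rendering of Peterson's two-process mutual exclusion algorithm:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicIntegerArray;

// Peterson's algorithm for two processes, a classic example of the kind of
// mutual exclusion algorithm that can be modelled and verified with Uppaal.
// Atomic fields stand in for the shared registers the algorithm assumes.
public class Peterson {
    static final AtomicIntegerArray wants = new AtomicIntegerArray(2); // 0/1 flags
    static volatile int turn = 0;
    static final AtomicInteger inCritical = new AtomicInteger(0);

    static void process(int id) {
        int other = 1 - id;
        for (int i = 0; i < 100_000; i++) {
            wants.set(id, 1);            // announce intent to enter
            turn = other;                // yield priority to the other process
            while (wants.get(other) == 1 && turn == other) { /* busy wait */ }
            // Critical section: at most one process may be inside at a time.
            if (inCritical.incrementAndGet() > 1) throw new IllegalStateException("violation");
            inCritical.decrementAndGet();
            wants.set(id, 0);            // exit protocol
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread p0 = new Thread(() -> process(0));
        Thread p1 = new Thread(() -> process(1));
        p0.start(); p1.start();
        p0.join(); p1.join();
        System.out.println("no mutual exclusion violation observed");
    }
}
```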
Modelling and verification of starvation-free mutual exclusion algorithms based on weak semaphores
This paper proposes an original framework for the modelling and verification (M&V) of starvation-free mutual exclusion algorithms based on weak semaphores, i.e., semaphores without a built-in queue of waiting processes. The goal is to support the implementation of lightweight starvation-free semaphores useful in general concurrent systems, including cyber-physical systems. The M&V approach relies on UPPAAL. First, known weak semaphores are modelled. Then they are exploited for model checking classic algorithms. Known properties are retrieved, but subtle new ones are discovered. As part of the developed approach, a new algorithm is proposed which uses two semaphores of the weakest type, N bits (N being the number of processes) and a counter. This algorithm, too, is proved to be correct.
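As a hedged illustration of what "weak" means here (plain Java, not taken from the paper): a weak binary semaphore grants the resource to some competing process when released, without keeping a queue, so the semaphore itself guarantees neither fairness nor freedom from starvation.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative weak binary semaphore (not from the paper): it has no queue of
// waiting processes, so when it is released any spinning competitor may win.
// Starvation-freedom must therefore be ensured by the algorithm built on top.
public class WeakBinarySemaphore {
    private final AtomicBoolean available = new AtomicBoolean(true);

    /** Acquire by busy-waiting until the atomic test-and-set succeeds. */
    public void acquire() {
        while (!available.compareAndSet(true, false)) {
            Thread.onSpinWait(); // no queue, no ordering among waiters
        }
    }

    /** Release; which waiter (if any) proceeds next is unspecified. */
    public void release() {
        available.set(true);
    }

    public static void main(String[] args) throws InterruptedException {
        WeakBinarySemaphore sem = new WeakBinarySemaphore();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                sem.acquire();
                // critical section protected by the semaphore
                sem.release();
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("done");
    }
}
```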
Actor-Oriented Programming for Resource Constrained Multiprocessor Networks on Chip
Multiprocessor Networks on Chip (MPNoCs) are an attractive architecture for integrated circuits as they can benefit from the improved performance of ever smaller transistors but are not severely constrained by the poor performance of global on-chip wires. As the number of processors increases it becomes ever more expensive to provide coherent shared memory but this is a foundational assumption of thread-level parallelism. Threaded models of concurrency cannot efficiently address architectures where shared memory is not coherent or does not exist.
In this thesis an extended actor oriented programming model is proposed to enable the design of complex and general purpose software for highly parallel and decentralised multiprocessor architectures. This model requires the encapsulation of an execution context and state into isolated Machines which may only initiate communication with one another via explicitly named channels. An emphasis on message passing and strong isolation of computation encourages application structures that are congruent with the nature of non-shared memory multiprocessors, and the model also avoids creating dependences on specific hardware topologies.
A realisation of the model called Machine Java is presented to demonstrate the applicability of the model to a general purpose programming language. Applications designed with this framework are shown to be capable of scaling to large numbers of processors while remaining independent of the hardware targets. Through the use of an efficient compilation technique, Machine Java is demonstrated to be portable across several architectures and viable even in the highly constrained context of an FPGA-hosted MPNoC.
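A hedged sketch of the programming style described (plain Java with explicit channels; this is not the actual Machine Java API, whose types and names are not shown here): each machine encapsulates its own state and communicates only by sending messages over named channels.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hedged sketch of actor-style machines with explicitly named channels
// (plain Java, not the actual Machine Java API). Each machine owns its state
// and interacts with others only through message passing.
public class MachineSketch {
    /** A named, bounded channel carrying messages between two machines. */
    static final class Channel<T> {
        final String name;
        final BlockingQueue<T> queue = new ArrayBlockingQueue<>(16);
        Channel(String name) { this.name = name; }
        void send(T msg) throws InterruptedException { queue.put(msg); }
        T receive() throws InterruptedException { return queue.take(); }
    }

    public static void main(String[] args) throws InterruptedException {
        Channel<Integer> requests = new Channel<>("requests");
        Channel<Integer> replies  = new Channel<>("replies");

        // A "doubler" machine: isolated state, reacts to messages on 'requests'.
        Thread doubler = new Thread(() -> {
            try {
                while (true) {
                    int value = requests.receive();
                    if (value < 0) return;          // poison pill terminates it
                    replies.send(value * 2);
                }
            } catch (InterruptedException ignored) { }
        });
        doubler.start();

        for (int i = 1; i <= 3; i++) {
            requests.send(i);
            System.out.println(i + " doubled is " + replies.receive());
        }
        requests.send(-1);                           // shut the machine down
        doubler.join();
    }
}
```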