
    Improving WCET Evaluation using Linear Relation Analysis

    The precision of a worst-case execution time (WCET) evaluation tool on a given program depends strongly on how well the tool can detect and discard semantically infeasible executions of the program. In this paper, we propose to use the classical abstract interpretation-based method of linear relation analysis to discover and exploit relations between execution paths. For this purpose, we add auxiliary variables (counters) to the program to trace its execution paths. The results are easily incorporated into the classical workflow of a WCET evaluator when the evaluator is based on the popular implicit path enumeration technique. We use existing tools (a WCET evaluator and a linear relation analyzer) to build and experiment with a prototype implementation of this idea. This work is supported by the French research foundation (ANR) as part of the W-SEPT project (ANR-12-INSE-0001).
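    To make the idea concrete, here is a minimal sketch of how a counter-derived linear relation slots into the implicit path enumeration technique's integer linear program. The five-block CFG, the cycle costs, the loop bound, and the discovered relation are all invented for illustration; the pulp solver is just one convenient way to state the ILP, not the tooling the paper uses.

```python
# IPET: maximize total cost over basic-block execution counts, subject
# to flow-conservation constraints plus one extra linear constraint of
# the kind linear relation analysis can discover via counters.
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpStatus, value

# Execution counts of basic blocks (hypothetical CFG).
x = {b: LpVariable(b, lowBound=0, cat="Integer")
     for b in ["entry", "head", "then", "els", "exit"]}
cost = {"entry": 5, "head": 2, "then": 20, "els": 4, "exit": 1}  # cycles (assumed)

prob = LpProblem("ipet", LpMaximize)
prob += lpSum(cost[b] * x[b] for b in x)       # objective: WCET bound

prob += x["entry"] == 1                        # the program runs once
prob += x["exit"] == 1
prob += x["head"] == x["then"] + x["els"]      # branch flow conservation
prob += x["head"] <= 10 * x["entry"]           # loop bound: at most 10 iterations

# Hypothetical relation found by the counter-instrumented analysis:
# the expensive branch can be taken at most 3 times per run.
prob += x["then"] <= 3 * x["entry"]

prob.solve()
print(LpStatus[prob.status], value(prob.objective))  # WCET estimate in cycles
```

    Without the last constraint, the ILP would charge the expensive branch for all 10 iterations; the infeasibility information discovered by the analysis is what tightens the bound.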

    Constant-Loop Dominators for Single-Path Code Optimization

    Single-path code is a code generation technique designed specifically for real-time systems. It guarantees that programs execute the same instruction sequence regardless of runtime conditions. Single-path code uses loop bounds to ensure that all loops iterate a fixed number of times equal to their upper loop bound. When the lower and upper bounds are equal, the loop must always iterate the same number of times; we call such a loop a constant loop. In this paper, we present the constant-loop dominance relation on control-flow graphs, a variation of the traditional dominance relation that takes constant loops into account to find basic blocks that are always executed the same number of times. Using this relation, we present an optimization that reduces the code needed to manage single-path code. Our evaluation shows significant performance improvements, in one example up to 90%, with mostly minor effects on code size.
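    The following sketch illustrates the single-path idea the abstract builds on; the example function and the bound N=8 are mine, not from the paper. Input-dependent control flow is replaced by a constant loop plus predicated updates, so every run executes the same instruction sequence.

```python
# A toy single-path conversion: the loop always runs N times (a
# "constant loop": lower bound == upper bound == N), and a guard
# predicate replaces the data-dependent early exit.
def single_path_sum(data, n, N=8):
    total = 0
    for i in range(N):
        active = i < n                  # guard predicate, not a branch out
        v = data[i] if i < len(data) else 0
        total += v if active else 0     # predicated add: the predicate
                                        # selects the result, the
                                        # instruction sequence never varies
    return total
```

    The management code the optimization removes is exactly the bookkeeping for such predicates; blocks that constant-loop dominance proves always execute the same number of times need no guard at all.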

    Timing model derivation: pipeline analyzer generation from hardware description languages

    Safety-critical systems must finish their execution within strict deadlines, so worst-case execution time (WCET) guarantees are a crucial part of their verification. Timing models of the analyzed hardware form the basis for static analysis-based approaches like the aiT WCET analyzer. Currently, timing models are hand-crafted from frequently incorrect documentation, making the process error-prone and time-consuming. This thesis bridges the gap between automatic hardware synthesis and WCET analysis development by introducing a process for deriving timing models from VHDL specifications. We propose a set of transformations and abstractions that reduce the hardware design's complexity, enabling the generation of efficient and provably correct WCET analyzers. These analyzers employ an abstract interpretation-based simulation of program executions based on a defined abstract simulation semantics. We have defined workflow patterns showing how to gradually apply the derivation process to VHDL models, thereby removing timing-irrelevant constructs. Interval property checking is used to validate the transformations. A further contribution of this thesis is a tool set that implements the introduced derivation process and demonstrates its applicability to non-trivial industrial designs in experimental evaluations. The influence of design choices on the quality of the derived timing model is also examined, building an informal notion of predictability for VHDL.
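    As a rough intuition for the abstract-simulation idea, here is a deliberately tiny sketch (assumptions entirely mine, far simpler than the thesis' derived analyzers): track a set of possible pipeline timings per program point and join after every instruction, so uncertain hardware behaviour widens the set instead of being guessed away.

```python
# Abstract simulation over sets of timing states: each instruction may
# exhibit several latencies (e.g. cache hit vs. miss), and the analysis
# keeps every resulting state rather than picking one.
def abstract_step(states, instr):
    out = set()
    for cycles in states:
        for latency in instr["possible_latencies"]:
            out.add(cycles + latency)
    return out

def wcet_bound(program):
    states = {0}
    for instr in program:
        states = abstract_step(states, instr)
    return max(states)            # safe bound: worst state survives

# Hypothetical three-instruction program with one uncertain load.
prog = [{"possible_latencies": [1]},
        {"possible_latencies": [2, 10]},   # cache hit or miss
        {"possible_latencies": [1]}]
print(wcet_bound(prog))   # 12: the miss path dominates
```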

    Integrated timing verification for distributed embedded real-time systems

    More and more parts of our lives are controlled by software systems that are usually not recognised as such, because they are embedded in non-computer systems like washing machines or cars. A modern car, for example, is controlled by up to 80 electronic control units (ECUs). Most of these ECUs not only have to fulfil functional correctness requirements but also have to execute a control action within a given time bound. An airbag, for example, does not work correctly if it is triggered a single second too late. These so-called real-time properties have to be verified for safety-critical systems as well as for non-safety-critical real-time systems. The growing distribution of functions over several ECUs increases the number of complex dependencies in the overall automotive system. Therefore, an integrated approach to timing verification on all development levels (system, ECU, software, etc.) and in all development phases is necessary. Today's most commonly used timing analysis method, measuring the timing of a system under test, is insufficient in many respects. First of all, it is very unlikely that the actual worst-case response times are found this way. Furthermore, only the consequences of time consumption can be detected, not the potentially very complex causes of the consumption itself. The complexity of timing behaviour is one reason why timing problems are often detected late, and therefore expensively, in the development process. In contrast to measurement, with the drawbacks mentioned above, there is static timing verification, which has existed for many years and is supported by commercial tools. This thesis studies the current obstacles to industrial application of static timing analysis (effort, imprecision, over-estimation, etc.) and addresses them through process integration and the development of new analysis methods. To show the real benefit of the proposed methods, the approach is demonstrated on an industrial example at every development stage.

    On the limits of probabilistic timing analysis

    Over the last years, we have witnessed the steady and rapid growth of Critical Real-Time Embedded Systems (CRTES) industries, such as automotive and aerospace. Many of the increasingly complex CRTES functionalities that are currently implemented by mechanical means are moving towards electromechanical implementations controlled by critical software. This trend has a two-fold consequence. First, the size and complexity of critical software increase with every new embedded product. Second, high-performance hardware features like caches are used more frequently in real-time processors. The increased complexity of CRTES challenges the validation and verification process, a necessary step to certify that the system is safe for deployment. Timing validation and verification includes the computation of Worst-Case Execution Time (WCET) estimates, which need to be trustworthy and tight. Traditional timing analyses are challenged by the use of complex hardware and software, resulting in low-quality WCET estimates that tend to add significant pessimism to guarantee the estimates' trustworthiness. This calls for new solutions that help tighten WCET estimates in a safe manner. In this thesis, we investigate the novel Measurement-Based Probabilistic Timing Analysis (MBPTA), which in its original version already shows potential to deliver trustworthy and tight WCETs for tasks running on complex systems. First, we propose a methodology to assess and ensure that all cache memory layouts, which can significantly impact WCET, have been adequately factored into the WCET estimation process. Second, we provide a solution that achieves cache representativeness and full path coverage simultaneously. This solution provides evidence that the obtained WCET estimates are valid for all program execution paths, regardless of how code and data are laid out in the cache. Lastly, we analyse and expose the main misconceptions and pitfalls that can prevent a sound application of WCET analysis based on extreme value theory, which is used as part of MBPTA.
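    For readers unfamiliar with the extreme-value step the abstract refers to, here is a minimal sketch (data and parameters invented): fit a Gumbel distribution to observed execution times and read off a probabilistic WCET at a target exceedance probability. Real MBPTA first has to establish that the measurements satisfy the required statistical properties; skipping that check is precisely one of the pitfalls the thesis examines.

```python
# Fit a Gumbel (extreme-value) model to execution-time observations and
# extrapolate a pWCET with exceedance probability 1e-12 per run.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in for measured end-to-end execution times, in cycles.
samples = 10_000 + 200 * rng.gumbel(size=1000)

loc, scale = stats.gumbel_r.fit(samples)
p_exceed = 1e-12                      # target exceedance probability
pwcet = stats.gumbel_r.ppf(1.0 - p_exceed, loc=loc, scale=scale)
print(f"pWCET at {p_exceed:g} per run: {pwcet:.0f} cycles")
```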

    Constraint-Based Testing and Tail Tests for Measurement-Based Probabilistic Timing Analysis

    The Worst-Case Execution Time (WCET) of tasks is important data for gaining confidence that a real-time system will meet its timing requirements. Unfortunately, due to the intractability of computing it exactly, this value is generally unknown. Measurement-Based Timing Analysis (MBTA), which relies on observing execution times driven by test data, has become a promising approach in recent years. Some current testing techniques may take a relatively long time to trigger decisions because they require very specific data from the input space. Conversely, Constraint-Based Testing (CBT) can cope better with these decisions while also being more efficient. State-of-the-art approaches integrating CBT with MBTA have applied code coverage metrics designed for functional testing, not for WCET. Therefore, important functions such as generating test data for a path potentially leading to a large execution time are omitted. A central contribution of this work embraces CBT; its objective is to meet the code coverage needs of WCET analysis. The evaluation compares this approach to state-of-the-art Search-Based Testing (SBT) in MBTA and Random Testing (RT) methods and shows that, in most cases, CBT not only does not underestimate the largest observed execution time but also reaches it earlier. A downside of MBTA is that it normally underestimates the WCET. To face this issue, execution time data has recently been combined with probabilistic models. The current probabilistic protocol is hard to automate; other probabilistic approaches have found alternative ways to achieve similar results automatically. The second main contribution aims at integrating the latter protocol and evaluating its applicability. The evaluation, which uses execution time data from test generators, shows that SBT and RT are more likely to generate data that enables this new approach, and the resulting WCET predictions are more accurate by definition.
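    The constraint-based step can be pictured as follows; the example program fragment and its path condition are hypothetical, and z3 stands in for whatever solver an actual CBT tool would use. Instead of hoping that random inputs hit a narrow, potentially expensive path, the solver is asked directly for inputs that force execution down it.

```python
# Constraint-based test-data generation: solve the path condition of a
# chosen branch to obtain concrete inputs that exercise it.
from z3 import Int, Solver, And, sat

x, y = Int("x"), Int("y")
s = Solver()
# Path condition of a hypothetical expensive branch:
#   if x > 100 and x * y == 4200: <costly computation>
s.add(And(x > 100, x * y == 4200))

if s.check() == sat:
    m = s.model()
    print("test input:", m[x], m[y])   # e.g. x=105, y=40
else:
    print("path infeasible; no test data needed")
```

    A random tester would need, on average, a huge number of draws to satisfy an equality like x * y == 4200, which is why such decisions are the slow cases the abstract mentions.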

    Enabling caches in probabilistic timing analysis

    Hardware and software complexity of future critical real-time systems challenges the scalability of traditional timing analysis methods. Measurement-Based Probabilistic Timing Analysis (MBPTA) has recently emerged as an industrially viable alternative technique for dealing with complex hardware and software. Yet MBPTA requires certain timing properties in the system under analysis that are not satisfied by conventional systems. In this thesis, we introduce, for the first time, hardware and software solutions that satisfy those requirements and improve MBPTA's applicability. We focus on the cache, one of the hardware resources with the highest impact on both average performance and Worst-Case Execution Time (WCET) in current real-time platforms. The contributions of this thesis follow three axes: hardware solutions to enable MBPTA, software solutions to enable MBPTA, and enhancements of MBPTA analysis in systems featuring caches. At the hardware level, we lay the foundations of MBPTA-compliant processor designs and define efficient time-randomised cache designs for single- and multi-level hierarchies of arbitrary complexity, including unified caches, which can be time-analysed for the first time. We propose three new software randomisation approaches (one dynamic and two static variants) to control, in an MBPTA-compliant manner, the cache jitter in commercial off-the-shelf (COTS) processors in real-time systems. All variants randomly vary the location of a program's code and data in memory across runs, achieving probabilistic timing properties similar to those achieved with customised hardware cache designs. We propose a novel method to estimate the WCET of a program using MBPTA without requiring the end user to identify worst-case paths and inputs, improving its applicability in industry. We also introduce Probabilistic Timing Composability, which allows integrated systems to reduce their WCET in the presence of time-randomised caches. With the above contributions, this thesis pushes the limits in the use of complex real-time embedded processor designs equipped with caches and paves the way towards the industrialisation of MBPTA technology.
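    The software-randomisation idea can be sketched in a few lines; the cache geometry and the offset scheme below are assumptions for illustration, not the thesis' actual variants. Shifting where a section lands in memory on each run changes its mapping to cache sets, turning cache jitter into a quantity with probabilistic behaviour.

```python
# Randomise the base offset of a program section per run, so its
# cache-set placement (and hence its conflict pattern) varies randomly.
import random

CACHE_LINE = 64          # bytes, assumed
CACHE_SETS = 128         # assumed direct-mapped cache geometry

def randomised_base(alignment=CACHE_LINE):
    """Pick a random, line-aligned offset covering all possible
    set mappings of the section."""
    max_shift = CACHE_SETS * CACHE_LINE
    return random.randrange(0, max_shift, alignment)

def cache_set(addr):
    return (addr // CACHE_LINE) % CACHE_SETS

base = randomised_base()
# The same data item maps to a different cache set in each run:
print("offset", base, "-> set", cache_set(base + 0x40))
```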

    Real-time analysis of MPI programs for NoC-based many-cores using time division multiplexing

    Worst-case execution time (WCET) analysis is crucial for designing hard real-time systems. While the WCET of tasks in a single-core system can be upper-bounded in isolation, tasks in a many-core system are subject to shared-memory interference, which leads to highly overestimated WCET bounds. Nevertheless, many-core-based massively parallel applications will enter the domain of real-time systems in the years ahead. Explicit message passing and a clear separation of computation and communication facilitate WCET analysis for such programs. A standard programming model for message-based communication is the Message Passing Interface (MPI). It provides an application-independent interface for standard communication operations (e.g. broadcast, gather, ...) and uses efficient communication patterns with deterministic behaviour. By building on these known structures, we aim to provide a WCET analysis for communication that is reusable across applications whenever the communication is executed on the same underlying platform. Hence, the analysis must be performed once per hardware platform and can afterwards be reused by adapting only a few parameters, such as the number of nodes participating in the communication. Typically, the processing elements of many-core platforms are connected via a Network-on-Chip (NoC) and apply techniques such as time-division multiplexing (TDM) to provide guaranteed services for the network. The hardware and the applied guaranteed-service technique therefore need to support this reusability of the analysis as well. In this work, we review different general-purpose TDM schedules that enable a WCET approximation independent of the placement of tasks on the processing elements of a many-core that uses a NoC with torus topology. Furthermore, we provide two new schedules that show performance similar to the state-of-the-art schedules but additionally serve situations in which the state-of-the-art schedules perform poorly. Based on these schedules, a procedure for the WCET analysis of the communication patterns used in MPI is proposed. Finally, we show how to apply the results of the analysis to calculate the WCET upper bound for a complete MPI program. Detailed insights into the performance of the applied TDM schedules are provided by comparing the schedules to each other in terms of timing. Additionally, we discuss the timing of the general-purpose schedules compared to a state-of-the-art application-specific TDM schedule to relate the two types of schedules. We apply the proposed procedure to several standard types of communication provided by MPI and compare different patterns used to implement a specific communication. Our evaluation breaks the timing bounds down into the communications' building blocks and shows the tremendous impact of choosing the appropriate communication pattern. Finally, a case study demonstrates the application of the presented procedure to a complete MPI program. With the method proposed in this work, it is possible to perform a reusable WCET timing analysis for the communication in a NoC that is independent of the placement of tasks on the chip. Moreover, since the applied schedules are not optimized for a specific application but can be used for all applications in the same way, only marginal changes in the timing of the communication occur when the software is adapted or updated. Thus, there is no need to perform the timing analysis from scratch in such cases.
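    The basic shape of a placement-independent TDM latency bound can be sketched as follows; the slot-table size, slot length, and router delay are invented parameters, and the paper's actual schedules are considerably more refined. In the worst case a packet has just missed its slot and must wait one full TDM period, after which it traverses its hops without contention.

```python
# A simple TDM worst-case latency bound for one packet on a NoC,
# independent of where sender and receiver are placed (the hop count
# is bounded by the network diameter).
def tdm_latency_bound(hops, period_slots, slot_cycles, router_cycles=3):
    """Upper bound in cycles: worst-case wait for the sender's slot
    plus per-hop forwarding time."""
    worst_wait = period_slots * slot_cycles   # just missed the slot
    transit = hops * router_cycles
    return worst_wait + transit

# Hypothetical 4x4 torus: diameter 4 hops, 16-slot TDM table.
print(tdm_latency_bound(hops=4, period_slots=16, slot_cycles=2))
```

    A collective operation such as a broadcast is then bounded by composing such per-message bounds along the communication pattern's tree, which is why the choice of pattern has the large impact the evaluation reports.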

    Assessing the Use of Machine Learning to Find the Worst-Case Execution Time of Avionics Software

    Many parts in aircraft today rely on software that interacts with its physical environment. Typically, this interaction involves taking sensor readings, sending actuation commands, reading commands from humans, and presenting information to humans. These interactions require that the software deliver results at the right time, as argued in the guidance document DO-178C and in previous FAA reports. Correct timing, in turn, depends on the execution time of individual programs. Hence, the problem of finding an upper bound on the execution time of a program, called Worst-Case Execution Time (WCET) analysis, is an important step in avionics certification. Unfortunately, WCET analysis is difficult for several reasons. A program can typically execute a large number of different paths. During the execution of one path, the program uses resources in a complex way, and this resource use is difficult to analyze. Finally, during the execution of one path, the speed of execution depends on temperature, which, in turn, depends on earlier execution. This report presents research on WCET analysis using Machine Learning (ML) and Artificial Intelligence (AI), aiming to make WCET analysis less dependent on detailed knowledge of the analyzed program and the hardware used. (Contract FA8702-15-D-0002.)
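    One way ML can enter this picture, sketched below with entirely invented features and data, is as a surrogate model: learn a mapping from cheap program/input features to observed execution times, then use the model to steer the search toward inputs likely to produce long runs. This illustrates the general idea only; it is not the report's specific method.

```python
# Train a regressor on (features, measured time) pairs and rank fresh
# candidate inputs by predicted execution time.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
# Hypothetical per-run features: loop trip counts, branch outcomes, ...
X = rng.integers(0, 100, size=(500, 3)).astype(float)
# Stand-in for measured execution times with a rare, nonlinear worst case.
y = 50 * X[:, 0] + 10 * X[:, 1] * (X[:, 2] > 90) + rng.normal(0, 5, 500)

model = GradientBoostingRegressor().fit(X, y)
candidates = rng.integers(0, 100, size=(10_000, 3)).astype(float)
worst = candidates[np.argmax(model.predict(candidates))]
print("most promising input for a long run:", worst)
```

    Note that predictions from such a model are observations to be confirmed by measurement, not safe upper bounds; this gap is central to any assessment of ML for WCET.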

    Ernst Denert Award for Software Engineering 2020

    This open access book provides an overview of the dissertations of the eleven nominees for the Ernst Denert Award for Software Engineering in 2020. The prize, kindly sponsored by the Gerlind & Ernst Denert Stiftung, is awarded for excellent work within the discipline of Software Engineering, which includes methods, tools and procedures for the better and more efficient development of high-quality software. An essential requirement for the nominated work is its applicability and usability in industrial practice. The book contains eleven papers describing the works by Jonathan Brachthäuser (EPFL Lausanne), entitled What You See Is What You Get: Practical Effect Handlers in Capability-Passing Style; Mojdeh Golagha's (Fortiss, Munich) thesis How to Effectively Reduce Failure Analysis Time?; Nikolay Harutyunyan's (FAU Erlangen-Nürnberg) work on Open Source Software Governance; Dominic Henze's (TU Munich) research on Dynamically Scalable Fog Architectures; Anne Hess's (Fraunhofer IESE, Kaiserslautern) work on Crossing Disciplinary Borders to Improve Requirements Communication; Istvan Koren's (RWTH Aachen U) thesis DevOpsUse: A Community-Oriented Methodology for Societal Software Engineering; Yannic Noller's (NU Singapore) work on Hybrid Differential Software Testing; Dominic Steinhofel's (TU Darmstadt) thesis entitled Ever Change a Running System: Structured Software Reengineering Using Automatically Proven-Correct Transformation Rules; Peter Wägemann's (FAU Erlangen-Nürnberg) work Static Worst-Case Analyses and Their Validation Techniques for Safety-Critical Systems; Michael von Wenckstern's (RWTH Aachen U) research on Improving the Model-Based Systems Engineering Process; and Franz Zieris's (FU Berlin) thesis on Understanding How Pair Programming Actually Works in Industry: Mechanisms, Patterns, and Dynamics, which actually won the award. The chapters describe key findings of the respective works, show their relevance and applicability to practice and industrial software engineering projects, and provide additional information and findings that were only discovered afterwards, e.g. when applying the results in industry. In this way, the book is interesting not only to other researchers but also to industrial software professionals who would like to learn about the application of state-of-the-art methods in their daily work.