51 research outputs found

    PRECISE YET SCALABLE RESOURCE ANALYSIS VIA SYMBOLIC EXECUTION

    Ph.D. (Doctor of Philosophy)

    How to Compute Worst-Case Execution Time by Optimization Modulo Theory and a Clever Encoding of Program Semantics

    In systems with hard real-time constraints, it is necessary to compute upper bounds on the worst-case execution time (WCET) of programs; the closer the bound to the real WCET, the better. This is especially the case for synchronous reactive control loops with a fixed clock: the WCET of the loop body must not exceed the clock period. We compute the WCET (or at least a close upper bound thereof) as the solution of an optimization modulo theory problem that takes into account the semantics of the program, in contrast to other methods that compute the longest path whether or not it is feasible according to these semantics. Optimization modulo theory extends satisfiability modulo theory (SMT) to maximization problems. Immediate encodings of WCET problems into SMT yield formulas intractable for all current production-grade solvers; this is inherent to the DPLL(T) approach to SMT implemented in these solvers. By conjoining appropriate "cuts" to these formulas, we considerably reduce the computation time of the SMT solver. We evaluated our approach on a variety of control programs and benchmarks, using the OTAWA analyzer both as a baseline and as the underlying microarchitectural analysis, and show a notable improvement in the WCET bounds.
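
    As a rough illustration of the idea (not the paper's actual encoding, which works on real binaries through OTAWA), the sketch below uses the Z3 Python bindings: each block's execution is a Boolean tied to the program's guards, the objective is the sum of per-block costs, and the optimizer returns the cost of the worst feasible path rather than the structurally longest one. The guards and costs are invented for the example.

```python
# Minimal sketch (not the paper's encoding): bound the WCET of
#   if (x < 10) { A }    -- cost 100
#   if (x > 20) { B }    -- cost 200
# A purely structural longest-path analysis adds both costs (300),
# but the program semantics make the two guards mutually exclusive.
from z3 import Int, Bool, If, Optimize, sat

x = Int("x")          # program input
a = Bool("a")         # "block A executes"
b = Bool("b")         # "block B executes"

opt = Optimize()
opt.add(a == (x < 10), b == (x > 20))   # encode the branch conditions
cost = If(a, 100, 0) + If(b, 200, 0)    # per-block execution costs
h = opt.maximize(cost)                  # optimization modulo theory

if opt.check() == sat:
    print("WCET bound:", opt.upper(h))  # 200, not 300
```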

    TIME-PREDICTABLE EXECUTION OF EMBEDDED SOFTWARE ON MULTI-CORE PLATFORMS

    Ph.D. (Doctor of Philosophy)

    A Novel WCET semantics of Synchronous Programs


    Improving the Precision of Abstract Interpretation Based Cache Persistence Analysis

    When designing hard real-time embedded systems, it is necessary to estimate the worst-case execution time (WCET) of each task for schedulability analysis. Precise cache persistence analysis can significantly tighten the WCET estimation, especially when the program has many loops. Methods for persistence analysis should safely and precisely classify memory references as persistent. Existing safe approaches suffer from multiple sources of pessimism and may not provide precise results. In this paper, we first identify some sources of pessimism that two recent approaches, based on the younger set and on may analysis, may encounter. Then, we propose two methods to eliminate these sources of pessimism. The first method improves the update function of the may-analysis-based approach; the second method integrates the younger-set-based and may-analysis-based approaches to further reduce pessimism. We also prove that the two proposed methods remain safe. We evaluate the approaches on a set of benchmarks and observe that the proposed methods increase the number of memory references classified as persistent. Moreover, we empirically compare the storage space and analysis time used by the different methods.
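
    The following is a minimal sketch of the younger-set idea behind such persistence analyses (it is not the exact update functions compared in the paper, and it tracks a single fully associative cache set over a concrete access trace rather than an abstract control-flow graph): a block can be classified persistent as long as the number of distinct other blocks accessed since its last access never reaches the associativity, because once loaded it can then never be evicted.

```python
# Illustrative younger-set style persistence check for one fully
# associative LRU cache set (concrete trace, not an abstract CFG).
ASSOC = 4

def persistent_blocks(trace, assoc=ASSOC):
    younger = {}       # block -> distinct blocks accessed since its last access
    max_younger = {}   # block -> worst-case size of that set over the trace
    for block in trace:
        for other, ys in younger.items():
            if other != block:
                ys.add(block)              # 'block' is now younger than 'other'
        younger[block] = set()             # 'block' was just accessed
        for b, ys in younger.items():
            max_younger[b] = max(max_younger.get(b, 0), len(ys))
    return {b for b, n in max_younger.items() if n < assoc}

# Only 'd' and 'e' keep their younger sets strictly below the 4-way
# associativity, so only they are classified persistent here.
print(persistent_blocks(["a", "b", "c", "d", "e", "a", "b"]))
```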

    Worst-Case Execution Time Analysis for C++ based Real-Time On-Board Software Systems

    Autonomous systems are today's trend in the aerospace domain. These systems require more on-board data processing capabilities; they follow data-flow programming and share similar software architectures. Developing a framework that is applicable to these architectures reduces development effort and improves reusability. An essential design requirement, however, is a programming language that offers both abstraction and static memory allocation. As a result, C++ was chosen to develop the Tasking Framework, which is used to develop on-board data-flow-oriented applications. Validating the timing requirements of such a framework is a long, complicated process, and estimating the worst-case execution time (WCET) is the first step within it. In this thesis, we therefore focus on WCET analysis for C++ model-based applications developed with the Tasking Framework. This work deals with two main challenges that emerge from using C++: using objects imposes the need for a memory model, and using virtual methods introduces indirect jumps. To this end, we developed a tool based on symbolic execution that can handle both challenges. The tool bounded loops of the benchmark suite with a precision of about 90 %. We then integrated our analysis with an open toolbox for adaptive WCET analysis. Finally, we evaluated our approach by estimating the WCET of tasks developed with the Tasking Framework.
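
    To see why virtual methods matter for such an analysis: every virtual call compiles to an indirect jump, and a WCET tool cannot leave its target unresolved. One common way to bound the targets, sketched below, is a class-hierarchy-style lookup that collects all overriding implementations reachable from the receiver's static type; the class and method names are hypothetical, and this is not necessarily how the thesis's tool resolves them.

```python
# Minimal sketch: over-approximating the targets of a virtual call
# by walking the class hierarchy below the receiver's static type.
HIERARCHY = {                      # class -> direct subclasses (hypothetical)
    "Task": ["PeriodicTask", "EventTask"],
    "PeriodicTask": [],
    "EventTask": ["TelemetryTask"],
    "TelemetryTask": [],
}
OVERRIDES = {                      # (class, method) -> implementing symbol
    ("Task", "execute"): "Task::execute",
    ("PeriodicTask", "execute"): "PeriodicTask::execute",
    ("TelemetryTask", "execute"): "TelemetryTask::execute",
}

def possible_targets(static_type, method):
    """All implementations a virtual call through 'static_type' may reach."""
    targets, worklist = set(), [static_type]
    while worklist:
        cls = worklist.pop()
        if (cls, method) in OVERRIDES:
            targets.add(OVERRIDES[(cls, method)])
        worklist.extend(HIERARCHY.get(cls, []))
    return targets

# The WCET contribution of the call site is the maximum over these candidates.
print(possible_targets("Task", "execute"))
```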

    Timing model derivation: static analysis of hardware description languages

    Safety-critical hard real-time systems are subject to strict timing constraints. In order to derive guarantees on the timing behavior, the worst-case execution time (WCET) of each task comprising the system has to be known. The aiT tool has been developed for computing safe upper bounds on the WCET of a task. Its computation is mainly based on abstract interpretation of timing models of the processor and its periphery. These models are currently hand-crafted by human experts, which is a time-consuming and error-prone process. Modern processors are automatically synthesized from formal hardware specifications, and besides the processor's functional behavior, these descriptions also capture its timing. This thesis describes a methodology for deriving sound timing models from hardware specifications. To ease the derivation process, the methodology is embedded into a sound framework, a key part of which are static analyses on hardware specifications. The thesis presents an analysis framework, built on the theory of abstract interpretation, that allows classical program analyses to be applied to hardware description languages. Its suitability for automating parts of the derivation methodology is shown by several analyses, and practical experiments demonstrate the applicability of the approach for deriving timing models. The soundness of the analyses and of their results is also proved.
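
    As a toy illustration of applying a classical program analysis to HDL-like code (not the thesis's framework): an interval analysis of the next-state function of a saturating counter proves a bound that holds after any number of clock cycles, the kind of invariant a timing model can build on. The counter and its limit are invented for the example.

```python
# Minimal sketch: interval abstract interpretation of an HDL-like
# transition function for a saturating counter.
LIMIT = 8

def step_concrete(count, inc):
    """One clock cycle: count up to LIMIT if 'inc' is set, else reset."""
    return min(count + 1, LIMIT) if inc else 0

def step_abstract(lo, hi):
    """Interval transfer function: 'inc' is unknown, so join both branches."""
    inc_lo, inc_hi = min(lo + 1, LIMIT), min(hi + 1, LIMIT)   # inc = 1
    return min(0, inc_lo), max(0, inc_hi)                     # join with inc = 0

# Iterate to a fixed point from the reset state [0, 0].
lo, hi = 0, 0
while True:
    nlo, nhi = step_abstract(lo, hi)
    nlo, nhi = min(lo, nlo), max(hi, nhi)   # join with the previous iterate
    if (nlo, nhi) == (lo, hi):
        break
    lo, hi = nlo, nhi
print("counter always within", (lo, hi))    # (0, 8) after any number of cycles
```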

    Two concrete problems in timing analysis of embedded software

    Master's (Master of Science)

    Survey on Combinatorial Register Allocation and Instruction Scheduling

    Register allocation (mapping variables to processor registers or memory) and instruction scheduling (reordering instructions to increase instruction-level parallelism) are essential tasks for generating efficient assembly code in a compiler. In the last three decades, combinatorial optimization has emerged as an alternative to traditional, heuristic algorithms for these two tasks. Combinatorial optimization approaches can deliver optimal solutions according to a model, can precisely capture trade-offs between conflicting decisions, and are more flexible, at the expense of increased compilation time. This paper provides an exhaustive literature review and a classification of combinatorial optimization approaches to register allocation and instruction scheduling, with a focus on the techniques most commonly applied in this context: integer programming, constraint programming, partitioned Boolean quadratic programming, and enumeration. Researchers in compilers and combinatorial optimization can benefit from identifying developments, trends, and challenges in the area; compiler practitioners may discern opportunities and grasp the potential benefit of applying combinatorial optimization.
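
    As a tiny illustration of the combinatorial flavor of these approaches (far simpler than the surveyed models: no spilling, coalescing, rematerialization, or scheduling), the sketch below assigns registers with a constraint model solved by Z3: each variable gets a register index, and interfering live ranges must receive different registers. The variables and interference edges are invented.

```python
# Minimal sketch: register assignment as a constraint satisfaction model.
from z3 import Int, Solver, And, sat

VARIABLES = ["a", "b", "c", "d"]
INTERFERE = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]  # live at the same time
NUM_REGS = 3

reg = {v: Int(f"reg_{v}") for v in VARIABLES}
s = Solver()
for v in VARIABLES:
    s.add(And(reg[v] >= 0, reg[v] < NUM_REGS))   # registers r0..r2
for u, v in INTERFERE:
    s.add(reg[u] != reg[v])                      # no sharing while both are live

if s.check() == sat:
    m = s.model()
    print({v: f"r{m[reg[v]].as_long()}" for v in VARIABLES})
else:
    print("not colorable with", NUM_REGS, "registers: spilling needed")
```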