28 research outputs found

    Formal Executable Models for Automatic Detection of Timing Anomalies

    A timing anomaly is a counterintuitive timing behavior in which a locally fast execution slows down the overall global execution. The presence of such behaviors is inconvenient for WCET analysis, which requires, via abstractions, a certain monotonicity property to compute safe bounds. In this paper we explore how to systematically execute a previously proposed formal definition of timing anomalies. We ground our work on formal designs of architecture models, upon which we employ guided model checking techniques. Our goal is the automatic detection of timing anomalies in given computer architecture designs.
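
    To make the counterintuitive effect concrete, here is a minimal, hypothetical sketch (not the paper's formal models): a greedy two-unit scheduler in which shortening one task's latency (a "cache hit") lengthens the overall schedule. All task names, units, and latencies are invented for illustration.

    ```python
    def makespan(tasks):
        """tasks: dict name -> (unit, duration, deps), in program order.
        Greedy scheduler: a free unit runs the first ready, unstarted task
        bound to it; a task is ready once all its dependencies finished."""
        finish = {}        # task -> finish time (an entry also means "started")
        unit_free = {}     # unit -> time at which it becomes free again
        t = 0
        while len(finish) < len(tasks):
            for name, (unit, dur, deps) in tasks.items():
                ready = all(finish.get(d, t + 1) <= t for d in deps)
                if name not in finish and unit_free.get(unit, 0) <= t and ready:
                    finish[name] = t + dur
                    unit_free[unit] = t + dur
            t += 1         # integer time steps; durations are integers
        return max(finish.values())

    def program(a_latency):
        return {           # name: (unit, duration, deps); dict order = priority
            "E": ("U2", 1, []),
            "A": ("U1", a_latency, []),    # the memory access: hit or miss
            "B": ("U2", 1, ["A"]),
            "C": ("U2", 10, []),
            "D": ("U1", 2, ["C"]),
        }

    print(makespan(program(a_latency=2)))  # miss: 13 cycles
    print(makespan(program(a_latency=1)))  # hit: 14 cycles (the anomaly)
    ```

    With the hit, B becomes ready just in time to grab unit U2 ahead of the long task C, delaying C and hence D: the local speedup costs one cycle globally, which is exactly the non-monotonicity that breaks naive WCET abstractions.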

    Program Semantics in Model-Based WCET Analysis: A State of the Art Perspective

    Advanced design techniques for safety-critical applications use specialized model-based development methods. In this setting, the application exists at several levels of description, as the result of a sequence of transformations. On the positive side, the application is developed in a systematic way; on the negative side, its high-level semantics may be obfuscated when represented at the lower levels. The application should provide certain functional and non-functional guarantees. When the application is a hard real-time program, such guarantees could be deadlines, thus making the computation of worst-case execution time (WCET) bounds mandatory. This paper surveys, in the context of WCET analysis, the existing techniques to extract, express, and exploit program semantics along the model-based development workflow.
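
    A minimal, hypothetical illustration (not taken from the paper) of high-level semantics that gets lost at lower levels: the two guarded blocks below are mutually exclusive, so the syntactic worst-case path that executes both is infeasible, and only a semantic flow fact can say so.

    ```python
    def step(mode: bool, x: int) -> int:
        if mode:             # expensive branch 1
            x = 3 * x + 1
        # ... other work ...
        if not mode:         # expensive branch 2
            x = x - 7
        return x

    # A flow fact such as  x_branch1 + x_branch2 <= 1  (per call to step)
    # carries this exclusion down to the level where the WCET is computed.
    ```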

    Improving WCET Evaluation using Linear Relation Analysis

    The precision of a worst-case execution time (WCET) evaluation tool on a given program is highly dependent on how well the tool is able to detect and discard semantically infeasible executions of the program. In this paper, we propose to use the classical abstract interpretation-based method of linear relation analysis to discover and exploit relations between execution paths. For this purpose, we add auxiliary variables (counters) to the program to trace its execution paths. The results are easily incorporated in the classical workflow of a WCET evaluator when the evaluator is based on the popular implicit path enumeration technique. We use existing tools, a WCET evaluator and a linear relation analyzer, to build and experiment with a prototype implementation of this idea. This work is supported by the French National Research Agency (ANR) as part of the W-SEPT project (ANR-12-INSE-0001).
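
    A minimal sketch (invented costs and bounds, not the paper's tool chain) of how a counter-derived linear relation tightens an IPET-style bound; a real evaluator would solve this as an integer linear program rather than by brute force.

    ```python
    N = 100                      # loop bound
    cost_iter, cost_exp = 5, 20  # cycle costs: loop body, expensive branch

    # Without semantic information, the expensive branch may fire every time.
    naive = N * cost_iter + N * cost_exp            # 2500 cycles

    # Counter instrumentation reveals the linear relation 10 * x_exp <= x_iter,
    # i.e. the expensive branch runs at most once per 10 iterations.
    refined = max(
        x_iter * cost_iter + x_exp * cost_exp
        for x_iter in range(N + 1)
        for x_exp in range(N + 1)
        if 10 * x_exp <= x_iter
    )
    print(naive, refined)        # 2500 vs 700
    ```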

    Fault-Resistant Partitioning of Secure CPUs for System Co-Verification against Faults

    To assess the robustness of CPU-based systems against fault injection attacks, it is necessary to analyze the consequences of the fault propagation resulting from the intricate interaction between the software and the processor. However, current formal methodologies that combine both hardware and software aspects experience scalability issues, primarily due to the use of bounded verification techniques. This work formalizes the notion of k-fault-resistant partitioning as an inductive solution to this fault propagation problem when assessing redundancy-based hardware countermeasures to fault injections. Proven security guarantees can then reduce the remaining hardware attack surface to consider in a combined analysis with the software, enabling a full co-verification methodology. As a result, we formally verify the robustness of the hardware lockstep countermeasure of the OpenTitan secure element to single bit-flip injections. Besides that, we demonstrate that previously intractable problems, such as analyzing the robustness of OpenTitan running a secure boot process, can now be solved by a co-verification methodology that leverages a k-fault-resistant partitioning. We also report a potential exploitation of the register file vulnerability in two other software use cases. Finally, we provide a security fix for the register file, verify its robustness, and integrate it into the OpenTitan project.
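
    A hypothetical sketch of the redundancy idea behind lockstep (not OpenTitan's actual RTL): the same step is computed twice and the two copies are compared, so a single bit-flip in either copy is detected before the result is used. Function names and the fault model are invented for illustration.

    ```python
    def step(x):
        return (x * 31 + 7) & 0xFFFFFFFF

    def lockstep_step(x, flip_bit=None):
        main = step(x)
        shadow = step(x)                # redundant copy of the computation
        if flip_bit is not None:        # model a single fault injection
            main ^= 1 << flip_bit
        if main != shadow:
            raise RuntimeError("lockstep mismatch: fault detected")
        return main

    lockstep_step(5)                    # fault-free run passes
    # lockstep_step(5, flip_bit=3)      # raises: the bit-flip is caught
    ```

    Verifying that such a countermeasure catches every single fault, for every reachable state and in composition with the software, is the co-verification problem that k-fault-resistant partitioning addresses.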

    How to Compute Worst-Case Execution Time by Optimization Modulo Theory and a Clever Encoding of Program Semantics

    In systems with hard real-time constraints, it is necessary to compute upper bounds on the worst-case execution time (WCET) of programs; the closer the bound to the real WCET, the better. This is especially the case for synchronous reactive control loops with a fixed clock: the WCET of the loop body must not exceed the clock period. We compute the WCET (or at least a close upper bound thereof) as the solution of an optimization modulo theory problem that takes into account the semantics of the program, in contrast to other methods that compute the longest path whether or not it is feasible according to these semantics. Optimization modulo theory extends satisfiability modulo theory (SMT) to maximization problems. Immediate encodings of WCET problems into SMT yield formulas intractable for all current production-grade solvers; this is inherent to the DPLL(T) approach to SMT implemented in these solvers. By conjoining some appropriate "cuts" to these formulas, we considerably reduce the computation time of the SMT solver. We evaluated our approach on a variety of control programs, using the OTAWA analyzer both as a baseline and as the underlying microarchitectural analysis, and show a notable improvement of the WCET bound on a variety of benchmarks and control programs.
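
    A toy version of the encoding (invented program and costs, using the z3 Optimize engine; the paper's encodings and "cuts" are far more involved): two branches test the same condition, so the structurally longest path, which takes both expensive sides, is semantically infeasible.

    ```python
    from z3 import Optimize, Int, If

    opt = Optimize()
    x = Int('x')
    # Cost of:  if (x > 0) {20} else {5};  if (x > 0) {2} else {30}
    cost = If(x > 0, 20, 5) + If(x > 0, 2, 30)
    h = opt.maximize(cost)
    opt.check()
    print(opt.upper(h))  # 35, not the structural longest path 20 + 30 = 50
    ```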

    A Coq Framework for More Trustworthy DRAM Controllers


    Reproducibility and representativity: mandatory properties for the compositionality of measurement-based WCET estimation approaches

    The increasing number of systems consisting of multiple interacting components calls for the evolution of timing analyses towards methods able to estimate the timing behavior of an entire system by aggregating the timing bounds of its components. In this paper we propose the first discussion of the properties required of measurement-based timing analyses to ensure such compositionality. We identify reproducibility and representativity as necessary conditions for the convergence of any measurement protocol supporting a compositional measurement-based timing analysis.
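
    A minimal, invented sketch of the compositional setting: per-component high-water marks are measured separately and summed. The composed bound is only trustworthy if each measurement is reproducible (re-running yields the same timings) and the chosen inputs are representative of worst-case behavior; the timing functions below are illustrative stand-ins.

    ```python
    def high_water_mark(timing_fn, inputs):
        return max(timing_fn(i) for i in inputs)

    def component_a(i):          # invented per-input timings, in cycles
        return 100 + (i % 7)

    def component_b(i):
        return 80 + (i % 5)

    inputs = range(1000)
    bound = high_water_mark(component_a, inputs) + high_water_mark(component_b, inputs)
    print(bound)                 # 106 + 84 = 190 cycles
    ```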

    From the Standards to Silicon: Formally Proved Memory Controllers

    Recent research in both academia and industry has successfully used deductive verification to design hardware and prove its correctness. While tools and languages to write formally proved hardware have been proposed, applications and use cases are often overlooked. In this work, we focus on Dynamic Random Access Memory (DRAM) controllers and the DRAM itself, whose expected temporal and functional behaviours are described in the standards written by the Joint Electron Device Engineering Council (JEDEC). Concretely, we connect an existing Coq DRAM controller framework, which can be used to write DRAM scheduling algorithms that comply with a variety of correctness criteria, to a back-end system that generates hardware proved logically equivalent to the Coq model. This makes it possible to simultaneously enjoy the trustworthiness provided by the Coq framework and use the generated synthesizable hardware in real systems. We validate the approach by using the generated code as a plug-in replacement in an existing DDR4 controller implementation, which includes a host interface (AXI), a physical layer (PHY) from Xilinx, and a model of the Micron MT40A1G8WE-075E:D memory part. We simulate and synthesise the full system.
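
    A hypothetical sketch of the kind of JEDEC timing rule such a framework proves about schedulers: a READ to a bank must come at least tRCD cycles after the ACTIVATE that opened its row. The checker and the numeric value are illustrative, not taken from the DDR4 standard or the framework.

    ```python
    T_RCD = 14  # ACT-to-READ delay in cycles; illustrative value

    def respects_trcd(trace):
        """trace: list of (cycle, command, bank) tuples, in cycle order."""
        last_act = {}
        for cycle, cmd, bank in trace:
            if cmd == "ACT":
                last_act[bank] = cycle
            elif cmd == "RD":
                if bank not in last_act or cycle - last_act[bank] < T_RCD:
                    return False
        return True

    assert respects_trcd([(0, "ACT", 0), (14, "RD", 0)])
    assert not respects_trcd([(0, "ACT", 0), (5, "RD", 0)])
    ```

    The Coq framework proves such properties once and for all over every schedule an algorithm can produce, rather than testing individual traces.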