
    OTAWA: An Open Toolbox for Adaptive WCET Analysis

    The analysis of worst-case execution times has become mandatory in the design of hard real-time systems: it is absolutely necessary to know an upper bound on the execution time of each task to determine a task schedule that ensures that all deadlines will be met. The OTAWA toolbox presented in this paper has been designed to host algorithms resulting from research in the domain of WCET analysis so that they can be combined to compute tight WCET estimates. It features an abstraction layer that decouples the analyses from the target hardware and from the instruction set architecture, as well as a set of functionalities that facilitate the implementation of new approaches.
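
    As a rough illustration of the decoupling idea (a hypothetical sketch, not OTAWA's actual C++ API), the snippet below shows an analysis that only queries an abstract instruction interface, so supporting a new instruction set means implementing that interface rather than modifying the analysis:

```python
# Hypothetical abstraction layer; names and shapes are illustrative, not OTAWA's API.
from abc import ABC, abstractmethod

class Instruction(ABC):
    """What an analysis may ask about an instruction, independently of the ISA."""
    @abstractmethod
    def size(self) -> int: ...          # bytes occupied in memory
    @abstractmethod
    def is_branch(self) -> bool: ...    # does it end a basic block?

def split_into_basic_blocks(instructions):
    """ISA-independent step of CFG construction: a block ends after each branch."""
    blocks, current = [], []
    for inst in instructions:
        current.append(inst)
        if inst.is_branch():
            blocks.append(current)
            current = []
    if current:
        blocks.append(current)
    return blocks
```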

    Impact of Code Compression on Estimated Worst-Case Execution Times

    Code compression techniques can help meet code size constraints in embedded systems. In the average case, the impact of code compression on performance is double-edged: on the one hand, the number of accesses to the memory hierarchy is reduced because several instructions are encoded in a single word, which is likely to reduce the execution time; on the other hand, the decompression penalty increases the processing time of compressed instructions. Nevertheless, experimental results show that the execution time might be lowered by code compression. In this paper, our goal is to analyze the impact of code compression on the estimated Worst-Case Execution Time of critical tasks that must meet both code size constraints and timing deadlines. Changes in the access patterns to the instruction cache are indeed likely to alter the accuracy of the cache analysis within the process of determining the WCET. Experimental results show that, besides reducing the code size, our code compression scheme also improves the WCET estimates in most cases.
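
    As a rough, back-of-the-envelope illustration of this tradeoff (the fetch counts, miss rates and penalties below are invented, not results from the paper):

```python
# Invented parameters for illustration only.
FETCHES = 10_000        # instruction fetches on the analysed path
MISS_PENALTY = 20       # cycles per instruction-cache miss

def fetch_cycles(miss_rate, decompression_penalty=0):
    """Cycles spent on fetches: base cost + decompression + miss penalties."""
    misses = FETCHES * miss_rate
    return FETCHES * (1 + decompression_penalty) + misses * MISS_PENALTY

plain = fetch_cycles(miss_rate=0.15)                                 # 40,000 cycles
compressed = fetch_cycles(miss_rate=0.07, decompression_penalty=1)   # 34,000 cycles
# Compression pays off when the miss penalties it avoids outweigh the total
# decompression overhead; otherwise the (estimated) execution time increases.
print(plain, compressed)
```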

    A Generic Framework for Blackbox Components in WCET Computation

    Validation of embedded hard real-time systems requires the computation of the Worst-Case Execution Time (WCET). Although these systems make more and more use of Components Off The Shelf (COTS), current WCET computation methods are usually applied to whole programs: these analysis methods require access to the whole system code, which is incompatible with the use of COTS. In this paper, after discussing the specific cases of loop bound estimation and instruction cache analysis, we show in a generic way how the static analyses involved in WCET computation can be pre-computed on COTS in order to obtain component partial results. These partial results can be distributed with the COTS, in order to compute the WCET in the context of a full application. We also describe the information items to include in the partial results, and we propose an XML exchange format to represent these data. Additionally, we show that the partial analysis enables us to reduce the analysis time while introducing very little pessimism.
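
    As a minimal sketch of what a distributable partial result could look like, the snippet below serializes pre-computed loop bounds and instruction-cache categorizations for a COTS component; the element names are hypothetical, not the exchange format actually defined in the paper:

```python
# Hypothetical partial-result document for a COTS component; the schema is
# illustrative, not the XML exchange format proposed by the authors.
import xml.etree.ElementTree as ET

component = ET.Element("component", name="libfilter")
loops = ET.SubElement(component, "loop-bounds")
ET.SubElement(loops, "loop", at="0x4012a0", bound="32")          # pre-computed bound
icache = ET.SubElement(component, "icache-analysis")
ET.SubElement(icache, "block", at="0x401000", category="always-hit")
ET.SubElement(icache, "block", at="0x401040", category="first-miss")

# The component vendor ships this file with the binary; the integrator's WCET
# tool merges it with the analysis of the rest of the application.
print(ET.tostring(component, encoding="unicode"))
```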

    WCET-aware prefetching of unlocked instruction caches: a technique for reconciling real-time guarantees and energy efficiency

    Doctoral thesis - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia de Automação e Sistemas, Florianópolis, 2015.
    Abstract: Embedded computing requires increasing throughput at low power budgets. It asks for growing energy efficiency when executing programs of rising complexity. Many embedded systems are also real-time systems, whose temporal correctness is asserted through schedulability analysis, which often assumes that the WCET of each task is known at design time. As a result of the growing software complexity, a significant amount of energy is spent in supplying instructions through the memory hierarchy. Since an instruction cache consumes around 40% of an embedded processor's energy and affects the energy spent in main memory, it becomes a relevant optimization target. However, since it largely impacts the WCET, cache behavior must be either constrained via cache locking or predicted by WCET analysis. To achieve energy efficiency under real-time constraints, a compiler must have extended awareness of the hardware platform. However, real-time compilers ignore energy, although they quickly determine bounds for the WCET, whereas embedded compilers accurately estimate energy but require time-consuming profiling. That is why this thesis proposes a unifying method to estimate memory energy consumption that is based on Abstract Interpretation, the very same mathematical framework employed for the WCET analysis of caches. The estimates exhibit derivatives that are as accurate as those obtained by profiling, but are computed 1000 times faster, making them suitable for driving code optimization through iterative improvement. Since cache locking gives up energy efficiency for predictability, this thesis proposes a novel code optimization, based on software prefetching, which reduces the miss rate of unlocked instruction caches and provably does not increase the WCET. The proposed optimization is compared with a state-of-the-art partial cache locking technique for the 37 programs of the Mälardalen WCET benchmarks under 36 cache configurations and two distinct target technologies (2664 use cases). On average, to achieve an improvement of 68% in the WCET, partial cache locking required 8% more energy. On the other hand, software prefetching decreased the energy consumption by 11% while improving the WCET by 15%, thereby reconciling energy efficiency and real-time guarantees.
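
    To make the idea of an analysis-driven energy estimate concrete, here is a minimal sketch that turns the hit/miss counts a static cache analysis can bound into a memory-energy figure; the per-access energies are placeholders, not numbers from the thesis:

```python
# Placeholder energy costs for illustration only (not figures from the thesis).
E_ICACHE_ACCESS = 0.05   # nJ per instruction-cache access
E_MAIN_MEMORY   = 5.0    # nJ per main-memory access caused by a miss

def memory_energy(fetches, miss_rate):
    """Every fetch probes the cache; only misses reach main memory."""
    misses = fetches * miss_rate
    return fetches * E_ICACHE_ACCESS + misses * E_MAIN_MEMORY

# Feeding the formula a statically derived bound on the miss rate (e.g. from
# abstract-interpretation-based cache analysis) yields a bound on the energy,
# which an optimizing compiler can use to compare code variants quickly.
print(memory_energy(fetches=1_000_000, miss_rate=0.04))
```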

    A framework to experiment optimizations for real-time and embedded software

    Typical constraints on embedded systems include code size limits, upper bounds on energy consumption, and hard or soft deadlines. To meet these requirements, it may be necessary to improve the software by applying various kinds of transformations like compiler optimizations, specific mapping of code and data in the available memories, code compression, etc. However, a transformation that aims at improving the software with respect to a given criterion might engender side effects on other criteria, and these effects must be carefully analyzed. For this purpose, we have developed a common framework that makes it possible to experiment with various code transformations and to evaluate their impact on various criteria. This work has been carried out within the French ANR MORE project. Comment: International Conference on Embedded Real Time Software and Systems (ERTS2), Toulouse, France (2010).

    A Framework to Quantify the Overestimations of Static WCET Analysis

    To reduce complexity while computing an upper bound on the worst-case execution time, static WCET analysis performs over-approximations. This feeds the general feeling that static WCET estimations can be far above the real WCET. This feeling is strengthened when these estimations are compared to measured execution times: it is generally very unlikely to capture the worst case from observations, so the difference between the highest watermark and the proven WCET upper bound might be considerable. In this paper, we introduce a framework to quantify the possible overestimation of WCET upper bounds obtained by static analysis. The objective is to derive a lower bound on the WCET to complement the upper bound.
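
    A small numeric illustration of how such a lower bound complements the upper bound (the numbers are made up):

```python
# Made-up numbers for illustration only.
observed_hwm  = 820     # highest execution time observed in measurements (cycles)
static_upper  = 1400    # WCET upper bound proven by static analysis
derived_lower = 1150    # lower bound on the real WCET derived by the framework

# Comparing the static bound only to the highest watermark suggests up to
# 1400 - 820 = 580 cycles of pessimism, but part of that gap may simply be
# worst-case behaviour that the measurements never triggered.
# With the derived lower bound, the real WCET lies in [1150, 1400], so the
# overestimation of the static bound is provably at most 250 cycles.
max_overestimation = static_upper - derived_lower
print(max_overestimation)
```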

    METAMOC: Modular Execution Time Analysis using Model Checking

    Safe and tight worst-case execution times (WCETs) are important when scheduling hard real-time systems. This paper presents METAMOC, a path-based, modular method, based on model checking and static analysis, that determines safe and tight WCETs for programs running on platforms featuring caching and pipelining. The method works by constructing a UPPAAL model of the program being analysed and annotating the model with information from an inter-procedural value analysis. The program model is then combined with a model of the hardware platform, and model checked for the WCET. Through support for the ARM7, ARM9 and ATMEL AVR 8-bit platforms, the modularity and retargetability of the method are demonstrated, as only the pipeline needs to be remodelled. Modelling the hardware is performed in a state-of-the-art graphical modelling environment. Experiments on the Mälardalen WCET benchmark programs show that taking caching into account yields much tighter WCETs, and that METAMOC is a fast and versatile approach for WCET analysis.

    Improving the WCET computation time by IPET using control flow graph partitioning

    The Implicit Path Enumeration Technique (IPET) is currently widely used to compute the Worst-Case Execution Time (WCET) by modeling control flow and architecture using integer linear programming (ILP). Since modeling precise architectural effects requires many constraints, the super-linear complexity of the ILP solver makes computation times grow quickly. In this paper, we propose to split the control flow of the program into smaller parts where a local WCET can be computed faster, as the resulting ILP system is smaller, and to combine these local results to get the overall WCET without loss of precision. Experiments with our tool OTAWA and the lp_solve solver show an average 6.5-fold improvement in computation time.
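
    For readers unfamiliar with IPET, the sketch below builds the ILP system for a toy control flow graph (one loop) in roughly the LP format accepted by lp_solve; the block costs and loop bound are invented, and the sketch shows only the baseline whole-program formulation, not the partitioning proposed in the paper:

```python
# Toy IPET formulation: maximize the sum of block costs times execution counts,
# subject to structural flow constraints and a loop bound. Costs are invented.
blocks = {"entry": 5, "head": 3, "body": 40, "exit": 2}   # block -> cost (cycles)
edges = [("entry", "head"), ("head", "body"), ("body", "head"), ("head", "exit")]
loop_bound = 10   # the loop body runs at most 10 times per entry into the loop

lp = ["max: " + " + ".join(f"{c} x_{b}" for b, c in blocks.items()) + ";",
      "x_entry = 1;"]                      # the program is executed once
for b in blocks:                           # flow conservation at each block
    incoming = [f"e_{s}_{d}" for s, d in edges if d == b]
    outgoing = [f"e_{s}_{d}" for s, d in edges if s == b]
    if incoming:
        lp.append(f"x_{b} = " + " + ".join(incoming) + ";")
    if outgoing:
        lp.append(f"x_{b} = " + " + ".join(outgoing) + ";")
lp.append(f"x_body <= {loop_bound} e_entry_head;")   # loop bound constraint

print("\n".join(lp))   # solving this system with lp_solve yields the WCET
```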

    Accurate analysis of memory latencies for WCET estimation

    In recent years, many researchers have proposed solutions to estimate the Worst-Case Execution Time of a critical application when it is run on modern hardware. Several schemes commonly implemented to improve performance have been considered so far in the context of static WCET analysis: pipelines, instruction caches, dynamic branch predictors, execution cores supporting out-of-order execution, etc. Comparatively, components that are external to the processor have received less attention. In particular, the latency of memory accesses is generally considered as a fixed value. However, modern DRAM devices support the open page policy, which reduces the memory latency when successive memory accesses address the same memory row. This scheme, also known as the row buffer, induces variable memory latencies, depending on whether the access hits or misses in the row buffer. In this paper, we propose an algorithm to take the open page policy into account when estimating WCETs for a processor with an instruction cache. Experimental results show that WCET estimates are refined thanks to the consideration of tighter memory latencies instead of pessimistic values.
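
    A minimal sketch of the latency variability the abstract refers to, with placeholder timings rather than the values used in the paper:

```python
# Placeholder DRAM timings for illustration only.
ROW_HIT_LATENCY  = 20    # cycles when the access falls in the currently open row
ROW_MISS_LATENCY = 50    # cycles when a different row must first be opened
ROW_SIZE = 1024          # bytes per DRAM row (hypothetical)

def access_latencies(addresses):
    """Latency of each access under an open page policy, tracking the open row."""
    open_row, latencies = None, []
    for addr in addresses:
        row = addr // ROW_SIZE
        latencies.append(ROW_HIT_LATENCY if row == open_row else ROW_MISS_LATENCY)
        open_row = row
    return latencies

# Successive cache-line fills from the same row hit the row buffer, so an
# analysis that tracks the open row can use the tighter latency instead of
# always charging the pessimistic ROW_MISS_LATENCY.
print(access_latencies([0, 64, 128, 4096, 4160]))
```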