
    Software Structure and WCET Predictability

    Being able to compute worst-case execution time bounds for tasks of an embedded software system with hard real-time constraints is crucial to ensure the correct (timing) behavior of the overall system. Any means to increase the (static) time predictability of the embedded software is of high interest, especially given the ever-growing complexity of such software systems. In this paper we study existing coding proposals and guidelines, such as MISRA-C, and investigate whether they simplify static timing analysis. Furthermore, we investigate how additional knowledge, such as design-level information, can further aid in this process.

    Timing analysis of embedded software for speculative processors


    Software performance estimation strategies in a system-level design tool

    High-level cost and performance estimation, coupled with a fast hardware/software co-simulation framework, is a key enabler to a fast embedded system design cycle. Unfortunately, the problem of deriving such estimates without a detailed implementation available is difficult. In this paper we describe two approaches to solve the software cost and performance estimation problem, and how they are used in an embedded system design environment. A source-based approach uses compilation onto a virtual instruction set, and allows one to quickly obtain estimates without the need for a compiler for the target processor. An object-based approach translates the assembler generated by the target compiler to “assembler-level,” functionally equivalent C. In both cases the code is annotated with timing and other execution-related information (e.g., estimated memory accesses) and is used as a precise, yet fast, software simulation model. We contrast the precision and speed of these two techniques, comparing them with those obtainable by a state-of-the-art cycle-based processor model.
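
    To make the source-based approach concrete, the following minimal sketch (illustrative only; the virtual instruction set, cycle costs, and API names are hypothetical, not those of the paper) annotates each basic block of a small function with virtual-instruction cycle costs and accumulates an execution-time estimate alongside the functional result.

        # Minimal sketch of source-based performance estimation: basic blocks are
        # lowered to a hypothetical virtual instruction set, each virtual instruction
        # carries an assumed cycle cost, and executing the annotated blocks
        # accumulates a timing estimate alongside the functional result.

        # Assumed (illustrative) cycle costs for a tiny virtual instruction set.
        VIRTUAL_COSTS = {"load": 3, "store": 3, "alu": 1, "branch": 2, "mul": 4}

        class TimedModel:
            def __init__(self):
                self.cycles = 0          # accumulated timing estimate
                self.mem_accesses = 0    # other execution-related information

            def annotate(self, block_ops):
                """Charge one basic block's worth of virtual instructions."""
                for op in block_ops:
                    self.cycles += VIRTUAL_COSTS[op]
                    if op in ("load", "store"):
                        self.mem_accesses += 1

        def dot_product(a, b, model):
            """Functional code instrumented with per-block timing annotations."""
            acc = 0
            for x, y in zip(a, b):
                acc += x * y
                # loop body: two loads, one multiply, one add, one branch
                model.annotate(["load", "load", "mul", "alu", "branch"])
            return acc

        model = TimedModel()
        print(dot_product([1, 2, 3], [4, 5, 6], model), model.cycles, model.mem_accesses)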

    Performance Estimation for Task Graphs Combining Sequential Path Profiling and Control Dependence Regions

    The speed-up estimation of parallelized code is crucial to efficiently compare different parallelization techniques or task graph transformations. Unfortunately, most of the time, during the parallelization of a specification, the information that can be extracted by profiling the corresponding sequential code (e.g. the most executed paths) is not properly taken into account. In particular, correlating sequential path profiling with the corresponding parallelized code can help in the identification of code hot spots, opening new possibilities for automatic parallelization. For this reason, starting from a well-known profiling technique, Efficient Path Profiling, we propose a methodology that estimates the speed-up of a parallelized specification using only the corresponding hierarchical task graph representation and the information coming from the dynamic profiling of the initial sequential specification. Experimental results show that the proposed solution outperforms existing approaches.
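
    As a rough illustration of profile-driven speed-up estimation (a simplified stand-in, not the paper's algorithm), the sketch below weights each task's cost by the frequencies of the sequential paths that exercise it and compares the profiled sequential time with the critical path of a small task graph, assuming unlimited processors and zero communication cost; the path profile, costs, and graph are hypothetical.

        # Hypothetical path profile: path id -> (execution count, cycles per task on that path).
        path_profile = {
            "p0": (900, {"A": 10, "B": 40, "D": 5}),
            "p1": (100, {"A": 10, "C": 200, "D": 5}),
        }

        # Hypothetical task graph: task -> predecessors (precedence constraints).
        deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

        # Profile-weighted average cost of each task.
        total_runs = sum(count for count, _ in path_profile.values())
        task_cost = {t: 0.0 for t in deps}
        for count, costs in path_profile.values():
            for task, cycles in costs.items():
                task_cost[task] += count * cycles / total_runs

        sequential_time = sum(task_cost.values())

        # Critical path of the task graph under the profiled costs (longest path).
        finish = {}
        def finish_time(task):
            if task not in finish:
                finish[task] = task_cost[task] + max(
                    (finish_time(d) for d in deps[task]), default=0.0)
            return finish[task]

        parallel_time = max(finish_time(t) for t in deps)
        print(f"estimated speed-up: {sequential_time / parallel_time:.2f}")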

    VESTIM: A performance estimation method for optimized implementation of applications on digital signal processors

    Currently available C compilers for DSP processors are generally unable to generate assembly code that meets the hard real-time constraints of embedded systems. At the same time, programming a DSP directly in assembly is an increasingly unacceptable situation. Our approach provides software estimates that help the programmer develop DSP applications quickly. The programmer is given an evaluation of the performance of the compiler-generated code as well as an estimate for an optimized assembly implementation. We compare these estimates with worst-case performance measurements obtained using a static approach.

    Performance Estimation of Task Graphs Based on Path Profiling

    Correctly estimating the speed-up of a parallel embedded application is crucial to efficiently compare different parallelization techniques, task graph transformations, or mapping and scheduling solutions. Unfortunately, especially in the case of control-dominated applications, task correlations may heavily affect the execution time of the solutions, and usually this is not properly taken into account during performance analysis. We propose a methodology that combines a single profiling of the initial sequential specification with different decisions in terms of partitioning, mapping, and scheduling in order to better estimate the actual speed-up of these solutions. We validated our approach on a multi-processor simulation platform: experimental results show that our methodology, by effectively identifying the correlations among tasks, significantly outperforms existing approaches for speed-up estimation. Indeed, we obtained an average absolute error of less than 5%, even when compiling the code with different optimization levels.
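
    The following hedged sketch illustrates how one partitioning/mapping/scheduling decision can be evaluated from profiled task costs (again a simplification, not the paper's methodology): tasks are placed on their assigned processors in a fixed priority order while respecting precedence constraints, and the resulting makespan is compared with the sequential execution time. Task names, costs, and the mapping are hypothetical.

        # Profiled task costs, precedence constraints, and one hypothetical mapping
        # of tasks to processors; all names and numbers are illustrative.
        task_cost = {"A": 10, "B": 36, "C": 20, "D": 5}
        deps      = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
        mapping   = {"A": 0, "B": 0, "C": 1, "D": 0}      # task -> processor

        proc_free = {0: 0.0, 1: 0.0}      # instant at which each processor becomes free
        finish = {}
        for task in ["A", "B", "C", "D"]:                 # assumed topological priority order
            ready = max((finish[d] for d in deps[task]), default=0.0)
            start = max(ready, proc_free[mapping[task]])
            finish[task] = start + task_cost[task]
            proc_free[mapping[task]] = finish[task]

        sequential = sum(task_cost.values())
        makespan = max(finish.values())
        print(f"makespan {makespan}, estimated speed-up {sequential / makespan:.2f}")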

    Collecting signatures to model latency tolerance in high-level simulations of microthreaded cores

    The current many-core architectures are generally evaluated by detailed emulation with a cycle-accurate simulation of the execution time. However, this detailed simulation makes the evaluation of large programs very slow. Since the focus in many-core architectures is shifting from the performance of the individual core to the overall behavior of the chip, high-level simulations are becoming necessary; they evaluate the same architecture at a less detailed level and allow the designer to make quick and reasonably accurate design decisions. We have developed a high-level simulator for the design space exploration of the Microgrid, a many-core architecture composed of many fine-grained multithreaded cores. This simulator allows us to investigate mapping and scheduling strategies for families (i.e. groups of threads) in developing an operating environment for the Microgrid. The previous method of evaluating the workload by counting basic blocks was inaccurate. The key problem is that with many concurrent threads the latency of certain instructions is hidden because of the multithreaded nature of the core. This paper presents a technique to model the execution time of different types of instructions in the presence of thread concurrency, which we believe achieves high accuracy in evaluating programs in the high-level simulator.
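
    A minimal, illustrative latency-tolerance model (not the paper's exact technique) is sketched below: with T interleaved hardware threads, an instruction's completion latency is assumed to be hidden by the issue slots of the other T-1 threads, so only max(0, latency - (T - 1)) cycles remain visible on top of its issue cycle. The instruction classes and latencies are assumed values.

        # Illustrative latency-tolerance model: with `threads` interleaved hardware
        # threads, an instruction's completion latency is partly hidden because the
        # other threads can issue while it is pending; only the exposed part,
        # max(0, latency - (threads - 1)), is charged on top of one issue cycle.

        LATENCY = {"alu": 1, "load": 20, "fpmul": 6, "branch": 2}   # assumed cycles

        def estimated_cycles(instruction_mix, threads):
            """instruction_mix: instruction class -> dynamic count for one thread."""
            cycles = 0
            for klass, count in instruction_mix.items():
                exposed = max(0, LATENCY[klass] - (threads - 1))
                cycles += count * (1 + exposed)    # one issue cycle + exposed latency
            return cycles

        mix = {"alu": 600, "load": 200, "fpmul": 100, "branch": 100}
        for t in (1, 4, 16, 32):
            print(t, estimated_cycles(mix, t))
        # With few threads the load latency dominates; with enough threads it is
        # fully hidden and the estimate converges to the number of issue cycles.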

    Efficient Estimation of Tighter Bounds for Worst Case Execution Time of Programs

    In this paper, we present a framework for the statistical analysis of the execution time of program units. We show alternative methods for computing the distribution of the execution times and provide justification for the use of each of the methods presented. We estimate the worst-case execution time (WCET) of the program units using several methods and compare their results. We also present a new method for estimating the WCET, based on the theory of extreme value distributions.
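
    A minimal sketch of the extreme-value idea, under assumed data (the measurements below are synthetic stand-ins): execution-time samples are grouped into blocks, the block maxima are fitted with a generalized extreme value distribution, and a high quantile of the fitted distribution is reported as the probabilistic WCET estimate.

        # Illustrative extreme-value sketch: fit a generalized extreme value (GEV)
        # distribution to block maxima of measured execution times and report a
        # high quantile as the probabilistic WCET estimate.
        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(0)
        # Synthetic stand-in for measured execution times (cycles); in practice these
        # come from instrumented runs of the program unit under analysis.
        samples = 10_000 + rng.gamma(shape=4.0, scale=50.0, size=5000)

        block = 50                                    # block-maxima grouping
        maxima = samples[: len(samples) // block * block].reshape(-1, block).max(axis=1)

        c, loc, scale = genextreme.fit(maxima)        # fit GEV to the block maxima
        pwcet = genextreme.isf(1e-9, c, loc, scale)   # exceedance probability of 1e-9
        print(f"pWCET estimate at 1e-9 exceedance probability: {pwcet:.0f} cycles")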

    Static Timing Analysis Based Transformations of Super-Complex Instruction Set Hardware Functions

    Application-specific hardware implementations are an increasingly popular way of reducing execution time and power consumption in embedded systems. This application-specific hardware typically consumes a small fraction of the execution time and power consumption that the equivalent software code would require. Modern electronic design automation (EDA) tools can be used to apply a variety of transformations to hardware blocks in an effort to achieve additional performance and power savings. A number of such transformations require a tool with knowledge of the designs' timing characteristics. This thesis describes a static timing analyzer and two timing-analysis-based design automation tools. The static timing analyzer estimates the worst-case timing characteristics of a hardware data flow graph. These hardware data flow graphs are intermediate representations generated within a C-to-VHDL hardware acceleration compiler. Two EDA tools that utilize static timing analysis were then developed. An automated pipelining tool was developed to increase the throughput of large blocks of combinational logic generated by the hardware acceleration compiler. Another tool was designed to mitigate power consumption resulting from extraneous combinational switching: by inserting special signal buffers, known as delay elements, with preselected propagation delays, combinational functional units can be kept inactive until their inputs have stabilized. The hardware descriptions generated by both tools were synthesized, simulated, and power-profiled using existing commercial EDA tools. The results show that pipelining leads to an average performance increase of 3.3x, while the delay elements save between 25% and 33% of the power consumption when tested on a set of signal and image processing benchmarks.
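
    The core of such a static timing analyzer can be sketched as a longest-path computation over the data flow graph, from which a pipeline stage count for a target clock period follows; the operator delays, graph, and clock constraint below are hypothetical, and the sketch is illustrative rather than the thesis's implementation.

        # Illustrative static timing analysis of a hardware data flow graph: the
        # worst-case combinational delay is the longest path through the DAG of
        # operators, and a minimum pipeline stage count for a target clock period
        # follows from that critical-path delay.
        import math

        op_delay = {"in": 0.0, "mul": 5.0, "add": 2.5, "shift": 1.0, "out": 0.0}   # ns, assumed

        # node -> (operator, predecessors); a hypothetical data flow graph.
        dfg = {
            "a":   ("in",    []),
            "b":   ("in",    []),
            "m1":  ("mul",   ["a", "b"]),
            "s1":  ("shift", ["a"]),
            "sum": ("add",   ["m1", "s1"]),
            "y":   ("out",   ["sum"]),
        }

        arrival = {}
        def arrival_time(node):
            """Worst-case arrival time at a node's output (longest-path recursion)."""
            if node not in arrival:
                op, preds = dfg[node]
                arrival[node] = op_delay[op] + max(
                    (arrival_time(p) for p in preds), default=0.0)
            return arrival[node]

        critical_path = max(arrival_time(n) for n in dfg)
        target_period = 4.0                 # ns, hypothetical clock constraint
        stages = math.ceil(critical_path / target_period)
        print(f"critical path {critical_path} ns -> at least {stages} pipeline stages")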

    On the limits of probabilistic timing analysis

    Over the last years, we have witnessed the steady and rapid growth of Critical Real-Time Embedded Systems (CRTES) industries, such as automotive and aerospace. Many of the increasingly complex CRTES functionalities that are currently implemented with mechanical means are moving towards an electromechanical implementation controlled by critical software. This trend has a two-fold consequence. First, the size and complexity of critical software increases in every new embedded product. Second, high-performance hardware features like caches are more frequently used in real-time processors. The increase in complexity of CRTES challenges the validation and verification process, a necessary step to certify that the system is safe for deployment. Timing validation and verification includes the computation of Worst-Case Execution Time (WCET) estimates, which need to be trustworthy and tight. Traditional timing analyses are challenged by the use of complex hardware/software, resulting in low-quality WCET estimates that add significant pessimism to guarantee their trustworthiness. This calls for new solutions that help tighten WCET estimates in a safe manner. In this thesis, we investigate the novel Measurement-Based Probabilistic Timing Analysis (MBPTA), which in its original version already shows potential to deliver trustworthy and tight WCETs for tasks running on complex systems. First, we propose a methodology to assess and ensure that all cache memory layouts, which can significantly impact the WCET, have been adequately factored into the WCET estimation process. Second, we provide a solution to achieve cache representativeness and full path coverage simultaneously. This solution provides evidence that the WCET estimates obtained are valid for all program execution paths regardless of how code and data are laid out in the cache. Lastly, we analyse and expose the main misconceptions and pitfalls that can prevent a sound application of WCET analysis based on extreme value theory, which is used as part of MBPTA.
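
    As an illustration of the kind of preconditions whose neglect leads to the pitfalls discussed above, the sketch below runs two simple sanity checks (identical distribution across the two halves of a measurement run, and low lag-1 autocorrelation as a proxy for independence) before any extreme-value fit would be attempted; the tests, thresholds, and data are illustrative stand-ins rather than the thesis's protocol.

        # Illustrative i.i.d. sanity checks before an extreme-value fit: the
        # execution-time samples should look identically distributed across the run
        # and show no strong serial correlation.  Tests, thresholds, and data are
        # stand-ins, not the thesis's protocol.
        import numpy as np
        from scipy.stats import ks_2samp

        def iid_checks(times, max_lag1_corr=0.1, ks_alpha=0.05):
            times = np.asarray(times, dtype=float)
            # Identical distribution: compare the two halves of the measurement run.
            half = len(times) // 2
            res = ks_2samp(times[:half], times[half:])
            # Independence proxy: lag-1 autocorrelation should be close to zero.
            centred = times - times.mean()
            lag1 = np.dot(centred[:-1], centred[1:]) / np.dot(centred, centred)
            return {"identically_distributed": res.pvalue > ks_alpha,
                    "independent": abs(lag1) < max_lag1_corr}

        # Synthetic stand-in for execution times measured with randomised cache placement.
        rng = np.random.default_rng(1)
        print(iid_checks(rng.normal(10_000, 120, size=2000)))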