
    Improving WCET Evaluation using Linear Relation Analysis

    The precision of a worst-case execution time (WCET) evaluation tool on a given program depends heavily on how well the tool detects and discards semantically infeasible executions of the program. In this paper, we propose to use the classical abstract-interpretation-based method of linear relation analysis to discover and exploit relations between execution paths. For this purpose, we add auxiliary variables (counters) to the program to trace its execution paths. The results are easily incorporated into the classical workflow of a WCET evaluator when the evaluator is based on the popular implicit path enumeration technique. We use existing tools (a WCET evaluator and a linear relation analyzer) to build and experiment with a prototype implementation of this idea. This work is supported by the French research foundation (ANR) as part of the W-SEPT project (ANR-12-INSE-0001).
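
    As a rough illustration of the counter-based instrumentation described above (the counter name, the program, and the derived relation are hypothetical, not taken from the paper), consider a costly error handler inside a loop that can fire at most once per run. An auxiliary counter on that block lets a linear relation analyzer prove a bound that plain IPET cannot see, and that bound can be exported as an extra ILP constraint:

        /* Minimal sketch of the counter instrumentation idea: c_handle and
         * the bound derived from it are illustrative assumptions, not code
         * from the paper. */
        #include <stdio.h>

        #define N 100

        static int handled = 0;

        static int expensive_handler(int x) {   /* worst-case-heavy block */
            return x * x;
        }

        int scan(const int *buf) {
            int c_handle = 0;                   /* auxiliary counter added for the analysis */
            int acc = 0;
            for (int i = 0; i < N; i++) {
                if (buf[i] < 0 && !handled) {   /* error handled at most once per run */
                    acc += expensive_handler(buf[i]);
                    handled = 1;
                    c_handle++;                 /* traces executions of the costly block */
                }
                acc += buf[i];
            }
            /* Linear relation analysis over the counters can prove c_handle <= 1.
             * Exported as an ILP constraint on the corresponding basic block's
             * execution count, this forbids the infeasible worst case in which
             * the handler runs in every one of the N iterations. */
            return acc + c_handle;
        }

        int main(void) {
            int buf[N] = {0};
            buf[3] = -5;
            printf("%d\n", scan(buf));
            return 0;
        }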

    Constant-Loop Dominators for Single-Path Code Optimization

    Single-path code is a code generation technique specifically designed for real-time systems. It guarantees that programs execute the same instruction sequence regardless of runtime conditions. Single-path code uses loop bounds to ensure that all loops iterate a fixed number of times equal to their upper loop bound. When the lower and upper bounds are equal, the loop always iterates the same number of times; we call such a loop a constant loop. In this paper, we present the constant-loop dominance relation on control-flow graphs. It is a variation of the traditional dominance relation that takes constant loops into account to find basic blocks that are always executed the same number of times. Using this relation, we present an optimization that reduces the code needed to manage single-path code. Our evaluation shows significant performance improvements, of up to 90% in one example, with mostly minor effects on code size.
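
    To make the idea concrete, here is a small, hypothetical C sketch (not taken from the paper) of what a constant loop looks like after single-path conversion: the branch condition becomes a predicate that selects between the two updates, so every run executes exactly N iterations of the same instruction sequence. Real single-path code relies on predicated machine instructions; the arithmetic selection below only emulates them.

        /* Hypothetical sketch of single-path (if-converted) code for a
         * constant loop; predication is emulated with arithmetic here. */
        #include <stdio.h>

        #define N 8   /* lower bound == upper bound: a "constant loop" */

        int single_path_sum(const int *a, int threshold) {
            int sum = 0;
            for (int i = 0; i < N; i++) {        /* always exactly N iterations */
                int pred = (a[i] > threshold);   /* branch condition as a predicate */
                /* Both sides of the original if are merged into one sequence;
                 * the predicate decides which update takes effect. */
                sum += pred * (2 * a[i]) + (1 - pred) * a[i];
            }
            return sum;
        }

        int main(void) {
            int a[N] = {1, 5, 3, 9, 2, 8, 4, 7};
            printf("%d\n", single_path_sum(a, 4));
            return 0;
        }

    Because a constant loop executes its body exactly N times regardless of the input, a block inside it runs the same number of times on every execution, which is exactly the property the constant-loop dominance relation is designed to capture.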

    Software support to strengthen measurement-based timing analysis

    In critical domains, the advent of high-performance (complex) hardware, used to provide the rising levels of guaranteed performance, complicates providing evidence of reliable software execution, especially on aspects related to the timing dimension. Caches and multicores are two of the hardware features that have the potential to significantly reduce worst-case execution time (WCET) estimates, yet they pose new challenges to current-practice measurement-based timing analysis (MBTA) approaches. MBTA methods rely on some form of instrumentation, at the hardware or software level, of the target program or fragments thereof to collect execution-time measurement data. Instrumentation affects the timing and functional behavior of a program, resulting in the so-called probe effect: leaving the instrumentation code in the final executable can degrade average performance and may not even be admissible under stringent industrial qualification and certification standards; removing it before operation jeopardizes the results of timing analysis, since the WCET estimates obtained on the instrumented version of the program may no longer be valid, for example because of the timing effects of different cache alignments.

    Measurement-Based Probabilistic Timing Analysis (MBPTA) is a variant of MBTA that aims at increasing the confidence in WCET estimates while relieving the user from controlling hardware sources of jitter. MBPTA implicitly controls the impact of jittery resources on measurements captured at analysis time: some hardware resources are randomized so that their execution times at analysis vary according to a probabilistic execution time distribution that can be used to upper-bound the latencies during operation and give a WCET prediction with a certain probability.

    In this Master Thesis we present our approach to mitigate the impact of instrumentation code on cache behavior by reducing the instrumentation overhead while preserving and consolidating the results of timing analysis. We further propose a technique for multilevel-cache multicores that combines deterministic and probabilistic jitter-bounding approaches to reliably handle the variability in execution time generated by caches and by contention in accessing shared hardware resources.
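
    For orientation, here is a minimal, generic sketch of the kind of software instrumentation MBTA relies on; the probe layout, buffer, and use of a POSIX clock are assumptions for illustration, not the instrumentation scheme proposed in the thesis. The extra code and the static buffer it touches are precisely what creates the probe effect discussed above.

        /* Generic sketch of a software timing probe for MBTA-style measurement. */
        #define _POSIX_C_SOURCE 199309L
        #include <stdio.h>
        #include <stdint.h>
        #include <time.h>

        #define MAX_SAMPLES 1024

        static uint64_t trace[MAX_SAMPLES];   /* preallocated to keep probe overhead low */
        static unsigned n_samples = 0;

        static inline uint64_t now_ns(void) {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
        }

        /* Probe inserted around the program fragment under analysis. */
        static inline void probe(void) {
            if (n_samples < MAX_SAMPLES)
                trace[n_samples++] = now_ns();
        }

        static int fragment_under_analysis(int x) {
            volatile int acc = 0;
            for (int i = 0; i < 1000; i++)
                acc += x * i;
            return acc;
        }

        int main(void) {
            probe();                          /* start timestamp */
            int r = fragment_under_analysis(42);
            probe();                          /* end timestamp */
            printf("result=%d elapsed=%llu ns\n", r,
                   (unsigned long long)(trace[1] - trace[0]));
            return 0;
        }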

    Determining data flow properties to improve worst-case execution time estimates

    The search for an upper bound on the execution time of a program is an essential part of the verification of real-time critical systems. The execution times of the programs of such systems generally vary widely, and it is difficult, or impossible, to predict the full range of possible times. Instead, it is better to look for an approximation of the Worst-Case Execution Time (WCET). A crucial requirement on this estimate is that it must be safe, that is, it must be guaranteed to be above the real WCET. Because we are looking to prove that the system in question terminates quickly enough, an overapproximation is the only acceptable form of approximation. Such a safety guarantee cannot reasonably be obtained without static analysis, as a result based on a battery of tests cannot be safe without an exhaustive coverage of the execution cases. Furthermore, in the absence of a certified compiler (and of a technique for the safe transfer of properties to the binaries), the extraction of properties must be done directly on the binary code to guarantee their soundness. However, this approximation comes at a cost: a large pessimism, the gap between the estimated WCET and the real WCET, leads to superfluous extra hardware costs for the system to respect the imposed timing requirements. It is therefore important, while maintaining the safety of the WCET estimate, to improve its precision by reducing this gap so that it is small enough not to entail immoderate costs.

    A major cause of overestimation is the inclusion of semantically impossible paths, called infeasible paths, in the WCET computation. This is due to the use of the Implicit Path Enumeration Technique (IPET), which works on a superset of the possible execution paths. When the Worst-Case Execution Path (WCEP) corresponding to the estimated WCET is infeasible, the precision of that estimate is negatively affected. To deal with this loss of precision, this thesis proposes an infeasible path detection technique that improves the precision of static analyses (notably WCET estimation) by informing them of the infeasibility of some paths of the program. This information is passed as data flow properties formatted in the portable annotation language FFX, which allows the results of our infeasible path analysis to be communicated to other analyses.

    The methods presented here are included in the OTAWA framework, developed within the TRACES team at the IRIT lab. They themselves make use of approximations to represent the possible states of the machine at various program points. These abstractions are maintained throughout the analysis, and their validity is ensured by tools from abstract interpretation theory. They allow the set of states for a class of execution paths up to a given program point to be represented in an efficient yet safe way, and they make it possible to detect program points associated with an empty set of possible states, which signals one (or several) infeasible path(s). The detection of such cases, the goal of the developed analysis, is made possible by the use of Satisfiability Modulo Theories (SMT) solvers. These solvers determine the satisfiability of a set of constraints deduced from the abstract states. If a set of constraints, formed from a conjunction of predicates, is unsatisfiable, then no valuation of the machine variables corresponds to a possible execution, and the associated family of paths is therefore infeasible.

    The efficiency of this technique is supported by a series of experiments on various benchmark suites, some widely recognized in the static WCET domain and others derived from actual industrial applications. Heuristics are configured to soften the complexity of the analysis, especially for the larger applications. The detected infeasible paths are injected as linear flow constraints into the Integer Linear Programming (ILP) system that drives the final WCET computation in OTAWA. Depending on the analysed program, this can result in a reduction of the estimated WCET, thereby improving its precision.
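
    A tiny, hypothetical example (not from the thesis, and independent of OTAWA's internals) of the kind of infeasibility this analysis targets: two guards on the same path contradict each other, so an SMT solver finds the conjunction of their predicates unsatisfiable, and the corresponding pair of basic blocks can be constrained never to execute together in the ILP formulation.

        /* Hypothetical infeasible path; function and block names are
         * illustrative only. */
        #include <stdio.h>

        int command(int x) {
            int cost = 0;
            if (x > 10) {          /* basic block B1, guarded by x > 10 */
                cost += 100;       /* expensive processing */
            }
            /* x is not modified in between */
            if (x < 5) {           /* basic block B2, guarded by x < 5 */
                cost += 80;        /* another expensive block */
            }
            /* The conjunction (x > 10) && (x < 5) is unsatisfiable, so the
             * path through both B1 and B2 is infeasible.  Expressed as a
             * linear flow constraint on block execution counts for IPET:
             *     count(B1) + count(B2) <= 1
             * which removes the spurious worst case of cost 180 from the
             * estimated bound. */
            return cost;
        }

        int main(void) {
            printf("%d\n", command(12));
            return 0;
        }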