
    Execution time distributions in embedded safety-critical systems using extreme value theory

    Several techniques have been proposed to upper-bound the worst-case execution time behaviour of programs in the domain of critical real-time embedded systems. These computing systems have strong requirements regarding the guarantee that the longest execution time a program can take is bounded. Some of those techniques use extreme value theory (EVT) as their main prediction method. In this paper, EVT is used to estimate a high quantile for different types of execution time distributions observed for a set of representative programs used in the analysis of automotive applications. A major challenge appears when the dataset seems to be heavy tailed, because this contradicts the light-tail assumption usually made for embedded safety-critical systems. A methodology based on the coefficient of variation is introduced for a threshold selection algorithm that determines the point above which the distribution can be modelled by a generalised Pareto distribution. The methodology also provides estimates of the extreme value index and of high quantiles. We have applied these methods to execution time observations collected from the execution of 16 representative automotive benchmarks to predict an upper bound on the maximum execution time of each program. Several comparisons with alternative approaches are discussed.

    The research leading to these results has received funding from the European Community's Seventh Framework Programme [FP7/2007-2013] under the PROXIMA Project (grant agreement 611085). This study was also partially supported by the Spanish Ministry of Science and Innovation under grants MTM2012-31118 (2013-2015) and TIN2015-65316-P. Jaume Abella is partially supported by the Ministry of Economy and Competitiveness under Ramon y Cajal postdoctoral fellowship number RYC-2013-14717.
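    As a hedged illustration of the peaks-over-threshold workflow this abstract describes, the sketch below selects a threshold via the coefficient of variation of the excesses, fits a generalised Pareto distribution, and reads off a high quantile. The synthetic data, the CV tolerance, and the 1e-6 exceedance target are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of EVT-based high-quantile estimation for execution times.
# Assumptions (not from the paper): synthetic data, a simple coefficient-of-
# variation (CV) heuristic for threshold selection, a 1e-6 exceedance target.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
# Stand-in for measured execution times (e.g. cycles); real data would come
# from repeated runs of an automotive benchmark.
times = 10_000 + rng.gamma(shape=2.0, scale=150.0, size=5_000)

def select_threshold(x, candidates, tol=0.05):
    """Return the lowest candidate threshold whose excesses have CV close
    to 1 (CV == 1 is the exponential case, a GPD with shape 0)."""
    for u in candidates:
        exc = x[x > u] - u
        if len(exc) < 50:          # too few excesses to trust the CV
            break
        if abs(exc.std(ddof=1) / exc.mean() - 1.0) < tol:
            return u
    return np.quantile(x, 0.90)    # fallback threshold

u = select_threshold(times, np.quantile(times, np.linspace(0.5, 0.95, 10)))
excesses = times[times > u] - u
zeta = len(excesses) / len(times)  # empirical P(X > u)

# Fit the GPD to the excesses; floc=0 because excesses start at zero.
shape, _, scale = genpareto.fit(excesses, floc=0)

# Quantile with exceedance probability 1e-6 per run:
# solve P(X > u) * SF_GPD(t - u) = 1e-6 for t.
p = 1e-6
quantile = u + genpareto.isf(p / zeta, shape, scale=scale)
print(f"u={u:.0f}  shape={shape:.3f}  quantile(1e-6)={quantile:.0f}")
```

    The fitted shape parameter plays the role of the extreme value index mentioned in the abstract: values near zero indicate an exponential-like tail, while clearly positive values signal the heavy-tailed case the paper flags as problematic.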

    Toward Contention Analysis for Parallel Executing Real-Time Tasks

    In measurement-based probabilistic timing analysis, the execution conditions imposed on tasks as measurement scenarios have a strong impact on the worst-case execution time estimates. The scenarios, and their effects on task execution behavior, have to be investigated in depth. The aim is to identify and guarantee the scenarios that lead to the maximum measurements, i.e. the worst-case scenarios, and to use them to support the worst-case execution time estimates. We propose a contention analysis to identify the worst contention a task can suffer from concurrent executions. The work focuses on interference on shared resources (cache memories and memory buses) from parallel executions in multi-core real-time systems. Our approach consists of searching for possible task contenders for parallel execution, modeling their contentiousness, and classifying the measurement scenarios accordingly. We identify the most contentious scenarios and their worst-case effects on task execution times. Measurement-based probabilistic timing analysis is then used to verify the proposed analysis, qualify the scenarios by contentiousness, and compare them. A parallel execution simulator for multi-core real-time systems is developed and used to validate our framework. The framework applies heuristics and assumptions that simplify the system behavior; it represents a first step toward a complete approach able to guarantee the worst-case behavior.
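    The toy sketch below illustrates the general idea of enumerating co-runner scenarios and ranking them by contentiousness. The contender set, the per-resource pressure scores, and the linear interference model are hypothetical stand-ins for the paper's simulator and heuristics, not its actual model.

```python
# Hypothetical sketch: each contender gets a contentiousness score for the
# shared bus and the shared cache; a scenario is a set of co-runners; the
# slowdown of the task under analysis grows with the summed pressure.
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Contender:
    name: str
    bus_pressure: float    # accesses per cycle to the shared memory bus
    cache_pressure: float  # evictions per cycle in the shared cache

def slowdown(task_base: float, scenario: tuple) -> float:
    """Toy interference model: execution time inflated by the combined
    pressure of all co-runners on the bus and the shared cache."""
    bus = sum(c.bus_pressure for c in scenario)
    cache = sum(c.cache_pressure for c in scenario)
    return task_base * (1.0 + 0.6 * bus + 0.4 * cache)

contenders = [
    Contender("streaming", 0.9, 0.7),
    Contender("pointer_chasing", 0.4, 0.9),
    Contender("compute_bound", 0.1, 0.1),
]

task_base = 1_000.0  # isolated execution time of the task under analysis
cores = 3            # cores available to contenders

# Enumerate all measurement scenarios and rank them, most contentious
# first; the top scenario approximates the worst-case scenario.
scenarios = [s for k in range(cores + 1) for s in combinations(contenders, k)]
for s in sorted(scenarios, key=lambda s: slowdown(task_base, s), reverse=True):
    names = ", ".join(c.name for c in s) or "(isolation)"
    print(f"{slowdown(task_base, s):8.1f}  {names}")
```

    In the paper's framework, the ranking would come from measurements on the simulator rather than from a closed-form model; the enumeration-and-classification structure is the part this sketch is meant to convey.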

    Software timing analysis for complex hardware with survivability and risk analysis

    The increasing automation of safety-critical real-time systems, such as those in cars and planes, leads to more complex and performance-demanding on-board software, and to the subsequent adoption of multicores and accelerators. This causes the execution time dispersion of software to increase due to variable-latency resources such as caches, NoCs, advanced memory controllers and the like. Statistical analysis has been proposed to model the Worst-Case Execution Time (WCET) of software running on such complex systems by providing reliable probabilistic WCET (pWCET) estimates. However, the statistical models used so far, which are based on risk analysis, are overly pessimistic by construction. In this paper we prove that statistical survivability and risk analyses are equivalent in terms of tail analysis and, building upon survivability analysis theory, we show that Weibull tail models can be used to estimate pWCET distributions reliably and tightly. In particular, our methodology proves the correctness-by-construction of the approach, and our evaluation provides evidence of the tightness of the pWCET estimates obtained, which can be reliably decreased by 40% for a railway case study with respect to state-of-the-art exponential tails.

    This work is a collaboration between Argonne National Laboratory and the Barcelona Supercomputing Center within the Joint Laboratory for Extreme-Scale Computing. This research is supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under contract number DE-AC02-06CH11357, program manager Laura Biven, by the Spanish Government (SEV2015-0493), by the Spanish Ministry of Science and Innovation (contract TIN2015-65316-P), and by Generalitat de Catalunya (contract 2014-SGR-1051).
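    A minimal sketch of a Weibull-tail pWCET estimate in the spirit of the approach described above is shown below. The synthetic data, the threshold choice, and the 1e-12 exceedance target are illustrative assumptions, not the paper's case study values.

```python
# Fit a Weibull model to execution-time tail excesses and derive a pWCET
# at a given per-run exceedance probability. All numbers are illustrative.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(1)
times = 50_000 + rng.weibull(1.5, size=20_000) * 2_000  # stand-in measurements

u = np.quantile(times, 0.95)       # tail threshold (illustrative choice)
excesses = times[times > u] - u
zeta = len(excesses) / len(times)  # empirical P(X > u)

# Fit a Weibull distribution to the tail excesses (floc=0: excesses
# start at zero by construction).
c, _, scale = weibull_min.fit(excesses, floc=0)

# pWCET at an exceedance probability of 1e-12 per run:
# solve P(X > u) * SF_Weibull(t - u) = 1e-12 for t.
target = 1e-12
pwcet = u + weibull_min.isf(target / zeta, c, scale=scale)
print(f"threshold={u:.0f}  shape={c:.2f}  pWCET(1e-12)={pwcet:.0f}")
```

    Because the Weibull survival function decays faster than an exponential when the shape parameter exceeds 1, the resulting pWCET can sit well below the exponential-tail estimate at the same exceedance probability, which is the source of the tightness gain the abstract reports.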

    An approach for detecting power peaks during testing and breaking systematic pathological behavior

    The verification and validation process of embedded critical systems requires providing evidence of their functional correctness, and also that their non-functional behavior stays within limits. In this work, we focus on power peaks, which may cause voltage droops and thus challenge performance, since correct operation must be preserved upon droops. In this line, the use of complex software and hardware in critical embedded systems jeopardizes the confidence that can be placed in the tests carried out during the campaigns performed at analysis time. This is so because it is unknown whether the tests have triggered the highest power peaks that can occur during operation, and whether any such peak can occur systematically. In this paper we propose the use of randomization, already used for the timing analysis of real-time systems, as an enabler to guarantee that (1) tests expose those peaks that can arise during operation, and (2) no peak can occur both systematically and inadvertently.

    This work has been partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under grant TIN2015-65316-P, the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 772773), and the HiPEAC Network of Excellence. MINECO partially supported Jaume Abella under Ramon y Cajal postdoctoral fellowship RYC-2013-14717.
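    The sketch below conveys the randomization idea in miniature. The measure_peak_power function, the activity model, and the 5.4 W cap are hypothetical stand-ins; on a real platform each call would run the test on randomized hardware/software conditions and return the observed power peak.

```python
# Randomisation-based peak-power testing (toy model). Each randomised run
# is an independent trial, so the observed exceedance frequency estimates
# the probability of a harmful peak, and systematic (every-run)
# pathological alignments are broken by design.
import random

def measure_peak_power(seed: int) -> float:
    """Hypothetical measurement: randomised placement changes which
    accesses coincide, and hence the worst instantaneous power drawn."""
    rng = random.Random(seed)
    activity = [rng.random() for _ in range(1_000)]  # per-cycle activity
    return 3.0 + 2.5 * max(activity)                 # watts (toy model)

runs = 10_000
cap = 5.4  # maximum power the supply sustains without a harmful droop

peaks = [measure_peak_power(seed) for seed in range(runs)]
exceed = sum(p > cap for p in peaks)
print(f"max peak {max(peaks):.2f} W; P(peak > {cap} W) ~ {exceed / runs:.4f}")
```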

    Modelling and predicting extreme behavior in critical real-time systems with advanced statistics

    In the last decade, the market for Critical Real-Time Embedded Systems (CRTES) has grown significantly. According to Global Market Insights [1], the embedded systems market will reach a total size of US $258 billion in 2023, at an average annual growth rate of 5.6%. The extensive use of CRTES in domains such as the automotive, aerospace and avionics industries demands ever-increasing performance [2]. To satisfy those requirements, the CRTES industry has adopted more complex processors, more memory modules, and accelerator units; the demanding performance requirements have thus led to a convergence of CRTES with high-performance systems. All of these industries work within the framework of CRTES, which places several restrictions on their design and implementation. Real-time systems must deliver a response to an event within a restricted time frame, or deadline. Real-time systems where missing a deadline provokes a total system failure (hard real-time systems) must satisfy certain guidelines and standards showing that they comply with tests for functional and timing behaviour. These standards change depending on the industry: for instance, the automotive industry follows ISO 26262 [3] and the aerospace industry follows DO-178C [4]. Researchers have developed techniques to analyse timing correctness in a CRTES. Here, we expose how they perform on the estimation of the Worst-Case Execution Time (WCET), the maximum time that a particular piece of software takes to execute. Estimating its value is crucial from a timing analysis point of view. However, there is still no generalised, precise and safe method to produce WCET estimates [5]. In CRTES, WCET estimates cannot be lower than the true WCET, as such estimates are deemed unsafe; but neither can they exceed it by a significant margin, as they would then be deemed pessimistic and impractical.
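    A tiny illustration of the safety/tightness criterion stated in the last sentence is given below. The true WCET is never known in practice; it is assumed here only to make the classification concrete, and the 20% pessimism margin is an arbitrary choice for illustration.

```python
# Classify WCET estimates against an assumed true WCET: below it they are
# unsafe; far above it they are safe but pessimistic and impractical.
TRUE_WCET = 1_000.0
MARGIN = 0.20  # tolerated over-estimation before an estimate is "pessimistic"

def classify(estimate: float) -> str:
    if estimate < TRUE_WCET:
        return "unsafe"       # could be exceeded at runtime
    if estimate > TRUE_WCET * (1 + MARGIN):
        return "pessimistic"  # safe, but wastes hardware resources
    return "safe and tight"

for est in (950.0, 1_050.0, 1_400.0):
    print(f"estimate {est:7.1f} -> {classify(est)}")
```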