24 research outputs found

    Measurement-Based Worst-Case Execution Time Estimation Using the Coefficient of Variation

    Extreme Value Theory (EVT) has historically been used in domains such as finance and hydrology to model worst-case events (e.g., major stock market incidents). EVT takes as input a sample of the distribution of the variable to model and fits the tail of that sample to either the Generalised Extreme Value (GEV) or the Generalised Pareto Distribution (GPD). Recently, EVT has become popular in real-time systems to derive worst-case execution time (WCET) estimates of programs. However, the application of EVT is not straightforward and requires a detailed analysis of, and customisation for, the particular problem at hand. In this article, we tailor the application of EVT to timing analysis. To that end, (1) we analyse the response time of different hardware resources (e.g., cache memories) and identify those that may lead to radically different types of execution time distributions. (2) We show that one of these distributions, known as a mixture distribution, causes problems in the use of EVT. In particular, mixture distributions challenge not only properly selecting the GEV/GPD parameters (i.e., location, scale and shape) but also determining the size of the sample, so as to ensure that enough tail values are passed to EVT and that only tail values are used by EVT to fit the GEV/GPD. Failing to select these parameters properly has a negative impact on the quality of the derived WCET estimates. We tackle these problems by (3) proposing Measurement-Based Probabilistic Timing Analysis using the Coefficient of Variation (MBPTA-CV), a new mixture-distribution-aware, WCET-suited MBPTA method that builds on recent EVT developments in other fields (e.g., finance) to automatically select the distribution parameters that best fit the maxima of the observed execution times. Our results on a simulation environment and a real board show that MBPTA-CV produces high-quality WCET estimates.

    The research leading to these results has received funding from the European Community's FP7 [FP7/2007-2013] under the PROXIMA Project (www.proxima-project.eu), grant 611085. This work has also been partially supported by the Spanish Ministry of Science and Innovation under grant TIN2015-65316-P and the HiPEAC Network of Excellence. Jaume Abella was partially supported by the Ministry of Economy and Competitiveness under Ramon y Cajal postdoctoral fellowship RYC-2013-14717.
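    As a rough illustration of the coefficient-of-variation idea the abstract describes, the Python sketch below selects the largest tail of an execution-time sample whose exceedances have a CV close to 1 (the signature of an exponential, i.e. GPD shape-zero, tail) and derives a pWCET estimate from it. The function names, the toy data and the exponential closed form are illustrative assumptions, not the authors' MBPTA-CV implementation.

```python
import numpy as np

def cv_tail_selection(times, min_tail=10):
    """For each candidate tail size, compute the coefficient of
    variation (CV) of the excesses over the threshold; keep the
    candidate whose CV is closest to 1 (exponential-like tail)."""
    x = np.sort(times)[::-1]           # descending
    best = None
    for k in range(min_tail, len(x)):
        u = x[k]                       # threshold: next value below tail
        exc = x[:k] - u                # excesses over the threshold
        cv = exc.std(ddof=1) / exc.mean()
        if best is None or abs(cv - 1.0) < abs(best[2] - 1.0):
            best = (k, u, cv)
    return best

def pwcet_exponential(times, k, u, p=1e-12):
    """pWCET at exceedance probability p, assuming an exponential
    (GPD shape = 0) model for the excesses above threshold u."""
    x = np.sort(times)[::-1]
    rate = 1.0 / (x[:k] - u).mean()    # MLE of the exponential rate
    p_u = k / len(times)               # empirical P(time > u)
    return u + np.log(p_u / p) / rate  # solve p_u * exp(-rate*(t-u)) = p

rng = np.random.default_rng(0)
sample = 1000 + rng.exponential(scale=5.0, size=1000)  # toy measurements
k, u, cv = cv_tail_selection(sample)
print(f"tail size {k}, threshold {u:.1f}, CV {cv:.3f}")
print(f"pWCET @ 1e-12: {pwcet_exponential(sample, k, u):.1f}")
```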

    Modelling and predicting extreme behavior in critical real-time systems with advanced statistics

    In the last decade, the market for Critical Real-Time Embedded Systems (CRTES) has increased significantly. According to Global Market Insights [1], the embedded systems market will reach a total size of US $258 billion in 2023, at an average annual growth rate of 5.6%. The extensive use of CRTES in domains such as the automotive, aerospace and avionics industries demands ever-increasing performance [2]. To satisfy those requirements, the CRTES industry has adopted more complex processors, a higher number of memory modules, and accelerator units. The demanding performance requirements have thus led to a convergence of CRTES and high-performance systems. All of these industries work within the framework of CRTES, which places several restrictions on their design and implementation. Real-time systems must deliver a response to an event within a restricted time frame, or deadline. Real-time systems where missing a deadline provokes a total system failure (hard real-time systems) must satisfy certain guidelines and standards to show that they comply with tests for functional and timing behaviour. These standards change depending on the industry: for instance, the automotive industry follows ISO 26262 [3] and the aerospace industry follows DO-178C [4]. Researchers have developed techniques to analyse the timing correctness of a CRTES. Here, we examine how these techniques perform in the estimation of the Worst-Case Execution Time (WCET). The WCET is the maximum time that a particular piece of software takes to execute. Estimating its value is crucial from a timing analysis point of view. However, there is still no generalised, precise and safe method to produce WCET estimates [5]. In CRTES, WCET estimates cannot be lower than the true WCET, as such estimates are unsafe; nor can they exceed it by a significant margin, as they would then be pessimistic and impractical.

    Software timing analysis for complex hardware with survivability and risk analysis

    The increasing automation of safety-critical real-time systems, such as those in cars and planes, leads to more complex and performance-demanding on-board software and the subsequent adoption of multicores and accelerators. This causes software's execution time dispersion to increase due to variable-latency resources such as caches, NoCs, advanced memory controllers and the like. Statistical analysis has been proposed to model the Worst-Case Execution Time (WCET) of software running on such complex systems by providing reliable probabilistic WCET (pWCET) estimates. However, the statistical models used so far, which are based on risk analysis, are overly pessimistic by construction. In this paper we prove that statistical survivability and risk analyses are equivalent in terms of tail analysis and, building upon survivability analysis theory, we show that Weibull tail models can be used to estimate pWCET distributions reliably and tightly. In particular, our methodology proves the correctness-by-construction of the approach, and our evaluation provides evidence of the tightness of the pWCET estimates obtained, which can be reliably decreased by 40% for a railway case study w.r.t. state-of-the-art exponential tails.

    This work is a collaboration between Argonne National Laboratory and the Barcelona Supercomputing Center within the Joint Laboratory for Extreme-Scale Computing. This research is supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under contract number DE-AC02-06CH11357, program manager Laura Biven; by the Spanish Government (SEV2015-0493); by the Spanish Ministry of Science and Innovation (contract TIN2015-65316-P); and by the Generalitat de Catalunya (contract 2014-SGR-1051).
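    A minimal sketch of Weibull-style tail modelling for pWCET, in the spirit of the survivability approach the abstract mentions: fit a Weibull model to the excesses over a high threshold and invert its survival function at the target exceedance probability. The threshold choice, the toy data and the use of scipy's weibull_min are illustrative assumptions, not the paper's estimator.

```python
import numpy as np
from scipy.stats import weibull_min

def pwcet_weibull(times, threshold, p=1e-12):
    """Fit a Weibull model to the excesses over `threshold` and
    return the execution time exceeded with probability p."""
    exc = times[times > threshold] - threshold
    p_u = exc.size / times.size                 # empirical P(time > u)
    c, loc, scale = weibull_min.fit(exc, floc=0)
    # Solve P(time > t) = p, i.e. S_weibull(t - u) = p / p_u
    return threshold + weibull_min.isf(p / p_u, c, scale=scale)

rng = np.random.default_rng(1)
sample = 800 + 20 * rng.weibull(1.5, size=5000)  # toy measurements
u = np.quantile(sample, 0.95)                    # illustrative threshold
print(f"pWCET @ 1e-12: {pwcet_weibull(sample, u):.1f}")
```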

    Boosting Guaranteed Performance in Wormhole NoCs with Probabilistic Timing Analysis

    Wormhole-based NoCs (wNoCs) are widely accepted in high-performance domains as the most appropriate solution to interconnect an increasing number of cores on the chip. However, the suitability of wNoCs in the context of critical real-time applications has not been demonstrated yet. In this paper, in the context of probabilistic timing analysis (PTA), we propose a PTA-compatible wNoC design that provides tight, time-composable contention bounds. The proposed wNoC design builds on PTA's ability to reason in probabilistic terms about hardware events impacting execution time (e.g., wNoC contention), discarding those sequences of events occurring with a negligibly low probability. This allows our wNoC design to deliver improved guaranteed performance. Our results show that WCET estimates of applications running on top of probabilistic wNoCs are reduced by 40% and 75% on average for 4x4 and 6x6 wNoC setups, respectively, when compared against deterministic wNoCs.

    This work has also been partially supported by the Spanish Ministry of Science and Innovation under grant TIN2015-65316-P and the HiPEAC Network of Excellence. Mladen Slijepcevic is funded by the Obra Social Fundación la Caixa under grant Doctorado "la Caixa" - Severo Ochoa. Carles Hernández is jointly funded by the Spanish Ministry of Economy and Competitiveness (MINECO) and FEDER funds through grant TIN2014-60404-JIN. Jaume Abella has been partially supported by MINECO under Ramon y Cajal postdoctoral fellowship number RYC-2013-14717.
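    To make the "discard negligibly improbable event sequences" idea concrete, here is a toy Python sketch: if each potential contender independently competes for a link with some probability, the contention bound can stop at the smallest number of simultaneous contenders whose exceedance probability falls below a cutoff, instead of the deterministic worst case. The independence model, the parameters and the per-flit delay are assumptions for illustration, not the paper's wNoC analysis.

```python
from math import comb

def prob_contention_bound(n_contenders, q, per_flit_delay, cutoff=1e-12):
    """Smallest contention-delay bound whose residual exceedance
    probability is below `cutoff`, assuming each of n_contenders
    independently competes for the link with probability q."""
    exceed = 1.0
    for k in range(n_contenders + 1):
        p_k = comb(n_contenders, k) * q**k * (1 - q)**(n_contenders - k)
        exceed -= p_k                  # exceed = P(more than k contenders)
        if exceed < cutoff:
            return k * per_flit_delay  # holds with residual risk < cutoff
    return n_contenders * per_flit_delay

# 15 potential contenders, each active 10% of the time, 4-cycle flits:
# the probabilistic bound (52 cycles) beats the deterministic one (60).
print(prob_contention_bound(15, 0.10, per_flit_delay=4))
```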

    Modeling the impact of process variations in worst-case energy consumption estimation

    The advent of autonomous power-limited systems poses a new challenge for system verification. The powerful processors needed to enable autonomous operation are typically power-hungry, jeopardizing battery duration. Therefore, guaranteeing a given battery duration requires worst-case energy consumption (WCEC) estimation for tasks running on those systems. Unfortunately, processor energy and power can vary significantly across different units due to process variation (PV), i.e. variability in the electrical properties of transistors and wires due to imperfect manufacturing, which challenges existing WCEC estimation methods. In this paper, we propose a statistical modeling approach to capture the impact of PV on application energy, and a methodology to compute PV-aware WCEC estimates, as required to deploy portable critical devices.

    This work has been partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under grant TIN2015-65316-P and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 772773). MINECO partially supported Jaume Abella under Ramon y Cajal fellowship RYC-2013-14717.
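    A minimal sketch of the kind of statistical PV modelling the abstract refers to: scale a nominal WCEC by a unit-to-unit variability factor and take a quantile so that only a negligible fraction of manufactured units exceeds the bound. The lognormal factor and its sigma are assumed for illustration, not the paper's model.

```python
from math import exp
from statistics import NormalDist

def wcec_with_pv(nominal_wcec, sigma_pv=0.05, exceed_prob=1e-9):
    """PV-aware WCEC bound: each manufactured unit scales the nominal
    WCEC by a lognormal process-variation factor; return the value
    not exceeded by a unit with probability 1 - exceed_prob."""
    z = NormalDist().inv_cdf(1.0 - exceed_prob)  # Gaussian quantile
    return nominal_wcec * exp(sigma_pv * z)      # lognormal quantile

print(f"PV-aware WCEC bound: {wcec_with_pv(nominal_wcec=2.5):.3f} J")
```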

    An approach for detecting power peaks during testing and breaking systematic pathological behavior

    The verification and validation process of embedded critical systems requires providing evidence of their functional correctness and also that their non-functional behavior stays within limits. In this work, we focus on power peaks, which may cause voltage droops and thus force performance to be sacrificed to preserve correct operation upon droops. In this context, the use of complex software and hardware in critical embedded systems jeopardizes the confidence that can be placed on the tests carried out during the analysis campaigns, because it is unknown whether tests have triggered the highest power peaks that can occur during operation and whether any such peak can occur systematically. In this paper we propose the use of randomization, already used for the timing analysis of real-time systems, as an enabler to guarantee that (1) tests expose those peaks that can arise during operation and (2) no peak can inadvertently occur systematically.

    This work has been partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under grant TIN2015-65316-P, the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 772773), and the HiPEAC Network of Excellence. MINECO partially supported Jaume Abella under Ramon y Cajal postdoctoral fellowship RYC-2013-14717.
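    The probabilistic argument that randomization enables can be made explicit with a short calculation: if a peak-triggering scenario occurs with at least some probability per randomized run, the number of runs needed to observe it with a given confidence follows directly. This sketch shows that arithmetic under assumed parameters; it is not the paper's test-campaign procedure.

```python
from math import ceil, log

def runs_needed(p_event, confidence=1 - 1e-9):
    """Number of independent randomized test runs so that any
    power-peak scenario with per-run probability >= p_event is
    observed at least once with the given confidence."""
    # P(missed in every run) = (1 - p_event)^R <= 1 - confidence
    return ceil(log(1 - confidence) / log(1 - p_event))

# Scenarios occurring in at least 1% of randomized runs need ~2062
# runs to be exposed with confidence 1 - 1e-9.
print(runs_needed(0.01))
```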

    Towards limiting the impact of timing anomalies in complex real-time processors

    Timing verification of embedded critical real-time systems is hindered by complex designs. Timing anomalies, deeply analyzed in static timing analysis, require specific solutions to bound their impact. For the first time, we study the concept and impact of timing anomalies in measurement-based timing analysis, the approach most widely used in industry, showing that they need to be considered and handled differently. In addition, we analyze anomalies in the context of Measurement-Based Probabilistic Timing Analysis (MBPTA), which simplifies quantifying their impact.

    Software Time Reliability in the Presence of Cache Memories

    The use of caches challenges measurement-based timing analysis (MBTA) in critical embedded systems. In the presence of caches, the worst-case timing behavior of a system heavily depends on how code and data are laid out in the cache. Guaranteeing that test runs capture the worst-case conflictive cache layouts, and hence that MBTA results are representative of them, is generally unaffordable for end users. The probabilistic variant of MBTA, MBPTA, exploits randomized caches and relieves the user from the burden of concocting layouts. In exchange, MBPTA requires the user to control the number of runs so that a solid probabilistic argument can be made about having captured the effect of worst-case cache conflicts during analysis. We present a computationally tractable Time-aware Address Conflict (TAC) mechanism that determines whether the impact of conflictive memory layouts is indeed captured in the MBPTA runs, and prompts the user for more runs in case it is not.

    The research leading to these results has received funding from the European Community's FP7 [FP7/2007-2013] under the PROXIMA Project (www.proxima-project.eu), grant agreement no 611085. This work has also been partially supported by the Spanish Ministry of Science and Innovation under grant TIN2015-65316-P and the HiPEAC Network of Excellence. Jaume Abella has been partially supported by the Ministry of Economy and Competitiveness under Ramon y Cajal postdoctoral fellowship number RYC-2013-14717.
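    One ingredient of such a representativeness argument is the probability that a randomized placement produces a conflictive layout at all, i.e. that more hot addresses map to one set than its associativity can hold. The Python sketch below estimates that probability with a binomial tail and a union bound over sets; it is an illustrative piece of the reasoning, not the TAC algorithm itself, and all parameters are assumed.

```python
from math import comb

def p_conflictive_layout(n_addresses, n_sets, ways):
    """Probability that, under uniform random placement, at least
    ways+1 of the hot addresses land in the same set of a
    set-associative cache (and hence evict each other)."""
    p_set = 1.0 / n_sets
    # Binomial tail: P(>= ways+1 of n addresses hit one fixed set)
    tail = sum(comb(n_addresses, k) * p_set**k
               * (1 - p_set)**(n_addresses - k)
               for k in range(ways + 1, n_addresses + 1))
    return min(1.0, n_sets * tail)   # union bound over all sets

# 12 hot addresses in a 64-set, 4-way random-placement cache: a
# conflictive layout is rare, so many runs are needed to capture it.
print(f"P(conflictive layout per run) ~ {p_conflictive_layout(12, 64, 4):.2e}")
```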

    STT-MRAM for real-time embedded systems: performance and WCET implications

    STT-MRAM is an emerging non-volatile memory quickly approaching DRAM in terms of capacity, frequency and device size. Intensified efforts in STT-MRAM research by the memory manufacturers may indicate that a revolution driven by STT-MRAM memory technology is imminent, and therefore it is essential to perform system-level research to explore use cases and identify computing domains that could benefit from this technology. Special STT-MRAM features such as intrinsic radiation hardness, non-volatility, zero stand-by power and the capability to function at extreme temperatures make it particularly suitable for aerospace, avionics and automotive applications. Such applications often have real-time requirements, that is, certain tasks must complete within a strict deadline. Analyzing whether this deadline is met requires Worst-Case Execution Time (WCET) analysis, which is a fundamental part of evaluating any real-time system. In this study, we investigate the feasibility of using STT-MRAM in real-time embedded systems by analyzing the average system performance impact and the WCET implications.

    This work was supported by BSC, by the Spanish Government through Programa Severo Ochoa (SEV-2015-0493), by the Spanish Ministry of Science and Technology through project TIN2015-65316-P, and by the Generalitat de Catalunya (contracts 2014-SGR-1051 and 2014-SGR-1272). This work has also received funding from the European Union's Horizon 2020 research and innovation programme under the ExaNoDe project (grant agreement No 671578). Jaume Abella was partially supported by the Ministry of Economy and Competitiveness under Ramon y Cajal postdoctoral fellowship RYC-2013-14717.