
    Development of a framework for automated systematic testing of safety-critical embedded systems

    In this paper we introduce the development of a framework for testing safety-critical embedded systems based on the concepts of model-based testing. In model-based testing, test cases are derived from a model of the system under test. In our approach the model is an automaton model that is automatically extracted from the C source code of the system under test. Besides random test-data generation, the test-case generation uses formal methods, specifically model-checking techniques. To find appropriate test cases we use the requirements defined in the system specification. To cover further execution paths we developed an additional and, to the best of our knowledge, novel method based on special structural coverage criteria. We present preliminary results on the model extraction using a concrete industrial case study from the automotive domain.
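
    As a rough illustration of the test-generation idea (a sketch, not the paper's implementation), the following C fragment encodes a hypothetical extracted automaton as a transition table and searches for an input sequence that exercises a chosen transition. In the paper's approach a model checker plays this role: a counterexample to the trap property "the target transition is never taken" is the test case. All states, inputs and names below are invented.

    /* Sketch: deriving a test case from an extracted automaton model.
     * The transition table is hypothetical; a real framework would
     * extract it from the C source of the system under test. */
    #include <stdio.h>

    typedef struct { int from; char input; int to; } transition_t;

    static const transition_t model[] = {
        {0, 'a', 1}, {1, 'b', 2}, {1, 'c', 0}, {2, 'd', 3},
    };
    #define N_TRANS (sizeof model / sizeof model[0])

    /* Depth-limited search for a path from `state` that takes
     * transition `target`; records the inputs taken in `path`. */
    static int find_path(int state, int target, char *path, int depth, int max)
    {
        if (depth >= max)
            return 0;
        for (int i = 0; i < (int)N_TRANS; i++) {
            if (model[i].from != state)
                continue;
            path[depth] = model[i].input;
            if (i == target) { path[depth + 1] = '\0'; return 1; }
            if (find_path(model[i].to, target, path, depth + 1, max))
                return 1;
        }
        return 0;
    }

    int main(void)
    {
        char test_case[16];
        /* Treat transition 3 (state 2 --d--> state 3) as the
         * uncovered coverage goal. */
        if (find_path(0, 3, test_case, 0, 15))
            printf("test input sequence: %s\n", test_case); /* abd */
        return 0;
    }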

    On the tailoring of CAST-32A certification guidance to real COTS multicore architectures

    The use of Commercial Off-The-Shelf (COTS) multicores in the real-time industry is on the rise due to multicores' potential for performance increase and energy reduction. Yet the unpredictable timing impact of contention in shared hardware resources challenges certification. Furthermore, most safety certification standards target single-core architectures and do not provide explicit guidance for multicore processors. Recently, however, CAST-32A has been presented, providing guidance for software planning, development and verification on multicores. In this paper, at the theoretical level, we provide a detailed review of the CAST-32A objectives and the difficulty of reaching them under current COTS multicore design trends; at the experimental level, we assess the difficulties of applying CAST-32A to a real multicore processor, the NXP P4080. This work has been partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under grant TIN2015-65316-P and the HiPEAC Network of Excellence. Jaume Abella has been partially supported by the MINECO under Ramon y Cajal grant RYC-2013-14717.
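
    To make the shared-resource contention that complicates CAST-32A compliance concrete, the toy experiment below times a victim loop alone and then again while a second thread sweeps a large buffer to compete for the shared cache. This is only a sketch of the kind of interference study such work performs, not the paper's P4080 evaluation; buffer sizes and the observed slowdown are machine-dependent assumptions.

    /* Toy contention experiment: compare a victim loop's time when it
     * runs alone vs. alongside a cache-thrashing contender thread.
     * Sizes are guesses; pick buffers larger than the shared cache.
     * Build with: cc -O2 -pthread contention.c */
    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    #define WORDS (4 * 1024 * 1024)           /* 32 MiB of longs */
    static volatile long victim_buf[WORDS / 2];
    static volatile long noise_buf[WORDS];
    static volatile int stop;

    static void *contender(void *arg)         /* evicts shared-cache lines */
    {
        (void)arg;
        while (!stop)
            for (long i = 0; i < WORDS; i += 8)
                noise_buf[i]++;
        return NULL;
    }

    static double timed_sweep(void)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < WORDS / 2; i += 8)
            victim_buf[i]++;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    }

    int main(void)
    {
        double alone = timed_sweep();
        pthread_t t;
        pthread_create(&t, NULL, contender, NULL);
        double contended = timed_sweep();
        stop = 1;
        pthread_join(t, NULL);
        printf("alone %.4fs, contended %.4fs, slowdown %.2fx\n",
               alone, contended, contended / alone);
        return 0;
    }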

    Portable data exchange for remote-testing frameworks

    To communicate between heterogeneous computer systems, mechanisms for data conversion are necessary. In this paper we present a portable, asymmetric data-conversion method that is suitable for remote-testing frameworks in embedded systems development. The described method takes the resource limitations of embedded systems into account by performing the data conversion on the testing host. The method can be implemented as platform-independent source code, and it avoids the need to recompile one communication partner's code when the other partner's code is migrated to a different platform.
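
    A minimal sketch of the asymmetric idea, assuming a hypothetical descriptor that records the target's byte order and integer width (the abstract does not give the framework's actual wire format): the target ships raw bytes, and all conversion happens on the host, so the target code needs no change or recompilation when the host platform changes.

    /* Host-side decoding sketch: the embedded target sends raw memory
     * bytes plus a one-time description of its data layout; the host
     * does all conversion. The descriptor fields are illustrative. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        int big_endian;   /* byte order of the target            */
        int sizeof_int;   /* size of `int` on the target (bytes) */
    } target_desc_t;

    /* Decode one target integer from `raw` according to the descriptor. */
    static int64_t decode_int(const uint8_t *raw, const target_desc_t *td)
    {
        uint64_t v = 0;
        for (int i = 0; i < td->sizeof_int; i++) {
            int shift = td->big_endian ? (td->sizeof_int - 1 - i) * 8 : i * 8;
            v |= (uint64_t)raw[i] << shift;
        }
        /* Sign-extend from the target's integer width. */
        uint64_t sign = (uint64_t)1 << (td->sizeof_int * 8 - 1);
        return (int64_t)((v ^ sign) - sign);
    }

    int main(void)
    {
        /* Example: a 16-bit big-endian target reporting the value -2. */
        target_desc_t td = { .big_endian = 1, .sizeof_int = 2 };
        uint8_t raw[] = { 0xFF, 0xFE };
        printf("decoded: %lld\n", (long long)decode_int(raw, &td));
        return 0;
    }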

    Mitigating Software-Instrumentation Cache Effects in Measurement-Based Timing Analysis

    Measurement-based timing analysis (MBTA) is often used to determine the timing behaviour of software programs embedded in safety-aware real-time systems deployed in various industrial domains, including automotive and railway. MBTA methods rely on some form of instrumentation, at either the hardware or the software level, of the target program or fragments thereof to collect execution-time measurement data. A known drawback of software-level instrumentation is that the instrumentation itself affects the timing and functional behaviour of a program, resulting in the so-called probe effect: leaving the instrumentation code in the final executable can degrade average performance and may not even be admissible under stringent industrial qualification and certification standards; removing it before operation jeopardizes the results of timing analysis, as WCET estimates obtained on the instrumented version of the program may no longer be valid, for example because of the timing effects of different cache alignments. In this paper, we present a novel approach that mitigates the impact of instrumentation code on cache behaviour by reducing the instrumentation overhead while preserving and consolidating the results of timing analysis.
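
    For context, lightweight software tracing often looks like the sketch below; this is illustrative of the overhead-reduction direction, not the paper's specific mechanism. Each probe is a single store of a point identifier into a preallocated ring buffer, keeping the per-probe code footprint small and fixed so that its effect on cache alignment stays bounded.

    /* Illustrative lightweight tracing: each probe is one store plus a
     * masked increment, with no calls, so the probe's code and data
     * footprint stays small and constant across program points. */
    #include <stdint.h>
    #include <stdio.h>

    #define TRACE_CAP 1024                 /* power of two, ring buffer */
    static uint32_t trace_buf[TRACE_CAP];
    static uint32_t trace_pos;

    #define TRACE(id) (trace_buf[trace_pos++ & (TRACE_CAP - 1)] = (id))

    static int step(int x)
    {
        TRACE(1);                          /* entry of step()          */
        if (x & 1) { TRACE(2); x = 3 * x + 1; }
        else       { TRACE(3); x /= 2; }
        return x;
    }

    int main(void)
    {
        int x = 7;
        while (x != 1)
            x = step(x);
        printf("recorded %u trace events\n", trace_pos);
        return 0;
    }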

    Towards limiting the impact of timing anomalies in complex real-time processors

    Timing verification of embedded critical real-time systems is hindered by complex designs. Timing anomalies, deeply analyzed in static timing analysis, require specific solutions to bound their impact. For the first time, we study the concept and impact of timing anomalies in measurement-based timing analysis, the approach most used in industry, and show that there they need to be considered and handled differently. In addition, we analyze anomalies in the context of Measurement-Based Probabilistic Timing Analysis, which simplifies quantifying their impact.
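
    A toy worked example (invented latencies, not from the paper) illustrates the phenomenon: a locally faster event, here a cache hit, can produce a globally slower schedule, which is exactly why measurements taken under seemingly favourable conditions can hide the worst case.

    /* Toy timing anomaly: one in-order, non-preemptive resource runs
     * jobs B and C in the order they become ready; D depends on C and
     * runs on an otherwise idle unit. A faster A makes things worse. */
    #include <stdio.h>

    static int makespan(int lat_A)
    {
        int ready_B = lat_A;    /* B waits for A's result          */
        int ready_C = 3;        /* C becomes ready at a fixed time */
        int len_B = 2, len_C = 10, len_D = 2;
        int end_B, end_C;

        if (ready_B <= ready_C) {            /* B wins the resource */
            end_B = ready_B + len_B;
            end_C = (end_B > ready_C ? end_B : ready_C) + len_C;
        } else {                             /* C wins the resource */
            end_C = ready_C + len_C;
            end_B = (end_C > ready_B ? end_C : ready_B) + len_B;
        }
        int end_D = end_C + len_D;
        return end_B > end_D ? end_B : end_D;
    }

    int main(void)
    {
        printf("A hits   (latency 2): makespan %d\n", makespan(2)); /* 16 */
        printf("A misses (latency 4): makespan %d\n", makespan(4)); /* 15 */
        return 0;
    }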

    Optimizing compilation with preservation of structural code coverage metrics to support software testing

    Code-coverage-based testing is a widely used testing strategy that aims to provide a meaningful decision criterion for the adequacy of a test suite. Code-coverage-based testing is also mandated for the development of safety-critical applications; for example, the DO-178B document requires the application of modified condition/decision coverage (MC/DC). One critical issue of code-coverage testing is that structural code-coverage criteria are typically applied to source code, whereas the generated machine code may have a different code structure because of code optimizations performed by the compiler. In this work, we present the automatic calculation of coverage profiles describing which structural code-coverage criteria are preserved by which code optimization, independently of the concrete test suite. These coverage profiles make it easy to extend a compiler so that it preserves any given code-coverage criterion, by enabling only those code optimizations that preserve it. Furthermore, we describe the integration of these coverage profiles into the GCC compiler. With these coverage profiles, we answer the question of how much code optimization is possible without compromising the error-detection likelihood of a given test suite. Experimental results show that the performance cost of preserving structural code coverage in GCC is rather low.
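
    A small hypothetical example of the kind of interaction such coverage profiles must capture: if-conversion turns a short-circuit decision into branchless code, so branch coverage of the optimized binary no longer implies that each source condition was exercised independently, as MC/DC requires.

    /* Why an optimization can break source-level structural coverage
     * (hypothetical example, not one of the paper's profiles). */
    #include <stdio.h>

    /* Source form: two machine-level branches, one per condition. */
    static int guard_src(int x, int y)
    {
        if (x > 0 && y > 0)     /* y > 0 not evaluated when x <= 0 */
            return 1;
        return 0;
    }

    /* After if-conversion: both conditions always evaluated, a single
     * branch remains. Branch coverage of this form needs only 2 tests,
     * while MC/DC on the source decision needs tests that toggle each
     * condition independently. */
    static int guard_opt(int x, int y)
    {
        return ((x > 0) & (y > 0)) ? 1 : 0;
    }

    int main(void)
    {
        printf("%d %d\n", guard_src(1, 1), guard_opt(1, -1));
        return 0;
    }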

    STT-MRAM for real-time embedded systems: performance and WCET implications

    STT-MRAM is an emerging non-volatile memory quickly approaching DRAM in terms of capacity, frequency and device size. Intensified efforts in STT-MRAM research by memory manufacturers may indicate that a revolution in memory technology is imminent, so it is essential to perform system-level research to explore use cases and identify computing domains that could benefit from this technology. Special STT-MRAM features such as intrinsic radiation hardness, non-volatility, zero stand-by power and the capability to function at extreme temperatures make it particularly suitable for aerospace, avionics and automotive applications. Such applications often have real-time requirements, that is, certain tasks must complete within a strict deadline. Analyzing whether a deadline is met requires Worst-Case Execution Time (WCET) analysis, which is a fundamental part of evaluating any real-time system. In this study, we investigate the feasibility of using STT-MRAM in real-time embedded systems by analyzing the average system performance impact and the WCET implications. This work was supported by BSC, the Spanish Government through Programa Severo Ochoa (SEV-2015-0493), the Spanish Ministry of Science and Technology through project TIN2015-65316-P, and the Generalitat de Catalunya (contracts 2014-SGR-1051 and 2014-SGR-1272). This work has also received funding from the European Union's Horizon 2020 research and innovation programme under the ExaNoDe project (grant agreement No 671578). Jaume Abella was partially supported by the Ministry of Economy and Competitiveness under Ramon y Cajal postdoctoral fellowship RYC-2013-14717.
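
    As a back-of-envelope sketch of the WCET side of the question, the classic cache-aware accounting below shows how a change in worst-case memory latency scales a WCET bound; all the numbers are invented placeholders, not the paper's measurements.

    /* How a change in worst-case memory latency moves a cache-aware
     * WCET bound. Latencies and miss ratios below are made up. */
    #include <stdio.h>

    static double wcet_cycles(double accesses, double worst_miss_ratio,
                              double hit_lat, double miss_lat)
    {
        /* Standard accounting: every access pays the hit latency; the
         * statically bounded misses also pay the memory latency. */
        return accesses * hit_lat + accesses * worst_miss_ratio * miss_lat;
    }

    int main(void)
    {
        double acc = 1e6, miss_ratio = 0.05, l1 = 2;
        printf("bound, DRAM-like     (100 cycles/miss): %.0f\n",
               wcet_cycles(acc, miss_ratio, l1, 100));
        printf("bound, STT-MRAM-like (150 cycles/miss): %.0f\n",
               wcet_cycles(acc, miss_ratio, l1, 150));
        return 0;
    }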

    Measurement-Based Worst-Case Execution Time Estimation Using the Coefficient of Variation

    Extreme Value Theory (EVT) has historically been used in domains such as finance and hydrology to model worst-case events (e.g., major stock market incidents). EVT takes as input a sample of the distribution of the variable to model and fits the tail of that sample to either the Generalised Extreme Value (GEV) distribution or the Generalised Pareto Distribution (GPD). Recently, EVT has become popular in real-time systems to derive worst-case execution time (WCET) estimates of programs. However, the application of EVT is not straightforward and requires a detailed analysis of, and customisation for, the particular problem at hand. In this article, we tailor the application of EVT to timing analysis. To that end, (1) we analyse the response time of different hardware resources (e.g., cache memories) and identify those that may lead to radically different types of execution time distributions. (2) We show that one of these distributions, known as a mixture distribution, causes problems in the use of EVT. In particular, mixture distributions challenge not only properly selecting the GEV/GPD parameters (i.e., location, scale and shape) but also determining the size of the sample, to ensure that enough tail values are passed to EVT and that only tail values are used by EVT to fit the GEV/GPD. Failing to select these parameters correctly has a negative impact on the quality of the derived WCET estimates. We tackle these problems by (3) proposing Measurement-Based Probabilistic Timing Analysis using the Coefficient of Variation (MBPTA-CV), a new mixture-distribution-aware, WCET-suited MBPTA method that builds on recent EVT developments in other fields (e.g., finance) to automatically select the distribution parameters that best fit the maxima of the observed execution times. Our results on a simulation environment and a real board show that MBPTA-CV produces high-quality WCET estimates. The research leading to these results has received funding from the European Community's FP7 [FP7/2007-2013] under the PROXIMA Project (www.proxima-project.eu), grant 611085. This work has also been partially supported by the Spanish Ministry of Science and Innovation under grant TIN2015-65316-P and the HiPEAC Network of Excellence. Jaume Abella was partially supported by the Ministry of Economy and Competitiveness under Ramon y Cajal postdoctoral fellowship RYC-2013-14717.
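
    The statistic at the core of MBPTA-CV can be sketched in a few lines (simplified here; the paper's parameter selection is more involved): for an exponential tail, i.e. GPD shape zero, the coefficient of variation (CV) of the exceedances over a threshold tends to 1, so scanning thresholds for a residual CV close to 1 indicates how many maxima to hand to EVT.

    /* Residual coefficient of variation of threshold exceedances.
     * For an exponential tail the CV tends to 1.
     * Build with: cc -O2 cv.c -lm */
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_dbl(const void *a, const void *b)
    {
        double d = *(const double *)a - *(const double *)b;
        return (d > 0) - (d < 0);
    }

    /* CV of the k largest samples' excesses over the (k+1)-th largest. */
    static double residual_cv(double *x, int n, int k)
    {
        qsort(x, n, sizeof *x, cmp_dbl);
        double thr = x[n - k - 1], mean = 0, var = 0;
        for (int i = n - k; i < n; i++)
            mean += x[i] - thr;
        mean /= k;
        for (int i = n - k; i < n; i++) {
            double d = (x[i] - thr) - mean;
            var += d * d;
        }
        var /= k - 1;
        return sqrt(var) / mean;
    }

    int main(void)
    {
        /* Synthetic exponential-tailed "measurements": CV should be near 1. */
        double x[2000];
        for (int i = 0; i < 2000; i++)
            x[i] = -log((rand() + 1.0) / (RAND_MAX + 2.0));  /* Exp(1) */
        printf("residual CV of top 100: %.2f\n", residual_cv(x, 2000, 100));
        return 0;
    }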

    Specifying subtypes in SCJ programs

    Modular reasoning about programs that use subtypes requires that an overriding method in a subtype obey the specifications of all methods that it overrides. For example, if method m is specified in a supertype T to take at most 42 nanoseconds to execute, then m cannot take more than 42 nanoseconds to execute in any subtype of T. Subtyping is an important aid to program maintenance, since it allows one to write polymorphic code (reducing code size and increasing reuse) and allows for convenient extension and enhancement of programs, all of which could be very useful in real-time programming. In this paper we show how to specify timing constraints for subtypes in a way that permits modular reasoning about timing constraints, supports subtype polymorphism and object-oriented design patterns, and still permits precise reasoning about execution times. This technique allows object-oriented coding and design patterns based on subtype polymorphism, with all their maintenance advantages, to be used in real-time software. © 2011 ACM