
    Rigorous development process of a safety-critical system: from ASM models to Java code

    The paper presents an approach for the rigorous development of safety-critical systems based on the Abstract State Machine (ASM) formal method. The development process starts from a high-level formal view of the system and, through refinement, derives more detailed models until the desired level of specification is reached. Along the way, different validation and verification activities are available, such as simulation, model review, and model checking. Moreover, each refinement step can be proved correct using an SMT-based approach. As the last step of the refinement process, a Java implementation can be developed and linked to the formal specification. The correctness of the implementation w.r.t. its formal specification can be established by means of model-based testing and runtime verification. The process is exemplified using a Landing Gear System as a case study.
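
    As an illustration of that final step, here is a minimal sketch of the lock-step runtime conformance check the abstract describes: an executable rendering of the abstract model and the Java implementation process the same events, and their observable states are compared after every step. All names are hypothetical and the landing-gear behaviour is radically simplified; this is not the paper's actual framework.

```java
import java.util.List;

public class ConformanceMonitor {

    enum GearState { RETRACTED, EXTENDED }

    /** Observable interface shared by the spec rendering and the implementation. */
    interface LandingGear {
        void handleDown();        // pilot moves the handle down: gear should extend
        void handleUp();          // pilot moves the handle up: gear should retract
        GearState gearState();    // observable state used for conformance checking
    }

    /** Executable rendering of the abstract model: one ASM step per event. */
    static class SpecModel implements LandingGear {
        private GearState state = GearState.RETRACTED;
        public void handleDown() { state = GearState.EXTENDED; }
        public void handleUp()   { state = GearState.RETRACTED; }
        public GearState gearState() { return state; }
    }

    /** The Java implementation under scrutiny (here: deliberately buggy). */
    static class GearImpl implements LandingGear {
        private GearState state = GearState.RETRACTED;
        public void handleDown() { state = GearState.EXTENDED; }
        public void handleUp()   { /* bug: retraction command ignored */ }
        public GearState gearState() { return state; }
    }

    public static void main(String[] args) {
        LandingGear spec = new SpecModel(), impl = new GearImpl();
        for (String event : List.of("down", "up", "down")) {
            if (event.equals("down")) { spec.handleDown(); impl.handleDown(); }
            else                      { spec.handleUp();   impl.handleUp();   }
            // Runtime verification step: compare the observable states.
            if (spec.gearState() != impl.gearState())
                throw new AssertionError("non-conformance after '" + event
                        + "': spec=" + spec.gearState() + ", impl=" + impl.gearState());
        }
    }
}
```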

    Towards a Method for Combined Model-based Testing and Analysis


    High-level verification flow for a high-level synthesis-based digital logic design

    High-level synthesis (HLS) is a method for generating register-transfer level (RTL) hardware descriptions of digital logic designs from high-level languages such as C/C++/SystemC or MATLAB. The performance and productivity benefits of HLS stem from the untimed, high-abstraction-level input languages. Another advantage is that design and verification can focus on features and high-level architecture instead of low-level implementation details. The goal of this thesis was to define and implement a high-level verification (HLV) flow for an HLS design written in C++. The HLV flow takes advantage of the performance and productivity of C++ as opposed to hardware description languages (HDLs) and minimises the required RTL verification work. The HLV flow was implemented in the case study of the thesis. The HLS design was verified in a C++ verification environment, and Catapult Coverage was used for pre-HLS coverage closure. Post-HLS verification and coverage closure were done in a Universal Verification Methodology (UVM) environment. The C++ tests used in pre-HLS coverage closure were reimplemented in UVM to obtain a high initial RTL coverage without manual RTL code analysis. The pre-HLS C++ design was implemented as a predictor in the UVM testbench to verify the equivalence of the C++ and RTL implementations and to speed up post-HLS coverage closure. The results of the case study show that the HLV flow is feasible to implement in practice, and that verification in the C++ domain brings significant performance and productivity gains compared to UVM. Reimplementing in UVM a somewhat incomplete set of pre-HLS tests (which reached 90.60% code coverage in C++), together with formal exclusions, resulted in an initial post-HLS coverage of 96.90%. The C++ predictor implementation was a valuable tool in post-HLS coverage closure. A total of four weeks of coverage work in the pre- and post-HLS phases was required to reach 99% RTL coverage; this total does not include the time required to build the C++ and UVM verification environments.
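
    As an illustration of the predictor idea described above, here is a minimal sketch of a predictor-plus-scoreboard check. A real flow would implement this in SystemVerilog/UVM; to keep all examples in one language it is shown in Java, and the saturating-increment "design" and all names are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.IntUnaryOperator;

public class PredictorScoreboard {
    private final IntUnaryOperator referenceModel;       // stand-in for the pre-HLS C++ model
    private final Deque<Integer> expected = new ArrayDeque<>();

    PredictorScoreboard(IntUnaryOperator referenceModel) { this.referenceModel = referenceModel; }

    /** Called for every stimulus driven into the DUT: predict the expected output. */
    void onStimulus(int in) { expected.addLast(referenceModel.applyAsInt(in)); }

    /** Called for every transaction observed on the DUT output: compare with prediction. */
    void onDutOutput(int out) {
        int want = expected.removeFirst();
        if (want != out)
            throw new AssertionError("mismatch: expected " + want + ", DUT produced " + out);
    }

    public static void main(String[] args) {
        // Toy design: a saturating 8-bit increment, used as both reference and "RTL".
        PredictorScoreboard sb = new PredictorScoreboard(x -> Math.min(x + 1, 255));
        for (int i = 250; i < 260; i++) {
            sb.onStimulus(i);
            sb.onDutOutput(Math.min(i + 1, 255));  // value observed from the simulated RTL
        }
        System.out.println("all transactions matched");
    }
}
```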

    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite their often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking and its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification but lack expertise in formal verification or modelling.
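
    As a flavour of the computation underlying probabilistic model checking, the following is a minimal sketch (not taken from the report) that bounds the probability of the "both sensors fail" event from the example property via transient analysis of a three-state discrete-time Markov chain; the per-step failure probability and horizon are illustrative assumptions.

```java
public class SensorFailure {
    public static void main(String[] args) {
        double p = 1e-4;     // assumed per-step failure probability of one sensor
        int steps = 1000;    // assumed mission length in steps

        // State: number of failed sensors (0, 1, or 2). dist[k] is the
        // probability of being in state k at the current step.
        double[] dist = {1.0, 0.0, 0.0};
        for (int t = 0; t < steps; t++) {
            double s0 = dist[0], s1 = dist[1];
            dist[0] = s0 * (1 - p) * (1 - p);                 // both still working
            dist[1] = s0 * 2 * p * (1 - p) + s1 * (1 - p);    // exactly one failed
            dist[2] = dist[2] + s0 * p * p + s1 * p;          // both failed (absorbing)
        }
        System.out.printf("P(both sensors failed within %d steps) = %.6f%n", steps, dist[2]);
    }
}
```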

    Expressing best practices in (risk) analysis and testing of safety-critical systems using patterns

    The continuing pervasion of our society by safety-critical cyber-physical systems not only demands adequate (risk) analysis, testing and verification techniques; it also generates a growing body of experience on their use, which can be considered as important for their efficient application as the tools themselves. This paper introduces workflow patterns that describe such best practices in a systematic way, representing this knowledge efficiently and relating different patterns to each other, so that they are easier to identify and use and cover as wide a range of experience as possible. The value of the approach is demonstrated using pattern examples from a collection developed in the Artemis project MBAT. Finally, the paper presents a wiki-based approach for developing and maintaining the pattern collection.
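
    Purely as an illustration (the pattern collection itself lives in a wiki, not in code), a workflow pattern and its relations to other patterns could be represented by a small data structure such as this hypothetical sketch.

```java
import java.util.List;

public record WorkflowPattern(
        String name,          // e.g. "Combined analysis and testing" (hypothetical)
        String context,       // when the pattern applies
        String problem,       // recurring problem it addresses
        String solution,      // workflow steps that solve it
        List<String> related  // names of related patterns, for navigation
) {
    public static void main(String[] args) {
        WorkflowPattern p = new WorkflowPattern(
                "Combined analysis and testing",
                "Safety-critical system with both a model and code available",
                "Neither analysis nor testing alone gives sufficient confidence",
                "Use static analysis results to focus test generation",
                List.of("Model-based test generation", "Risk-based prioritisation"));
        System.out.println(p.name() + " -> related: " + p.related());
    }
}
```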

    Automated specification-based testing of graphical user interfaces

    Doctoral thesis in Electronic and Computer Engineering, 2006. Faculdade de Engenharia, Universidade do Porto; Departamento de Informática, Escola de Engenharia, Universidade do Minho.

    Compiler fuzzing: how much does it matter?

    Despite much recent interest in randomised testing (fuzzing) of compilers, the practical impact of fuzzer-found compiler bugs on real-world applications has barely been assessed. We present the first quantitative and qualitative study of the tangible impact of miscompilation bugs in a mature compiler. We follow a rigorous methodology where the impact of a bug on the compiled application is evaluated based on (1) whether the bug appears to trigger during compilation; (2) the extent to which the generated assembly code changes syntactically due to triggering of the bug; and (3) whether such changes cause regression test suite failures, or whether we can manually find application inputs that trigger execution divergence due to such changes. The study is conducted with respect to the compilation of more than 10 million lines of C/C++ code from 309 Debian packages, using 12% of the historical and now fixed miscompilation bugs found by four state-of-the-art fuzzers in the Clang/LLVM compiler, as well as 18 bugs found by human users compiling real code or as a by-product of formal verification efforts. The results show that almost half of the fuzzer-found bugs propagate to the generated binaries for at least one package, in which case only a very small part of the binary is typically affected, yet this causes only two failures when running the test suites of all the impacted packages. User-reported and formal verification bugs do not exhibit a higher impact, with a lower rate of triggered bugs and one test failure. The manual analysis of a selection of the syntactic changes caused by some of our bugs (fuzzer-found and non-fuzzer-found) in package assembly code shows that either these changes have no semantic impact or they would require very specific runtime circumstances to trigger execution divergence.
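
    To make the methodology concrete, below is a minimal sketch of step (2), the syntactic assembly comparison, assuming two compiler builds, one containing a miscompilation bug and one with the fix. The installation paths are hypothetical; `clang -S` (emit assembly instead of an object file) is a real option.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class AsmDiff {
    static void compile(String compiler, String src, String asmOut)
            throws IOException, InterruptedException {
        // Invoke the given compiler build to emit assembly for one translation unit.
        Process p = new ProcessBuilder(compiler, "-S", src, "-o", asmOut)
                .inheritIO().start();
        if (p.waitFor() != 0) throw new IOException(compiler + " failed on " + src);
    }

    public static void main(String[] args) throws Exception {
        String src = "unit.c";  // one translation unit of a package (placeholder name)
        compile("/opt/llvm-buggy/bin/clang", src, "buggy.s");  // hypothetical buggy build
        compile("/opt/llvm-fixed/bin/clang", src, "fixed.s");  // hypothetical fixed build

        byte[] a = Files.readAllBytes(Path.of("buggy.s"));
        byte[] b = Files.readAllBytes(Path.of("fixed.s"));
        // A syntactic difference means the bug triggered and propagated to the
        // generated code; only then is semantic impact (tests, divergence) assessed.
        System.out.println(java.util.Arrays.equals(a, b)
                ? "no syntactic change: bug did not propagate"
                : "assembly differs: candidate for semantic-impact analysis");
    }
}
```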

    TOOL-ASSISTED VALIDATION AND VERIFICATION TECHNIQUES FOR STATE-BASED FORMAL METHODS

    To tackle the growing complexity of developing modern software systems, which are often embedded and distributed in nature and increasingly involve safety-critical aspects, formal methods (FMs) have been established as an effective approach to ensure the quality and correctness of the design, making it possible to discover errors at the early stages of system development. Among the many FMs available, some can be described as state-based, since they describe systems using the notions of state and transitions between states. State-based FMs are sometimes preferred because they produce specifications that are more intuitive, the notions of state and transition being close to the notions of program state and program execution familiar to any developer. Moreover, state-based FMs are usually executable and can be simulated, providing an abstraction of the execution of the system under development.
    The aim of the thesis is to provide tool-assisted techniques that ease the adoption of state-based FMs. In particular, we address four main goals: 1) identifying a process for the development of an integrated framework around a formal method. The adoption of a formal method is often prevented by the lack of tools supporting the user in the different development activities, such as model editing, validation, and verification. Moreover, even when tools are available, they have usually been developed to target only one aspect of the system development process. A well-engineered process that guides the development of concrete notations and tools for a FM can therefore make FMs practically applicable. 2) promoting the integration of different FMs. Having a single formal notation for the different formal activities carried out during system development is preferable to having a different notation for each activity, and such a notation should be high-level: working with high-level notations is considerably easier than working with low-level ones, and the resulting specifications are usually more readable. This goal can be seen as a sub-goal of the first one; indeed, a framework built around a formal method should also make it possible to integrate other formal methods that better address particular formal activities. 3) helping the user write correct specifications. The basic assumption of any formal technique is that the specification, representing the desired properties of the system or the model of the system, is correct; if it is not, all verification activities based on it produce meaningless results. Validation techniques should therefore ensure that the specification reflects the intended requirements; besides traditional simulation (user-guided or scenario-based), model review techniques that check for common quality attributes any specification should have are a viable solution. 4) reducing the distance between the formal specification and the actual implementation of the system. Several FMs work on a formal description of the system that is assumed to reflect the actual implementation; in practice, however, the two may not conform. One solution is to obtain the implementation from the formal specification through refinement steps and to prove that these steps are correct. A different viable solution is to link the implementation to its formal specification and check, during program execution, whether they conform.
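
    As an illustration of that last point, here is a minimal sketch of runtime conformance checking: predicates taken from the formal specification are evaluated over the implementation's observable state after every execution step. The classes and the toy bounded-counter system are hypothetical, not tied to any specific tool.

```java
import java.util.List;
import java.util.function.Predicate;

public class RuntimeConformance {
    /** One invariant of the formal specification, evaluated over implementation state. */
    record Invariant<S>(String name, Predicate<S> holds) {}

    /** Observable state of a toy implementation: a bounded counter. */
    record CounterState(int value, int bound) {}

    public static void main(String[] args) {
        List<Invariant<CounterState>> spec = List.of(
                new Invariant<CounterState>("value is non-negative", s -> s.value() >= 0),
                new Invariant<CounterState>("value stays within bound", s -> s.value() <= s.bound()));

        int bound = 10, value = 0;
        for (int step = 0; step < 12; step++) {
            value = Math.min(value + 1, bound);           // one implementation step
            CounterState s = new CounterState(value, bound);
            for (Invariant<CounterState> inv : spec)      // conformance check after the step
                if (!inv.holds().test(s))
                    throw new AssertionError("violated: " + inv.name() + " at step " + step);
        }
        System.out.println("execution conformed to the specification");
    }
}
```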

    Using contexts to extract models from code

    Behaviour models facilitate the understanding and analysis of software systems by providing an abstract view of their behaviours and by enabling the use of validation and verification techniques to detect errors. However, depending on the size and complexity of these systems, constructing models may not be a trivial task, even for experienced developers. Model extraction techniques can automatically obtain models from existing code, thus reducing the effort and expertise required of engineers and helping avoid errors often present in manually constructed models. Existing approaches for model extraction often fail to produce faithful models, either because they only consider static information, which may include infeasible behaviours, or because they are based only on dynamic information, thus relying on observed executions, which usually results in incomplete models. This paper describes a model extraction approach based on the concept of contexts, which are abstractions of concrete states of a program combining static and dynamic information. Contexts merge some of the advantages of using either type of information and, by their combination, can overcome some of their problems. The approach is partially implemented by a tool called LTS Extractor, which translates information collected from execution traces of instrumented Java code into labelled transition systems (LTSs) that can be analysed in an existing verification tool. Results from case studies are presented and discussed, showing that, for a given level of abstraction and set of execution traces, the produced models are correct descriptions of the programs from which they were extracted. They can thus be used for a variety of analyses, such as program understanding, validation, verification, and evolution.
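
    The following is a minimal sketch of the core extraction step, assuming traces whose concrete states have already been abstracted into contexts: the traces are folded into an LTS whose states are contexts and whose labelled transitions are the observed events. The trace format is a simplifying assumption, not LTS Extractor's actual input format.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class TraceToLts {
    public static void main(String[] args) {
        // Each trace alternates context, event, context, ..., ending in a context.
        List<List<String>> traces = List.of(
                List.of("idle", "open", "opened", "read", "opened", "close", "idle"),
                List.of("idle", "open", "opened", "close", "idle"));

        // Transitions: source context -> (event -> successor context).
        Map<String, Map<String, String>> lts = new LinkedHashMap<>();
        for (List<String> t : traces)
            for (int i = 0; i + 2 < t.size(); i += 2)
                lts.computeIfAbsent(t.get(i), k -> new LinkedHashMap<>())
                   .put(t.get(i + 1), t.get(i + 2));

        // Print the merged LTS in a simple textual form.
        lts.forEach((src, edges) -> edges.forEach((ev, dst) ->
                System.out.println(src + " --" + ev + "--> " + dst)));
    }
}
```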

    Analysis and Test of the Effects of Single Event Upsets Affecting the Configuration Memory of SRAM-based FPGAs

    SRAM-based FPGAs are increasingly relevant in a growing number of safety-critical application fields, ranging from automotive to aerospace. These fields are characterised by harsh radiation environments that can cause Single Event Upsets (SEUs) in digital devices. Such faults have particularly adverse effects on SRAM-based FPGA systems because not only can they temporarily affect the behaviour of the system by changing the contents of flip-flops or memories, but they can also permanently change the functionality implemented by the system itself by changing the content of the configuration memory. Designing safety-critical applications requires accurate methodologies to evaluate the system's sensitivity to SEUs as early as possible during the design process. Moreover, it is necessary to detect the occurrence of SEUs during the system's lifetime; to this purpose, test patterns should be generated during the design process and then applied to the inputs of the system during its operation. In this thesis we propose a set of software tools that designers of SRAM-based FPGA safety-critical applications can use to assess the sensitivity of the system to SEUs and to generate test patterns for in-service testing. The main feature of these tools is that they implement a model of SEUs affecting the configuration bits controlling the logic and routing resources of an FPGA device, which has been demonstrated to be much more accurate than the classical stuck-at and open/short models commonly used in the analysis of faults in digital devices. By taking this accurate fault model into account, the proposed tools are more accurate than similar academic and commercial tools available today, which do not take the features of the FPGA technology into account. In particular, three tools have been designed and developed: (i) ASSESS (Accurate Simulator of SEUs affecting the configuration memory of SRAM-based FPGAs), a simulator of SEUs affecting the configuration memory of an SRAM-based FPGA system for the early assessment of SEU sensitivity; (ii) UA2TPG (Untestability Analyzer and Automatic Test Pattern Generator for SEUs affecting the configuration memory of SRAM-based FPGAs), a static analysis tool for the identification of untestable SEUs and for the automatic generation of test patterns for in-service testing of 100% of the testable SEUs; and (iii) GABES (Genetic Algorithm Based Environment for SEU testing in SRAM-based FPGAs), a genetic-algorithm-based environment for the generation of an optimised set of test patterns for in-service testing of SEUs. The proposed tools have been applied to circuits from the ITC'99 benchmark suite, and the results have been compared with those of similar experiments in which we considered the stuck-at fault model instead of the more accurate SEU model. The comparison confirms that the proposed software tools are indeed more accurate than similar tools available today. In particular, comparing the results obtained with ASSESS against fault injection shows that the proposed fault simulator has an average error of 0.1% and a maximum error of 0.5%, whereas a stuck-at fault simulator has an average error of 15.1% and a maximum error of 56.2% with respect to the fault injection experiments. Similarly, comparing the results obtained with UA2TPG for the accurate SEU model with those obtained for stuck-at faults shows an average difference in untestability of 7.9%, with a maximum of 37.4%. Finally, comparing the fault coverage achieved by test patterns generated for the accurate SEU model with that achieved by test patterns designed for stuck-at faults shows that the former detect 100% of the testable faults, while the latter reach an average fault coverage of 78.9%, with a minimum of 54% and a maximum of 93.16%.
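
    To illustrate why SEUs in the configuration memory differ from classical stuck-at faults, here is a minimal sketch (not one of the thesis tools) in which a 2-input LUT is modelled by its four configuration bits and an SEU is a single bit-flip in them, permanently altering the implemented logic function rather than a single wire value.

```java
import java.util.BitSet;

public class SeuInjection {
    /** 2-input LUT: four configuration bits define the implemented logic function. */
    static class Lut2 {
        final BitSet config = new BitSet(4);
        Lut2(int truthTable) {                 // e.g. 0b1000 configures an AND gate
            for (int i = 0; i < 4; i++) if ((truthTable >> i & 1) != 0) config.set(i);
        }
        boolean eval(boolean a, boolean b) { return config.get((a ? 1 : 0) | (b ? 2 : 0)); }
        void injectSeu(int bit) { config.flip(bit); }  // single event upset: one bit-flip
    }

    public static void main(String[] args) {
        Lut2 lut = new Lut2(0b1000);           // configured as AND
        lut.injectSeu(0);                      // SEU in configuration bit 0
        // After the upset the LUT still computes AND for all inputs except (0,0),
        // where it now outputs 1: the circuit's function itself has changed.
        for (boolean a : new boolean[]{false, true})
            for (boolean b : new boolean[]{false, true})
                System.out.printf("a=%b b=%b -> %b%n", a, b, lut.eval(a, b));
    }
}
```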