
    Cross-platform verification framework for embedded systems

    Many innovations in the automotive sector involve complex electronics and embedded software systems. Testing techniques are one of the key methodologies for detecting faults in such embedded systems. In this paper, a novel cross-platform verification framework including automated test-case generation by model checking is introduced. Cross-platform verification denotes comparing the execution behavior of a program instance running on one platform to the execution behavior of the same program running on a different platform. The framework supports various types of coverage criteria. End-to-end testing turned out to be of high importance, since some defects occur for the first time on the actual target platform. Additionally, formal verification can be applied to check requirements derived from the specification, using the same model generation mechanism that is used for test data generation. A novel self-assessment mechanism significantly increases confidence in the formal models. We provide a case study for the Motorola HCS12 embedded controller, which is heavily used in the automotive industry. We perform structural tests on industrial code patterns using a widespread industrial compiler. Using our technique, we found two severe compiler defects that have been corrected in subsequent releases.
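
    As a concrete illustration of the comparison step, the sketch below runs the same test vectors through two builds of a program and reports any behavioral divergence. It is a minimal sketch in Python; the binary names and the stdin/stdout protocol are assumptions for the example, not details from the paper.

```python
# Minimal sketch of cross-platform verification: feed identical test
# vectors to a host build and a (simulated) target build of the same
# program and flag any divergence in observable output.
import subprocess

def run_instance(binary: str, test_vector: str) -> str:
    """Execute one program instance and capture its stdout."""
    result = subprocess.run([binary], input=test_vector,
                            capture_output=True, text=True, timeout=10)
    return result.stdout

def cross_platform_verify(test_vectors,
                          host_bin="./prog_host",        # hypothetical names
                          target_bin="./prog_hcs12_sim"):
    """Return every vector on which the two platforms diverge."""
    mismatches = []
    for vec in test_vectors:
        host_out = run_instance(host_bin, vec)
        target_out = run_instance(target_bin, vec)
        if host_out != target_out:
            mismatches.append((vec, host_out, target_out))
    return mismatches
```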

    Formal Verification of a Rover Anti-collision System

    In this paper, we integrate inductive proof, bounded model checking, test case generation, and equivalence proof techniques to verify an embedded system. This approach is implemented using the Systerel Smart Solver (S3) toolset. It is applied to verify properties at the system, software, and code levels. The verification process is illustrated on an anti-collision system (ARP, for Automatic Rover Protection) implemented on board a rover. Focus is placed on the verification of safety and functional properties and on the proof of equivalence between the design model and the generated code.
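
    Of the techniques combined here, bounded model checking is the most amenable to a compact illustration. The following Python sketch explores an explicit-state transition system up to a depth bound and returns a counterexample trace if the safety property is violated; the two-rover toy model is an illustrative stand-in, not the paper's actual ARP design.

```python
# Minimal sketch of bounded model checking over an explicit-state
# transition system: enumerate all states reachable within k steps and
# check a safety predicate on each one.
from itertools import product

def bmc(initial, transitions, safe, k):
    """Return a counterexample trace of length <= k, or None if the
    safety property holds within the bound."""
    frontier = [(s, [s]) for s in initial]
    for _ in range(k):
        next_frontier = []
        for state, trace in frontier:
            for succ in transitions(state):
                if not safe(succ):
                    return trace + [succ]      # safety violation found
                next_frontier.append((succ, trace + [succ]))
        frontier = next_frontier
    return None

# Toy instance: two rovers on a 5-cell line; safety = never the same cell.
initial = [(0, 3)]
def transitions(state):
    a, b = state
    return [(a + da, b + db) for da, db in product((-1, 0, 1), repeat=2)
            if 0 <= a + da <= 4 and 0 <= b + db <= 4]
def safe(state):
    return state[0] != state[1]

print(bmc(initial, transitions, safe, k=3))    # prints a collision trace
```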

    Calculating Nozzle Side Loads using Acceleration Measurements of Test-Based Models

    As part of a NASA/MSFC research program to evaluate the effect of different nozzle contours on the well-known but poorly characterized "side load" phenomenon, we attempt to back out the net force on a sub-scale nozzle during cold-flow testing using acceleration measurements. Because modeling the test facility dynamics is problematic, new techniques for creating a "pseudo-model" of the facility and nozzle directly from modal test results are applied. Extensive verification procedures were undertaken, resulting in a loading scale factor necessary for agreement between test- and model-based frequency response functions. Side loads are then obtained by applying a wide-band random load onto the system model, obtaining nozzle response PSDs, and iterating both the amplitude and frequency of the input until the response compares well with the measured response PSD for a specific time point. The final calculated loading can be used to compare different nozzle profiles during rocket engine nozzle development and as a basis for accurate design of the nozzle and engine structure to withstand these loads. The techniques applied within this procedure have extensive applicability to the timely and accurate characterization of all test fixtures used for modal testing. A viewgraph presentation on the model-test-based pseudo-model used to calculate side loads on rocket engine nozzles is included. The topics include: 1) Side Loads in Rocket Nozzles; 2) Present Side Loads Research at NASA/MSFC; 3) Structural Dynamic Model Generation; 4) Pseudo-Model Generation; 5) Implementation; 6) Calibration of Pseudo-Model Response; 7) Pseudo-Model Response Verification; 8) Inverse Force Determination; 9) Results; and 10) Recent Work.
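
    The iterative force-reconstruction step lends itself to a small numerical illustration. The sketch below scales a band-limited input PSD through an assumed frequency response function until the predicted response PSD matches a measured one; the modal parameters, frequency band, and "measured" PSD are made-up stand-ins, not values from the study.

```python
# Minimal sketch of iterative inverse force determination in the
# frequency domain: response PSD = |H(f)|^2 * input PSD, so scale the
# input level until predicted and measured response PSDs agree.
import numpy as np

f = np.linspace(1.0, 200.0, 400)                 # frequency grid [Hz]
fn, zeta = 60.0, 0.03                            # assumed modal parameters
H2 = 1.0 / ((1 - (f / fn) ** 2) ** 2 + (2 * zeta * f / fn) ** 2)  # |H(f)|^2

measured_psd = 5.0e-4 * H2                       # stand-in "measurement"

def predict_response(amplitude, f_lo, f_hi):
    """Response PSD for a flat input PSD confined to [f_lo, f_hi]."""
    load_psd = np.where((f >= f_lo) & (f <= f_hi), amplitude, 0.0)
    return H2 * load_psd

# Iterate the input amplitude until the in-band response PSDs match.
amplitude = 1.0
band = (f >= 40.0) & (f <= 80.0)
for _ in range(50):
    pred = predict_response(amplitude, 40.0, 80.0)
    amplitude *= np.sum(measured_psd[band]) / np.sum(pred[band])
print(f"reconstructed load PSD level: {amplitude:.2e}")
```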

    Activity recognition using Grey-Markov model

    Activity Recognition (AR) is the process of identifying the actions and goals of one or more agents of interest. AR techniques have been applied to both large- and small-scale activity identification; examples include genetic algorithms and Markov chains. This research proposes a novel method, the Grey-Markov Model (GMM), for detection and prediction of pre-defined activities. This research had three objectives. The first was to establish a database of pre-defined human activities. The second was to establish the Grey-Markov Model. The final objective was to verify the model's performance using the established database. This thesis describes the methodology of the test setup and data collection, as well as the procedure of model generation. Furthermore, experimental results of the model performance verification test are reported.
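
    As a sketch of the grey half of the Grey-Markov method, the code below fits a standard GM(1,1) model by least squares and forecasts ahead; in the full GMM, the GM(1,1) residuals would additionally be classified into states and corrected with a Markov chain. The activity-count series is invented for illustration.

```python
# Minimal sketch of GM(1,1) grey prediction: accumulate the series,
# fit the grey differential equation dx1/dt + a*x1 = b by least
# squares, and forecast the next original values.
import numpy as np

def gm11_predict(x0, steps=1):
    """Fit GM(1,1) to series x0 and forecast `steps` further values."""
    x1 = np.cumsum(x0)                            # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    (a, b), *_ = np.linalg.lstsq(B, Y, rcond=None)
    n = len(x0)
    def x1_hat(k):                                # accumulated forecast
        return (x0[0] - b / a) * np.exp(-a * k) + b / a
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]

activity_counts = np.array([12.0, 14.0, 15.5, 17.2, 19.1])  # invented data
print(gm11_predict(activity_counts, steps=2))
```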

    Automatic vector generation guided by a functional metric

    Verification is still the bottleneck of the complex digital system design process. Formal techniques have advanced in their capacity to handle more complex descriptions, but they still suffer from memory or time explosion. Simulation-based techniques handle descriptions of any size or complexity, but their efficiency drops as system complexity grows, because the number of simulation tests needed to maintain coverage increases exponentially. Semi-formal techniques combine the advantages of both, increasing the efficiency of simulation-based verification. In this area, several research works have introduced techniques that automate the generation of vectors driven by traditional coverage metrics. However, these techniques do not ensure the detection of 100% of faults. This paper presents a novel technique for vector generation whose major benefit is generating test-benches more efficiently than techniques based on structural metrics. The technique is more efficient because it relies on a novel coverage metric that correlates more directly with functional faults than structural coverage metrics (line, branch, etc.). The proposed metric is based on an abstraction of the system as a set of polynomials, where all system behaviors are described by a set of coefficients. By assuming a finite precision of coefficients and a maximum degree of polynomials, all system behaviors, both correct and incorrect, can be modeled. The technique applies mathematical theories (computer algebra and number theory) to calculate the coverage and to generate vectors that maximize it. Moreover, a tool implementing the technique has been developed; it takes a C-based system description and provides the coverage and the generated vectors as output.
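
    A toy version of such a polynomial-abstraction metric can be sketched as follows: every behavior is a polynomial of bounded degree with small integer coefficients, and a vector set is scored by the fraction of incorrect polynomials it distinguishes from the reference. The degree bound, coefficient range, and reference polynomial here are illustrative assumptions, not the paper's actual construction.

```python
# Minimal sketch of a polynomial-abstraction coverage metric: enumerate
# all degree-<=d polynomials with coefficients in a finite range as the
# space of (correct and faulty) behaviors, and measure which fraction of
# the faulty ones the given test vectors can tell apart from the
# reference behavior.
from itertools import product

def coverage(vectors, ref_coeffs, degree=2, coeff_range=range(-2, 3)):
    def evaluate(coeffs, x):
        return sum(c * x ** i for i, c in enumerate(coeffs))
    mutants = [c for c in product(coeff_range, repeat=degree + 1)
               if c != tuple(ref_coeffs)]
    killed = sum(any(evaluate(m, x) != evaluate(ref_coeffs, x)
                     for x in vectors)
                 for m in mutants)
    return killed / len(mutants)

ref = (1, 0, 2)                  # reference behavior: 2x^2 + 1
print(coverage([0], ref))        # one vector distinguishes few mutants
print(coverage([0, 1, -1], ref)) # d+1 distinct points: full coverage
```

    The second call illustrates why the metric is useful for vector generation: a degree-d polynomial is pinned down by d+1 distinct evaluation points, so a small, well-chosen vector set already separates every incorrect behavior from the correct one.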

    Translating expert system rules into Ada code with validation and verification

    The purpose of this ongoing research and development program is to develop software tools which enable the rapid development, upgrading, and maintenance of embedded real-time artificial intelligence systems. The goals of this phase of the research were to investigate the feasibility of developing software tools which automatically translate expert system rules into Ada code, and to develop methods for performing validation and verification testing of the resultant expert system. A prototype system was demonstrated which automatically translated rules from an Air Force expert system, and which detected errors in the execution of the resultant system. The method and prototype tools convert AI representations into Ada code by translating the rules into Ada code modules and then linking them with an Activation Framework based run-time environment to form an executable load module. This method is based upon the use of Evidence Flow Graphs, a data flow representation for intelligent systems. The development of prototype test generation and evaluation software used to test the resultant code is also discussed. This testing was performed automatically using Monte-Carlo techniques, based upon a constraint-based description of the required performance of the system.
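
    The Monte-Carlo testing step can be sketched compactly: sample random inputs within declared ranges and check each output of the system under test against a required-performance predicate. The sketch below is in Python rather than Ada, and the rule function and constraints are invented for illustration.

```python
# Minimal sketch of constraint-based Monte-Carlo testing: draw random
# inputs from declared ranges, run the system under test, and record
# every case that violates the required-performance predicate.
import random

def threat_level(speed, distance):      # stand-in for translated rule code
    return "high" if speed / max(distance, 1e-6) > 0.5 else "low"

INPUT_CONSTRAINTS = {"speed": (0.0, 300.0), "distance": (1.0, 1000.0)}

def required(inputs, output):           # constraint-based spec (invented)
    if inputs["speed"] > 150 and inputs["distance"] < 100:
        return output == "high"
    return output in ("high", "low")

def monte_carlo_test(trials=10_000, seed=42):
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        inputs = {k: rng.uniform(lo, hi)
                  for k, (lo, hi) in INPUT_CONSTRAINTS.items()}
        out = threat_level(**inputs)
        if not required(inputs, out):
            failures.append((inputs, out))
    return failures

print(f"{len(monte_carlo_test())} constraint violations found")
```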

    Assessing Traceability of Software Engineering Artifacts

    The generation of traceability links or traceability matrices is vital to many software engineering activities. It is also person-power intensive, time-consuming, error-prone, and lacking in tool support. The activities that require traceability information include, but are not limited to, risk analysis, impact analysis, criticality assessment, test coverage analysis, and verification and validation of software systems. Information Retrieval (IR) techniques have been shown to assist with the automated generation of traceability links by reducing the time it takes to generate the traceability mapping. Researchers have applied techniques such as Latent Semantic Indexing (LSI), vector space retrieval, and probabilistic IR, and have enjoyed some success. This paper concentrates on two issues not previously studied widely in the context of traceability: the importance of the vocabulary base used for tracing, and the evaluation and assessment of traceability mappings and methods using secondary measures. We examine these areas and perform empirical studies to understand the importance of each to the traceability of software engineering artifacts.
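
    A minimal example of IR-based traceability-link generation in the vector space retrieval style mentioned above: vectorize source and target artifacts with TF-IDF and propose links whose cosine similarity clears a threshold. The artifact texts and the threshold are illustrative assumptions.

```python
# Minimal sketch of vector-space traceability recovery: represent each
# artifact as a TF-IDF vector and propose candidate links between
# requirements and code by cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = ["The system shall log every failed login attempt",
                "The system shall encrypt each stored password"]
code_docs = ["def on_login_failure(user): audit_log.write(user)",
             "def store_password(pw): return bcrypt.hash(pw)"]

# Letters-only tokens so snake_case identifiers split into words.
vectorizer = TfidfVectorizer(token_pattern=r"[A-Za-z]+")
tfidf = vectorizer.fit_transform(requirements + code_docs)
sims = cosine_similarity(tfidf[:len(requirements)],
                         tfidf[len(requirements):])

threshold = 0.1                          # illustrative cut-off
for i, row in enumerate(sims):
    for j, score in enumerate(row):
        if score >= threshold:
            print(f"req {i} -> code {j} (similarity {score:.2f})")
```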

    Metamodel Instance Generation: A systematic literature review

    Modelling, and thus metamodelling, have become increasingly important in Software Engineering through the use of Model Driven Engineering. In this paper we present a systematic literature review of instance generation techniques for metamodels, i.e. the process of automatically generating models from a given metamodel. We start by presenting a set of research questions that our review is intended to answer. We then identify the main topics related to metamodel instance generation techniques and use these to initiate our literature search. This search resulted in the identification of 34 key papers in the area, each of which is reviewed and discussed in detail here. The outcome is that we are able to identify a knowledge gap in this field, and we offer suggestions for some potential directions of future research.
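
    To make the reviewed task concrete, the sketch below randomly instantiates a toy metamodel (classes with typed attributes and containment multiplicities). The metamodel and the generation strategy are illustrative assumptions, not taken from any of the 34 reviewed papers.

```python
# Minimal sketch of metamodel instance generation: walk a toy metamodel
# and produce a random, multiplicity-conforming model as nested dicts.
import random

METAMODEL = {
    "Library": {"attrs": {"name": str}, "contains": [("Book", 1, 3)]},
    "Book":    {"attrs": {"title": str, "pages": int}, "contains": []},
}

def generate(cls, rng):
    spec = METAMODEL[cls]
    obj = {"_class": cls}
    for attr, typ in spec["attrs"].items():
        obj[attr] = (rng.randint(1, 500) if typ is int
                     else f"{attr}_{rng.randint(0, 99)}")
    for child_cls, lo, hi in spec["contains"]:     # containment refs
        obj[child_cls.lower() + "s"] = [generate(child_cls, rng)
                                        for _ in range(rng.randint(lo, hi))]
    return obj

print(generate("Library", random.Random(7)))
```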