
    Reusable Models for Timing and Liveness Analysis of Middleware for Distributed Real-Time and Embedded Systems

    Distributed real-time and embedded (DRE) systems have stringent constraints on timeliness and other properties whose assurance is crucial to correct system behavior. Formal tools and techniques play a key role in verifying and validating system properties. However, many DRE systems are built using middleware frameworks that have grown increasingly complex to address the diverse requirements of a wide range of applications. How to apply formal tools and techniques effectively to these systems, given the range of middleware configuration options available, is therefore an important research problem. This paper makes three contributions to research on formal verification and validation of middleware-based DRE systems. First, it presents a reusable library of formal models we have developed to capture essential timing and concurrency semantics of foundational middleware building blocks provided by the ACE framework. Second, it describes domain-specific techniques to reduce the cost of checking those models while ensuring they remain valid with respect to the semantics of the middleware itself. Third, it presents a verification and validation case study involving a gateway service, using our models.
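    The paper's actual models are formal models intended for model-checking tools; as a minimal illustration of the underlying idea (capturing the queueing and dispatching semantics of a middleware building block and exhaustively checking a property over its state space), the following Python sketch models a reactor-style dispatch loop with a bounded event queue and verifies deadlock-freedom. The queue bound, handler count, and transition rules are invented for illustration.

```python
QUEUE_CAP = 2   # assumed bound on the event queue
HANDLERS = 2    # number of handler threads in the toy model

# A state is (queue_length, tuple_of_handler_busy_flags). Transitions model
# event arrival, dispatch of a queued event to an idle handler, and handler
# completion.
def successors(state):
    qlen, busy = state
    succs = []
    if qlen < QUEUE_CAP:                          # an event arrives
        succs.append((qlen + 1, busy))
    for i, b in enumerate(busy):
        if not b and qlen > 0:                    # dispatch to an idle handler
            succs.append((qlen - 1, busy[:i] + (True,) + busy[i+1:]))
        if b:                                     # a busy handler completes
            succs.append((qlen, busy[:i] + (False,) + busy[i+1:]))
    return succs

def check_deadlock_free():
    init = (0, (False,) * HANDLERS)
    seen, frontier = {init}, [init]
    while frontier:                               # exhaustive state exploration
        state = frontier.pop()
        succs = successors(state)
        assert succs, f"deadlock in state {state}"
        for nxt in succs:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    print(f"explored {len(seen)} states; no deadlock found")

check_deadlock_free()
```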

    Incremental Lifecycle Validation Of Knowledge-based Systems Through Commonkads

    This dissertation introduces a novel validation method for knowledge-based systems (KBS). Validation is an essential phase in the development lifecycle of knowledge-based systems: it ensures that the system is reliable, that it reflects the knowledge of the expert, and that it meets the specifications. Although many validation methods have been introduced for knowledge-based systems, there is still a need for an incremental validation method based on a lifecycle model. Lifecycle models provide a general framework for the developer and a mapping technique from the system into the validation process. They support reusability and modularity and offer guidelines for knowledge engineers to achieve high-quality systems. CommonKADS is a set of models that helps to represent and analyze knowledge-based systems. It offers a de facto standard for building knowledge-based systems. Additionally, CommonKADS is a knowledge-representation-independent model, with powerful models that can represent many domains. Defining an incremental validation method based on a conceptual lifecycle model (such as CommonKADS) has a number of advantages, such as reduced time and effort, easier implementation thanks to a template to follow, a well-structured design, and better tracking of errors when they occur. Moreover, the validation method introduced in this dissertation is based on case testing and on selecting an appropriate set of test cases to validate the system. The method makes use of the results of prior test cases in an incremental validation procedure, which facilitates defining a minimal set of test cases that provides complete and effective system coverage. CommonKADS doesn’t define validation, verification or testing in any of its models. This research seeks to establish a direct relation between validation and lifecycle models, and introduces a validation method for KBS embedded into CommonKADS.
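    The dissertation's selection algorithm is not reproduced here, but the core idea (reusing the results of prior test cases so that each increment only needs a minimal set of new cases for full coverage) can be sketched. The test cases, the rules they exercise, and the greedy set-cover strategy below are assumptions made for illustration.

```python
def select_incremental(cases, already_covered):
    """Greedy set-cover: pick test cases until every rule not validated in
    a prior increment is exercised at least once."""
    needed = set().union(*cases.values()) - already_covered
    selected = []
    while needed:
        # Choose the case that covers the most still-unvalidated rules.
        best = max(cases, key=lambda c: len(cases[c] & needed))
        if not cases[best] & needed:
            break                        # remaining rules are unreachable
        selected.append(best)
        needed -= cases[best]
    return selected, needed              # leftover rules signal a coverage gap

# Hypothetical rule coverage per test case, plus rules already validated
# in an earlier increment (all names invented).
cases = {"tc1": {"r1", "r2"}, "tc2": {"r2", "r3"}, "tc3": {"r4"}}
prior = {"r1"}
picked, gap = select_incremental(cases, prior)
print("run:", picked, "still uncovered:", gap)   # run: ['tc2', 'tc3'] ...
```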

    Validating Predictions of Unobserved Quantities

    The ultimate purpose of most computational models is to make predictions, commonly in support of some decision-making process (e.g., for design or operation of some system). The quantities that need to be predicted (the quantities of interest or QoIs) are generally not experimentally observable before the prediction, since otherwise no prediction would be needed. Assessing the validity of such extrapolative predictions, which is critical to informed decision-making, is challenging. In classical approaches to validation, model outputs for observed quantities are compared to observations to determine if they are consistent. By itself, this consistency only ensures that the model can predict the observed quantities under the conditions of the observations. This limitation dramatically reduces the utility of the validation effort for decision making because it implies nothing about predictions of unobserved QoIs or for scenarios outside of the range of observations. However, there is no agreement in the scientific community today regarding best practices for validation of extrapolative predictions made using computational models. The purpose of this paper is to propose and explore a validation and predictive assessment process that supports extrapolative predictions for models with known sources of error. The process includes stochastic modeling, calibration, validation, and predictive assessment phases where representations of known sources of uncertainty and error are built, informed, and tested. The proposed methodology is applied to an illustrative extrapolation problem involving a misspecified nonlinear oscillator.
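    As a minimal numerical illustration of the problem the paper addresses (not its method or its oscillator example), the sketch below calibrates a misspecified linear model against observations over a limited range, passes a classical consistency check there, and still misses an extrapolated quantity of interest badly. All numbers are invented.

```python
import numpy as np

def truth(x):
    """Assumed 'true' response with cubic stiffening (invented)."""
    return 2.0 * x + 0.3 * x**3

rng = np.random.default_rng(0)

# "Experiments": observations are only available over a limited range.
x_obs = np.linspace(0.0, 1.0, 20)
y_obs = truth(x_obs) + rng.normal(0.0, 0.05, x_obs.size)

# Calibration: least-squares fit of the misspecified linear model y = k*x.
k_hat = np.sum(x_obs * y_obs) / np.sum(x_obs**2)

# Classical validation: consistency with the observed quantities looks fine.
resid = y_obs - k_hat * x_obs
print(f"k_hat = {k_hat:.3f}, RMS residual = {np.sqrt(np.mean(resid**2)):.3f}")

# Extrapolative prediction of the unobserved quantity of interest at x = 3.
x_qoi = 3.0
print(f"predicted QoI: {k_hat * x_qoi:.2f}   true QoI: {truth(x_qoi):.2f}")
# The large discrepancy is invisible to validation over the observed range,
# which is why the paper argues for modeling the known sources of error.
```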

    Towards the Model-Driven Engineering of Secure yet Safe Embedded Systems

    We introduce SysML-Sec, a SysML-based Model-Driven Engineering environment aimed at fostering the collaboration between system designers and security experts at all methodological stages of the development of an embedded system. A central issue in the design of an embedded system is the definition of the hardware/software partitioning of the architecture of the system, which should take place as early as possible. SysML-Sec aims to extend the relevance of this analysis through the integration of security requirements and threats. In particular, we propose an agile methodology that assesses, early on, the impact of the security requirements, and of the security mechanisms designed to satisfy them, on the safety of the system. Security concerns are captured in a component-centric manner through existing SysML diagrams with only minimal extensions. Once the captured requirements have been refined into security and cryptographic mechanisms, security properties can be formally verified over this design. To perform the latter, model-transformation techniques are implemented in the SysML-Sec toolchain to derive a ProVerif specification from the SysML models. An automotive firmware-flashing procedure serves as a guiding example throughout our presentation. (In Proceedings GraMSec 2014, arXiv:1404.163)
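    The SysML-Sec toolchain's actual transformation is considerably richer, but the flavor of deriving a ProVerif specification from design models can be sketched. The component description, the helper function, and the emitted text below are simplifications invented for illustration; the output is ProVerif-style rather than a guaranteed-valid input to the tool.

```python
def to_proverif(component, channel, secrets):
    """Emit a minimal ProVerif-style spec asking whether an attacker can
    learn each secret the component sends encrypted over the channel."""
    lines = [
        "type key.",
        f"free {channel}: channel.",
        "fun senc(bitstring, key): bitstring.",
        f"free k_{component}: key [private].",
    ]
    for s in secrets:
        lines.append(f"free {s}: bitstring [private].")
        lines.append(f"query attacker({s}).")
    body = " | ".join(f"out({channel}, senc({s}, k_{component}))" for s in secrets)
    lines.append(f"process ({body})")
    return "\n".join(lines)

# Hypothetical component derived from a SysML block: an ECU that flashes
# firmware over the CAN bus (all names invented).
print(to_proverif("ecu_flasher", "can_bus", ["firmware_image"]))
```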

    Cyber-Virtual Systems: Simulation, Validation & Visualization

    We describe our ongoing work and view on simulation, validation and visualization of cyber-physical systems in industrial automation during development, operation and maintenance. System models may represent an existing physical part (for example, an existing robot installation) and a software-simulated part (for example, a possible future extension). We call such systems cyber-virtual systems. In this paper, we present the existing VITELab infrastructure for visualization tasks in industrial automation. The new methodology for simulation and validation motivated in this paper integrates this infrastructure. We target scenarios where industrial sites, possibly in remote locations, are modeled and visualized from other sites anywhere in the world. Complementing the visualization work, we also concentrate on software-modeling challenges related to cyber-virtual systems and on simulation, testing, validation and verification techniques for them. Software models of industrial sites require behavioural models of the sites' components, such as tools, robots, workpieces and other machinery, as well as communication and sensor facilities. Furthermore, collaboration between sites is an important goal of our work. (Preprint, 9th International Conference on Evaluation of Novel Approaches to Software Engineering, ENASE 2014)
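    One way to picture the cyber-virtual idea (an assumed design, not VITELab's API): the existing physical part and a simulated future extension implement one shared interface, so the combined cell can be stepped, monitored and validated before the extension is built. Everything below is a toy sketch.

```python
class Robot:
    """Common interface shared by the physical and the simulated parts."""
    def step(self, dt: float) -> dict:
        raise NotImplementedError

class PhysicalRobot(Robot):
    """Stand-in for an adapter that would talk to a real installation."""
    def __init__(self):
        self.angle = 0.0
    def step(self, dt):
        self.angle += 10.0 * dt          # pretend sensor feedback
        return {"name": "existing", "angle": self.angle}

class SimulatedRobot(Robot):
    """Software-simulated future extension of the cell."""
    def __init__(self):
        self.angle = 0.0
    def step(self, dt):
        self.angle += 12.0 * dt          # simulated dynamics
        return {"name": "planned", "angle": self.angle}

# Step the mixed (cyber-virtual) cell and check a shared safety envelope.
cell = [PhysicalRobot(), SimulatedRobot()]
for _ in range(100):
    for state in (robot.step(0.01) for robot in cell):
        assert state["angle"] < 90.0, f"{state['name']} left the envelope"
print("mixed physical/simulated run stayed within the safety envelope")
```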

    An Adaptive Design Methodology for Reduction of Product Development Risk

    Embedded systems' interaction with their environment inherently complicates the understanding of requirements and their correct implementation, and product uncertainty is highest during the early stages of development. Design verification is an essential step in the development of any system, especially an embedded one. This paper introduces a novel adaptive design methodology that incorporates step-wise prototyping and verification. With each adaptive step, the product-realization level is raised while the level of product uncertainty decreases, thereby reducing the overall costs. The backbone of this framework is the development of a Domain Specific Operational (DOP) model and the associated Verification Instrumentation for Test and Evaluation, built from the DOP model. Together they generate functionally valid test sequences for carrying out prototype evaluation. The application of this method is sketched with the help of a case study, a 'Multimode Detection Subsystem'. Design methodologies can be compared by defining and computing a generic performance criterion such as the Average design-cycle Risk. For the case study, computing the Average design-cycle Risk shows that the adaptive method reduces product-development risk for a small increase in the total design-cycle time. (21 pages, 9 figures)
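    The paper's exact definition of Average design-cycle Risk is not reproduced in this abstract, so the sketch below adopts one plausible reading as a stated assumption: per-step risk is the remaining product uncertainty multiplied by the cost of a failure at that step, averaged over the design cycle. All numbers are invented.

```python
def average_design_cycle_risk(uncertainty, failure_cost):
    """Assumed formula: mean over steps of (uncertainty * failure cost)."""
    steps = list(zip(uncertainty, failure_cost))
    return sum(u * c for u, c in steps) / len(steps)

# Conventional flow: uncertainty stays high until late verification.
conventional = average_design_cycle_risk(
    uncertainty=[0.9, 0.8, 0.7, 0.6], failure_cost=[1, 2, 4, 8])

# Adaptive flow: step-wise prototyping burns uncertainty down early.
adaptive = average_design_cycle_risk(
    uncertainty=[0.9, 0.5, 0.25, 0.1], failure_cost=[1, 2, 4, 8])

print(f"conventional: {conventional:.2f}   adaptive: {adaptive:.2f}")
# The lower average risk of the adaptive flow mirrors the paper's claim of
# reduced development risk for a modest increase in design-cycle time.
```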

    Combining SysML and AADL for the design, validation and implementation of critical systems

    The realization of critical systems goes through multiple phases of specification, design, integration, validation, and testing, starting from high-level sketches and ending at the final product. Model-Based Design has been acknowledged as a good vehicle for capturing these steps, yet there is no universal solution for representing all activities. Two candidates are the OMG's SysML for high-level modeling tasks and the SAE's AADL for lower-level ones, down to the implementation. The paper shares an experience on the seamless use of SysML and the AADL to model, validate/verify and implement a flight management system.

    Validate implementation correctness using simulation: the TASTE approach

    High-integrity systems operate in hostile environments and must guarantee continuous operation, even when unexpected events happen. In addition, these systems have stringent requirements that must be validated and correctly translated from high-level specifications down to code. All these constraints make the overall development process more time-consuming, especially because the number of system functions keeps increasing over the years. As a result, engineers must validate the system implementation and check that its execution conforms to the specifications. A traditional approach consists in manually instrumenting the implementation code to trace system activity during operation. However, this is error-prone because the modifications are made by hand, and they may also alter the actual behavior of the system. In this paper, we present an approach to validate a system implementation by comparing its execution against a simulation. For that purpose, we adapt TASTE, a set of tools that eases system development by automating each step as much as possible. In particular, TASTE automates system implementation from functional models (descriptions of system functions with their properties: period, deadline, priority, etc.) and deployment models (the processors, buses, and devices to be used). We tailored this toolchain to create traces during system execution. The generated output shows the activation time of each task, the usage of communication ports (queue sizes, instants at which events are pushed/pulled, etc.), and other relevant execution metrics to be monitored. As a consequence, system engineers can check implementation correctness by comparing simulation and execution metrics.
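    The trace schema TASTE actually generates is not shown in the abstract; assuming a simple format (each task name mapped to its activation instants in milliseconds), a comparison between simulation and execution might look like the following sketch, which flags activations whose deviation exceeds a jitter tolerance.

```python
def compare_traces(simulated, executed, tolerance_ms=2.0):
    """Both traces map a task name to its list of activation times (ms)."""
    mismatches = []
    for task, sim_times in simulated.items():
        exe_times = executed.get(task, [])
        for i, (s, e) in enumerate(zip(sim_times, exe_times)):
            if abs(s - e) > tolerance_ms:
                mismatches.append((task, i, s, e))
        if len(sim_times) != len(exe_times):
            mismatches.append((task, "count", len(sim_times), len(exe_times)))
    return mismatches

simulated = {"gnc_task": [0, 100, 200, 300]}    # 100 ms period, from the model
executed  = {"gnc_task": [1, 101, 208, 301]}    # from the instrumented run
for m in compare_traces(simulated, executed):
    print("deviation:", m)                      # flags activation 2 (208 vs 200)
```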

    What is the method in applying formal methods to PLC applications?

    The question we investigate is how to obtain PLC applications with confidence in their proper functioning. In particular, we are interested in the contribution that formal methods can make to their development. Our maxim is that the place of a particular formal method in the total picture of system development should be made very clear. Developers and customers ought to understand very well what they can and cannot rely on, and we see it as our task to make this explicit. For us, therefore, the answer to the question above leads to further questions: Which parts of the system can be treated formally? Which formal methods and tools can be applied? And what does their successful application tell us (or not tell us) about the proper functioning of the whole system?
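    As a small, self-contained example of the kind of formal treatment the paper's questions point at (invented here, not taken from the paper): exhaustively check an invariant of a toy PLC seal-in circuit over all input combinations and reachable states. Real PLC verification would target the actual IEC 61131-3 code with a model checker.

```python
from itertools import product

def scan(running, start, stop, fault):
    """One PLC scan cycle: a classic start/stop seal-in with a fault trip."""
    return (start or running) and not stop and not fault

def check_interlock():
    """Property: the motor output is never on while the fault input is set."""
    reachable, frontier = set(), [False]
    while frontier:
        running = frontier.pop()
        for start, stop, fault in product([False, True], repeat=3):
            nxt = scan(running, start, stop, fault)
            assert not (nxt and fault), "interlock violated"
            if nxt not in reachable:
                reachable.add(nxt)
                frontier.append(nxt)
    print(f"checked {len(reachable)} reachable states; interlock holds")

check_interlock()
```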