
    A template-based methodology for the specification and automated composition of performability models

    Dependability and performance analysis of modern systems is facing great challenges: their scale is growing, and they are becoming massively distributed, interconnected, and evolving. Such complexity makes model-based assessment a difficult and time-consuming task. For the evaluation of large systems, reusable submodels are typically adopted as an effective way to address the complexity and to improve the maintainability of models. When using state-based models, a common approach is to define libraries of generic submodels, and then compose concrete instances by state sharing, following predefined “patterns” that depend on the class of systems being modeled. However, such composition patterns are rarely formalized, or not even documented at all. In this paper, we address this problem using a model-driven approach, which combines a language to specify reusable submodels and composition patterns with an automated composition algorithm. Clearly defining libraries of reusable submodels, together with patterns for their composition, allows complex models to be assembled automatically from a high-level description of the scenario to be evaluated. This paper provides a solution to this problem by: formally defining the concept of model templates, defining a specification language for model templates, defining an automated instantiation and composition algorithm, and applying the approach to a case study of a large-scale distributed system.
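
    As a concrete illustration of the idea, the sketch below shows, in Python, how generic submodels might be instantiated from a library and composed by identifying equally named shared states. All names (ModelTemplate, compose, networkFailed, ...) are illustrative assumptions; the paper defines its own specification language and composition algorithm.

        # Minimal sketch of template-based submodel composition by state sharing.
        # All names here are illustrative assumptions, not the paper's language.

        class ModelTemplate:
            """A reusable submodel exposing parameters and shareable states."""
            def __init__(self, name, parameters, shared_states):
                self.name = name
                self.parameters = set(parameters)        # names of unbound parameters
                self.shared_states = set(shared_states)  # states exposed for composition

            def instantiate(self, **values):
                missing = self.parameters - set(values)
                if missing:
                    raise ValueError(f"unbound parameters: {missing}")
                return ModelInstance(self.name, values, self.shared_states)

        class ModelInstance:
            """A concrete submodel produced from a template."""
            def __init__(self, template_name, bindings, shared_states):
                self.template_name = template_name
                self.bindings = bindings
                self.shared_states = shared_states

        def compose(instances, pattern):
            """Compose instances by fusing the shared states named in `pattern`.

            `pattern` maps a fused state name to the (instance, state) pairs
            that must be identified, mimicking a predefined composition pattern.
            """
            for fused_name, endpoints in pattern.items():
                for inst, state in endpoints:
                    if state not in inst.shared_states:
                        raise ValueError(f"{inst.template_name} does not share {state}")
            return {"instances": instances, "fused_states": pattern}

        # Usage: two node instances share a common "networkFailed" state.
        node = ModelTemplate("Node", {"failure_rate"}, {"networkFailed"})
        a = node.instantiate(failure_rate=1e-4)
        b = node.instantiate(failure_rate=2e-4)
        system = compose([a, b], {"netDown": [(a, "networkFailed"), (b, "networkFailed")]})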

    Automated Fault Tolerance Augmentation in Model-Driven Engineering for CPS

    Cyber-Physical Systems are usually subject to dependability requirements such as safety and reliability constraints. Over the last 50 years, a body of efficient fault-tolerance mechanisms has been devised to handle faults occurring at run-time. However, properly implementing those mechanisms is a time-consuming task that requires a great deal of know-how. In this paper, we propose a general framework which allows system designers to decouple functional and non-functional concerns, and to express non-functional properties at design time using domain-specific languages. In the spirit of generative programming, functional models are then automatically “augmented” with dependability mechanisms. Importantly, the real-time behavior of the initial models, in terms of sampling times and meeting deadlines, is preserved. The practicality of the approach is demonstrated with the automated implementation of one prominent software fault-tolerance pattern, namely N-Version Programming, in the CPAL model-driven engineering workflow.
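
    For context, N-Version Programming runs N independently developed versions of the same function and adjudicates their outputs, typically by majority vote. The sketch below illustrates the voting idea in Python under that assumption; it is a generic illustration, not the CPAL-generated implementation the paper describes.

        # Generic sketch of N-Version Programming: run N independently developed
        # versions of the same function and adjudicate their outputs by majority
        # vote. Illustrative only; the paper generates this pattern automatically
        # in the CPAL model-driven engineering workflow, not in Python.

        from collections import Counter

        def n_version_execute(versions, *args):
            """Run every version on the same inputs and majority-vote the results."""
            outputs = []
            for version in versions:
                try:
                    outputs.append(version(*args))
                except Exception:
                    outputs.append(None)  # a failed version simply loses its vote

            votes = Counter(o for o in outputs if o is not None)
            if not votes:
                raise RuntimeError("all versions failed")
            result, count = votes.most_common(1)[0]
            if count <= len(versions) // 2:
                raise RuntimeError("no majority among version outputs")
            return result

        # Three diversely implemented versions of the same computation.
        v1 = lambda x: x * x
        v2 = lambda x: x ** 2
        v3 = lambda x: sum(x for _ in range(x))  # deliberately different algorithm

        assert n_version_execute([v1, v2, v3], 4) == 16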

    Finalised dependability framework and evaluation results

    The ambitious aim of CONNECT is to achieve universal interoperability between heterogeneous Networked Systems by means of on-the-fly synthesis of the CONNECTors through which they communicate. The goal of WP5 within CONNECT is to ensure that the non-functional properties required at each side of the connection to be established are fulfilled, including dependability, performance, security and trust, or, in one overarching term, CONNECTability. To model such properties, we have introduced the CPMM meta-model, which establishes the relevant concepts and their relations and includes a Complex Event language to express the behaviour associated with the specified properties.

    Over the four years of the project we have developed approaches for assuring CONNECTability both at synthesis time and at run-time. Within the CONNECT architecture, these approaches are supported by the following enablers: the Dependability and Performance Analysis Enabler, implemented in a modular architecture supporting stochastic verification and state-based analysis, which also relies on incremental verification to adjust CONNECTor parameters at run-time; the Security Enabler, which implements a Security-by-Contract-with-Trust framework to guarantee the expected security policies and enforce them according to the level of trust; and the Trust Manager, which implements a model-based approach to mediate between different trust models and ensure interoperable trust management. The enablers have been integrated within the CONNECT architecture and can in particular interact with the CONNECT event-based monitoring enabler (the GLIMPSE Enabler released within WP4) for run-time analysis and verification. To support a model-driven approach in the interaction with the monitor, we have developed a CPMM editor and a translator from CPMM to the GLIMPSE native language (Drools).

    In this document, which is the final deliverable of WP5, we first present the latest advances of the fourth year concerning CPMM, Dependability & Performance Analysis, Incremental Verification and Security. We then give an overall summary of the main achievements over the whole project lifecycle. In the appendix we also include some relevant articles prepared in the last period that focus specifically on CONNECTability.
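
    To give a flavour of the kind of run-time check such a monitoring pipeline performs, the sketch below evaluates a simple latency property over a stream of observed events in Python. The event fields and the threshold are illustrative assumptions; CONNECT expresses such properties in CPMM and evaluates them with GLIMPSE via generated Drools rules.

        # Minimal sketch of run-time property monitoring over an event stream.
        # Event fields and the latency threshold are illustrative assumptions,
        # not CPMM or GLIMPSE syntax.

        def check_latency(events, threshold_ms=200.0):
            """Pair request/response events and flag latency violations."""
            pending = {}      # request id -> request timestamp
            violations = []
            for e in events:
                if e["type"] == "request":
                    pending[e["id"]] = e["t_ms"]
                elif e["type"] == "response" and e["id"] in pending:
                    latency = e["t_ms"] - pending.pop(e["id"])
                    if latency > threshold_ms:
                        violations.append((e["id"], latency))
            return violations

        stream = [
            {"type": "request",  "id": 1, "t_ms": 0.0},
            {"type": "response", "id": 1, "t_ms": 350.0},  # violates the bound
        ]
        assert check_latency(stream) == [(1, 350.0)]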

    A Generalized Process for the Verification and Validation of Models and Simulation Results

    With technology advancing rapidly, symbolic, quantitative modeling and computer-based simulation (M&S) have become affordable and easy-to-apply tools in numerous application areas such as supply chain management, pilot training, car safety improvement, design of industrial buildings, or theater-level war gaming. M&S help to reduce the resources required for many types of projects, accelerate the development of technical systems, and enable the control and management of systems of high complexity. However, as the impact of M&S on the real world grows, so does the danger of adverse effects from erroneous or unsuitable models or simulation results. These effects may range from the delayed delivery of an item ordered by mail to hundreds of avoidable casualties caused by the simulation-based acquisition (SBA) of a malfunctioning communication system for rescue teams. To benefit from advancing M&S, countermeasures against its disadvantages and drawbacks must be taken.

    Verification and Validation (V&V) of models and simulation results are intended to ensure that only correct and suitable models and simulation results are used. However, during the development of any technical system, including models for simulation, numerous errors may occur. The later they are detected, and the further they have propagated through the model development process, the more resources they require to correct; their propagation should therefore be avoided. If errors remain undetected, and major decisions are based on incorrect or unsuitable models or simulation results, M&S yields no benefit but a disadvantage.

    This thesis proposes a structured and rigorous approach to support the verification and validation of models and simulation results by:
    a) identifying the most significant current deficiencies of model development (design and implementation) and use, including the need for more meaningful model documentation and the lack of quality assurance (QA) as an integral part of the model development process;
    b) giving an overview of current quality assurance measures in M&S and in related areas; the transferability of concepts like the Capability Maturity Model for Software (SW-CMM) and the ISO 9000 standard is discussed, and potentials and limits of documents such as the VV&A Recommended Practices Guide of the US Defense Modeling and Simulation Office are identified;
    c) analyzing quality assurance measures and so-called V&V techniques for similarities and differences, to amplify their strengths and to reduce their weaknesses;
    d) identifying and discussing the influences that drive the required rigor and intensity of V&V measures (the risk involved in using models and simulation results) on the one hand, and that limit the maximum reliability of V&V activities (knowledge about both the real system and the model) on the other.

    This finally leads to the specification of a generalized V&V process: the V&V Triangle. It illustrates the dependencies between numerous V&V objectives, which are derived from specific potential errors occurring during model development, and provides guidance for achieving these objectives by associating V&V techniques, required input, and the evidence made available. The V&V Triangle is applied to an M&S sample project, and the lessons learned from evaluating the results lead to the formulation of future research objectives in M&S V&V.

    Developing a distributed electronic health-record store for India

    The DIGHT project addresses the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the more than one billion citizens of India.
    • …