
    Automatic Generation of RAMS Analyses from Model-based Functional Descriptions using UML State Machines

    In today's industrial practice, safety, reliability, and availability artifacts such as fault trees, Markov models, and FMEAs are mostly created manually by experts, often largely decoupled from systems engineering activities. Conducting the required analyses therefore involves significant effort, cost, and time. In this paper, we describe a novel integrated model-based approach to systems engineering and dependability analysis. The behavior of system components is specified by UML state machines that distinguish intended/correct from undesired/faulty behavior. Based on this information, our approach automatically generates different dependability analyses in the form of fault trees. Hence, alternative system layouts can easily be evaluated, as can simple variations of the logical input-output relations of logical units such as controllers. We illustrate the feasibility of our approach with simple examples, using a prototypical implementation of the presented concepts.
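
    The generation step can be pictured as traversing the component state machines, mapping undesired/faulty states to basic events and the components' logical relations to gates. A minimal Python sketch of this idea, in which the state-machine encoding, the failure probabilities, and the OR-based gate mapping are illustrative assumptions rather than the paper's actual implementation:

        # Sketch: derive a fault tree from component state machines whose
        # states are labelled as intended/correct or undesired/faulty.
        class BasicEvent:
            def __init__(self, name, prob):
                self.name, self.prob = name, prob
            def probability(self):
                return self.prob

        class Gate:
            def __init__(self, name, kind, children):  # kind: "AND" or "OR"
                self.name, self.kind, self.children = name, kind, children
            def probability(self):
                ps = [c.probability() for c in self.children]
                out = 1.0
                if self.kind == "AND":
                    for p in ps:
                        out *= p
                    return out
                for p in ps:            # OR: 1 - product of complements,
                    out *= (1.0 - p)    # assuming independent events
                return 1.0 - out

        def fault_tree_from_state_machines(components):
            # Each faulty state becomes a basic event; a component fails if
            # any of its faulty states is reached (OR); the top event here
            # simply ORs the component failures (an assumed system layout).
            gates = []
            for comp, states in components.items():
                events = [BasicEvent(comp + "." + s, p)
                          for s, (faulty, p) in states.items() if faulty]
                gates.append(Gate(comp + "_fails", "OR", events))
            return Gate("system_fails", "OR", gates)

        tree = fault_tree_from_state_machines({
            "controller": {"running": (False, 0.0), "stuck": (True, 1e-4)},
            "sensor":     {"measuring": (False, 0.0), "drift": (True, 5e-4)},
        })
        print(tree.probability())  # top-event probability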

    A Formal Component-Based Software Engineering Approach For Developing Trustworthy Systems

    Software systems are increasingly becoming ubiquitous, affecting the way we experience the world. Embedded software systems, especially those used in smart devices, have become an essential constituent of the technological infrastructure of modern societies. Such systems, in order to be trusted in society, must be proved to be trustworthy. Trustworthiness is a composite non-functional property that implies safety, timeliness, security, availability, and reliability. This thesis is a contribution to the rigorous development of systems in which the trustworthiness property can be specified and formally verified. Developing trustworthy software systems that are complex and used by a large, heterogeneous population of users is a challenging task. The component-based software engineering (CBSE) paradigm can provide an effective solution to address these challenges. However, none of the current component-based approaches can be used as is, because all of them lack the essential requirements for constructing trustworthy systems. The three contributions made in this thesis are intended to add the expressive power needed to raise CBSE practices to a rigorous level for constructing formally verifiable trustworthy systems. The first contribution of the thesis is a formal definition of the trustworthy component model. The trustworthiness quality attributes are introduced as first-class structural elements. The behavior of a component is automatically generated as an extended timed automaton. A model checking technique is used to verify the properties of trustworthiness. A composition theory that preserves the properties of trustworthiness in a composition is presented. Conventional software engineering development processes are suitable neither for developing component-based systems nor for developing trustworthy systems. In order to develop a component-based trustworthy system, the development process must be reuse-oriented and component-oriented, and must integrate formal languages and rigorous methods in all phases of the system life-cycle. The second contribution of the thesis is a software engineering process model that consists of several parallel tracks of activities, including component development, component assessment, component reuse, and component-based system development. The central concern in all activities of this process is ensuring trustworthiness. The third and final contribution of the thesis is a development framework with a comprehensive set of tools supporting the spectrum of formal development activity from modeling to deployment. The proposed approach has been applied to several case studies in the domains of component-based development and safety-critical systems. The experience from the case studies confirms that the approach is suitable for developing large and complex trustworthy systems.
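
    A toy illustration of treating the trustworthiness attributes as first-class elements of a component, together with a pessimistic composition rule and a threshold check; the attribute scale, the min-composition rule, and all names are assumptions made for illustration, whereas the thesis verifies such properties by model checking over extended timed automata:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Trustworthiness:
            # The five attributes named in the abstract; values in [0, 1]
            # are an illustrative abstraction of the verified properties.
            safety: float
            timeliness: float
            security: float
            availability: float
            reliability: float

        @dataclass
        class Component:
            name: str
            tw: Trustworthiness

        def compose(a: Component, b: Component) -> Component:
            # Assumed rule: the assembly is only as trustworthy as its
            # weakest constituent in each attribute.
            tw = Trustworthiness(*(min(x, y) for x, y in
                                   zip(vars(a.tw).values(), vars(b.tw).values())))
            return Component(a.name + "+" + b.name, tw)

        def meets(c: Component, threshold: float) -> bool:
            return all(v >= threshold for v in vars(c.tw).values())

        system = compose(Component("ctrl", Trustworthiness(0.99, 0.98, 0.97, 0.999, 0.995)),
                         Component("io",   Trustworthiness(0.97, 0.99, 0.98, 0.990, 0.990)))
        print(system.name, meets(system, 0.97))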

    An overview of fault tree analysis and its application in model based dependability analysis

    Fault Tree Analysis (FTA) is a well-established and well-understood technique, widely used for dependability evaluation of a wide range of systems. Although many extensions of fault trees have been proposed, they suffer from a variety of shortcomings. In particular, even where software tool support exists, these analyses require a lot of manual effort. Over the past two decades, research has focused on simplifying dependability analysis by looking at how dependability information can be synthesised from system models automatically. This has led to the field of model-based dependability analysis (MBDA). Different tools and techniques have been developed as part of MBDA to automate the generation of dependability analysis artefacts such as fault trees. Firstly, this paper reviews the standard fault tree and its limitations. Secondly, different extensions of standard fault trees are reviewed. Thirdly, the paper reviews a number of prominent MBDA techniques in which fault trees are used as a means for system dependability analysis, and provides insight into their working mechanisms, applicability, strengths, and challenges. Finally, the future outlook for MBDA is outlined, including the prospect of developing expert and intelligent systems for the dependability analysis of complex open systems under conditions of uncertainty.
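
    The qualitative core of standard FTA is the derivation of minimal cut sets, the smallest combinations of basic events that cause the top event. A compact sketch of top-down cut-set expansion in the style of MOCUS; the tree encoding and the event names are assumptions for illustration:

        from itertools import product

        def cut_sets(node, tree):
            kind, children = tree.get(node, ("EVENT", []))
            if kind == "EVENT":
                return [frozenset([node])]
            child_sets = [cut_sets(c, tree) for c in children]
            if kind == "OR":   # union of the children's cut sets
                return [cs for sets in child_sets for cs in sets]
            # AND: merge one cut set per child, in every combination
            return [frozenset().union(*combo) for combo in product(*child_sets)]

        def minimal(sets):
            # keep only cut sets with no proper subset among the others
            return [s for s in sets if not any(o < s for o in sets)]

        tree = {
            "TOP": ("OR", ["G1", "E3"]),
            "G1":  ("AND", ["E1", "E2"]),
        }
        print(minimal(cut_sets("TOP", tree)))  # [{E1, E2}, {E3}]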

    Integrated timing verification for distributed embedded real-time systems

    More and more parts of our lives are controlled by software systems that are usually not recognised as such, because they are embedded in non-computer systems like washing machines or cars. A modern car, for example, is controlled by up to 80 electronic control units (ECUs). Most of these ECUs must not only fulfil functional correctness requirements but also execute their control actions within given time bounds. An airbag, for example, does not work correctly if it is triggered a single second too late. These so-called real-time properties have to be verified for safety-critical systems as well as for non-safety-critical real-time systems. The growing distribution of functions over several ECUs increases the number of complex dependencies in the overall automotive system. Therefore, an integrated approach for timing verification on all development levels (system, ECU, software, etc.) and in all development phases is necessary. Today's most commonly used timing analysis method, the timing measurement of a system under test, is insufficient in many respects. First of all, it is very unlikely that the actual worst-case response times are found this way. Furthermore, only the consequences of time consumption can be detected, not the potentially very complex causes of the consumption itself. The complexity of timing behaviour is one reason why timing problems are often detected late, and thus expensively, in the development process. In contrast to measurement with the mentioned drawbacks, static timing verification has existed for many years and is supported by commercial tools. This thesis studies the current obstacles to industrial applicability of static timing analysis (effort, imprecision, over-estimation, etc.) and addresses them through process integration and the development of new analysis methods. To demonstrate the practical benefit of the proposed methods, the approach is illustrated with an industrial example at every development stage.
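
    Static timing verification of this kind typically builds on analyses such as the classical worst-case response-time recurrence for fixed-priority preemptive scheduling, R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j, iterated to a fixed point. A minimal sketch with an invented task set; this is the textbook analysis, not the tool chain developed in the thesis:

        from math import ceil

        def response_time(tasks, i):
            # Worst-case response time of task i; tasks = [(C, T)] sorted by
            # priority, highest first; returns None if the deadline (= period)
            # of task i cannot be met.
            C_i, T_i = tasks[i]
            R = C_i
            while True:
                R_next = C_i + sum(ceil(R / T_j) * C_j for C_j, T_j in tasks[:i])
                if R_next == R:
                    return R
                if R_next > T_i:
                    return None
                R = R_next

        tasks = [(1, 4), (2, 6), (3, 12)]  # (C, T) pairs, invented numbers
        for i in range(len(tasks)):
            print("task", i, "WCRT:", response_time(tasks, i))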

    A service-oriented middleware for integrated management of crowdsourced and sensor data streams in disaster management

    The increasing number of sensors used in diverse applications provides massive volumes of continuous, unbounded, rapid data and requires the management of distinct protocols, interfaces, and intermittent connections. As traditional sensor networks are error-prone and difficult to maintain, the study highlights the emerging role of "citizens as sensors" as a complementary data source for increasing public awareness. To this end, an interoperable, reusable middleware for managing spatial, temporal, and thematic data using Sensor Web Enablement initiative services and a processing engine was designed, implemented, and deployed. The study found that this approach provides effective sensor data-stream access, publication, and filtering in dynamic scenarios such as disaster management, and that it enables the integration of batch and stream management. An interoperability analytics test of a flood citizen observatory also showed that even variable data, such as those provided by the crowd, can be integrated with sensor data streams. Our approach thus offers a means to improve near-real-time applications.
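
    The core of such a middleware can be pictured as a pipeline that normalises heterogeneous observations, from sensors and from citizens, into a single schema, filters them spatially, temporally, and thematically, and publishes the matches to subscribers. A minimal sketch; the schema, thresholds, and names are assumptions, whereas the actual system builds on OGC Sensor Web Enablement services and a processing engine:

        from dataclasses import dataclass
        from typing import Callable, Iterable, List

        @dataclass
        class Observation:
            source: str   # "sensor" or "citizen"
            theme: str    # e.g. "water_level"
            lat: float
            lon: float
            time: float   # epoch seconds
            value: float

        def merge_streams(*streams: Iterable[Observation]):
            # Batch/stream integration reduced to chaining for illustration.
            for stream in streams:
                yield from stream

        def publish(observations, filters: List[Callable[[Observation], bool]],
                    subscribers):
            for obs in observations:
                if all(f(obs) for f in filters):
                    for notify in subscribers:
                        notify(obs)

        sensors  = [Observation("sensor",  "water_level", -23.5, -46.6, 100.0, 2.4)]
        citizens = [Observation("citizen", "water_level", -23.5, -46.6, 101.0, 3.0)]
        publish(merge_streams(sensors, citizens),
                [lambda o: o.theme == "water_level",   # thematic filter
                 lambda o: o.time >= 100.0],           # temporal filter
                [lambda o: print("alert from", o.source + ":", o.value, "m")])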

    Design-time performance analysis of component-based real-time systems

    In current real-time systems, performance metrics are among the most challenging properties to specify, predict, and measure. Performance properties depend on various factors, such as environmental context, load profile, middleware, operating system, hardware platform, and the sharing of internal resources. Performance failures and unmet performance requirements cause delays, cost overruns, and even abandonment of projects. In order to avoid these performance-related project failures, the performance properties should be obtained and analyzed as early as the design phase of a project. In this thesis we employ principles of component-based software engineering (CBSE), which enable building software systems from individual components. The advantage of CBSE is that individual components can be modeled, reused, and traded. The main objective of this thesis is to develop a method that enables the prediction of the performance properties of a system based on the performance properties of the individual components involved. The prediction method supports rapid prototyping and performance analysis of an architecture or related alternatives, without performing the usual testing and implementation stages. The research questions involved are as follows. How should the behaviour and performance properties of individual components be specified in order to enable automated composition of these properties into an analyzable model of a complete system? How can the models of individual components be synthesized into a model of a complete system in an automated way, such that the resulting system model can be analyzed against the performance properties? The thesis presents a new framework called DeepCompass, which realizes the concept of predictable assembly throughout all phases of system design. The cornerstones of the framework are the composable models of individual software components and hardware blocks. The models are specified at component development time and shipped in a component package. At the component composition phase, the models of the constituent components are synthesized into an executable system model. Since the thesis focuses on performance properties, we introduce performance-related types of component models, such as behaviour, performance, and resource models. The dynamics of system execution are captured in scenario models. The essential advantage of the introduced models is that, through the behaviour of individual components and the scenario models, the behaviour of the complete system is synthesized in the executable system model. Further simulation-based analysis of the obtained executable system model provides application-specific and system-specific performance property values. To support the performance analysis, we have developed the CARAT software toolkit, which provides and automates the algorithms for model synthesis and simulation. Besides this, the toolkit provides graphical tools for designing alternative architectures and for visualization of the obtained performance properties. We have conducted an empirical case study on the use of scenarios in industry to analyze system performance at the early design phase. It was found that industrial architects make extensive use of scenarios for performance evaluation. Based on the inputs of the architects, we have provided a set of guidelines for the identification and use of performance-critical scenarios.
    At the end of this thesis, we have validated the DeepCompass framework by performing three case studies on performance prediction of real-time systems: an MPEG-4 video decoder, a Car Radio Navigation system, and a JPEG application. For each case study, we constructed models of the individual components, defined the SW/HW architecture, and used the CARAT toolkit to synthesize and simulate the executable system model. The simulation provided the predicted performance properties, which we later compared with the actual performance properties of the realized systems. With respect to resource usage properties and average task latencies, the prediction error stayed within 30% of the actual performance. Concerning the peak loads on the processor nodes, the actual values were sometimes three times larger than the predicted values. In conclusion, the framework has proven to be effective for rapid architecture prototyping and performance analysis of a complete system: in the case studies we spent on average no more than 4-5 days per complete iteration cycle, including the design of several architecture alternatives. The framework can handle different architectural styles, which makes it widely applicable. A conceptual limitation of the framework is its assumption that models of the individual components are already available at the design phase.
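
    The reported accuracy can be read as a relative prediction error per performance property. A sketch of that comparison step, with all numbers and names invented for illustration; in the actual studies the predicted values come from simulating the synthesized system model with the CARAT toolkit:

        def relative_error(predicted: float, actual: float) -> float:
            # prediction error as a fraction of the measured value
            return abs(predicted - actual) / actual

        cases = {  # (predicted, measured) pairs, invented for illustration
            "avg_task_latency_ms": (18.0, 21.0),
            "cpu_resource_usage":  (0.62, 0.55),
            "peak_processor_load": (0.30, 0.90),  # peaks were off by ~3x
        }
        for name, (pred, act) in cases.items():
            err = relative_error(pred, act)
            print(name, format(err, ".0%"),
                  "within 30% bound" if err <= 0.30 else "outside 30% bound")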

    Optimizing the selection of architecture for component-based system

    Redundant components are commonly used for solving Redundancy Allocation Problems (RAP) and improving the reliability of complex systems. However, using such a strategy to minimize development costs while maintaining high quality attributes of the software architecture remains a research challenge. Selecting an optimal architecture to meet this challenge is an inherently complex task, due to the high volume of possible architectural candidates and the fundamental conflicts between quality attributes. Current software evaluation methods focus on predicting quality attributes and selecting Commercial-Off-The-Shelf (COTS) components for COTS-based applications, rather than providing additional architectural evaluation methods that could increase the chance of obtaining a cost-effective solution to RAP. In this thesis, an architecture-based approach called Cost-Discount and Build-or-Buy for RAP (CD/BoB-RAP) is introduced to support decision making when selecting the architecture with the optimal components and level of redundancy that satisfies both technical and financial preferences. The approach consists of an optimization model that includes two architectural evaluation methods (CD-RAP and BoB-RAP) and applies three variants of Particle Swarm Optimization (PSO) algorithms. Statistical results showed a 74% reduction in development cost using CD-RAP in an embedded system case study. Moreover, a maximum-possible-improvement analysis of the algorithms showed that the Penalty Guided PSO (PG-PSO) improved the quality of the obtained solutions by 70% to 84% in comparison to the other algorithms. The results of CD-RAP and BoB-RAP were superior when compared to the results obtained from similar approaches. The overall results of this research demonstrate the potential benefits of the CD/BoB-RAP approach for software architecture evaluation, particularly in selecting a software architecture that minimizes development cost while maintaining a highly reliable system.
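
    At its core, RAP trades redundancy against cost: with n parallel instances of a component whose single-instance reliability is r, the subsystem reliability is 1 - (1 - r)^n. A brute-force sketch of choosing redundancy levels under a budget; the component values and the exhaustive search are purely illustrative, whereas the thesis solves far larger instances with PSO variants under cost-discount and build-or-buy models:

        from itertools import product

        # (reliability, cost) per instance of each component, invented values
        components = {"ctrl": (0.95, 4.0), "comms": (0.90, 2.5), "ui": (0.99, 1.0)}
        BUDGET = 20.0

        def subsystem_reliability(r, n):
            return 1.0 - (1.0 - r) ** n  # n redundant parallel instances

        best = None
        for levels in product(range(1, 4), repeat=len(components)):
            cost = sum(n * c for n, (_, c) in zip(levels, components.values()))
            if cost > BUDGET:
                continue
            rel = 1.0
            for n, (r, _) in zip(levels, components.values()):
                rel *= subsystem_reliability(r, n)
            if best is None or rel > best[0]:
                best = (rel, levels, cost)

        print(best)  # (system reliability, redundancy per component, cost)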