
    Probabilistic Model-Based Safety Analysis

    Model-based safety analysis approaches aim at finding critical failure combinations by analyzing models of the whole system (i.e. software, hardware, failure modes and environment). The advantage of these methods compared to traditional approaches is that the analysis of the whole system gives more precise results. Only a few model-based approaches have been applied to answer quantitative questions in safety analysis, and these are often limited to specific failure propagation models or types of failure modes, or omit system dynamics and behavior, because direct quantitative analysis uses large amounts of computing resources. New achievements in the domain of (probabilistic) model checking now allow this problem to be overcome. This paper shows how functional models based on synchronous parallel semantics, which can be used for system design, implementation and qualitative safety analysis, can be directly re-used for (model-based) quantitative safety analysis. Accurate modeling of different types of probabilistic failure occurrence is shown, as well as accurate interpretation of the results of the analysis. This allows for reliable and expressive assessment of the safety of a system in early design stages.
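
    To make the idea concrete, here is a minimal sketch (not the paper's tool chain) of quantitative analysis over a synchronous model: a tiny system with two redundant channels is encoded as a discrete-time Markov chain, and the probability of reaching the failure state within a mission is computed by iterating the transition matrix. The per-cycle failure probability and mission length are hypothetical.

```python
# States: 0 = both channels ok, 1 = one channel failed, 2 = system failure.
import numpy as np

p = 1e-5  # hypothetical per-cycle failure probability of a single channel
P = np.array([
    [(1 - p) ** 2, 2 * p * (1 - p), p ** 2],  # from state 0
    [0.0,          1 - p,           p],       # from state 1
    [0.0,          0.0,             1.0],     # state 2 is absorbing
])

dist = np.array([1.0, 0.0, 0.0])  # start with both channels operational
for _ in range(10_000):           # mission length in synchronous cycles
    dist = dist @ P

print(f"P(system failure within mission) ~ {dist[2]:.3e}")
```

    A probabilistic model checker performs essentially this computation symbolically, for much larger state spaces and richer temporal properties.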

    QuantUM: Quantitative Safety Analysis of UML Models

    When developing a safety-critical system it is essential to obtain an assessment of different design alternatives. In particular, an early safety assessment of the architectural design of a system is desirable. In spite of the plethora of available formal quantitative analysis methods, it is still difficult for software and system architects to integrate these techniques into their everyday work. This is mainly due to the lack of methods that can be directly applied to architecture-level models, for instance given as UML diagrams. It is also necessary that the description methods used do not require profound knowledge of formal methods. Our approach bridges this gap and improves the integration of quantitative safety analysis methods into the development process. All inputs of the analysis are specified at the level of a UML model. This model is then automatically translated into the analysis model, and the results of the analysis are subsequently represented at the level of the UML model. Thus the analysis model and the formal methods used during the analysis are hidden from the user. We illustrate the usefulness of our approach using an industrial-strength case study. (Comment: In Proceedings QAPL 2011, arXiv:1107.074)
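
    The core translation idea can be sketched as follows. This is an illustrative stand-in, not QuantUM's actual UML profile or generator: a component annotated with a failure probability (a hypothetical, simplified annotation format) is turned into a PRISM DTMC module, which a probabilistic model checker can then analyze.

```python
def to_prism(name: str, fail_prob: float) -> str:
    """Emit a minimal PRISM module for a component that fails with the
    given probability in each step and stays failed afterwards."""
    ok = 1.0 - fail_prob
    return (
        "dtmc\n"
        f"module {name}\n"
        "  failed : bool init false;\n"
        f"  [] !failed -> {ok}: (failed'=false) + {fail_prob}: (failed'=true);\n"
        "  [] failed -> 1: (failed'=true);\n"
        "endmodule\n"
    )

print(to_prism("PowerSupply", 1e-3))
```

    A property such as P=? [ F<=1000 failed ] could then be checked against the generated model, and the numeric result mapped back to the PowerSupply element of the UML diagram, keeping the formal machinery hidden from the architect.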

    Introducing the STAMP method in road tunnel safety assessment

    After the severe accidents in European road tunnels over the past decade, many risk assessment methods have been proposed worldwide, most of them based on Quantitative Risk Assessment (QRA). Although QRAs are helpful for addressing the physical aspects and facilities of tunnels, current approaches in the road tunnel field are limited in their ability to model organizational aspects, software behavior and the adaptation of the tunnel system over time. This paper reviews these limitations and highlights the need to enhance the safety assessment process of these critical infrastructures with a complementary approach that links organizational factors to operational and technical issues, analyzes software behavior and models the dynamics of the tunnel system. To achieve this objective, the paper examines the scope for introducing a safety assessment method which is based on the systems-thinking paradigm and draws upon the STAMP model. The proposed method is demonstrated through a case study of a tunnel ventilation system, and the results show that it has the potential to identify scenarios that encompass both the technical system and the organizational structure. However, since the method does not provide quantitative estimations of risk, it is recommended as a complementary approach to traditional risk assessments rather than as an alternative. (C) 2012 Elsevier Ltd. All rights reserved.
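
    A flavor of the STAMP/STPA style of analysis can be given in a few lines. The sketch below enumerates candidate unsafe control actions (UCAs) over a toy control structure for tunnel ventilation; the controllers, control actions and guidewords follow the standard STPA pattern, but the specific entries are illustrative, not taken from the paper's case study.

```python
# Standard STPA guidewords for how a control action can be unsafe.
GUIDEWORDS = [
    "not provided when needed",
    "provided when unsafe",
    "provided too early or too late",
    "stopped too soon or applied too long",
]

# Hypothetical control structure for a tunnel ventilation system.
control_structure = {
    "TunnelOperator": ["activate emergency ventilation"],
    "VentilationPLC": ["increase fan speed", "reverse airflow direction"],
}

for controller, actions in control_structure.items():
    for action in actions:
        for guideword in GUIDEWORDS:
            print(f"UCA candidate: {controller} -> '{action}' {guideword}")
```

    Each generated candidate is then examined by the analyst for causal scenarios spanning both the technical system and the organizational structure, which is where the method's qualitative value lies.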

    Experimental approach to evaluate software reliability in hardware-software integrated environment

    Reliability in safety-critical systems and equipment is of vital importance, so probabilistic safety assessment (PSA) has been widely used for many years in the nuclear industry to address reliability in a quantitative manner. As many nuclear power plants (NPPs) become digitalized, evaluating the reliability of safety-critical software has become an emerging issue. Due to a lack of available methods, many conventional PSA models address only hardware reliability, under the assumption that software reliability is perfect or very high compared to hardware reliability. This study focused on developing a new method of safety-critical software reliability quantification, derived from testing in a hardware-software integrated environment. Since the complexity of hardware-software interaction puts the number of test cases needed for exhaustive testing well beyond a practically achievable range, an importance-oriented testing method that ensures the most efficient test coverage was developed. Application to the test of an actual NPP reactor protection system demonstrated the applicability of the developed method and provided insight into the reliability of complex software-based systems. (C) 2020 Korean Nuclear Society. Published by Elsevier Korea LLC.
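
    The basic statistics behind quantifying reliability from failure-free testing are worth making explicit. The sketch below is a minimal illustration of the general principle, not the paper's importance-oriented method: given n failure-free test demands, it computes a classical upper confidence bound on the failure-on-demand probability and a simple Bayesian estimate under a uniform prior. The test count is hypothetical.

```python
n = 10_000    # hypothetical number of failure-free test cases
alpha = 0.05  # 95% confidence

# Classical: largest p still consistent with observing zero failures,
# from (1 - p)^n = alpha.
p_upper = 1 - alpha ** (1 / n)

# Bayesian: zero failures in n trials with a Beta(1,1) prior gives a
# Beta(1, n + 1) posterior, whose mean is 1 / (n + 2).
p_bayes = 1 / (n + 2)

print(f"95% upper bound on failure-on-demand probability: {p_upper:.2e}")
print(f"Posterior mean (uniform prior):                   {p_bayes:.2e}")
```

    The impracticality of driving n high enough for very small target probabilities is precisely why the study replaces exhaustive testing with importance-oriented test selection.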

    Probabilistic Risk Assessment of an Obstacle Detection System for GoA 4 Freight Trains

    In this paper, a quantitative risk assessment approach is discussed for the design of an obstacle detection function for low-speed freight trains with grade of automation (GoA) 4. In this five-step approach, starting with single detection channels and ending with a three-out-of-three (3oo3) model constructed from three independent dual-channel modules and a voter, a probabilistic assessment is exemplified using a combination of statistical methods and parametric stochastic model checking. It is illustrated that, under certain not unreasonable assumptions, the resulting hazard rate becomes acceptable for specific application settings. The statistical approach for assessing the residual risk of misclassifications in convolutional neural networks and conventional image processing software suggests that high confidence can be placed in the safety-critical obstacle detection function, even though its implementation involves realistic machine learning uncertainties.
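
    A back-of-the-envelope version of the channel combination conveys the intuition. This is a simplified, independence-based sketch of the kind of composition the paper evaluates with parametric stochastic model checking; the probabilities and demand rate are illustrative placeholders, not the paper's calibrated values, and a real assessment must also account for common-cause failures and voter behavior.

```python
p_channel_miss = 1e-3   # hypothetical per-demand miss probability of one channel
demands_per_hour = 100  # hypothetical detection demands per operating hour

p_module_miss = p_channel_miss ** 2  # dual-channel module: both channels miss
p_system_miss = p_module_miss ** 3   # hazard only if all three modules miss
hazard_rate = p_system_miss * demands_per_hour

print(f"P(module miss) = {p_module_miss:.1e}")
print(f"P(system miss) = {p_system_miss:.1e}")
print(f"Hazard rate    ~ {hazard_rate:.1e} / h")
```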

    Model-based dependability analysis : state-of-the-art, challenges and future outlook

    Over the past two decades, the study of model-based dependability analysis has gathered significant research interest. Different approaches have been developed to automate and address various limitations of classical dependability techniques, to contend with the increasing complexity and challenges of modern safety-critical systems. Two leading paradigms have emerged: one constructs predictive system failure models from component failure models compositionally, using the topology of the system; the other utilizes design models - typically state automata - to explore system behaviour through fault injection. This paper reviews a number of prominent techniques under these two paradigms and provides insight into their working mechanisms, applicability, strengths and challenges, as well as recent developments within these fields. We also discuss emerging trends in integrated approaches and advanced analysis capabilities. Lastly, we outline the future outlook for model-based dependability analysis.
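
    The compositional paradigm is easiest to see in the classical fault tree setting. Below is a minimal sketch: the top-event probability is computed from independent basic-event probabilities through AND/OR gates. The system structure and numbers are hypothetical, and real tools additionally handle common-cause failures and non-coherent logic.

```python
def or_gate(*probs):
    """Probability that at least one independent input event occurs."""
    q = 1.0
    for p in probs:
        q *= 1.0 - p
    return 1.0 - q

def and_gate(*probs):
    """Probability that all independent input events occur."""
    q = 1.0
    for p in probs:
        q *= p
    return q

# Hypothetical system: loss of function if the sensor fails, or if both
# redundant actuators fail.
p_top = or_gate(1e-4, and_gate(1e-3, 1e-3))
print(f"Top event probability ~ {p_top:.2e}")
```

    The fault-injection paradigm, by contrast, would start from a state automaton of the nominal design and perturb it with failure transitions to discover which injected faults reach an unsafe state.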

    A synthesis of logic and bio-inspired techniques in the design of dependable systems

    Much of the development of model-based design and dependability analysis in the design of dependable systems, including software-intensive systems, can be attributed to advances in formal logic and its application to fault forecasting and verification of systems. In parallel, work on bio-inspired technologies has shown potential for the evolutionary design of engineering systems via automated exploration of potentially large design spaces. We have not yet seen the emergence of a design paradigm, founded on the two pillars of formal logic and biology, that effectively combines these two techniques from the early stages of, and throughout, the design lifecycle. Such a design paradigm would apply these techniques synergistically and systematically to enable optimal refinement of new designs, driven effectively by dependability requirements. This paper sketches such a model-centric paradigm for the design of dependable systems, presented in the scope of the HiP-HOPS tool and technique, that brings these technologies together to realise their combined potential benefits. The paper begins by identifying current challenges in model-based safety assessment and then overviews the use of meta-heuristics at various stages of the design lifecycle, covering topics that span from allocation of dependability requirements, through dependability analysis, to multi-objective optimisation of system architectures and maintenance schedules.
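
    The multi-objective optimisation step can be illustrated with a toy redundancy-allocation problem. The sketch below is not HiP-HOPS: it exhaustively enumerates the (tiny) design space where a real tool would use a genetic algorithm, and the failure probabilities and costs are invented for illustration. It keeps the non-dominated (Pareto-optimal) designs trading cost against unreliability.

```python
import itertools

FAIL = [1e-3, 5e-4, 2e-3]  # hypothetical per-replica failure probabilities
COST = [10.0, 25.0, 8.0]   # hypothetical per-replica costs

def evaluate(design):
    """Series system of parallel subsystems with k identical replicas each."""
    rel = 1.0
    for k, p in zip(design, FAIL):
        rel *= 1.0 - p ** k  # subsystem survives unless all k replicas fail
    cost = sum(k * c for k, c in zip(design, COST))
    return 1.0 - rel, cost

def dominates(b, a):
    return b[0] <= a[0] and b[1] <= a[1] and (b[0] < a[0] or b[1] < a[1])

designs = [evaluate(d) + (d,) for d in itertools.product([1, 2, 3], repeat=3)]
pareto = [a for a in designs if not any(dominates(b, a) for b in designs)]
for unrel, cost, design in sorted(pareto, key=lambda t: t[1]):
    print(f"replicas={design}  cost={cost:5.1f}  unreliability={unrel:.2e}")
```

    On realistically sized architectures the design space explodes combinatorially, which is why meta-heuristics such as genetic algorithms replace enumeration in practice.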

    Identifying Common Patterns and Unusual Dependencies in Faults, Failures and Fixes for Large-scale Safety-critical Software

    As software evolves, becoming a more integral part of complex systems, modern society becomes more reliant on the proper functioning of such systems. However, the field of software quality assurance lacks detailed empirical studies from which best practices can be determined. The fundamental factors that contribute to software quality are faults, failures and fixes, and although some studies have considered specific aspects of each, comprehensive studies have been quite rare. Thus, the fact that we establish the cause-effect relationship between the fault(s) that caused individual failures, as well as the link to the fixes made to prevent the failures from (re)occurring, appears to be a unique characteristic of our work. In particular, we analyze fault types, verification activities, severity levels, investigation effort, artifacts fixed, components fixed, and the effort required to implement fixes for a large industrial case study. The analysis includes descriptive statistics, statistical inference through formal hypothesis testing, and data mining. Some of the most interesting empirical results include: (1) contrary to popular belief, later life-cycle faults dominate as causes of failures; furthermore, over 50% of high-priority failures (e.g., post-release failures and safety-critical failures) were caused by coding faults; (2) 15% of failures led to fixes spread across multiple components, and the spread was largely affected by the software architecture; (3) the amount of effort spent fixing the faults associated with each failure was not uniformly distributed across failures; fixes with a greater spread across components and artifacts required more effort. Overall, the work indicates that fault prevention and elimination efforts focused on later life-cycle faults are essential, as coding faults were the dominant cause of safety-critical failures and post-release failures. Further, statistical correlation and/or traditional data mining techniques show potential for assessment and prediction of the locations of fixes and the associated effort. By providing quantitative results and including statistical hypothesis testing, which is not yet standard practice in software engineering, our work enriches the empirical knowledge needed to improve the state of the art and practice in software quality assurance.
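
    The kind of hypothesis test such a study relies on is straightforward to sketch. The example below asks whether fault type is associated with failure severity using a chi-square test of independence; the contingency counts are hypothetical stand-ins, not the study's data.

```python
from scipy.stats import chi2_contingency

#        high-severity  low-severity
table = [
    [45, 120],  # coding faults
    [12, 150],  # requirements faults
    [ 8,  90],  # design faults
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.4f}")
if p_value < 0.05:
    print("Reject independence: fault type and severity appear associated.")
```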