    Integrating model checking with HiP-HOPS in model-based safety analysis

    The ability to perform an effective and robust safety analysis of the design of modern safety-critical systems is crucial. Model-based safety analysis (MBSA) has been introduced in recent years to support the assessment of complex system designs by focusing on the system model as the central artefact and by automating the synthesis and analysis of failure-extended models. Model checking and failure logic synthesis and analysis (FLSA) are two prominent MBSA paradigms. Extensive research has been devoted to the development of these techniques, but discussion of their integration remains limited. In this paper, we propose a technique in which model checking and Hierarchically Performed Hazard Origin and Propagation Studies (HiP-HOPS) – an advanced FLSA technique – can be applied synergistically, to the benefit of the MBSA process. The application of the technique is illustrated with an example of a brake-by-wire system.
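
    As a hedged illustration of the FLSA side of this integration, the short sketch below shows how HiP-HOPS-style local failure expressions can be traversed to synthesise the cut sets behind a system-level output deviation. The component names and deviations are hypothetical stand-ins for a brake-by-wire fragment, not the HiP-HOPS tool's own syntax or API; the model-checking side would then verify temporal safety properties on a behavioural model of the same architecture.

        # Minimal sketch of HiP-HOPS-style failure logic synthesis (FLSA); component
        # names and deviations are hypothetical, and only OR gates are handled
        # (AND gates would take the cross-product of child cut sets).
        failure_logic = {
            # Omission of braking torque: the actuator fails, or it receives no command.
            "Actuator.O-torque": ("OR", ["Actuator.stuck", "Controller.O-cmd"]),
            # Omission of the brake command: controller fault or loss of the pedal signal.
            "Controller.O-cmd": ("OR", ["Controller.cpu_fail", "Sensor.O-signal"]),
            # Omission of the pedal signal is caused by a basic sensor failure mode.
            "Sensor.O-signal": ("OR", ["Sensor.dead"]),
        }

        def synthesise(deviation):
            """Expand a system-level deviation into its cut sets by traversing
            the local failure expressions down to basic failure modes."""
            if deviation not in failure_logic:      # basic failure mode reached
                return [{deviation}]
            gate, children = failure_logic[deviation]
            cut_sets = []
            for child in children:                  # OR gate: union of child cut sets
                cut_sets.extend(synthesise(child))
            return cut_sets

        if __name__ == "__main__":
            for cs in synthesise("Actuator.O-torque"):
                print(sorted(cs))   # ['Actuator.stuck'], ['Controller.cpu_fail'], ['Sensor.dead']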

    Integrated application of compositional and behavioural safety analysis

    To address challenges arising in the safety assessment of critical engineering systems, research has recently focused on automating the synthesis of predictive models of system failure from design representations. In one approach, known as compositional safety analysis, system failure models such as fault trees and Failure Modes and Effects Analyses (FMEAs) are constructed from component failure models using a process of composition. Another approach has looked into automating system safety analysis via the application of formal verification techniques, such as model checking, to behavioural models of the system represented as state automata. So far, compositional safety analysis and formal verification have been developed separately and seen as two competing paradigms for model-based safety analysis. This thesis shows that it is possible to move the terms of this debate forward and use the two paradigms synergistically in the context of an advanced safety assessment process. The thesis develops a systematic approach in which compositional safety analysis provides the basis for the systematic construction and refinement of state automata that record the transition of a system from normal to degraded and failed states. These state automata can be further enhanced and then model-checked to verify the satisfaction of safety properties. In current practice the development of such models is ad hoc and relies only on expert knowledge; its rationalisation and systematisation in the proposed approach is a key contribution of this thesis. Overall, the approach combines the advantages of compositional safety analysis, such as simplicity, efficiency and scalability, with the benefits of formal verification, such as the ability to automatically verify safety requirements on dynamic models of the system, and leads to an improved model-based safety analysis process. In the context of this process, a novel generic mechanism is also proposed for modelling the detectability of errors which typically arise as a result of component faults and then propagate through the architecture. This mechanism is used to derive analyses that can aid decisions on appropriate detection and recovery mechanisms in the system model. The thesis starts with an investigation of the potential for useful integration of compositional and formal safety analysis techniques. The approach is then developed in detail and guidelines for the analysis and refinement of system models are given. Finally, the process is evaluated in three case studies that were iteratively performed on increasingly refined and improved models of aircraft and automotive braking and cruise control systems. In the light of the results of these studies, the thesis concludes that integration of compositional and formal safety analysis techniques is feasible and potentially useful in the design of safety-critical systems.
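
    A minimal sketch of the kind of artefact the thesis derives: component failure modes, obtained from compositional analysis, are mapped onto a system-level automaton over Normal, Degraded and Failed states, and a simple exhaustive check stands in for model checking a safety property. The two-channel architecture and component names are illustrative assumptions, not the thesis's case-study models.

        # Hedged sketch: a failure automaton derived from component failure modes,
        # plus a toy stand-in for model checking a safety property on it.
        COMPONENTS = ["primary_channel", "backup_channel"]   # illustrative architecture

        def system_state(failed):
            """Map a set of failed components to the system-level state recorded
            in the failure automaton (Normal -> Degraded -> Failed)."""
            if set(COMPONENTS) <= failed:
                return "Failed"      # both channels lost: loss of the safety function
            if failed:
                return "Degraded"    # one channel lost: function still delivered
            return "Normal"

        def check_single_fault_tolerance():
            """Safety property: no single component fault reaches the Failed state."""
            return all(system_state({c}) != "Failed" for c in COMPONENTS)

        print(check_single_fault_tolerance())   # True for this two-channel example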

    Certifications of Critical Systems – The CECRIS Experience

    In recent years, a considerable amount of effort has been devoted, both in industry and academia, to the development, validation and verification of critical systems, i.e. those systems whose malfunctions or failures reach a critical level both in terms of risks to human life and in terms of their large economic impact. Certifications of Critical Systems – The CECRIS Experience documents the main insights on Cost Effective Verification and Validation processes that were gained during work in the European research project CECRIS (an acronym for Certification of Critical Systems). The objective of the research was to tackle the challenges of certification by focusing on those aspects that turn out to be most difficult or important for the current and future critical systems industry: the effective use of methodologies, processes and tools. The CECRIS project took a step forward in the growing field of development, verification, validation and certification of critical systems. It focused on the more difficult and important aspects of the critical system development, verification, validation and certification process. Starting from both the scientific and industrial state-of-the-art methodologies for system development and the impact of their usage on the verification, validation and certification of critical systems, the project aimed at developing strategies and techniques, supported by automatic or semi-automatic tools and methods, for these activities, and at setting guidelines to support engineers during the planning of the verification and validation phases.

    Certifications of Critical Systems – The CECRIS Experience

    In recent years, a considerable amount of effort has been devoted, both in industry and academia, to the development, validation and verification of critical systems, i.e. those systems whose malfunctions or failures reach a critical level both in terms of risks to human life and in terms of their large economic impact. Certifications of Critical Systems – The CECRIS Experience documents the main insights on Cost Effective Verification and Validation processes that were gained during work in the European research project CECRIS (Certification of Critical Systems). The objective of the research was to tackle the challenges of certification by focusing on those aspects that turn out to be most difficult or important for the current and future critical systems industry: the effective use of methodologies, processes and tools. Starting from both the scientific and industrial state-of-the-art methodologies for system development and the impact of their usage on the verification, validation and certification of critical systems, the project aimed at developing strategies and techniques, supported by automatic or semi-automatic tools and methods, for these activities, and at setting guidelines to support engineers during the planning of the verification and validation phases. Topics covered include: Safety Assessment, Reliability Analysis, Critical Systems and Applications, Functional Safety, Dependability Validation, Dependable Software Systems, Embedded Systems, and System Certification.

    Airborne Advanced Reconfigurable Computer System (ARCS)

    A fault-tolerant digital computer subsystem concept was defined, and the potential benefits and costs of such a subsystem were assessed for use as the central element of a new transport aircraft's flight control system. The derived advanced reconfigurable computer system (ARCS) is a triple-redundant computer subsystem that automatically reconfigures, under multiple fault conditions, from triplex to duplex to simplex operation, with redundancy recovery if the fault condition is transient. The study included the development of criteria covering factors at the aircraft's operational level that would influence the design of a fault-tolerant system for commercial airline use. A new reliability analysis tool was developed for evaluating the availability and survivability of redundant, fault-tolerant systems, and a stringent digital system software design methodology was used to achieve design and implementation visibility.
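
    The reconfiguration behaviour described above can be pictured with the following sketch of a redundancy manager that degrades from triplex to duplex to simplex as channel faults are reported and re-admits a channel whose fault proves transient. It is an illustration of the stated concept, not the original ARCS flight-control implementation.

        # Hedged sketch of triplex -> duplex -> simplex reconfiguration with
        # redundancy recovery for transient faults; channel names are illustrative.
        class RedundancyManager:
            MODES = {3: "triplex", 2: "duplex", 1: "simplex", 0: "failed"}

            def __init__(self):
                self.healthy = {"A", "B", "C"}   # three identical computer channels
                self.suspect = set()             # channels taken offline, possibly transiently

            @property
            def mode(self):
                return self.MODES[len(self.healthy)]

            def report_fault(self, channel):
                """A miscompare or self-test failure takes the channel offline."""
                if channel in self.healthy:
                    self.healthy.remove(channel)
                    self.suspect.add(channel)

            def retry(self, channel, self_test_passed):
                """Redundancy recovery: re-admit the channel if the fault was transient."""
                if channel in self.suspect and self_test_passed:
                    self.suspect.remove(channel)
                    self.healthy.add(channel)

        rm = RedundancyManager()
        rm.report_fault("B"); print(rm.mode)                   # duplex
        rm.report_fault("C"); print(rm.mode)                   # simplex
        rm.retry("B", self_test_passed=True); print(rm.mode)   # duplex again (transient fault)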

    Machine learning and its applications in reliability analysis systems

    In this thesis, we are interested in exploring some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss the possible applications of ML in improving RA performance, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both neural network learning and symbolic learning. For symbolic learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems (RAs) presented in this thesis are mainly designed for maintaining plant safety, supported by two functions: a risk analysis function, i.e. failure mode and effect analysis (FMEA), and a diagnosis function, i.e. real-time fault location (RTFL). Three approaches to creating RAs are discussed. Based on the results of our survey, we suggest that currently the best design for RAs is to embed a model-based RA, i.e. MORA (as software), in a neural-network-based computer system (as hardware). However, there are still improvements that can be made through the application of Machine Learning. By implanting a 'learning element', MORA becomes the learning MORA (La MORA) system, a learning Reliability Analysis system with the power of automatic knowledge acquisition, inconsistency checking, and more. To conclude the thesis, we propose an architecture for La MORA.
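
    As a rough illustration of the 'learning element' that turns MORA into La MORA, the sketch below shows automatic knowledge acquisition with a simple inconsistency check: a new FMEA-style entry is only admitted if it does not contradict what the knowledge base already records. The rule format and component names are assumptions made for the example, not the thesis's actual representation.

        # Hedged sketch of a learning element with knowledge acquisition and
        # inconsistency checking; the (component, failure_mode) -> effect rule
        # format is a hypothetical simplification.
        class LearningElement:
            def __init__(self):
                self.kb = {("pump", "bearing_wear"): "reduced_flow"}

            def acquire(self, component, failure_mode, effect):
                """Add a new FMEA-style entry, rejecting it if it conflicts with the KB."""
                key = (component, failure_mode)
                if key in self.kb and self.kb[key] != effect:
                    return f"rejected: inconsistent with existing effect '{self.kb[key]}'"
                self.kb[key] = effect
                return "accepted"

        le = LearningElement()
        print(le.acquire("valve", "stuck_closed", "no_flow"))       # accepted
        print(le.acquire("pump", "bearing_wear", "overpressure"))   # rejected: inconsistent ...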

    A framework and methods for on-board network level fault diagnostics in automobiles

    A significant number of electronic control units (ECUs) are nowadays networked in automotive vehicles to help achieve advanced vehicle control and eliminate bulky electrical wiring. This, however, inevitably leads to increased complexity in vehicle fault diagnostics. Traditional off-board fault diagnostics and repair at service centres, using only diagnostic trouble codes logged by conventional on-board diagnostics, can become unwieldy, especially when dealing with intermittent faults in complex networked electronic systems. This can result in inaccurate and time-consuming diagnostics due to a lack of real-time fault information about the interaction among ECUs from a network-wide perspective. This thesis proposes a new framework for on-board knowledge-based diagnostics focusing on network-level faults, and presents an implementation of a real-time in-vehicle network diagnostic system using case-based reasoning. A newly developed fault detection technique is presented, together with the results of several practical experiments with the diagnostic system using a network simulation tool, a hardware-in-the-loop simulator, a disturbance simulator, simulated ECUs and real ECUs networked on a test rig. The results show that the new vehicle diagnostics scheme, based on the proposed framework, can provide more real-time network-level diagnostic data, and more detailed and self-explanatory diagnostic outcomes. The new system provides increased diagnostic capability compared with conventional diagnostic methods in terms of detecting message communication faults. In particular, underlying incipient network problems that are ignored by conventional on-board diagnostics are picked up for thorough fault diagnosis and prognosis, which can be carried out by a whole-vehicle fault management system, contributing to the further development of intelligent and fault-tolerant vehicles.
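
    To make the case-based reasoning step concrete, the sketch below retrieves the stored fault case whose symptom pattern best matches the symptoms currently observed on the in-vehicle network. The case base, symptom names and similarity measure are illustrative assumptions, not the thesis's actual case structure.

        # Hedged sketch of case-based reasoning retrieval for network-level faults;
        # cases and symptoms are hypothetical CAN-style examples.
        CASE_BASE = [
            {"symptoms": {"missing_msg_0x1A0": 1, "bus_off_ECU3": 0, "high_bus_load": 0},
             "diagnosis": "ECU2 transmission task stalled"},
            {"symptoms": {"missing_msg_0x1A0": 1, "bus_off_ECU3": 1, "high_bus_load": 0},
             "diagnosis": "ECU3 transceiver fault (bus-off)"},
            {"symptoms": {"missing_msg_0x1A0": 0, "bus_off_ECU3": 0, "high_bus_load": 1},
             "diagnosis": "babbling node flooding the bus"},
        ]

        def retrieve(observed):
            """Nearest-neighbour retrieval over binary symptom vectors (Hamming similarity)."""
            def similarity(case):
                return sum(observed.get(k, 0) == v for k, v in case["symptoms"].items())
            return max(CASE_BASE, key=similarity)

        observed = {"missing_msg_0x1A0": 1, "bus_off_ECU3": 1, "high_bus_load": 0}
        print(retrieve(observed)["diagnosis"])   # ECU3 transceiver fault (bus-off)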

    Failure mode modular de-composition


    A model-based reasoning architecture for system-level fault diagnosis

    This dissertation presents a model-based reasoning architecture with a twofold purpose: to detect and classify component faults from observable system behavior, and to generate fault propagation models so as to make a more accurate estimation of current operational risks. It incorporates a novel approach to system-level diagnostics by addressing the need to reason about low-level, inaccessible components from observable high-level system behavior. In the field of complex system maintenance it can be invaluable as an aid to human operators. The first step is the compilation of a database of functional descriptions and associated fault-specific features for each of the system components. The system is then analyzed to extract structural information, which, in addition to the functional database, is used to create the structural and functional models. A fault-symptom matrix is constructed from the functional model and the features database. The fault threshold levels for these symptoms are founded on nominal baseline data. Based on the fault-symptom matrix and these thresholds, a diagnostic decision tree is formulated in order to intelligently query the system health. For each fault candidate, a fault propagation tree is generated from the structural model. Finally, the overall system health status report includes both the faulty components and the associated at-risk components, as predicted by the fault propagation model. Ph.D. Committee Chair: Vachtsevanos, George; Committee Members: Liang, Steven; Michaels, Thomas; Vela, Patricio; Wardi, Yora
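
    A compact sketch of the pipeline described above: a fault-symptom matrix combined with thresholds derived from nominal baseline data yields candidate faults, and a structural propagation model lists the components those candidates put at risk. All component, symptom and threshold values are hypothetical, chosen only to show the flow of the reasoning.

        # Hedged sketch: fault-symptom matrix + thresholds -> fault candidates,
        # structural model -> at-risk components. Names and values are illustrative.
        FAULT_SYMPTOMS = {                       # symptoms each fault is expected to raise
            "pump_bearing_wear": {"vibration", "temperature"},
            "valve_leak":        {"pressure_drop"},
            "sensor_drift":      {"temperature"},
        }
        THRESHOLDS = {"vibration": 2.0, "temperature": 80.0, "pressure_drop": 0.5}
        PROPAGATES_TO = {                        # structural model: downstream components
            "pump_bearing_wear": ["pump", "hydraulic_line"],
            "valve_leak": ["actuator"],
            "sensor_drift": [],
        }

        def diagnose(measurements):
            """Return (candidate faults, at-risk components) for the observed features."""
            symptoms = {name for name, value in measurements.items()
                        if value > THRESHOLDS[name]}
            candidates = [f for f, s in FAULT_SYMPTOMS.items() if s <= symptoms]
            at_risk = sorted({c for f in candidates for c in PROPAGATES_TO[f]})
            return candidates, at_risk

        print(diagnose({"vibration": 3.1, "temperature": 95.0, "pressure_drop": 0.2}))
        # (['pump_bearing_wear', 'sensor_drift'], ['hydraulic_line', 'pump'])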