    Model-based dependability analysis : state-of-the-art, challenges and future outlook

    Abstract: Over the past two decades, the study of model-based dependability analysis has gathered significant research interest. Different approaches have been developed to automate and address various limitations of classical dependability techniques, and to contend with the increasing complexity and challenges of modern safety-critical systems. Two leading paradigms have emerged: one constructs predictive system failure models from component failure models compositionally, using the topology of the system; the other utilizes design models - typically state automata - to explore system behaviour through fault injection. This paper reviews a number of prominent techniques under these two paradigms, and provides insight into their working mechanisms, applicability, strengths and challenges, as well as recent developments within these fields. We also discuss the emerging trends in integrated approaches and advanced analysis capabilities. Lastly, we outline the future outlook for model-based dependability analysis.
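
    The compositional paradigm can be illustrated with a toy example. The sketch below (illustrative only, not any specific tool's algorithm) combines local component failure logic through the system topology and computes the exact top-event probability by enumeration; real tools instead synthesise and analyse fault trees from such annotations.

```python
from itertools import product

# Basic-event probabilities for a hypothetical duplex sensor pair with a voter.
basic_events = {"sensor_a": 1e-3, "sensor_b": 1e-3, "voter": 1e-5}

def system_fails(state):
    # Local failure logic combined through the topology: the voter masks a
    # single sensor failure, so the system output fails only if both sensor
    # outputs fail, or if the voter itself fails.
    return (state["sensor_a"] and state["sensor_b"]) or state["voter"]

# Exact top-event probability by exhaustive enumeration of component states
# (fine for a toy model; real tools synthesise and minimise fault trees).
names = list(basic_events)
p_top = 0.0
for states in product([False, True], repeat=len(names)):
    state = dict(zip(names, states))
    p = 1.0
    for n in names:
        p *= basic_events[n] if state[n] else 1.0 - basic_events[n]
    if system_fails(state):
        p_top += p

print(f"P(system failure) = {p_top:.3e}")
```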

    A compositional method for reliability analysis of workflows affected by multiple failure modes

    We focus on reliability analysis for systems designed as workflow-based compositions of components. Components are characterized by their failure profiles, which take into account possible multiple failure modes. A compositional calculus is provided to evaluate the failure profile of a composite system, given the failure profiles of its components. The calculus is described as a syntax-driven procedure that synthesizes a workflow's failure profile. The method is viewed as a design-time aid that can help software engineers reason about system reliability in the early stages of development. A simple case study is presented to illustrate the proposed approach.
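
    To make the idea of a compositional calculus concrete, here is a minimal sketch under assumed semantics (a failure in an earlier step aborts the workflow); the failure modes, probabilities and the seq operator are hypothetical stand-ins, not the paper's actual calculus.

```python
def seq(profile_a, profile_b):
    """Failure profile of step A followed by step B, assuming a failure in
    A aborts the workflow so B only runs after A succeeds."""
    p_a_ok = 1.0 - sum(profile_a.values())
    modes = set(profile_a) | set(profile_b)
    return {m: profile_a.get(m, 0.0) + p_a_ok * profile_b.get(m, 0.0)
            for m in modes}

# Hypothetical failure profiles: failure mode -> probability
# (the remaining probability mass is success).
fetch     = {"timeout": 0.02, "corrupt": 0.005}
transform = {"crash": 0.01, "corrupt": 0.001}

profile = seq(fetch, transform)
print(profile)                        # per-mode probabilities of the composite
print(1.0 - sum(profile.values()))    # probability the workflow succeeds
```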

    A synthesis of logic and bio-inspired techniques in the design of dependable systems

    Much of the development of model-based design and dependability analysis in the design of dependable systems, including software-intensive systems, can be attributed to advances in formal logic and their application to fault forecasting and verification of systems. In parallel, work on bio-inspired technologies has shown potential for the evolutionary design of engineering systems via automated exploration of potentially large design spaces. We have not yet seen the emergence of a design paradigm that effectively combines these two techniques, schematically founded on the two pillars of formal logic and biology, from the early stages of, and throughout, the design lifecycle. Such a design paradigm would apply these techniques synergistically and systematically to enable optimal refinement of new designs which can be driven effectively by dependability requirements. The paper sketches such a model-centric paradigm for the design of dependable systems, presented in the scope of the HiP-HOPS tool and technique, that brings these technologies together to realise their combined potential benefits. The paper begins by identifying current challenges in model-based safety assessment and then overviews the use of meta-heuristics at various stages of the design lifecycle, covering topics that span from allocation of dependability requirements, through dependability analysis, to multi-objective optimisation of system architectures and maintenance schedules.
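
    As an illustration of the kind of trade-off such meta-heuristics navigate, the following sketch (not HiP-HOPS itself; all figures hypothetical) enumerates architecture variants built from alternative subsystem implementations and extracts the cost/unavailability Pareto front, which a genetic algorithm would approximate on realistically large design spaces.

```python
from itertools import product

# Hypothetical alternatives per subsystem: (cost, unavailability).
alternatives = {
    "sensor":     [(10, 1e-3), (25, 1e-4), (60, 1e-5)],
    "controller": [(40, 5e-4), (90, 5e-5)],
    "actuator":   [(15, 2e-3), (55, 2e-4)],
}

def evaluate(design):
    cost = sum(c for c, _ in design)
    availability = 1.0
    for _, u in design:          # series system: every subsystem must be up
        availability *= 1.0 - u
    return cost, 1.0 - availability

candidates = [evaluate(d) for d in product(*alternatives.values())]
# Keep designs not dominated on both objectives (lower is better for both).
pareto = sorted(c for c in candidates
                if not any(o[0] <= c[0] and o[1] <= c[1] and o != c
                           for o in candidates))
for cost, unavail in pareto:
    print(f"cost={cost:3d}  unavailability={unavail:.2e}")
```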

    A synthesis of logic and biology in the design of dependable systems

    The technologies of model-based design and dependability analysis in the design of dependable systems, including software-intensive systems, have advanced in recent years. Much of this development can be attributed to advances in formal logic and their application to fault forecasting and verification of systems. In parallel, work on bio-inspired technologies has shown potential for the evolutionary design of engineering systems via automated exploration of potentially large design spaces. We have not yet seen the emergence of a design paradigm that effectively combines, throughout the design lifecycle, these two techniques, which are schematically founded on the two pillars of formal logic and biology. Such a design paradigm would apply these techniques synergistically and systematically from the early stages of design to enable optimal refinement of new designs which can be driven effectively by dependability requirements. The paper sketches such a model-centric paradigm for the design of dependable systems that brings these technologies together to realise their combined potential benefits.

    Enhancing the EAST-ADL error model with HiP-HOPS semantics

    EAST-ADL is a domain-specific modelling language for the engineering of automotive embedded systems. The language has abstractions that enable engineers to capture a variety of information about a design in the course of the lifecycle — from requirements to detailed design of hardware and software architectures. The specification of the EAST-ADL language includes an error model extension which documents language structures that allow potential failures of design elements to be specified locally. The effects of these failures are then later assessed in the context of the architecture design. To provide this type of useful assessment, a language and a specification are not enough; a compiler-like tool that can read and operate on a system specification together with its error model is needed. In this paper we integrate the error model of EAST-ADL with the precise semantics of HiP-HOPS — a state-of-the-art tool that enables dependability analysis and optimization of design models. We present the integration concept between the EAST-ADL structure and HiP-HOPS error propagation logic, and its transformation into the HiP-HOPS model. Source and destination models are represented using the corresponding XML formats. The connection of these two models at tool level enables practical EAST-ADL designs of embedded automotive systems to be analysed in terms of dependability, i.e. safety, reliability and availability. In addition, the information encoded in the error model can be re-used across different contexts of application, with the associated benefits for cost reduction, simplification, and rationalisation of dependability assessments in complex engineering designs.
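
    The transformation concept can be sketched as a small model-to-model mapping. In the fragment below, both the XML element names and the failure expression syntax are simplified, hypothetical stand-ins for the real EAST-ADL and HiP-HOPS schemas; the point is only to show local error annotations being read from one model and re-emitted as output-deviation logic for analysis.

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical source model: local error annotations per component.
source = """
<errorModel>
  <component name="BrakeController">
    <outputDeviation port="cmd" failureClass="Omission"
        expression="Omission-in OR InternalFailure"/>
    <basicEvent name="InternalFailure" failureRate="1e-6"/>
  </component>
</errorModel>
"""

root = ET.fromstring(source)
for comp in root.iter("component"):
    cname = comp.get("name")
    # Emit one output-deviation expression per annotated port, in a
    # HiP-HOPS-flavoured "FailureClass-component.port = logic" style.
    for dev in comp.iter("outputDeviation"):
        print(f'{dev.get("failureClass")}-{cname}.{dev.get("port")} = '
              f'{dev.get("expression")}')
    for ev in comp.iter("basicEvent"):
        print(f'basic event {cname}.{ev.get("name")}: '
              f'rate = {ev.get("failureRate")}')
```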

    Use of COTS functional analysis software as an IVHM design tool for detection and isolation of UAV fuel system faults

    This paper presents a new approach to the development of health management solutions which can be applied to both new and legacy platforms during the conceptual design phase. The approach involves the qualitative functional modelling of a system in order to perform an Integrated Vehicle Health Management (IVHM) design – the placement of sensors and the diagnostic rules to be used in interrogating their output. Qualitative functional analysis was chosen as a route for early assessment of failures in complex systems. Functional models of system components are required for capturing the available system knowledge used during various stages of system and IVHM design. MADe™ (Maintenance Aware Design environment), a COTS software tool developed by PHM Technology, was used for the health management design. A model has been built incorporating the failure diagrams of five failure modes for five different components of a UAV fuel system. Thus an inherent health management solution for the system and an optimised sensor set solution have been defined. The automatically generated sensor set solution also contains a diagnostic rule set, which was validated on the fuel rig for different operation modes, taking into account the predicted fault detection/isolation and ambiguity group coefficients. It was concluded that when using functional modelling, the IVHM design and the actual system design cannot be done in isolation. The functional approach requires permanent input from the system designer and reliability engineers in order to construct a functional model that will qualitatively represent the real system. In other words, the physical insight should not be isolated from the failure phenomena, and the diagnostic analysis tools should be able to adequately capture the experience base. This approach has been verified on a laboratory benchtop test rig which can simulate a range of possible fuel system faults. The rig is fully instrumented in order to allow benchmarking of the various sensing solutions for fault detection/isolation that were identified using functional analysis.
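
    A rough sketch of how a candidate sensor set can be scored for fault detection and isolation is given below; the fault/sensor detectability matrix is hypothetical and the scoring is illustrative, not MADe's algorithm. Faults whose sensor signatures coincide fall into the same ambiguity group, which is what the ambiguity group coefficients above quantify.

```python
from collections import defaultdict

# Hypothetical detectability matrix: which sensors respond to which fault.
detects = {
    "pump_degraded":  {"flow", "pressure"},
    "filter_clogged": {"pressure"},
    "valve_stuck":    {"flow", "pressure"},
    "gauge_drift":    {"level"},
    "leak":           set(),              # invisible to this sensor set
}
sensor_set = {"flow", "pressure", "level"}

# Group faults by their signature over the chosen sensors: faults sharing a
# signature cannot be told apart and form an ambiguity group.
groups = defaultdict(list)
for fault, sensors in detects.items():
    groups[frozenset(sensors & sensor_set)].append(fault)

detected = [f for f, s in detects.items() if s & sensor_set]
isolated = [g[0] for g in groups.values() if len(g) == 1 and g[0] in detected]

print(f"detection coverage: {len(detected)}/{len(detects)}")
print(f"fully isolated: {isolated}")
print("ambiguity groups:", [g for g in groups.values() if len(g) > 1])
```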

    Uncertainty Analysis of the Adequacy Assessment Model of a Distributed Generation System

    Due to the inherent aleatory uncertainties in renewable generators, reliability/adequacy assessments of distributed generation (DG) systems have been particularly focused on the probabilistic modeling of random behaviors, given sufficiently informative data. However, another type of uncertainty (epistemic uncertainty) must be accounted for in the modeling, due to incomplete knowledge of the phenomena and imprecise evaluation of the related characteristic parameters. In circumstances with few informative data, this type of uncertainty calls for alternative methods of representation, propagation, analysis and interpretation. In this study, we make a first attempt to identify, model, and jointly propagate aleatory and epistemic uncertainties in the context of DG system modeling for adequacy assessment. Probability and possibility distributions are used to model the aleatory and epistemic uncertainties, respectively. Evidence theory is used to incorporate the two uncertainties under a single framework. Based on the plausibility and belief functions of evidence theory, a hybrid propagation approach is introduced. A demonstration is given on a DG system adapted from the IEEE 34-node distribution test feeder. Compared to the pure probabilistic approach, it is shown that the hybrid propagation is capable of explicitly propagating the imprecision in the knowledge of the DG parameters into the final assessed adequacy values. It also effectively captures the growth of uncertainties at higher DG penetration levels.
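
    A minimal two-loop sketch of the hybrid propagation idea follows; all numbers are illustrative, and the epistemic demand is simplified to a single interval (the support of its possibility distribution) rather than a full set of alpha-cuts. Each Monte Carlo sample of the aleatory variables yields a focal interval for the adequacy margin, from which belief and plausibility of the adequacy event are estimated.

```python
import random

random.seed(0)
N = 10_000

# Epistemic: peak demand known only imprecisely; alpha=0 cut of a
# triangular possibility distribution on [90, 110] MW (hypothetical).
demand_lo, demand_hi = 90.0, 110.0

bel_hits = pl_hits = 0
for _ in range(N):
    # Aleatory: conventional unit output and intermittent renewable output.
    conventional = 80.0 if random.random() > 0.05 else 0.0   # 5% outage rate
    renewable = random.uniform(0.0, 40.0)                    # variable source
    supply = conventional + renewable
    # Focal interval for the adequacy margin, induced by the epistemic demand.
    margin_lo, margin_hi = supply - demand_hi, supply - demand_lo
    if margin_lo >= 0.0:      # adequate under every plausible demand
        bel_hits += 1
    if margin_hi >= 0.0:      # adequate under at least one plausible demand
        pl_hits += 1

print(f"Bel(adequate) = {bel_hits / N:.3f}")
print(f"Pl(adequate)  = {pl_hits / N:.3f}")
```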

    Review of recent research towards power cable life cycle management

    Power cables are integral to modern urban power transmission and distribution systems. For power cable asset managers worldwide, a major challenge is how to effectively manage the expensive and vast network of cables, many of which are approaching, or have passed, their design life. This study provides an in-depth review of recent research and development in cable failure analysis, condition monitoring and diagnosis, life assessment methods, fault location, and optimisation of maintenance and replacement strategies. These topics are essential to cable life cycle management (LCM), which aims to maximise the operational value of cable assets and is now being implemented in many power utility companies. The review expands on material presented at the 2015 JiCable conference and incorporates other recent publications. The review concludes that the full potential of cable condition monitoring and of condition and life assessment has not yet been realised. It is proposed that a combination of physics-based life modelling and statistical approaches, giving consideration to practical condition monitoring results and to the insulation's response to in-service stress factors and short-term stresses, such as water ingress, mechanical damage and imperfections left by manufacturing and installation processes, will be key to improved LCM of the vast number of cable assets around the world.
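
    One simple way to picture the proposed combination of physics-based life modelling and statistics is sketched below: an inverse-power-law stress model rescales the characteristic life of a Weibull distribution fitted at design stress. All parameter values are hypothetical.

```python
import math

beta = 2.5            # Weibull shape (ageing) parameter
eta_design = 40.0     # characteristic life in years at design stress
n = 9.0               # inverse-power-law exponent for electrical stress
stress_ratio = 1.15   # service field stress / design field stress

# Inverse power law: life scales as (E_design / E_service) ** n.
eta_service = eta_design * stress_ratio ** (-n)

def survival(t_years):
    """Probability the cable insulation survives beyond t_years."""
    return math.exp(-(t_years / eta_service) ** beta)

for t in (10, 20, 30):
    print(f"R({t} y) = {survival(t):.3f}  (eta_service = {eta_service:.1f} y)")
```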

    An Approach to Assess Solder Interconnect Degradation Using Digital Signal

    Digital signals used in electronic systems require reliable data communication. It is necessary to monitor system health continuously to prevent system failure in advance. Solder joints in electronic assemblies are one of the major failure sites under the thermal, mechanical and chemical stress conditions encountered during operation. Solder joint degradation usually starts from the surface, where high-speed signals are concentrated due to the phenomenon referred to as the skin effect. Because of the skin effect, high-speed signals are sensitive indicators of the early stages of solder joint degradation. The objective of the thesis is to assess solder joint degradation in a non-destructive way based on digital signal characterization. For accelerated life testing, the stress conditions were designed to generate gradual degradation of the solder joints. The signal generated by a digital signal transceiver was transmitted through the solder joints to continuously monitor signal integrity under the stress conditions. The signal properties were captured by eye parameters and jitter, which characterize the digital signal in terms of noise and timing error. The eye parameters and jitter exhibited a significant increase after exposure of the solder joints to the stress conditions. The test results indicated that the deterioration of signal integrity resulted from solder joint degradation, and demonstrated that high-speed digital signals can serve as a non-destructive tool for sensing physical degradation. Since this approach is based on the digital signals already used in electronic systems, it can be implemented without requiring additional sensing devices. Furthermore, this approach can serve as a proactive prognostic tool, which provides real-time health monitoring of electronic systems and triggers early warning of impending failure.
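
    The two signal-integrity metrics used here, eye parameters and jitter, can be extracted from a sampled waveform as in the sketch below; the NRZ waveform is synthetic and the extraction is deliberately simplified, but solder joint degradation would manifest as a shrinking eye height and growing jitter.

```python
import random

random.seed(1)
SAMPLES_PER_BIT, BITS, NOISE = 100, 200, 0.03

bits = [random.randint(0, 1) for _ in range(BITS)]
wave = []
for i, b in enumerate(bits):
    prev = bits[i - 1] if i else b
    for s in range(SAMPLES_PER_BIT):
        # Linear transition over 10% of the unit interval, plus additive
        # Gaussian noise (a crude stand-in for the channel).
        edge = min(1.0, s / (0.1 * SAMPLES_PER_BIT))
        wave.append(prev + (b - prev) * edge + random.gauss(0.0, NOISE))

# Eye height: separation of the two level clusters sampled at bit centres.
centres = [wave[i * SAMPLES_PER_BIT + SAMPLES_PER_BIT // 2]
           for i in range(BITS)]
ones = [v for v, b in zip(centres, bits) if b]
zeros = [v for v, b in zip(centres, bits) if not b]
eye_height = min(ones) - max(zeros)

# Jitter: spread of threshold-crossing times, folded into one unit interval.
crossings = [i % SAMPLES_PER_BIT for i in range(1, len(wave))
             if (wave[i - 1] - 0.5) * (wave[i] - 0.5) < 0]
jitter_pp = (max(crossings) - min(crossings)) / SAMPLES_PER_BIT

print(f"eye height = {eye_height:.3f} (ideal 1.0)")
print(f"peak-to-peak jitter = {jitter_pp:.2f} UI")
```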

    Analysis of System-Failure Rate Caused by Soft-Errors using a UML-Based Systematic Methodology in an SoC

    This paper proposes an analytical method to assess the soft-error rate (SER) in the early stages of a System-on-Chip (SoC) platform-based design methodology. The proposed method takes as its inputs an executable UML (Unified Modeling Language) model of the SoC and the raw soft-error rates of different parts of the platform. Soft errors in the design are modeled by disturbances on the values of attributes in the classes of the UML model and disturbances on the opcodes of software cores. The dynamic behavior of each core is used to determine the propagation probability of each variable disturbance to the core outputs. Furthermore, the SER and the execution time of each core in the SoC, together with a Failure Modes and Effects Analysis (FMEA) that determines the severity of each failure mode in the SoC, are used to compute the System-Failure Rate (SFR) of the SoC.
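
    The kind of bookkeeping described above can be sketched as a severity-weighted sum over cores; the combination rule below is illustrative rather than the paper's exact formula, and all per-core figures are hypothetical.

```python
# Hypothetical per-core data for a small SoC model.
cores = {
    # name: (raw SER [FIT], propagation prob., exec-time share, severity 0-1)
    "cpu": (120.0, 0.40, 0.50, 1.0),
    "dsp": (200.0, 0.25, 0.30, 0.6),
    "dma": ( 80.0, 0.10, 0.20, 0.3),
}

def system_failure_rate(cores):
    # A soft error contributes only while the core executes, only if the
    # disturbance propagates to the core outputs, weighted by the FMEA
    # severity of the resulting failure mode.
    return sum(ser * p_prop * t_share * severity
               for ser, p_prop, t_share, severity in cores.values())

print(f"SFR = {system_failure_rate(cores):.1f} FIT (severity-weighted)")
```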