Advancing Dynamic Fault Tree Analysis
This paper presents a new state space generation approach for dynamic fault
trees (DFTs) together with a technique to synthesise failures rates in DFTs.
Our state space generation technique aggressively exploits the DFT structure
--- detecting symmetries, spurious non-determinism, and don't cares. Benchmarks
show a gain of more than two orders of magnitude in terms of state space
generation and analysis time. Our approach supports DFTs with symbolic failure
rates and is complemented by parameter synthesis. This enables determining the
maximal tolerable failure rate of a system component while ensuring that the
mean time of failure stays below a threshold.
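The parameter-synthesis idea above can be made concrete with a minimal sketch. Assuming exponential component failure times and a simple OR-gate (series) configuration, the system failure rate is the sum of the component rates, so the MTTF is the reciprocal of that sum and the boundary value of a symbolic rate can be solved in closed form; the rates and threshold below are illustrative, not taken from the paper's benchmarks.

```python
# Toy parameter synthesis for a fault tree with one symbolic failure rate.
# Assumption: exponential failure times and an OR (series) top gate, so
# MTTF = 1 / (sum of fixed rates + lam_x).

def mttf_or(rates):
    """MTTF of an OR-gate (series) system with exponential components."""
    return 1.0 / sum(rates)

def max_tolerable_rate(fixed_rates, mttf_threshold):
    """Largest rate lam_x for the symbolic component such that the
    system MTTF stays at or above mttf_threshold (OR-gate system)."""
    residual = 1.0 / mttf_threshold - sum(fixed_rates)
    if residual <= 0:
        return 0.0  # the fixed components alone already violate the threshold
    return residual

fixed = [1e-4, 5e-5]             # known failure rates (per hour, illustrative)
lam_x = max_tolerable_rate(fixed, mttf_threshold=4000.0)
print(lam_x)                     # boundary rate for the symbolic component
print(mttf_or(fixed + [lam_x]))  # equals the threshold, 4000 hours
```

For richer DFT gates (SPARE, FDEP, PAND) the boundary is no longer available in closed form, which is why the paper couples state-space generation with parameter synthesis.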
A synthesis of logic and bio-inspired techniques in the design of dependable systems
Much of the development of model-based design and dependability analysis in the design of dependable systems, including software-intensive systems, can be attributed to advances in formal logic and their application to fault forecasting and verification of systems. In parallel, work on bio-inspired technologies has shown potential for the evolutionary design of engineering systems via automated exploration of potentially large design spaces. We have not yet seen the emergence of a design paradigm that effectively combines these two techniques, schematically founded on the two pillars of formal logic and biology, from the early stages of, and throughout, the design lifecycle. Such a design paradigm would apply these techniques synergistically and systematically to enable optimal refinement of new designs which can be driven effectively by dependability requirements. The paper sketches such a model-centric paradigm for the design of dependable systems, presented in the scope of the HiP-HOPS tool and technique, that brings these technologies together to realise their combined potential benefits. The paper begins by identifying current challenges in model-based safety assessment and then overviews the use of meta-heuristics at various stages of the design lifecycle, covering topics that span from allocation of dependability requirements, through dependability analysis, to multi-objective optimisation of system architectures and maintenance schedules.
Simulating chemistry efficiently on fault-tolerant quantum computers
Quantum computers can in principle simulate quantum physics exponentially
faster than their classical counterparts, but some technical hurdles remain.
Here we consider methods to make proposed chemical simulation algorithms
computationally fast on fault-tolerant quantum computers in the circuit model.
Fault tolerance constrains the choice of available gates, so that arbitrary
gates required for a simulation algorithm must be constructed from sequences of
fundamental operations. We examine techniques for constructing arbitrary gates
which perform substantially faster than circuits based on the conventional
Solovay-Kitaev algorithm [C.M. Dawson and M.A. Nielsen, Quantum Inf.
Comput., 6:81, 2006]. For a given approximation error ε, arbitrary
single-qubit gates can be produced fault-tolerantly and using a limited
set of gates in time which is O(log(1/ε)) or O(log log(1/ε)); with
sufficient parallel preparation of ancillas, constant average
depth is possible using a method we call programmable ancilla rotations.
Moreover, we construct and analyze efficient implementations of first- and
second-quantized simulation algorithms using the fault-tolerant arbitrary gates
and other techniques, such as implementing various subroutines in constant
time. A specific example we analyze is the ground-state energy calculation for
lithium hydride.
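The compilation problem in this abstract — building an arbitrary rotation from a limited fault-tolerant gate set — can be illustrated with a naive brute-force search, which is emphatically not the paper's method but shows why cleverer constructions matter: the best achievable error shrinks only slowly as the allowed sequence length over {H, T} grows. The target rotation and distance measure below are illustrative choices.

```python
# Naive illustration: approximate Rz(pi/5) by exhaustive search over
# products of the fault-tolerant gates H and T.  Longer sequences can
# only do better, since every shorter sequence is also searched.
import numpy as np
from itertools import product

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])

def dist(U, V):
    """Distance up to global phase: 0 iff U == V up to a phase."""
    return 1 - abs(np.trace(U.conj().T @ V)) / 2

theta = np.pi / 5   # not a multiple of pi/4, so T powers alone are not exact
target = np.array([[np.exp(-1j * theta / 2), 0],
                   [0, np.exp(1j * theta / 2)]])

def best_error(max_len):
    """Smallest distance to the target over all H/T sequences up to max_len."""
    best = dist(np.eye(2), target)           # empty sequence
    for n in range(1, max_len + 1):
        for seq in product((H, T), repeat=n):
            U = np.eye(2)
            for g in seq:
                U = g @ U
            best = min(best, dist(U, target))
    return best

print(best_error(4), best_error(10))   # error is non-increasing in length
```

Brute force costs 2^n work for length-n sequences; the Solovay-Kitaev algorithm and the paper's techniques reach a given error with polylogarithmic, rather than exponential, effort.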
Using reliability analysis to support decision making in phased mission systems
Due to the environments in which they will operate, future autonomous systems must be capable of reconfiguring quickly and safely following faults or environmental changes. Past research has shown how, by considering autonomous systems to perform phased missions, reliability analysis can support decision making by allowing comparison of the probability of success of different missions following reconfiguration. Binary Decision Diagrams (BDDs) offer fast, accurate reliability analysis that could contribute to real-time decision making. However, phased mission analysis using existing BDD models is too slow to contribute to the instant decisions needed in time-critical situations. This paper investigates two aspects of BDD models that affect analysis speed: variable ordering and quantification efficiency. Variable ordering affects BDD size, which directly affects analysis speed. Here, a new ordering scheme is proposed for use in the context of a decision making process. Variables are ordered before a mission, and reordering is unnecessary no matter how the mission configuration changes. Three BDD models are proposed to address the efficiency and accuracy limitations of existing models. The advantages of the developed ordering scheme and BDD models are demonstrated in the context of their application within a reliability analysis methodology used to support decision making in an Unmanned Aerial Vehicle.
Reliability Analysis of Low-Frequency AC Transmission System Topology of Offshore Wind Power Plants
Many countries and regions of the world are planning to reduce the energy sector's carbon footprint and increase sustainable energy sources. To this end, wind power has become one of their primary renewable energy sources. However, wind power's significant challenges relate to the need for long transmission lines that connect the offshore wind power plants to the onshore grid. The three major transmission configurations and design topologies of High Voltage AC (HVAC) Transmission, High Voltage DC (HVDC) Transmission, and Low-Frequency AC (LFAC) Transmission for offshore wind power resources have been thoroughly discussed both in industry and academia. HVAC is the standard transmission system for short and long distances. In contrast, HVDC is a popular solution for the long-distance transmission of offshore wind power generators. In recent years, LFAC transmission topology at 20 Hz has become an alternative solution to HVAC and HVDC transmission systems. The significant advantages of LFAC transmission are the substantial increase in transmissible power over traditional AC transmission systems and the elimination of offshore converter stations. The absence of an offshore converter system renders LFAC transmission less costly compared to the HVDC system. The efficient design and reliability of offshore wind power transmission topologies are essential requirements for the transmission grid's smooth operation. This thesis work extensively investigated and reviewed the LFAC transmission topology against the HVAC and HVDC transmission topologies of offshore wind power plants. Different methods are used to assess the reliability performance of system designs. In this research, state-of-the-art simulation models of the three transmission system topologies have been developed for reliability analysis using fault tree analysis (FTA).
This research has identified several reliability performance characteristics, including minimal cut sets, importance measures, and time-based metrics (i.e., number of failures and mean unavailability) of the transmission systems, and compared these characteristics among the three transmission systems. For reliability performance analysis, the time-based metrics, such as mean unavailability and number of failures of the systems over 10,000 hours of operation, reliability importance measures, such as the Critical Importance Measure (CIM) and Risk Reduction Worth (RRW), and cut sets have been calculated. The thesis has identified the major fault events for all three transmission systems, finding that the large switch is the most critical piece of equipment in the HVAC system, the AC/DC or DC/AC converter is the most critical piece of equipment in the HVDC system, and the DC/AC converter and cycloconverter are the most critical components in the LFAC transmission system. Furthermore, to enhance the reliability of the offshore transmission systems and ensure their smooth operation, effective and reliable offshore wind power generation predictions are critical. To this end, this research work also introduces the necessary offshore wind power forecasting tools.
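The importance measures named above have standard textbook definitions that are easy to sketch. Assuming independent components and the rare-event approximation (system unavailability as the sum over minimal cut sets of the product of component unavailabilities), RRW and CIM follow from evaluating the system with a component made perfect or certain to fail; the cut sets and component names below are an invented toy, not the thesis's transmission models.

```python
# Hedged sketch of RRW and CIM from minimal cut sets, assuming
# independent components and the rare-event approximation.

def system_q(cut_sets, q):
    """Approximate system unavailability: sum over minimal cut sets of
    the product of component unavailabilities (rare-event approximation)."""
    total = 0.0
    for cs in cut_sets:
        p = 1.0
        for c in cs:
            p *= q[c]
        total += p
    return total

def rrw(cut_sets, q, comp):
    """Risk Reduction Worth: Q_sys divided by Q_sys with comp made perfect."""
    q0 = dict(q, **{comp: 0.0})
    return system_q(cut_sets, q) / system_q(cut_sets, q0)

def cim(cut_sets, q, comp):
    """Critical Importance Measure: Birnbaum importance scaled by the
    component's unavailability relative to system unavailability."""
    q1 = dict(q, **{comp: 1.0})
    q0 = dict(q, **{comp: 0.0})
    birnbaum = system_q(cut_sets, q1) - system_q(cut_sets, q0)
    return birnbaum * q[comp] / system_q(cut_sets, q)

# Toy model: the converter alone is a cut set; switch+breaker is another.
cuts = [{"converter"}, {"switch", "breaker"}]
q = {"converter": 1e-3, "switch": 5e-3, "breaker": 2e-3}
print(rrw(cuts, q, "converter"))  # large: removing it removes most risk
print(cim(cuts, q, "converter"))
```

A single-component cut set dominates both measures here, which mirrors the thesis's finding that the converters are the critical equipment in the HVDC and LFAC topologies.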
Binary decision diagrams for fault tree analysis
This thesis develops a new approach to fault tree analysis, namely the Binary Decision
Diagram (BDD) method. Conventional qualitative fault tree analysis techniques such
as the "top-down" or "bottom-up" approaches are now so well developed that further
refinement is unlikely to result in vast improvements in terms of their computational
capability. The BDD method has exhibited potential gains to be made in terms of
speed and efficiency in determining the minimal cut sets. Further, the nature of the
binary decision diagram is such that it is more suited to Boolean manipulation. The
BDD method has been programmed and successfully applied to a number of
benchmark fault trees.
The analysis capabilities of the technique have been extended such that all quantitative
fault tree top event parameters, which can be determined by conventional Kinetic Tree
Theory, can now be derived directly from the BDD. Parameters such as the top event
probability, frequency of occurrence and expected number of occurrences can be
calculated exactly using this method, removing the need for the approximations
previously required.
Thus the BDD method is proven to have advantages in terms of both accuracy and
efficiency. Initiator/enabler event analysis and importance measures have been
incorporated to extend this method into a full analysis procedure.
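The quantitative claim above — that the top event probability can be read exactly off the BDD, with no approximation — rests on one bottom-up pass over the diagram. The toy below builds a reduced BDD via Shannon expansion with a unique table and evaluates the probability that way; the fixed variable ordering and the example tree are illustrative assumptions, not the thesis's benchmarks.

```python
# Minimal reduced-BDD sketch: Shannon expansion with a unique table,
# then exact top-event probability by one bottom-up traversal.

ORDER = ["A", "B", "C"]           # fixed variable ordering (an assumption)

class Node:
    __slots__ = ("var", "hi", "lo")
    def __init__(self, var, hi, lo):
        self.var, self.hi, self.lo = var, hi, lo

ONE, ZERO = True, False           # terminal nodes
_unique = {}

def mk(var, hi, lo):
    """Create a node, reducing redundant tests and sharing subgraphs."""
    if hi is lo:
        return hi
    key = (var, id(hi), id(lo))
    if key not in _unique:
        _unique[key] = Node(var, hi, lo)
    return _unique[key]

def apply_op(op, f, g):
    """Combine two BDDs with a Boolean operator (Shannon expansion)."""
    if isinstance(f, bool) and isinstance(g, bool):
        return op(f, g)
    vars_ = [n.var for n in (f, g) if isinstance(n, Node)]
    v = min(vars_, key=ORDER.index)          # earliest variable in the order
    fh, fl = (f.hi, f.lo) if isinstance(f, Node) and f.var == v else (f, f)
    gh, gl = (g.hi, g.lo) if isinstance(g, Node) and g.var == v else (g, g)
    return mk(v, apply_op(op, fh, gh), apply_op(op, fl, gl))

def var(v):
    return mk(v, ONE, ZERO)

def prob(node, q):
    """Exact top-event probability: weight each branch by its event probability."""
    if isinstance(node, bool):
        return 1.0 if node else 0.0
    qv = q[node.var]
    return qv * prob(node.hi, q) + (1 - qv) * prob(node.lo, q)

OR  = lambda a, b: a or b
AND = lambda a, b: a and b

# Top event: (A AND B) OR C
top = apply_op(OR, apply_op(AND, var("A"), var("B")), var("C"))
q = {"A": 0.1, "B": 0.2, "C": 0.05}
print(prob(top, q))   # exact: 0.1*0.2 + 0.05 - 0.1*0.2*0.05 = 0.069
```

Because the traversal is exact, no minimal-cut-set truncation or rare-event approximation is needed, which is the accuracy advantage the thesis claims over conventional Kinetic Tree Theory calculations.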
Fault Tree Analysis: a survey of the state-of-the-art in modeling, analysis and tools
Fault tree analysis (FTA) is a very prominent method to analyze the risks related to safety and economically critical assets, like power plants, airplanes, data centers and web shops. FTA methods comprise a wide variety of modelling and analysis techniques, supported by a wide range of software tools. This paper surveys over 150 papers on fault tree analysis, providing an in-depth overview of the state-of-the-art in FTA. Concretely, we review standard fault trees, as well as extensions such as dynamic FTs, repairable FTs, and extended FTs. For these models, we review both qualitative analysis methods, like cut sets and common cause failures, and quantitative techniques, including a wide variety of stochastic methods to compute failure probabilities. Numerous examples illustrate the various approaches, and tables present a quick overview of results.
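The qualitative side the survey covers — deriving minimal cut sets from a static fault tree — can be sketched with a small top-down expansion in the spirit of the classic MOCUS algorithm: OR gates union their children's cut sets, AND gates combine them, and non-minimal sets are pruned at the end. The gate structure below is an invented toy example, not one of the survey's benchmarks.

```python
# MOCUS-style minimal cut set derivation for a small static fault tree.

def cut_sets(node, tree):
    """Return the cut sets of `node` as a set of frozensets.
    `tree` maps gate name -> (op, [children]); leaves are basic events."""
    if node not in tree:                      # basic event
        return {frozenset([node])}
    op, children = tree[node]
    child_sets = [cut_sets(c, tree) for c in children]
    if op == "OR":                            # union of the children's cut sets
        return set().union(*child_sets)
    result = {frozenset()}                    # AND: cross-product combination
    for cs in child_sets:
        result = {a | b for a in result for b in cs}
    return result

def minimise(sets):
    """Drop any cut set that is a strict superset of another."""
    return {s for s in sets if not any(t < s for t in sets)}

tree = {
    "TOP": ("OR",  ["G1", "C"]),
    "G1":  ("AND", ["A", "G2"]),
    "G2":  ("OR",  ["B", "C"]),
}
mcs = minimise(cut_sets("TOP", tree))
print(sorted(sorted(s) for s in mcs))   # [['A', 'B'], ['C']]
```

The raw expansion produces {A, C} as well, but minimisation discards it because {C} alone already causes the top event; for large industrial trees this blow-up is exactly why the survey's BDD-based methods matter.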
High-Level Analysis of the Impact of Soft-Faults in Cyberphysical Systems
As digital systems grow in complexity and are used in a broader variety of safety-critical applications, there is an ever-increasing demand for assessing the dependability and safety of such systems, especially when subjected to hazardous environments. As a result, it is important to identify and correct any functional abnormalities and component faults as early as possible in order to minimize performance degradation and to avoid potentially perilous situations. An early dependability analysis of such safety-critical applications enables designers to develop systems that meet high dependability requirements. However, existing techniques often lack the capacity to perform a comprehensive and exhaustive analysis of complex redundant architectures, either because of state-explosion limitations (as in transistor- and gate-level analyses) or because of the time and monetary costs attached to them (as in simulation, emulation, and physical testing), leading to less-than-optimal risk evaluation.
In this work we develop a system-level methodology to model and analyze the effects of Single Event Upsets (SEUs) in cyberphysical system designs. The proposed methodology investigates the impacts of SEUs in the entire system model (fault tree level), including SEU propagation paths, logical masking of errors, vulnerability to specific events, and critical nodes. The methodology also provides insight into a system's weaknesses, such as the impact of each component on the system's vulnerability, as well as hidden sources of failure, such as latent faults. Moreover, the proposed methodology is able to identify and categorize the system's components in order of criticality, and to evaluate different approaches to the mitigation of such criticality (in the form of different configurations of TMR) in order to obtain the most efficient mitigation solution available.
The proposed methodology is also able to model and analyze system components individually (system component level), in order to more accurately estimate each component's vulnerability to SEUs. In this case, a more refined analysis of the component is conducted, which enables us to identify the source of the component's criticality. Thereafter, a second mitigation mechanism (internal to the component) is applied, in order to evaluate the gains and costs of applying different configurations of TMR to the component internally. Finally, our approach draws a comparison between the results obtained at both levels of analysis in order to evaluate the most efficient way of improving the targeted system design.
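The TMR trade-off the methodology evaluates can be quantified analytically under simple assumptions: independent upsets with per-module probability p and a perfect majority voter. TMR then fails only when at least two of the three replicas are upset, giving roughly 3p² for small p; this is a textbook illustration of the mitigation being compared, not the thesis's actual models.

```python
# Analytical TMR failure probability, assuming independent upsets with
# per-module probability p and an ideal (fault-free) majority voter.

def tmr_failure(p):
    """Failure probability of a triple-modular-redundant module:
    exactly two replicas upset, or all three."""
    return 3 * p**2 * (1 - p) + p**3      # = 3p^2 - 2p^3

p = 1e-4                                  # illustrative per-module SEU probability
print(tmr_failure(p))                     # ~3e-8, far below the raw p
print(tmr_failure(p) < p)                 # TMR helps whenever p < 0.5
```

The break-even point at p = 0.5 is why TMR pays off for rare upsets but not for heavily stressed modules, and why the thesis weighs different TMR configurations per component rather than applying it blindly.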