Fault Tree Analysis: a survey of the state-of-the-art in modeling, analysis and tools
Fault tree analysis (FTA) is a very prominent method for analyzing the risks related to safety- and economically critical assets, like power plants, airplanes, data centers and web shops. FTA methods comprise a wide variety of modelling and analysis techniques, supported by a wide range of software tools. This paper surveys over 150 papers on fault tree analysis, providing an in-depth overview of the state-of-the-art in FTA. Concretely, we review standard fault trees, as well as extensions such as dynamic FTs, repairable FTs, and extended FTs. For these models, we review both qualitative analysis methods, like cut sets and common cause failures, and quantitative techniques, including a wide variety of stochastic methods to compute failure probabilities. Numerous examples illustrate the various approaches, and tables present a quick overview of results.
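The quantitative techniques surveyed typically start from minimal cut sets. As an illustrative sketch (not taken from the paper itself; component names and failure probabilities are invented), the top-event probability of a toy fault tree can be computed from its minimal cut sets by inclusion-exclusion:

```python
from itertools import combinations

# Hypothetical, independent component failure probabilities (illustrative only).
p = {"pump": 0.01, "valve": 0.02, "sensor": 0.005}

# Minimal cut sets of a small example tree: the top event occurs if the pump
# fails, or if both the valve and the sensor fail.
cut_sets = [{"pump"}, {"valve", "sensor"}]

def cut_set_prob(cs):
    """Probability that every component in a cut set fails (independence assumed)."""
    prob = 1.0
    for c in cs:
        prob *= p[c]
    return prob

def top_event_prob(cut_sets):
    """Exact top-event probability by inclusion-exclusion over the cut sets."""
    total = 0.0
    for k in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, k):
            union = set().union(*combo)        # joint failure of all components involved
            total += (-1) ** (k + 1) * cut_set_prob(union)
    return total

print(top_event_prob(cut_sets))  # 0.01 + 0.0001 - 0.000001 = 0.010099
```

The rare-event approximation would simply sum the cut-set probabilities; inclusion-exclusion corrects for their overlap, which matters once component probabilities grow.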
Systematic Representation of Relationship Quality in Conflict and Dispute: for Construction Projects
The construction industry needs to move towards more relational procurement procedures to reduce extensive losses of value and to avoid conflicts and disputes. Despite this, the actual conceptualization and assessment of relationships during conflict and dispute incidents seem to be neglected. Via a review of the literature, relationship quality is suggested as a systematic framework for construction projects. General system theory is applied, and a framework consisting of four layers, respectively labelled triggering, antecedent, moderation and outcome, is suggested. Two different case studies are undertaken to represent the systematic framework, which verifies that changes in contracting circumstances and built-environment culture can affect the identified layers. Through system reliability theories, a fault tree is derived to represent a systematic framework of relationship quality. The combinations of components, causes, and events for the two case studies are mapped out through the fault tree. By analysing the fault tree, the combinations of events that lead to relationship deterioration may be identified. Consequently, the progression of simple events into failure is formalized and probabilities are allocated. Accordingly, the importance and the contribution of these events to failure become accessible. The ability to have such indications about relationship quality may help increase performance as well as sustainable procurement. Paper Type: Research article
Dynamic Fault Tree Analysis: State-of-the-Art in Modeling, Analysis, and Tools
Safety and reliability are two important aspects of dependability that need to be rigorously evaluated throughout the development life-cycle of a system. Over the years, several methodologies have been developed for the analysis of the failure behavior of systems. Fault tree analysis (FTA) is one of the well-established and widely used methods for the safety and reliability engineering of systems. The fault tree, in its classical static form, is inadequate for modeling dynamic interactions between components and is unable to include temporal and statistical dependencies in the model. Several attempts have been made to alleviate the aforementioned limitations of static fault trees (SFTs). Dynamic fault trees (DFTs) were introduced to enhance the modeling power of their static counterpart. In DFTs, the expressiveness of the fault tree was improved by introducing new dynamic gates. While the introduction of the dynamic gates helps to overcome many limitations of SFTs and allows a wide range of complex systems to be analyzed, it brings some overhead with it. One such overhead is that the existing combinatorial approaches used for the qualitative and quantitative analysis of SFTs are no longer applicable to DFTs. This has led to several successful attempts to develop new approaches for DFT analysis. The methodologies used so far for DFT analysis include, but are not limited to, algebraic solutions, Markov models, Petri nets, Bayesian networks, and Monte Carlo simulation. To illustrate the usefulness of the modeling capability of DFTs, many benchmark studies have been performed in different industries. Moreover, software tools have been developed to aid in the DFT analysis process. Firstly, in this chapter, we provide a brief description of the DFT methodology. Secondly, the chapter reviews a number of prominent DFT analysis techniques, such as Markov chains, Petri nets, Bayesian networks, and the algebraic approach, and provides insight into their working mechanisms, applicability, strengths, and challenges.
The reviewed techniques cover both qualitative and quantitative analysis of DFTs. Thirdly, we discuss the emerging trend of machine-learning-based approaches to DFT analysis. Fourthly, the research performed on sensitivity analysis in DFTs is reviewed. Finally, we provide some potential future research directions for DFT-based safety and reliability analysis.
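As one small sketch of the Markov-chain technique mentioned above (an illustrative model with assumed failure rates, not an example from the chapter): a priority-AND (PAND) gate with inputs A and B, whose output fails only if A fails before B, maps to a four-state continuous-time Markov chain whose transient solution gives the gate's failure probability:

```python
# CTMC states for a PAND gate with inputs A and B:
#   0 = both working
#   1 = A failed first, B still up       (gate armed)
#   2 = B failed first                   (gate can never fire; absorbing)
#   3 = A then B failed                  (gate output fails; absorbing)
lam_a, lam_b = 0.001, 0.002  # assumed exponential failure rates (per hour)

def pand_failure_prob(t, steps=100_000):
    """Transient CTMC solution by forward Euler integration of dp/dt = p.Q."""
    p = [1.0, 0.0, 0.0, 0.0]
    dt = t / steps
    for _ in range(steps):
        d0 = -(lam_a + lam_b) * p[0]
        d1 = lam_a * p[0] - lam_b * p[1]
        d2 = lam_b * p[0]
        d3 = lam_b * p[1]
        p = [p[0] + d0 * dt, p[1] + d1 * dt, p[2] + d2 * dt, p[3] + d3 * dt]
    return p[3]
```

The same state-space idea extends to full DFTs, at the price of state-space explosion; for this toy gate the chain can also be solved in closed form, and the numerical integration reproduces that solution.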
Multifrequency Aperture-Synthesizing Microwave Radiometer System (MFASMR). Volume 1
Background material and a systems analysis of a multifrequency aperture-synthesizing microwave radiometer system are presented. It was found that the system does not exhibit high performance, because much of the available thermal power is not used in the construction of the image and because the image that can be formed has a resolution of only ten lines. An analysis of image reconstruction is given, and the system is compared with conventional aperture synthesis systems.
Automatic phased mission system reliability model generation
There are many methods for modelling the reliability of systems based on component failure
data. This task becomes more complex as systems increase in size, or undertake missions
that comprise multiple discrete modes of operation, or phases. Existing techniques require
certain levels of expertise in the model generation and calculation processes, meaning that
risk and reliability assessments of systems can often be expensive and time-consuming.
This is exacerbated as system complexity increases.
This thesis presents a novel method which generates reliability models for phased-mission
systems, based on Petri nets, from simple input files. The process has been
automated with a piece of software designed for engineers with little or no experience
in the field of risk and reliability. The software can generate models for both repairable
and non-repairable systems, allowing redundant components and maintenance cycles to be
included in the model.
Further, the software includes a simulator for the generated models. This allows a user
with simple input files to perform automatic model generation and simulation with a single
piece of software, yielding detailed failure data on components, phases, missions and the
overall system. A system can also be simulated across multiple consecutive missions. To
assess performance, the software is compared with an analytical approach and found to
match within 5% in both the repairable and non-repairable cases.
The software documented in this thesis could serve as an aid to engineers designing
new systems to validate the reliability of the system. This would not require specialist
consultants or additional software, ensuring that the analysis provides results in a timely
and cost-effective manner.
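The thesis's own simulator is Petri-net based, but the underlying Monte Carlo idea for a non-repairable phased-mission system can be sketched in a few lines (phase names, durations and failure rates below are invented for illustration):

```python
import random

# Illustrative two-phase mission: (name, duration in hours, components the
# phase requires in series). Exponential failure rates are assumed values.
phases = [("ascent", 1.0, ["engine", "guidance"]),
          ("cruise", 10.0, ["engine", "nav"])]
rates = {"engine": 1e-3, "guidance": 5e-4, "nav": 2e-4}

def simulate_mission(rng):
    """One trial: sample a lifetime per component (valid because nothing is
    repaired) and check that every component a phase needs survives to the
    end of that phase."""
    life = {c: rng.expovariate(r) for c, r in rates.items()}
    elapsed = 0.0
    for _name, duration, needed in phases:
        elapsed += duration
        if any(life[c] < elapsed for c in needed):
            return False
    return True

def mission_reliability(trials=200_000, seed=1):
    rng = random.Random(seed)
    successes = sum(simulate_mission(rng) for _ in range(trials))
    return successes / trials
```

Repairable systems need event-driven simulation rather than one lifetime draw per component, which is where the Petri-net machinery of the thesis earns its keep.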
A simulation approach to modelling quality and reliability features of plant processes
The relationship between component and system reliability is a key factor in the improvement of plant processes, and a wide variety of models have been studied under the general headings of “Probabilistic Methods”, “Graph Theoretical Methods” and “Simulation”. An outline review of these reliability models is given as a background to the work of the thesis, and the ideas were used to steer the design of the software tool which we have developed. The tool is generic in the sense that it can be used for any production system consisting of any number of parallel production lines, although we have considered its application in detail for one system only. In particular, we describe an application of reliability theory in the modelling of a plant process which incorporates examples of load-sharing, parallel and series stages, and we demonstrate how production planning control is related to reliability considerations.
The tool has been tested with reference to a real production system, for which quality and reliability features have been analysed through data collection and simulation. The production system is located in Intel’s ESSM (European Site for System Manufacturing) plant in Ireland. The plant's products are the basic components of a Pentium II processor, based on a new technology (known as MMX or Secc) which enables enhancements for multimedia and communication applications. We have also applied our software tool to the old production line (pre-dating Secc technology), both for calibration purposes and to compare the two lines. Software features include the ability to investigate line reaction to changes in quality and reliability, to pinpoint problem areas, to cost reliability failures, to explore degraded operation, to identify stages with poor quality/reliability, and to estimate the real UPH (Units Per Hour). We present an analysis of system performance and provide recommendations for possible improvements to the system.
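The series/parallel structure underlying such a line model reduces, for independent stages, to two simple combinators; the sketch below uses illustrative stage reliabilities (not Intel's data), with two redundant machines feeding one packing stage:

```python
def parallel(*rs):
    """Redundant branches: the stage works if at least one branch works."""
    q = 1.0
    for r in rs:
        q *= (1.0 - r)   # probability that every branch fails
    return 1.0 - q

def series(*rs):
    """Series stages: the line works only if every stage works."""
    out = 1.0
    for r in rs:
        out *= r
    return out

# Two parallel machines at 0.90 each, followed by a packing stage at 0.95.
r_line = series(parallel(0.90, 0.90), 0.95)
print(r_line)  # 0.99 * 0.95 = 0.9405
```

Load-sharing stages, where the failure rate of the surviving branch rises once its partner fails, break this independence assumption and are one reason the thesis turns to simulation.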
Event group importance measures for top event frequency analyses
Three traditional importance measures, risk reduction, partial derivative, and variance reduction, have been extended to permit analyses of the relative importance of groups of underlying failure rates to the frequencies of resulting top events. The partial derivative importance measure was extended by assessing the contribution of a group of events to the gradient of the top event frequency. Given the moments of the distributions that characterize the uncertainties in the underlying failure rates, the expectation values of the top event frequency, its variance, and all of the new group importance measures can be quantified exactly for two familiar cases: (1) when all underlying failure rates are presumed independent, and (2) when pairs of failure rates based on common data are treated as being equal (totally correlated). In these cases, the new importance measures, which can also be applied to assess the importance of individual events, obviate the need for Monte Carlo sampling. The event group importance measures are illustrated using a small example problem and demonstrated by applications made as part of a major reactor facility risk assessment. These illustrations and applications indicate both the utility and the versatility of the event group importance measures.
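The paper's exact measures work from distribution moments, but the core idea of a gradient-based group importance can be sketched numerically on a toy top-event model (the model and probabilities below are invented for illustration, using the rare-event form F = p1 + p2*p3):

```python
def top_freq(p):
    """Toy top-event frequency for cut sets {1} and {2,3} (rare-event form)."""
    return p[0] + p[1] * p[2]

def gradient(p, h=1e-7):
    """Finite-difference gradient of the top-event frequency."""
    g = []
    for i in range(len(p)):
        q = list(p)
        q[i] += h
        g.append((top_freq(q) - top_freq(p)) / h)
    return g

p = [0.01, 0.02, 0.005]
g = gradient(p)           # per-event (Birnbaum-style) sensitivities
# Importance of the group {2, 3}: its joint contribution to the gradient.
group_importance = (g[1] ** 2 + g[2] ** 2) ** 0.5
```

In this toy model event 1 dominates (its sensitivity is 1.0), while the grouped events 2 and 3 contribute far less; the paper's formulation additionally folds in the uncertainty moments of the failure rates, which this sketch omits.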