
    Phased mission analysis using the cause–consequence diagram method

    Most reliability analysis techniques and tools assume that the mission a system performs consists of a single phase. In many missions, however, multiple phases occur naturally. A system whose mission can be modelled as a sequence of phases is called a phased mission system. The system may have to meet different requirements for the successful completion of each phase, and system failure during any phase results in mission failure. Fault tree analysis, binary decision diagrams and Markov techniques have been used to model phased missions. The cause–consequence diagram method is an alternative technique capable of modelling all system outcomes (success and failure) in one logic diagram. [Continues.]
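A full cause–consequence or BDD solver is beyond a short example, but the phased-mission failure logic described above can be illustrated with a brute-force sketch. Everything below is hypothetical: the component names, the per-phase failure probabilities and the simple stand-in phase fault trees; components are assumed non-repairable, so a component that fails in one phase remains failed in later phases.

```python
from itertools import product

# Hypothetical two-component, two-phase mission.  q[c][k] is the probability
# that component c fails during phase k, given it entered that phase working.
q = {"pump": [0.05, 0.02], "valve": [0.01, 0.04]}

# Stand-ins for the per-phase fault trees: phase k fails if its condition holds
# over the set of components that have failed by the end of phase k.
phase_fails = [
    lambda failed: "pump" in failed,                       # phase 1 needs the pump
    lambda failed: "pump" in failed or "valve" in failed,  # phase 2 needs both
]

def scenario_prob(fail_phase):
    """Probability of the scenario that assigns each component the phase in
    which it fails (None = survives the whole mission)."""
    prob = 1.0
    for comp, k in fail_phase.items():
        probs = q[comp]
        if k is None:
            for p in probs:
                prob *= 1 - p
        else:
            for p in probs[:k]:
                prob *= 1 - p
            prob *= probs[k]
    return prob

components = list(q)
mission_unreliability = 0.0
for combo in product([None, *range(len(phase_fails))], repeat=len(components)):
    fail_phase = dict(zip(components, combo))
    for k, fails in enumerate(phase_fails):
        failed = {c for c, fk in fail_phase.items() if fk is not None and fk <= k}
        if fails(failed):                 # mission fails in the first failed phase
            mission_unreliability += scenario_prob(fail_phase)
            break

print(f"Mission unreliability ≈ {mission_unreliability:.4f}")
```

Enumerating failure scenarios this way grows exponentially with the number of components, which is why fault tree, BDD and cause–consequence techniques are used in practice.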

    A contribution to the evaluation and optimization of networks reliability

    Efficient computation of system reliability is required in many critical networks. Despite increasingly powerful computers and a proliferation of algorithms, the problem of finding good solutions quickly for large systems remains open. Efficient computation techniques have recently been recognized as significant advances towards solving the problem in a reasonable time, but they apply only to special categories of networks, and more effort is still needed to generalize them into a unified method giving exact solutions. Evaluating network reliability is a highly complex combinatorial problem that requires powerful computing resources. Several methods have been proposed in the literature; some have been implemented, notably minimal-set enumeration and factoring methods, while others have remained purely theoretical. This thesis addresses the evaluation and optimization of network reliability. Several issues are treated, including the development of a methodology for modeling networks in order to evaluate their reliability. The methodology was validated on an extended radio communication network recently deployed to cover the needs of the whole province of Quebec. Algorithms were also developed to generate the minimal paths and cuts of a given network; generating paths and cuts is an important contribution to the reliability evaluation and optimization process, and these algorithms handled several test networks, as well as the provincial radio communication network, quickly and efficiently. They were subsequently used to evaluate reliability with a method based on binary decision diagrams. Several theoretical contributions also made it possible to establish an exact solution for the reliability of imperfect stochastic networks, in which both edges and nodes are subject to failure, within the framework of factoring decomposition. From this research, several tools were implemented to evaluate and optimize network reliability, and the results clearly show a significant gain in execution time and memory usage compared with many other implementations. Key words: reliability, networks, optimization, binary decision diagrams, minimal path and cut sets, algorithms, Birnbaum importance index, radio telecommunication systems, programs.
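To make the underlying combinatorial problem concrete, the following sketch computes the exact two-terminal reliability of a small, hypothetical bridge network by complete state enumeration. The topology and edge reliabilities are invented for illustration, and the exponential cost of this approach is precisely what the minimal path/cut and BDD-based methods described above are designed to avoid.

```python
from itertools import combinations

# Hypothetical five-edge "bridge" network; edge reliabilities are illustrative.
edges = {("s", "a"): 0.9, ("s", "b"): 0.8, ("a", "b"): 0.7,
         ("a", "t"): 0.9, ("b", "t"): 0.8}

def connected(up_edges, src="s", dst="t"):
    """True if dst is reachable from src using only the working (up) edges."""
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        for x, y in up_edges:
            v = y if x == u else x if y == u else None
            if v is not None and v not in seen:
                seen.add(v)
                stack.append(v)
    return dst in seen

# Exact two-terminal reliability by enumerating all 2^|E| edge states;
# feasible only for small networks, which is why BDD/factoring methods matter.
edge_list = list(edges)
reliability = 0.0
for k in range(len(edge_list) + 1):
    for up in combinations(edge_list, k):
        p = 1.0
        for e in edge_list:
            p *= edges[e] if e in up else 1 - edges[e]
        if connected(set(up)):
            reliability += p

print(f"Two-terminal reliability ≈ {reliability:.4f}")
```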

    Method and system for dynamic probabilistic risk assessment

    The DEFT methodology, system and computer-readable medium extend the applicability of the Probabilistic Risk Assessment (PRA) methodology to computer-based systems by allowing Dynamic Fault Tree (DFT) nodes to act as pivot nodes in the Event Tree (ET) model. DEFT includes a mathematical model and solution algorithm, and supports all common PRA analysis functions, including cut sets. Additional capabilities enabled by the DFT include modularization, phased mission analysis, sequence dependencies, and imperfect coverage.
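As a rough illustration of an event tree whose pivot node is quantified elsewhere, the toy sketch below enumerates event-tree sequences and plugs in a failure probability for one pivot ("dft_subsystem") that is assumed to have been obtained separately, for example from a dynamic fault tree solver. This is not the DEFT algorithm; all names, numbers and the end-state rule are hypothetical.

```python
from itertools import product

initiating_event_freq = 1e-3                                   # per year, hypothetical
# Pivot-node failure probabilities; "dft_subsystem" stands for a value assumed
# to have been computed by a separate dynamic fault tree / Markov analysis.
pivot_failure = {"detection": 0.01, "dft_subsystem": 0.05, "mitigation": 0.02}

# Enumerate all branch combinations (sequences); as a toy end-state rule,
# call a sequence "damage" whenever the mitigation pivot fails.
damage_freq = 0.0
for outcome in product([False, True], repeat=len(pivot_failure)):
    p = initiating_event_freq
    for (name, q), failed in zip(pivot_failure.items(), outcome):
        p *= q if failed else 1 - q
    if outcome[list(pivot_failure).index("mitigation")]:
        damage_freq += p

print(f"Damage frequency ≈ {damage_freq:.2e} per year")
```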

    Efficient basic event orderings for binary decision diagrams

    Over the last five years significant advances have been made in methodologies to analyse the fault tree diagram. The most successful of these developments has been the Binary Decision Diagram (BDD) approach. The BDD approach has been shown to improve both the efficiency of determining the minimal cut sets of the fault tree and the accuracy of the calculation procedure used to determine the top event parameters. The BDD technique provides a potential alternative to the traditional approaches based on Kinetic Tree Theory. To utilise the BDD approach, the fault tree structure is first converted to the BDD format. This conversion can be accomplished efficiently but requires the basic events in the fault tree to be placed in an ordering. A poor ordering can result in a BDD that is not an efficient representation of the fault tree logic structure, so the advantages to be gained from the BDD technique depend on the efficiency of the ordering scheme. Alternative ordering schemes have been investigated, and no single scheme is appropriate for every tree structure. Research to date has not found any rule-based means of determining the best way of ordering basic events for a given fault tree structure. The work presented in this paper takes a machine learning approach based on Genetic Algorithms to select the most appropriate ordering scheme. Features which describe a fault tree structure have been identified, and these provide the inputs to the machine learning algorithm. A set of possible ordering schemes has been selected based on previous heuristic work. The objective of the work detailed in the paper is to predict the most efficient of the possible ordering alternatives from parameters that describe a fault tree structure.
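The effect of the basic event ordering can be seen even on a toy fault tree. The sketch below builds a reduced BDD for the hypothetical top event TOP = (a AND b) OR (c AND d) by Shannon decomposition under two orderings and compares node counts; it is not the genetic-algorithm selection scheme described in the paper.

```python
# Minimal sketch of how basic event ordering affects BDD size (hypothetical
# fault tree; plain Shannon decomposition with reduction and node sharing).
def top_event(vals):
    # toy top event: TOP = (a AND b) OR (c AND d)
    return (vals["a"] and vals["b"]) or (vals["c"] and vals["d"])

def bdd_size(order):
    nodes = {}                                  # unique table: (var, lo, hi) -> node
    def build(i, vals):
        if i == len(order):
            return top_event(vals)              # terminal vertex: True / False
        var = order[i]
        lo = build(i + 1, {**vals, var: False})
        hi = build(i + 1, {**vals, var: True})
        if lo == hi:                            # reduction rule: redundant test
            return lo
        return nodes.setdefault((var, lo, hi), ("node", len(nodes)))
    build(0, {})
    return len(nodes)

print(bdd_size(["a", "b", "c", "d"]))           # keeps each gate together: 4 nodes
print(bdd_size(["a", "c", "b", "d"]))           # interleaves the gates: 6 nodes
```

Keeping the basic events of each gate adjacent yields the smaller diagram here, illustrating why ordering heuristics pay attention to the structure of the fault tree.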

    An overview of decision table literature 1982-1995.

    This report gives an overview of the literature on decision tables over the past 15 years. As far as possible, an author-supplied abstract, a number of keywords and a classification are provided for each reference, and in some cases our own comments are added. The purpose of these comments is to show where, how and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily the country of publication) and the language of the document. After a description of the scope of the review, the classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstract, classification and comments.

    Model based test suite minimization using metaheuristics

    Software testing is one of the most widely used methods for quality assurance and fault detection. However, it is also one of the most expensive, tedious and time-consuming activities in the software development life cycle. Code-based and specification-based testing have been practised for almost four decades. Model-based testing (MBT) is a relatively new approach to software testing in which software models, rather than other artifacts (i.e. source code), are used as the primary source of test cases. Models are simplified representations of a software system and are cheaper to execute than the original or deployed system. The main objective of the research presented in this thesis is the development of a framework for improving the efficiency and effectiveness of test suites generated from UML models. It focuses on three activities: transformation of an Activity Diagram (AD) model into a Colored Petri Net (CPN) model, generation and evaluation of AD-based test suites, and optimization of AD-based test suites. The Unified Modeling Language (UML) is a de facto standard for software system analysis and design. UML models can be categorized into structural and behavioral models. The AD is a behavioral UML model and, since the major revision in UML 2.x, it has new Petri-net-like semantics. It has a wide application scope, including embedded, workflow and web-service systems, which is why this thesis concentrates on AD models. The informal semantics of UML in general, and of the AD in particular, is a major challenge in the development of UML-based verification and validation tools. One solution to this challenge is to transform a UML model into an executable formal model. The thesis proposes a three-step transformation methodology for resolving ambiguities in an AD model and then transforming it into a CPN representation, a well-known formal language with extensive tool support. Test case generation is one of the most critical and labor-intensive activities in the testing process. The flow-oriented semantics of the AD suit the modeling of both sequential and concurrent systems, and the thesis presents a novel technique to generate test cases from an AD using a stochastic algorithm. To determine whether the generated test suite is adequate, two test suite adequacy analysis techniques, based on structural coverage and on mutation, are proposed. For structural coverage, two separate coverage criteria evaluate the adequacy of the test suite from both the sequential and the concurrent perspectives. Mutation analysis is a fault-based technique for determining whether the test suite is adequate for detecting particular types of fault; four categories of mutation operators are defined to seed specific faults into the mutant model. Another focus of the thesis is improving test suite efficiency without compromising effectiveness. One way of achieving this is identifying and removing redundant test cases, and it has been shown that test suite minimization by removing redundant test cases is a combinatorial optimization problem. An evolutionary-computation-based test suite minimization technique is developed to address this problem, and its performance is empirically compared with other well-known heuristic algorithms. Additionally, statistical analysis is performed to characterize the fitness landscape of test suite minimization problems. The proposed test suite minimization solution is extended to include multi-objective minimization.
Because redundancy is contextual, different criteria and their combinations can significantly change the solution test suite. The last part of the thesis therefore investigates multi-objective test suite minimization and optimization algorithms. The proposed framework is demonstrated and evaluated using prototype tools and case study models. Empirical results show that the techniques developed within the framework are effective in model-based test suite generation and optimization.
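As a point of reference for the minimization problem described above, here is a minimal greedy set-cover sketch, a common baseline heuristic rather than the evolutionary algorithm developed in the thesis; the test names and coverage data are hypothetical.

```python
# Greedy test-suite minimization: repeatedly pick the test case that covers
# the most still-uncovered requirements.  All data below is illustrative.
coverage = {
    "t1": {"r1", "r2"},
    "t2": {"r2", "r3"},
    "t3": {"r1", "r3", "r4"},
    "t4": {"r4"},
}

def greedy_minimize(coverage):
    remaining = set().union(*coverage.values())   # requirements still uncovered
    selected = []
    while remaining:
        # pick the test covering the most still-uncovered requirements
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        if not coverage[best] & remaining:
            break                                  # nothing left can help
        selected.append(best)
        remaining -= coverage[best]
    return selected

print(greedy_minimize(coverage))   # ['t3', 't1'] covers r1..r4 with two tests
```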

    Analysis of safety systems with on-demand and dynamic failure modes

    An approach for the reliability analysis of systems with both on-demand and dynamic failure modes is presented. Safety systems, such as sprinkler systems and other protection systems, are characterized by such failure behavior: they have support subsystems to start the system on demand, and once running they are prone to dynamic failure. Failure on demand requires an availability analysis of the components (typically electromechanical) that are required to start or support the safety system. Once the safety system has started, it is often reasonable to assume that these support components do not fail while it is running; further, they may be tested and maintained periodically while not in active use. Dynamic failure refers to failure while running (once started) of the active components of the safety system. These active components may be fault tolerant and utilize spares or other forms of redundancy, but they are not maintainable while in use. In this paper we describe a simple yet powerful approach to combining the availability analysis of the static components with a reliability analysis of the dynamic components. The approach is explained using a hypothetical example sprinkler system, applied to a water deluge system taken from the offshore industry, and implemented in the fault tree analysis software package Galileo.
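The core idea of combining the two analyses can be sketched in a few lines: the probability that the safety system fails its mission is the probability that it fails to start on demand plus the probability that it starts but then fails while running. The numbers and the exponential running-failure model below are illustrative assumptions, not values from the paper.

```python
import math

# Back-of-the-envelope combination of on-demand unavailability with running
# (dynamic) unreliability; all values are hypothetical.
q_demand = 0.02          # probability the support components fail to start the system
lam = 1e-3               # running failure rate of the active part (per hour)
mission_time = 100.0     # hours the safety system must keep running

unreliability_running = 1 - math.exp(-lam * mission_time)
p_system_fails = q_demand + (1 - q_demand) * unreliability_running
print(f"P(safety system fails its mission) ≈ {p_system_fails:.4f}")
```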

    Building Enterprise Transition Plans Through the Development of Collapsing Design Structure Matrices

    The United States Air Force (USAF), like many other large enterprises, has evolved over time, expanded its capabilities, and developed focused yet often redundant operational silos, functions and information systems (IS). Recent failures in enterprise integration efforts herald the need for a new method that can account for the challenges presented by decades of increases in enterprise complexity, redundancy and Operations and Maintenance (O&M) costs. Product- and system-level research has dominated the study of traditional Design Structure Matrices (DSMs), with minimal coverage of enterprise-level issues. This research proposes a new method of collapsing DSMs (C-DSMs) to illustrate and mitigate the problem of enterprise IS redundancy while developing a systems integration plan. Through iterative user constraints and controls, the C-DSM method employs an algorithmic, unbiased approach that automates the creation of a systems integration plan, providing not only a roadmap for complexity reduction but also cost estimates for milestone evaluation. Inspired by a recent large IS integration program, an example C-DSM of 100 interrelated legacy systems was created. The C-DSM method indicates that if a slow path to integration is selected, cost savings are estimated to surpass integration costs after several iterations.
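As a generic illustration of the kind of structure a DSM captures (not the paper's C-DSM algorithm), the sketch below represents system-to-system dependencies as a matrix and flags pairs of systems with identical dependency patterns as candidates for collapsing; the system names and links are hypothetical.

```python
# Toy Design Structure Matrix: rows and columns are legacy systems;
# dsm[i][j] == 1 means system i depends on an interface provided by system j.
systems = ["pay", "hr", "hr_legacy", "logistics"]
dsm = [
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]

def column(matrix, k):
    return [row[k] for row in matrix]

def redundant_pairs(dsm, systems):
    """Crude redundancy signal: two systems with identical dependency rows
    and columns expose the same interfaces and are candidates for collapsing."""
    pairs = []
    for i in range(len(dsm)):
        for j in range(i + 1, len(dsm)):
            if dsm[i] == dsm[j] and column(dsm, i) == column(dsm, j):
                pairs.append((systems[i], systems[j]))
    return pairs

print(redundant_pairs(dsm, systems))   # [('hr', 'hr_legacy')] in this toy matrix
```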