23 research outputs found

    Comparison of the Worst and Best Sum-of-Products Expressions for Multiple-Valued Functions

    Because most practical logic design algorithms produce irredundant sum-of-products (ISOP) expressions, understanding ISOPs is crucial. We show a class of functions for which the Morreale-Minato ISOP generation algorithm produces worst ISOPs (WSOPs), i.e., ISOPs with the most product terms. This class has the property that the ratio of the number of products in the WSOP to the number in the minimum ISOP (MSOP) is arbitrarily large when the number of variables is unbounded. The ramifications of this are significant: care must be exercised in designing algorithms that produce ISOPs. We also show that 2^(n-1) is a firm upper bound on the number of product terms in any ISOP for switching functions on n variables, answering a question that has been open for 30 years. We present experimental data and extend our results to functions of multiple-valued variables.
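The Morreale-Minato recursion referenced above can be sketched compactly on truth tables. The following is an illustrative Python sketch, not the paper's implementation: functions are sets of minterm indices, `L` is the set that must be covered, `U` the set that may be covered, and a cube is a list of `(variable, value)` literals; the helper names are hypothetical.

```python
def cube_minterms(cube, k):
    """All minterms in {0..2^k-1} matched by a cube."""
    return {m for m in range(2 ** k)
            if all((m >> v) & 1 == b for v, b in cube)}

def isop(L, U, k):
    """Return an irredundant SOP covering L within U (L a subset of U)."""
    if not L:
        return []
    if len(U) == 2 ** k:           # U is a tautology: the empty cube suffices
        return [[]]
    h = 2 ** (k - 1)               # split on the top variable x_{k-1}
    L0 = {m for m in L if m < h};  L1 = {m - h for m in L if m >= h}
    U0 = {m for m in U if m < h};  U1 = {m - h for m in U if m >= h}
    C0 = isop(L0 - U1, U0, k - 1)  # terms that need the literal x_{k-1} = 0
    C1 = isop(L1 - U0, U1, k - 1)  # terms that need the literal x_{k-1} = 1
    cov0 = set().union(*(cube_minterms(c, k - 1) for c in C0)) if C0 else set()
    cov1 = set().union(*(cube_minterms(c, k - 1) for c in C1)) if C1 else set()
    # Minterms not yet covered are handled without the split variable.
    Cs = isop((L0 - cov0) | (L1 - cov1), U0 & U1, k - 1)
    return ([[(k - 1, 0)] + c for c in C0] +
            [[(k - 1, 1)] + c for c in C1] + Cs)

# Majority of three variables, minterms {3,5,6,7}: yields x2x1 + x2x0 + x1x0.
F = {3, 5, 6, 7}
cubes = isop(F, F, 3)
```

On the majority function the recursion returns exactly the three-product MSOP; on the worst-case family studied in the paper, the same recursion produces far larger ISOPs.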

    Polynomial-time algorithms for generation of prime implicants

    A notion of a neighborhood cube of a term of a Boolean function represented in canonical disjunctive normal form is introduced. A relation between neighborhood cubes and prime implicants of a Boolean function is established. Various aspects of the problem of prime implicant generation are identified, and neighborhood cube-based algorithms for their solution are developed. The correctness of the algorithms is proven and their time complexity is analysed. It is shown that all presented algorithms are polynomial in the number of minterms occurring in the canonical disjunctive normal form representation of a Boolean function. A summary of the known approaches to prime implicant generation is also included.
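The neighborhood-cube algorithms themselves are not given in the abstract. As a baseline for comparison, the classical Quine-McCluskey combining step also generates all prime implicants from the minterms, by repeatedly merging terms that differ in exactly one bit; terms that merge nowhere are prime. A minimal sketch:

```python
def prime_implicants(minterms):
    """All prime implicants of a function given by its minterm indices.
    A term is a pair (value, mask), where set mask bits mark don't-care
    variable positions."""
    current = {(m, 0) for m in minterms}
    primes = set()
    while current:
        merged, nxt = set(), set()
        terms = sorted(current)
        for i in range(len(terms)):
            for j in range(i + 1, len(terms)):
                (v1, m1), (v2, m2) = terms[i], terms[j]
                diff = v1 ^ v2
                # Same don't-cares and exactly one differing bit: merge.
                if m1 == m2 and diff and diff & (diff - 1) == 0:
                    nxt.add((v1 & ~diff, m1 | diff))
                    merged.add(terms[i]); merged.add(terms[j])
        primes |= current - merged     # unmerged terms are prime
        current = nxt
    return primes

# Majority of three variables: the primes are x1x0, x2x0, x2x1.
majority = prime_implicants({3, 5, 6, 7})
```

This pairwise scan is exponential in the worst case; the point of the paper is that minterm-based generation can instead be done in time polynomial in the number of minterms.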

    Automatic Generation of Minimal Cut Sets

    A cut set is a collection of component failure modes that could lead to a system failure. Cut Set Analysis (CSA) is applied to critical systems to identify and rank system vulnerabilities at design time. Model checking tools have been used to automate the generation of minimal cut sets but are generally based on checking reachability of system failure states. This paper describes a new approach to CSA using a Linear Temporal Logic (LTL) model checker called BT Analyser that supports the generation of multiple counterexamples. The approach enables a broader class of system failures to be analysed, by generalising from failure state formulae to failure behaviours expressed in LTL. The traditional approach to CSA using model checking requires the model or system failure to be modified, usually by hand, to eliminate already-discovered cut sets, and the model checker to be rerun at each step. By contrast, the new approach works incrementally and fully automatically, removing a tedious and error-prone manual process and significantly reducing computation time, which in turn enables larger models to be checked. Two different strategies for using BT Analyser for CSA are presented. There is generally no single best strategy for model checking: the strategies' relative efficiency depends on the model and property being analysed. Comparative results are given for the A320 hydraulics case study in the Behavior Tree modelling language.
    Comment: In Proceedings ESSS 2015, arXiv:1506.0325
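The LTL machinery is beyond an abstract, but the underlying incremental idea, find a cut set, then block all of its supersets from further consideration, can be sketched on a plain monotone failure predicate. This toy sketch stands in for the counterexample-blocking loop; the names and the example system are hypothetical.

```python
from itertools import combinations

def minimal_cut_sets(system_fails, components):
    """Enumerate minimal cut sets of a monotone failure predicate by
    increasing size, skipping ("blocking") any superset of a cut set
    already found -- the role played by eliminating already-discovered
    cut sets in the model-checking approach."""
    found = []
    for size in range(1, len(components) + 1):
        for combo in combinations(components, size):
            candidate = frozenset(combo)
            if any(cut <= candidate for cut in found):
                continue               # blocked: cannot be minimal
            if system_fails(candidate):
                found.append(candidate)
    return found

# Toy system: top event occurs if A fails, or both B and C fail.
fails = lambda s: "A" in s or {"B", "C"} <= s
cuts = minimal_cut_sets(fails, ["A", "B", "C"])
```

Because candidates are visited in order of increasing size, every set that passes the blocking test and fails the system is minimal by construction.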

    Determination of prime implicants by differential evolution for the dynamic reliability analysis of non-coherent nuclear systems

    We present an original computational method for the identification of prime implicants (PIs) in non-coherent structure functions of dynamic systems. This is a relevant problem for dynamic reliability analysis, where dynamic effects render the traditional methods of minimal cut-set identification inadequate. PI identification is here transformed into an optimization problem in which we look for the minimum combination of implicants that guarantees the best coverage of all the minterms. For testing the method, an artificial case study has been implemented, regarding a system composed of five components that fail at random times with random magnitudes. The system undergoes a failure if, during an accidental scenario, a safety-relevant monitored signal rises above an upper threshold or falls below a lower threshold. Truth tables of the two system end-states are used to identify all the minterms. Then, the PIs that best cover all minterms are found by Modified Binary Differential Evolution. Results and performances of the proposed method have been compared with those of a traditional analytical approach known as the Quine-McCluskey algorithm, and with other evolutionary algorithms such as the Genetic Algorithm and Binary Differential Evolution. The capability of the method is confirmed on a dynamic Steam Generator of a Nuclear Power Plant.
    Di Maio, Francesco; Baronchelli, Samuele; Vagnoli, Matteo; Zio, Enrico
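The Modified Binary Differential Evolution operators are not reproduced in the abstract. To illustrate how PI selection becomes an optimization problem, here is a toy bit-vector local search with the fitness idea such methods share: cover all minterms first, then minimize the number of selected implicants. This is a sketch, not the paper's algorithm.

```python
def search_min_cover(primes, minterms):
    """Bit-vector local search for a small prime-implicant cover.
    Fitness ranks feasibility (uncovered minterms) before cover size,
    mirroring the objective used with evolutionary bit-vector searches."""
    n = len(primes)

    def fitness(bits):
        covered = set()
        for i in range(n):
            if bits[i]:
                covered |= primes[i]
        # Lexicographic: any uncovered minterm outweighs cover size.
        return len(minterms - covered) * (n + 1) + sum(bits)

    best = [1] * n                     # start from the full prime set
    improved = True
    while improved:
        improved = False
        for i in range(n):
            cand = best[:]
            cand[i] ^= 1               # flip one selection bit
            if fitness(cand) < fitness(best):
                best, improved = cand, True
                break                  # rescan from the improved vector
    return [primes[i] for i in range(n) if best[i]]

# Prime implicants (as minterm sets) of f = sum m(0,1,2,5,6,7) on 3 variables;
# the minimum cover uses 3 of the 6 primes.
primes = [{0, 1}, {1, 5}, {5, 7}, {6, 7}, {2, 6}, {0, 2}]
cover = search_min_cover(primes, {0, 1, 2, 5, 6, 7})
```

A population-based method such as differential evolution explores many such bit vectors in parallel, which matters once local search alone would get trapped in an irredundant but non-minimum cover.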

    Risk importance measures and common cause failures in dynamic flowgraph modelling

    Traditionally, fault tree analysis has been the leading method for reliability analysis of complex systems. However, dynamic properties of systems cannot always be described with adequate accuracy using fault trees. Dynamic reliability analysis has been studied widely since the 1990s. Some dynamic calculation tools have been developed, but they cannot yet compete with fault tree analysis tools in the reliability analysis of nuclear power plants. Dynamic flowgraph modelling (DFM) is an approach for reliability analysis of dynamic systems. DFM models are directed graphs whose nodes can take a finite number of states, and a system's dynamics are described by discrete state transitions. As in fault tree analysis, the essential goal of dynamic flowgraph modelling is to identify the root causes that lead to a system failure. VTT has been developing a DFM-based reliability analysis tool called YADRAT since 2009. DFM models have previously been analysed by transforming them into sets of timed fault trees from which the root causes of the system failure can be identified. In YADRAT, the model that describes a system is instead transformed into a binary decision diagram.
    Risk importance measures and common cause failures are a significant part of reliability theory and fault tree analysis, but they have not been studied much in relation to dynamic flowgraph modelling. Risk importance measures quantify how important different components are with regard to the system's reliability. In this thesis, dynamic risk importance measures based on two traditional risk importance measures are formulated so that they take the multi-valued and dynamic logic of DFM models into account. Dynamic risk importance measures can be calculated separately for different failure states of components, providing more detailed information on how different components contribute to the system's failure. In addition, dynamic generalisations are developed for traditional parametric common cause failure models; these models account for the possibility that failure events occur at different time points as a consequence of a shared cause. The dynamic risk importance measures and common cause failure models are implemented in YADRAT.
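The abstract does not name the two traditional risk importance measures that the dynamic ones generalise. The classical Birnbaum measure is the standard starting point for such generalisations, so, purely as an illustration of what a risk importance measure computes, here is a sketch of it for a static, binary-state fault tree (an assumption; the thesis's measures are multi-state and dynamic).

```python
from itertools import product

def birnbaum(phi, probs, i):
    """Classical Birnbaum importance of component i:
    E[phi(X with X_i failed) - phi(X with X_i working)], where phi is the
    structure function (1 = system failed) and probs holds independent
    component failure probabilities."""
    n = len(probs)
    total = 0.0
    for states in product((0, 1), repeat=n):
        if states[i]:
            continue                   # component i is conditioned on below
        weight = 1.0
        for j, s in enumerate(states):
            if j != i:
                weight *= probs[j] if s else 1.0 - probs[j]
        failed = states[:i] + (1,) + states[i + 1:]
        total += weight * (phi(failed) - phi(states))
    return total

# Toy fault tree: top event = A fails, or both B and C fail.
phi = lambda s: int(s[0] or (s[1] and s[2]))
probs = [0.1, 0.2, 0.3]
```

Here `birnbaum(phi, probs, 0)` is large because component A alone triggers the top event; a dynamic, multi-state generalisation would additionally distinguish failure modes and failure times of each component.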

    Binary decision diagrams for fault tree analysis

    This thesis develops a new approach to fault tree analysis, namely the Binary Decision Diagram (BDD) method. Conventional qualitative fault tree analysis techniques such as the "top-down" or "bottom-up" approaches are now so well developed that further refinement is unlikely to yield vast improvements in computational capability. The BDD method has exhibited potential gains in speed and efficiency in determining the minimal cut sets, and the binary decision diagram is by its nature better suited to Boolean manipulation. The BDD method has been programmed and successfully applied to a number of benchmark fault trees. The analysis capabilities of the technique have been extended such that all quantitative fault tree top event parameters that can be determined by conventional Kinetic Tree Theory can now be derived directly from the BDD. Parameters such as the top event probability, frequency of occurrence and expected number of occurrences can be calculated exactly using this method, removing the need for the approximations previously required. The BDD method thus offers advantages in both accuracy and efficiency. Initiator/enabler event analysis and importance measures have been incorporated to extend this method into a full analysis procedure.
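The exact top-event probability a BDD delivers comes from Shannon expansion: at each variable, weight the two cofactors by the component's failure and survival probabilities. The sketch below performs that computation directly on a gate tree, without the node sharing that makes a real BDD efficient; the tree encoding and names are illustrative, not the thesis's implementation.

```python
def restrict(t, var, val):
    """Substitute var := val in a gate tree of the form
    'X' | ('and', ...) | ('or', ...), simplifying constants away."""
    if isinstance(t, bool):
        return t
    if isinstance(t, str):
        return val if t == var else t
    op, *args = t
    rs = [restrict(a, var, val) for a in args]
    absorb, identity = (False, True) if op == "and" else (True, False)
    if absorb in rs:                   # and-with-False / or-with-True
        return absorb
    rs = [r for r in rs if r is not identity]
    if not rs:
        return identity
    return rs[0] if len(rs) == 1 else (op, *rs)

def top_probability(t, probs, order):
    """Exact top-event probability by Shannon expansion along `order`
    (the computation a BDD evaluates, here without node sharing)."""
    if isinstance(t, bool):
        return 1.0 if t else 0.0
    x, rest = order[0], order[1:]
    return (probs[x] * top_probability(restrict(t, x, True), probs, rest)
            + (1 - probs[x]) * top_probability(restrict(t, x, False), probs, rest))

# Toy fault tree: TOP = A or (B and C), with independent basic events.
tree = ("or", "A", ("and", "B", "C"))
p = top_probability(tree, {"A": 0.1, "B": 0.2, "C": 0.3}, ["A", "B", "C"])
```

Because the decomposition is exact, no rare-event or min-cut-upper-bound approximation is needed, which is precisely the quantitative advantage the thesis claims for the BDD method.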

    Acta Cybernetica: Tomus 3, Fasciculus 4.
