124 research outputs found

    Resilience Analysis of IMS-Based Networks

    Nonparametric estimation of first passage time distributions in flowgraph models

    Statistical flowgraphs represent multistate semi-Markov processes using integral transforms of transition time distributions between adjacent states; these are combined algebraically and inverted to derive parametric estimates for first passage time distributions between nonadjacent states. This dissertation extends previous work in the field by developing estimation methods for flowgraphs using empirical transforms based on sample data, with no assumption of specific parametric probability models for transition times. We prove strong convergence of empirical flowgraph results to the exact parametric results; develop alternatives for numerical inversion of empirical transforms and compare them in terms of computational complexity, accuracy, and ability to determine error bounds; discuss (with examples) the difficulties of determining confidence bands for distribution estimates obtained in this way; develop confidence intervals for moment-based quantities such as the mean; and show how methods based on empirical transforms can be modified to accommodate censored data. Several applications of the nonparametric method, based on reliability and survival data, are presented in detail.
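
    The transform algebra at the core of this method can be illustrated in a few lines. Below is a minimal sketch, assuming complete (uncensored) i.i.d. samples of two transition times arranged in series; the sampling distributions, sample sizes, and function names are illustrative assumptions, not the dissertation's method.

```python
import numpy as np

rng = np.random.default_rng(0)
t01 = rng.exponential(2.0, size=500)      # sampled holding times, state 0 -> 1
t12 = rng.lognormal(0.5, 0.4, size=500)   # sampled holding times, state 1 -> 2

def empirical_laplace(samples):
    """Empirical Laplace transform: L(s) = (1/n) * sum(exp(-s * t_i))."""
    return lambda s: np.mean(np.exp(-s * samples))

L01 = empirical_laplace(t01)
L12 = empirical_laplace(t12)

def L02(s):
    # Transitions in series multiply in the transform domain, so this is the
    # empirical transform of the 0 -> 2 first passage time.
    return L01(s) * L12(s)

# Moments come from derivatives at s = 0: E[T] = -L'(0). A central difference
# recovers the mean first passage time with no parametric model assumed.
h = 1e-5
mean_fpt = -(L02(h) - L02(-h)) / (2 * h)
print(f"empirical-transform mean first passage time: {mean_fpt:.3f}")
print(f"sum of sample means (sanity check):          {t01.mean() + t12.mean():.3f}")
```

    Recovering the full first passage distribution would require numerically inverting L02, which is where the inversion alternatives and error-bound questions discussed in the abstract come in.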

    Simulation and Economic Analysis of Coal Based Thermal Power Plant: A Critical Literature Review

    ABSTRACT: A coal-fired power plant is a very complex unit. Today, electric energy plays an important role in the industria


    Methodology for Assessing Reliability Growth Using Multiple Information Sources

    The research presented here examines the assessment of the reliability of a system or product utilizing multiple data sources available throughout the different stages of its development. The assessment of reliability as it changes throughout the development of a system is traditionally referred to as reliability growth, which refers to the discovery and mitigation of failure modes within the system, thereby improving the underlying reliability. Traditional models for assessing reliability growth work with test data from individual test events to assess the system reliability at the current stage of development. These models track or project the reliability of the system as it matures, subject to the specific assumptions of the models. The contributions of this research are as follows. A new Bayesian reliability growth assessment technique is introduced for continuous-use systems under general corrective action strategies. The technique differs from those currently in the literature in that it allows arbitrary times for corrective actions. It also provides a probabilistic treatment of the various parameters within the model, accounting for the uncertainty present in the assessment. The Bayesian reliability growth assessment model is then extended to include results from operational testing. The approach uses the posterior distribution from the reliability growth assessment as the prior for the operational reliability assessment. The developmental and operational testing environments are not assumed a priori to be equivalent, and the change in environments is accounted for in a probabilistic manner within the model. A Bayesian reliability growth planning model is also presented that takes advantage of the reduced uncertainty in the combined operational assessment. The approach allows for reductions in the amount of demonstration testing necessary for a given level of uncertainty in the assessment, and it can also be used to reduce the high design goals that often result from traditional operating-characteristic-curve applications. The final part of this research involves combining various sources of reliability information to obtain prior distributions on the system reliability. The approach presents a general framework for utilizing information such as component/subsystem testing, historical component reliability data, and physics-based modeling of specific component failure mechanisms.
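
    As a concrete illustration of the posterior-as-prior idea, here is a minimal sketch using a conjugate Beta-Binomial model for a discrete (success/failure) system. The test counts, the flat Beta(1, 1) starting prior, and the omission of the environment-change adjustment are illustrative assumptions; the thesis's models for continuous-use systems under general corrective action strategies are considerably richer.

```python
from scipy import stats

# Developmental (growth) testing: pass/fail results accumulated across the
# test program; Beta(1, 1) is a flat starting prior (an assumption here).
dev_success, dev_fail = 45, 5
a_dev, b_dev = 1 + dev_success, 1 + dev_fail       # developmental posterior

# Operational testing: the developmental posterior serves as the prior, so
# evidence from growth testing carries into the operational assessment.
op_success, op_fail = 18, 2
a_op, b_op = a_dev + op_success, b_dev + op_fail   # combined posterior

posterior = stats.beta(a_op, b_op)
print(f"posterior mean reliability: {posterior.mean():.3f}")
print(f"80% credible interval: ({posterior.ppf(0.10):.3f}, {posterior.ppf(0.90):.3f})")
```

    Because the operational prior already carries the developmental evidence, the same credible-interval width is reached with fewer operational trials, which is the planning advantage the abstract describes.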

    Towards Accurate Estimation of Error Sensitivity in Computer Systems

    Fault injection is an increasingly important method for assessing, measuring and observing the system-level impact of hardware and software faults in computer systems. This thesis presents the results of a series of experimental studies in which fault injection was used to investigate the impact of bit-flip errors on program execution. The studies were motivated by the fact that transient hardware faults in microprocessors can cause bit-flip errors that propagate to the microprocessor's instruction set architecture (ISA) registers and main memory. As the rate of such hardware faults is expected to increase with technology scaling, there is a need to better understand how these errors (known as ‘soft errors’) influence program execution, especially in safety-critical systems. Using ISA-level fault injection, we investigate how five aspects, or factors, influence the error sensitivity of a program. We define error sensitivity as the conditional probability that a bit-flip error in live data in an ISA register or main-memory word will cause the program to produce silent data corruption (SDC; i.e., an erroneous result). We also consider the estimation of a measure called SDC count, which represents the number of ISA-level bit flips that cause an SDC. The five factors addressed are (a) the inputs processed by a program, (b) the level of compiler optimization, (c) the implementation of the program in the source code, (d) the fault model (single bit flips vs. double bit flips), and (e) the fault-injection technique (inject-on-write vs. inject-on-read). Our results show that these factors affect the error sensitivity in many ways; some factors strongly impact the error sensitivity or SDC count, whereas others show a weaker impact. For example, our experiments show that single bit flips tend to cause SDCs more often than double bit flips; compiler optimization positively impacts the SDC count but not necessarily the error sensitivity; the error sensitivity varies between 20% and 50% among the programs we tested; and variations in input affect the error sensitivity significantly for most of the tested programs.
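
    The error-sensitivity measure defined above can be illustrated with a toy fault-injection campaign. The sketch below flips one bit in a live data word of a small program (a maximum-of-words routine, chosen so that some flips are masked) and classifies each run as benign or SDC. The target program, word count, and trial count are illustrative assumptions, not the thesis's experimental setup, which injects into ISA registers and memory during execution.

```python
import random

def target(words):
    """Toy target program: report the largest input word."""
    return max(words)

def inject_bit_flip(words, word_idx, bit_idx):
    """Return a copy of the input with one bit flipped in live data."""
    faulty = list(words)
    faulty[word_idx] ^= 1 << bit_idx
    return faulty

random.seed(0)
data = [random.getrandbits(32) for _ in range(64)]
golden = target(data)  # fault-free reference result

trials, sdc = 10_000, 0
for _ in range(trials):
    w = random.randrange(len(data))
    b = random.randrange(32)
    if target(inject_bit_flip(data, w, b)) != golden:
        sdc += 1  # silent data corruption: wrong result, no crash

print(f"SDC count: {sdc}")
print(f"error sensitivity: {sdc / trials:.1%}")  # P(SDC | flip in live data)
```

    Dividing the SDC count by the number of injections gives the error-sensitivity estimate; in the studies above, that estimate is then conditioned on factors (a) through (e).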

    Unavailability assessment of redundant safety instrumented systems subject to process demand

    Sriramula’s work within the Lloyd’s Register Foundation Centre for Safety and Reliability Engineering at the University of Aberdeen is supported by Lloyd’s Register Foundation. The Foundation helps to protect life and property by supporting engineering-related education, public engagement and the application of research. Peer reviewed. Postprint.

    Machine and component residual life estimation through the application of neural networks

    Analysis of reliability data plays an important role in the maintenance decision making process. The accurate estimation of residual life in components and systems can be a great asset when planning the preventive replacement of components on machines. Artificial intelligence is a field that has rapidly developed over the last twenty years, and practical applications have been found in many diverse areas. The use of such methods in the maintenance field has, however, not yet been fully explored. With the common availability of condition monitoring data, another dimension has been added to the analysis of reliability data. Neural networks allow for explanatory variables to be incorporated into the analysis process. This is expected to improve the quality of predictions when compared to the results achieved through the use of methods that rely solely on failure time data. Neural networks can therefore be seen as an alternative to the various regression models, such as the proportional hazards model, which also incorporate such covariates into the analysis. For the purpose of investigating their applicability to the problem of predicting the residual life of machines and components, neural networks were trained and tested with the data of two different reliability related datasets. The first dataset represents the renewal case, where repair leads to complete restoration of the system. A typical maintenance situation was simulated in the laboratory by subjecting a series of similar test pieces to different loading conditions. Measurements were taken at regular intervals during testing with a number of sensors which provided an indication of the test piece’s condition at the time of measurement. The dataset was split into a training set and a test set, and a number of neural network variations were trained using the first set. The networks’ ability to generalize was then tested by presenting the data from the test set to each of these networks. The second dataset contained data collected from a group of pumps working in a coal mining environment, and therefore represents an example of the situation encountered with a repaired system. The performance of different neural network variations was subsequently compared through the use of cross-validation. It was shown that in most cases the use of condition monitoring data as network inputs improved the accuracy of the neural networks’ predictions. The average prediction error of the various neural networks under comparison varied between 431 and 841 seconds on the renewal dataset, where test pieces had a characteristic life of 8971 seconds. When optimized, the multi-layer perceptron neural networks trained with the Levenberg-Marquardt algorithm and the general regression neural network produced sums of squares errors within 11.1% of each other for the data of the repaired system. This result emphasizes the importance of adjusting parameters, network architecture and training targets for optimal performance. The advantage of using neural networks for predicting residual life was clearly illustrated when comparing their performance to the results achieved through the use of traditional statistical methods. The potential of using neural networks for residual life prediction was therefore illustrated in both cases. Dissertation (MEng (Mechanical Engineering))--University of Pretoria, 2007. Mechanical and Aeronautical Engineering.
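
    To make the covariate idea concrete, here is a minimal sketch of residual-life regression with a multi-layer perceptron on synthetic degradation data. The data-generating process, network sizes, and the use of scikit-learn are illustrative assumptions; the dissertation trains its networks with, among others, the Levenberg-Marquardt algorithm and compares against a general regression neural network.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 600
true_life = 9000 * rng.weibull(3.0, n)                # failure time per unit
age = rng.uniform(0.1, 0.9, n) * true_life            # time of measurement
condition = age / true_life + rng.normal(0, 0.05, n)  # monitored degradation
residual = true_life - age                            # target: remaining life

X = np.column_stack([age, condition])
X_tr, X_te, y_tr, y_te = train_test_split(X, residual, random_state=0)

# Network given both failure-time information (age) and the condition signal.
net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
net.fit(X_tr, y_tr)

# Baseline without the covariate, to show what condition monitoring adds.
base = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
base.fit(X_tr[:, :1], y_tr)

print(f"MAE, age + condition: {mean_absolute_error(y_te, net.predict(X_te)):.0f} s")
print(f"MAE, age only:        {mean_absolute_error(y_te, base.predict(X_te[:, :1])):.0f} s")
```

    The condition signal carries information about each unit's individual degradation rate that operating age alone cannot, which is why the covariate-fed network should predict with a lower error.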

    A contribution to the evaluation and optimization of networks reliability

    Assessing network reliability is a very complex combinatorial problem that requires powerful computing resources. Despite the increased efficiency of computers and the proliferation of algorithms, the problem of finding good solutions quickly for large systems remains open. Several methods have been proposed in the literature: some have been implemented, notably minimal-set enumeration and factoring methods, while others have remained purely theoretical. Recently, efficient computation techniques have been recognized as significant advances toward solving the problem in a reasonable amount of time, but they apply only to special categories of networks, and more effort is still needed to arrive at a unified method giving exact solutions. This thesis addresses the evaluation and optimization of network reliability. Several issues are discussed, notably the development of a methodology for modeling networks with a view to evaluating their reliability. This methodology was validated on an extended radio-communication network recently deployed to cover the needs of the entire province of Quebec. Several algorithms were also developed to generate the minimal paths and minimal cuts of a given network; the generation of paths and cuts is an important contribution to the process of reliability evaluation and optimization. These algorithms handled several test networks, as well as the provincial radio-communication network, quickly and efficiently. They were subsequently used to evaluate reliability with a method based on binary decision diagrams. Several theoretical contributions also helped to establish an exact solution for the reliability of imperfect stochastic networks, in which both edges and nodes are subject to failure, within the framework of factoring methods. From this research, several tools were implemented to evaluate and optimize network reliability, and the results clearly show a significant gain in execution time and memory usage compared with many other implementations. Key words: reliability, networks, optimization, binary decision diagrams, minimal path and cut sets, algorithms, Birnbaum importance index, radio-telecommunication systems, programs.
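
    As a concrete illustration of the path-set machinery, the sketch below computes the two-terminal reliability of the classic five-edge bridge network by inclusion-exclusion over its minimal path sets; the edge labels and the uniform 0.9 edge reliability are illustrative assumptions. The BDD-based and factoring methods developed in the thesis scale far better than this brute-force union probability, which is exponential in the number of path sets.

```python
from itertools import combinations

# Minimal path sets of the bridge network between source and sink:
# a/b are the top edges, c/d the bottom edges, e the bridge.
paths = [{"a", "b"}, {"c", "d"}, {"a", "e", "d"}, {"c", "e", "b"}]
rel = {e: 0.9 for e in "abcde"}   # independent edge reliabilities

def prob_all_work(edges):
    """Probability that every edge in the set works (independence assumed)."""
    p = 1.0
    for e in edges:
        p *= rel[e]
    return p

# Inclusion-exclusion over unions of path sets: P(at least one path works).
reliability = 0.0
for k in range(1, len(paths) + 1):
    for combo in combinations(paths, k):
        union = set().union(*combo)
        reliability += (-1) ** (k + 1) * prob_all_work(union)

print(f"two-terminal reliability: {reliability:.5f}")
```

    For uniform edge reliability p, this reproduces the textbook bridge formula 2p^2 + 2p^3 - 5p^4 + 2p^5 (about 0.97848 at p = 0.9), which is a useful sanity check for any path-set implementation.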
    • 

    corecore