    ISIPTA'07: Proceedings of the Fifth International Symposium on Imprecise Probability: Theories and Applications

    Predictive inference for system reliability after common-cause component failures

    This paper presents nonparametric predictive inference for system reliability following common-cause failures of components. It is assumed that a single failure event may lead to simultaneous failure of multiple components. Data consist of frequencies of such events involving particular numbers of components. These data are used to predict the number of components that will fail at the next failure event. The effect of failure of one or more components on the system reliability is taken into account through the system's survival signature. The predictive performance of the approach, in which uncertainty is quantified using lower and upper probabilities, is analysed with the use of ROC curves. While this approach is presented for a basic scenario of a system consisting of only a single type of component and without consideration of failure behaviour over time, it provides many opportunities for more general modelling and inference; these are briefly discussed together with the related research challenges.
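
    As a rough sketch of how the survival signature enters such a calculation: Phi(l) is the probability that the system functions given that exactly l of its m exchangeable components function, and system reliability follows by weighting Phi(l) with a distribution over l. The 2-out-of-3 structure and the probabilities below are illustrative assumptions, not the paper's NPI model or data.

```python
from itertools import combinations

def survival_signature(structure, m):
    """Phi(l): fraction of size-l subsets of the m components whose
    functioning keeps the system working (exhaustive enumeration)."""
    phi = []
    for l in range(m + 1):
        subsets = list(combinations(range(m), l))
        phi.append(sum(structure(set(s)) for s in subsets) / len(subsets))
    return phi

def system_reliability(phi, p_l):
    """P(system functions) = sum_l Phi(l) * P(exactly l components function)."""
    return sum(f * p for f, p in zip(phi, p_l))

# Illustrative 2-out-of-3 structure: the system works if at least two components work.
two_of_three = lambda working: len(working) >= 2
phi = survival_signature(two_of_three, m=3)        # [0.0, 0.0, 1.0, 1.0]

# Hypothetical distribution over the number of functioning components
# after a common-cause failure event (made-up numbers).
p_l = [0.05, 0.15, 0.30, 0.50]
print(system_reliability(phi, p_l))                # 0.8
```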

    Bayesian nonparametric system reliability using sets of priors

    An imprecise Bayesian nonparametric approach to system reliability with multiple types of components is developed. This allows modelling partial or imperfect prior knowledge on component failure distributions in a flexible way through bounds on the functioning probability. Given component-level test data, these bounds are propagated to bounds on the posterior predictive distribution for the functioning probability of a new system containing components exchangeable with those used in testing. The method further enables identification of prior-data conflict at the system level based on component-level test data. New results on first-order stochastic dominance for the Beta-Binomial distribution make the technique computationally tractable. Our methodological contributions can be immediately used in applications by reliability practitioners, as we provide easy-to-use software tools.
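
    A minimal sketch of the bound propagation for a single component type, assuming a set of Beta priors combined with Binomial test data; the test counts and the (alpha, beta) ranges are made up, and the paper's system-level propagation and prior-data conflict checks are not shown.

```python
def posterior_predictive_bounds(successes, n, alpha_range, beta_range):
    """Lower/upper posterior predictive probability that the next component
    functions, over all Beta(alpha, beta) priors in the given ranges:
    E[theta | data] = (alpha + s) / (alpha + beta + n), which is
    increasing in alpha and decreasing in beta."""
    lo_a, hi_a = alpha_range
    lo_b, hi_b = beta_range
    lower = (lo_a + successes) / (lo_a + hi_b + n)
    upper = (hi_a + successes) / (hi_a + lo_b + n)
    return lower, upper

# Hypothetical component test data: 18 of 20 tested items functioned,
# with prior parameters alpha in [1, 4] and beta in [1, 4].
print(posterior_predictive_bounds(18, 20, (1, 4), (1, 4)))   # (0.76, 0.88)
```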

    Nonparametric Predictive Inference for System Reliability

    This thesis provides a new method for statistical inference on system reliability on the basis of limited information resulting from component testing. This method is called Nonparametric Predictive Inference (NPI). We present NPI for system reliability, in particular for k-out-of-m systems and for systems consisting of multiple k_i-out-of-m_i subsystems in a series configuration. An algorithm for optimal redundancy allocation, in which additional components are added to subsystems one at a time, is presented. We also illustrate redundancy allocation for the same system in the case where the costs of additional components differ per subsystem. NPI is then presented for system reliability in a similar setting, but with all subsystems consisting of the same single type of component. As a further step in the development of NPI for system reliability, in which more general system structures can be considered, nonparametric predictive inference for the reliability of voting systems with multiple component types is presented. We start with a single voting system with multiple component types, then extend to a series configuration of voting subsystems with multiple component types. Throughout this thesis we assume information from tests of n_t components of type t.
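
    For orientation, the sketch below shows how a k-out-of-m structure turns an interval for the component functioning probability into lower and upper system reliability. It assumes independent components with a known probability interval, and is not the NPI predictive formulation developed in the thesis.

```python
from math import comb

def k_out_of_m_reliability(k, m, p):
    """P(at least k of m independent components function), each functioning
    with probability p (binomial tail sum)."""
    return sum(comb(m, j) * p**j * (1 - p)**(m - j) for j in range(k, m + 1))

def k_out_of_m_bounds(k, m, p_lower, p_upper):
    """Reliability is monotone increasing in p, so a probability interval
    for the component maps directly to lower/upper system reliability."""
    return (k_out_of_m_reliability(k, m, p_lower),
            k_out_of_m_reliability(k, m, p_upper))

# Illustrative 2-out-of-3 subsystem with component probability in [0.8, 0.9].
print(k_out_of_m_bounds(2, 3, 0.8, 0.9))   # (0.896, 0.972)
```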

    Uncertainty in Engineering

    This open access book provides an introduction to uncertainty quantification in engineering. Starting with preliminaries on Bayesian statistics and Monte Carlo methods, followed by material on imprecise probabilities, it then focuses on reliability theory and simulation methods for complex systems. The final two chapters discuss various aspects of aerospace engineering, considering stochastic model updating from an imprecise Bayesian perspective, and uncertainty quantification for aerospace flight modelling. Written by experts in the subject, and based on lectures given at the Second Training School of the European Research and Training Network UTOPIAE (Uncertainty Treatment and Optimization in Aerospace Engineering), which took place at Durham University (United Kingdom) from 2 to 6 July 2018, the book offers an essential resource for students as well as scientists and practitioners.

    An empirical learning-based validation procedure for simulation workflow

    A simulation workflow is a top-level model for the design and control of a simulation process. It connects multiple simulation components with timing and interaction restrictions to form a complete simulation system. Before the construction and evaluation of the component models, validation of the upper-layer simulation workflow is of the utmost importance in a simulation system. However, methods specifically for validating simulation workflows are very limited; many existing validation techniques are domain-dependent and rely on cumbersome questionnaire design and expert scoring. This paper therefore presents an empirical learning-based validation procedure to implement semi-automated evaluation of simulation workflows. First, representative features of general simulation workflows and their relations with validation indices are proposed. The calculation of workflow credibility based on the Analytic Hierarchy Process (AHP) is then introduced. In order to make full use of historical data and implement more efficient validation, four learning algorithms, namely back propagation neural network (BPNN), extreme learning machine (ELM), evolving neo-fuzzy neuron (eNFN) and fast incremental Gaussian mixture model (FIGMN), are introduced to construct the empirical relation between workflow credibility and its features. A case study on a landing-process simulation workflow is presented to test the feasibility of the proposed procedure. The experimental results also provide a useful overview of how state-of-the-art learning algorithms perform on the credibility evaluation of simulation models.
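
    As a sketch of the AHP step referred to above, the snippet below derives priority weights for a set of validation indices from a pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio; the comparison matrix is a made-up example, and the paper's specific workflow features, indices and learning models are not reproduced.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise comparison matrix via the
    principal eigenvector, plus the consistency ratio
    CR = (lambda_max - n) / ((n - 1) * RI)."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    random_index = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's RI
    cr = (eigvals[k].real - n) / ((n - 1) * random_index) if random_index else 0.0
    return w, cr

# Hypothetical pairwise comparisons of three workflow features (made-up values).
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
weights, cr = ahp_weights(A)
print(weights, cr)   # weights roughly [0.65, 0.23, 0.12]; CR well below 0.1
```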

    Integrating and Ranking Uncertain Scientific Data

    Mediator-based data integration systems resolve exploratory queries by joining data elements across sources. In the presence of uncertainties, such multiple expansions can quickly lead to spurious connections and incorrect results. The BioRank project investigates formalisms for modeling uncertainty during scientific data integration and for ranking uncertain query results. Our motivating application is protein function prediction. In this paper we show that: (i) explicit modeling of uncertainties as probabilities increases our ability to predict less-known or previously unknown functions, though it does not improve prediction of well-known ones. This suggests that probabilistic uncertainty models offer utility for scientific knowledge discovery; (ii) small perturbations in the input probabilities tend to produce only minor changes in the quality of our result rankings. This suggests that our methods are robust against slight variations in the way uncertainties are transformed into probabilities; and (iii) several techniques allow us to evaluate our probabilistic rankings efficiently. This suggests that probabilistic query evaluation is not as hard for real-world problems as theory indicates.
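
    One common way to turn such probabilistic integration into a ranking, sketched below, scores each candidate result by a noisy-OR over its supporting join paths. The per-link probabilities, GO identifiers and independence assumptions here are illustrative only and are not BioRank's actual formalism.

```python
from math import prod

def path_probability(link_probs):
    """Probability that a single join path is correct, assuming its
    cross-source links are independent."""
    return prod(link_probs)

def result_score(paths):
    """Probability that at least one supporting path is correct
    (noisy-OR over independent paths); used here as the ranking score."""
    return 1.0 - prod(1.0 - path_probability(p) for p in paths)

# Hypothetical candidate protein functions, each supported by one or more
# join paths with per-link probabilities (numbers made up).
candidates = {
    "GO:0006915": [[0.9, 0.8], [0.6]],   # two independent supporting paths
    "GO:0008152": [[0.7, 0.5]],          # one weaker path
}
for f in sorted(candidates, key=lambda f: result_score(candidates[f]), reverse=True):
    print(f, round(result_score(candidates[f]), 3))
# GO:0006915 0.888
# GO:0008152 0.35
```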