
    Bayesian Updating, Model Class Selection and Robust Stochastic Predictions of Structural Response

    A fundamental issue when predicting structural response by using mathematical models is how to treat both modeling and excitation uncertainty. A general framework for this is presented which uses probability as a multi-valued conditional logic for quantitative plausible reasoning in the presence of uncertainty due to incomplete information. The fundamental probability models that represent the structure's uncertain behavior are specified by the choice of a stochastic system model class: a set of input-output probability models for the structure and a prior probability distribution over this set that quantifies the relative plausibility of each model. A model class can be constructed from a parameterized deterministic structural model by stochastic embedding utilizing Jaynes' Principle of Maximum Information Entropy. Robust predictive analyses use the entire model class, with the probabilistic predictions of each model weighted by its prior probability or, if structural response data are available, by its posterior probability from Bayes' Theorem for the model class. Additional robustness to modeling uncertainty comes from combining the robust predictions of each model class in a set of competing candidates, weighted by the prior or posterior probability of the model class, the latter being computed from Bayes' Theorem. This higher-level application of Bayes' Theorem automatically applies a quantitative Ockham's razor that penalizes the data-fit of more complex model classes that extract more information from the data. Robust predictive analyses involve integrals over high-dimensional spaces that usually must be evaluated numerically. Published applications have used Laplace's method of asymptotic approximation or Markov Chain Monte Carlo algorithms.
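
    As a concrete illustration of the higher-level application of Bayes' Theorem described above, the following sketch computes posterior probabilities for a set of competing model classes from their log evidences and prior probabilities; these are the weights used in a posterior-robust prediction. The model classes, priors, and log-evidence values are hypothetical and not taken from the paper.

```python
import numpy as np

# Hypothetical log evidences log p(D | M_j) for three competing model classes,
# e.g. obtained from Laplace's asymptotic approximation or MCMC estimates.
log_evidence = np.array([-120.3, -118.9, -119.6])

# Prior probabilities P(M_j) over the candidate set (uniform here).
prior = np.full(3, 1.0 / 3.0)

# Bayes' Theorem at the model-class level: P(M_j | D) is proportional to
# p(D | M_j) P(M_j). Work in log space and subtract the maximum for stability.
log_post = log_evidence + np.log(prior)
log_post -= log_post.max()
posterior = np.exp(log_post)
posterior /= posterior.sum()

print(posterior)  # posterior weights of the model classes in robust predictions
```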

    Disentangling causal webs in the brain using functional Magnetic Resonance Imaging: A review of current approaches

    In the past two decades, functional Magnetic Resonance Imaging has been used to relate neuronal network activity to cognitive processing and behaviour. Recently, this approach has been augmented by algorithms that allow us to infer causal links between component populations of neuronal networks. Multiple inference procedures have been proposed to approach this research question, but so far each method has limitations when it comes to establishing whole-brain connectivity patterns. In this work, we discuss eight ways to infer causality in fMRI research: Bayesian Nets, Dynamical Causal Modelling, Granger Causality, Likelihood Ratios, LiNGAM, Patel's Tau, Structural Equation Modelling, and Transfer Entropy. We conclude with recommendations for future directions in this area.
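
    As a minimal sketch of one of the eight approaches listed above, the snippet below runs a pairwise Granger-causality test on two synthetic time series using statsmodels; the signals, lag order, and effect size are invented, and whole-brain analyses in the reviewed literature are considerably more involved.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500

# Synthetic example: signal x drives signal y with a one-sample lag.
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * x[t - 1] + 0.3 * y[t - 1] + rng.normal(scale=0.5)

# Test whether x Granger-causes y; column order is (effect, putative cause).
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=2)

# p-value of the F-test at lag 1 (small values support a causal influence).
p_lag1 = results[1][0]["ssr_ftest"][1]
print(p_lag1)
```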

    A literature review on the use of expert opinion in probabilistic risk analysis

    Risk assessment is part of the decision making process in many fields of discipline, such as engineering, public health, environment, program management, regulatory policy, and finance. There has been considerable debate over the philosophical and methodological treatment of risk in the past few decades, ranging from its definition and classification to methods of its assessment. Probabilistic risk analysis (PRA) specifically deals with events represented by low probabilities of occurring with high levels of unfavorable consequences. Expert judgment is often a critical source of information in PRA, since empirical data on the variables of interest are rarely available. The author reviews the literature on the use of expert opinion in PRA, in particular on the approaches to eliciting and aggregating experts' assessments. The literature suggests that the methods by which expert opinions are collected and combined have a significant effect on the resulting estimates. The author discusses two types of approaches to eliciting and aggregating expert judgments, behavioral and mathematical, with the emphasis on the latter. It is generally agreed that mathematical approaches tend to yield more accurate estimates than behavioral approaches. After a short description of behavioral approaches, the author discusses mathematical approaches in detail, presenting three aggregation models: non-Bayesian axiomatic models, Bayesian models, and psychological scaling models. She also discusses issues of stochastic dependence.
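
    To make the mathematical aggregation idea concrete, the following sketch implements a linear opinion pool, one of the simplest axiomatic (non-Bayesian) aggregation rules: the combined distribution is a weighted average of the experts' probability assessments. The experts, scenarios, and weights are invented for illustration and are not taken from the review.

```python
import numpy as np

# Each row: one expert's probability distribution over three mutually
# exclusive failure scenarios (hypothetical elicitation results).
expert_probs = np.array([
    [0.70, 0.20, 0.10],
    [0.55, 0.30, 0.15],
    [0.80, 0.15, 0.05],
])

# Weights reflecting the analyst's confidence in each expert (sum to one);
# equal weights are a common default when no calibration data are available.
weights = np.array([0.5, 0.3, 0.2])

# Linear opinion pool: convex combination of the expert distributions.
pooled = weights @ expert_probs
print(pooled)  # aggregated probabilities for the three scenarios
```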

    OpenTURNS: An industrial software for uncertainty quantification in simulation

    The need to assess robust performance for complex systems and to meet tighter regulatory processes (security, safety, environmental control, health impacts, etc.) has led to the emergence of a new industrial simulation challenge: taking uncertainties into account when dealing with complex numerical simulation frameworks. A generic methodology has therefore emerged from the joint effort of several industrial companies and academic institutions. EDF R&D, Airbus Group and Phimeca Engineering started a collaboration at the beginning of 2005, joined by IMACS in 2014, to develop an open source software platform dedicated to uncertainty propagation by probabilistic methods, named OpenTURNS for Open source Treatment of Uncertainty, Risk 'N Statistics. OpenTURNS addresses the specific industrial challenges attached to uncertainties, which are transparency, genericity, modularity and multi-accessibility. This paper focuses on OpenTURNS and presents its main features: OpenTURNS is an open source software under the LGPL license that presents itself as a C++ library and a Python TUI, and works under Linux and Windows environments. All the methodological tools are described in the different sections of this paper: uncertainty quantification, uncertainty propagation, sensitivity analysis and metamodeling. A section also explains the generic wrapper mechanism used to link OpenTURNS to any external code. The paper illustrates the methodological tools as much as possible on an educational example that simulates the height of a river and compares it to the height of a dyke that protects industrial facilities. Finally, it gives an overview of the main developments planned for the next few years.
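
    The sketch below shows the basic workflow the abstract refers to, assuming the openturns Python package is installed: define input distributions, wrap a simulation code as a function, and propagate uncertainty by Monte Carlo sampling. The toy river-height model and the distribution parameters are illustrative stand-ins, not the exact flood example of the paper.

```python
import openturns as ot

# Uncertainty quantification: probabilistic model of the uncertain inputs
# (illustrative choices: flow rate Q and Strickler friction coefficient Ks).
Q = ot.Gumbel(558.0, 1013.0)     # flow rate [m3/s], toy parameters
Ks = ot.Normal(30.0, 7.5)        # friction coefficient
inputs = ot.ComposedDistribution([Q, Ks])

# Wrap the "simulation code" (here a closed-form stand-in) as a Function.
def river_height(x):
    q, ks = x
    q = max(q, 0.0)              # guard against unphysical negative samples
    return [(q / (max(ks, 1.0) * 300.0 * 0.0005 ** 0.5)) ** 0.6]

model = ot.PythonFunction(2, 1, river_height)

# Uncertainty propagation by plain Monte Carlo sampling.
input_sample = inputs.getSample(10000)
output_sample = model(input_sample)

print("mean height:", output_sample.computeMean())
print("std of height:", output_sample.computeStandardDeviation())
# Empirical probability that the river overtops a 6 m dyke.
print("P(H > 6):", 1.0 - output_sample.computeEmpiricalCDF([6.0]))
```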

    Bayesian inference of nanoparticle-broadened x-ray line profiles

    A single and self-contained method for determining the crystallite-size distribution and shape from experimental x-ray line profile data is presented. We have shown that the crystallite-size distribution can be determined without assuming a functional form for the size distribution, determining instead the size distribution with the fewest assumptions by applying the Bayesian/MaxEnt method. The Bayesian/MaxEnt method is tested using both simulated and experimental CeO2 data. The results demonstrate that the proposed method can determine size distributions while making the least number of assumptions. The comparison of the Bayesian/MaxEnt results from experimental CeO2 with TEM results is favorable.
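
    The abstract does not spell out the algorithm, but the MaxEnt part of such a scheme is commonly realized by minimizing a data-misfit term penalized by negative entropy relative to a flat prior model. The sketch below is entirely illustrative: the kernel, noise level, and regularization weight are invented, whereas the paper's Bayesian treatment determines the regularization from the evidence.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_sizes, n_bins = 30, 60
sizes = np.linspace(2.0, 30.0, n_sizes)                 # crystallite sizes [nm]

# Toy forward model: profile = K @ p, with p the discretized size distribution.
K = np.exp(-np.outer(np.linspace(0.0, 1.0, n_bins), sizes / 10.0))
p_true = np.exp(-0.5 * ((sizes - 12.0) / 3.0) ** 2)
p_true /= p_true.sum()
sigma = 1e-3
data = K @ p_true + rng.normal(0.0, sigma, n_bins)

m = np.full(n_sizes, 1.0 / n_sizes)                      # flat prior model
alpha = 0.1                                              # regularization weight

def objective(p):
    p = np.clip(p, 1e-12, None)
    chi2 = np.sum((K @ p - data) ** 2) / sigma ** 2      # data misfit
    entropy = np.sum(p - m - p * np.log(p / m))          # Skilling entropy
    return 0.5 * chi2 - alpha * entropy

res = minimize(objective, m, method="L-BFGS-B",
               bounds=[(1e-12, None)] * n_sizes)
p_est = res.x / res.x.sum()                              # recovered distribution
```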

    Clouds, p-boxes, fuzzy sets, and other uncertainty representations in higher dimensions

    Uncertainty modeling in real-life applications faces serious problems such as the curse of dimensionality and the lack of sufficient statistical data. In this paper we survey methods for uncertainty handling and describe the latest progress towards real-life applications with respect to these problems. We compare different methods and highlight their relationships. We introduce intuitively the concept of potential clouds, our latest approach, which successfully copes with both higher dimensions and incomplete information.
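
    As a small illustration of one of the representations named in the title, the snippet below builds a parametric p-box: lower and upper bounding CDFs obtained by letting the mean of a normal model range over an interval. The numbers are invented and not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

# Interval-valued knowledge about an uncertain quantity (hypothetical):
# only mu in [4, 6] is known, with sigma = 1 fixed.
mu_lo, mu_hi, sigma = 4.0, 6.0, 1.0
x = np.linspace(0.0, 10.0, 201)

# The p-box is the envelope of all CDFs compatible with mu in [mu_lo, mu_hi]:
# the upper bound uses the smallest mean, the lower bound the largest.
cdf_upper = norm.cdf(x, loc=mu_lo, scale=sigma)
cdf_lower = norm.cdf(x, loc=mu_hi, scale=sigma)

# Bounds on P(X <= 5) induced by the incomplete information.
i = np.searchsorted(x, 5.0)
print(cdf_lower[i], "<= P(X <= 5) <=", cdf_upper[i])
```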

    Philosophy and the practice of Bayesian statistics

    A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework.
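
    Model checking of the kind the abstract emphasizes is often implemented as a posterior predictive check: replicate datasets are simulated from posterior draws and a test statistic of the observed data is compared with its replicated distribution. The following self-contained sketch uses invented data, a deliberately misspecified Poisson model, and the sample variance as test statistic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Observed data (hypothetical): overdispersed counts modelled, wrongly, as Poisson.
y = rng.negative_binomial(5, 0.25, size=100)

# Conjugate Gamma(a, b) prior on the Poisson rate gives a Gamma posterior.
a, b = 1.0, 1.0
a_post, b_post = a + y.sum(), b + len(y)

# Posterior predictive replication of the full dataset.
n_rep = 2000
lam = rng.gamma(a_post, 1.0 / b_post, size=n_rep)
y_rep = rng.poisson(lam[:, None], size=(n_rep, len(y)))

# Test statistic: sample variance (a Poisson model cannot reproduce
# the overdispersion present in the observed counts).
T_obs = y.var()
T_rep = y_rep.var(axis=1)
p_value = (T_rep >= T_obs).mean()
print("posterior predictive p-value:", p_value)   # near 0 flags model misfit
```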