
    Bayesian Updating, Model Class Selection and Robust Stochastic Predictions of Structural Response

    A fundamental issue when predicting structural response with mathematical models is how to treat both modeling and excitation uncertainty. A general framework for this is presented which uses probability as a multi-valued conditional logic for quantitative plausible reasoning in the presence of uncertainty due to incomplete information. The fundamental probability models that represent the structure’s uncertain behavior are specified by the choice of a stochastic system model class: a set of input-output probability models for the structure and a prior probability distribution over this set that quantifies the relative plausibility of each model. A model class can be constructed from a parameterized deterministic structural model by stochastic embedding utilizing Jaynes’ Principle of Maximum Information Entropy. Robust predictive analyses use the entire model class, with the probabilistic predictions of each model weighted by its prior probability or, if structural response data are available, by its posterior probability from Bayes’ Theorem for the model class. Additional robustness to modeling uncertainty comes from combining the robust predictions of each model class in a set of competing candidates, each weighted by the prior or posterior probability of the model class, the latter computed from Bayes’ Theorem. This higher-level application of Bayes’ Theorem automatically applies a quantitative Ockham’s razor that penalizes the data-fit of more complex model classes that extract more information from the data. Robust predictive analyses involve integrals over high-dimensional spaces that usually must be evaluated numerically. Published applications have used Laplace’s method of asymptotic approximation or Markov Chain Monte Carlo algorithms.
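    The higher-level application of Bayes’ Theorem described above can be sketched in a few lines. This is an illustrative fragment, not the paper’s implementation: the function name `model_class_posteriors` and the assumption that the (log) evidences p(D | M_j) have already been computed (e.g., by Laplace’s approximation or MCMC) are the author of this sketch’s, not the abstract’s.

```python
import numpy as np

def model_class_posteriors(log_evidences, log_priors=None):
    """Posterior probabilities P(M_j | D) over candidate model classes.

    Each class's posterior weight is proportional to its evidence
    p(D | M_j) times its prior P(M_j); the evidence term is what embeds
    the quantitative Ockham's razor penalizing over-complex classes.
    """
    log_evidences = np.asarray(log_evidences, dtype=float)
    if log_priors is None:
        # Uniform prior over the competing model classes.
        log_priors = np.zeros_like(log_evidences)
    log_post = log_evidences + log_priors
    log_post -= log_post.max()  # subtract max for numerical stability
    w = np.exp(log_post)
    return w / w.sum()
```

    For example, two candidate classes with log evidences -10 and -12 receive posterior weights of roughly 0.88 and 0.12 under a uniform prior; robust predictions would then average each class’s predictive distribution with these weights.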

    Coherent frequentism

    By representing the range of fair betting odds according to a pair of confidence set estimators, dual probability measures on parameter space called frequentist posteriors secure the coherence of subjective inference without any prior distribution. The closure of the set of expected losses corresponding to the dual frequentist posteriors constrains decisions without arbitrarily forcing optimization under all circumstances. This decision theory reduces to those that maximize expected utility when the pair of frequentist posteriors is induced by an exact or approximate confidence set estimator or when an automatic reduction rule is applied to the pair. In such cases, the resulting frequentist posterior is coherent in the sense that, as a probability distribution of the parameter of interest, it satisfies the axioms of the decision-theoretic and logic-theoretic systems typically cited in support of the Bayesian posterior. Unlike the p-value, the confidence level of an interval hypothesis derived from such a measure is suitable as an estimator of the indicator of hypothesis truth, since it converges in sample-space probability to 1 if the hypothesis is true or to 0 otherwise under general conditions. Comment: The confidence-measure theory of inference and decision is explicitly extended to vector parameters of interest. The derivation of upper and lower confidence levels from valid and nonconservative set estimators is formalized.
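    The convergence claim at the end of the abstract can be illustrated with a standard special case. The sketch below is an assumption-laden toy, not Bickel’s construction: it takes the confidence distribution of a normal mean with known sigma (N(x̄, σ²/n)) as the “frequentist posterior” and computes the confidence level that the mean lies in an interval [a, b]; as n grows, this level tends to 1 when the hypothesis is true and to 0 when it is false. The function name `interval_confidence` is hypothetical.

```python
import math

def interval_confidence(xbar, sigma, n, a, b):
    """Confidence level that a <= mu <= b, using the normal confidence
    distribution N(xbar, sigma^2/n) for the mean as the measure."""
    se = sigma / math.sqrt(n)
    # Standard normal CDF via the error function.
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return Phi((b - xbar) / se) - Phi((a - xbar) / se)
```

    With a true mean of 0 and hypothesis mu in [-1, 1], the sample mean concentrates near 0, so the confidence level approaches 1 as n grows; with a true mean of 2 it approaches 0, behaving as an estimator of the indicator of hypothesis truth in the way the abstract describes.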

    Why There Can't be a Logic of Induction

    Carnap's attempt to develop an inductive logic has been criticized on a variety of grounds, and while there may be some philosophers who believe that the difficulties with Carnap's approach can be overcome by further elaborations and modifications of his system, I think it is fair to say that the consensus is that the approach as a whole cannot succeed. In writing a paper on problems with inductive logic (and with Carnap's approach in particular), I might therefore be accused of beating a dead horse. However, there are still some (e.g., Spirtes, Glymour and Scheines 1993) who seem to believe that purely formal methods for scientific inference can be developed. It may still be useful to perform an autopsy on a dead horse when establishing the cause of death can shed light on issues of current concern. My intention in this paper is to point out a problem in Carnap's inductive logic which has not been clearly articulated, and which applies generally to any inductive logic. My conclusion will be that scientific inference is inevitably and ineliminably guided by background beliefs, and that different background beliefs lead to the application of different inductive rules and different standards of evidentiary relevance. At the end of this paper I will discuss the relationship between this conclusion and the problem of justifying induction.