Essays on Applying Bayesian Data Analysis to Improve Evidence-based Decision-making in Education
This three-article dissertation applies Bayesian data analysis to improve the methodologies that process effectiveness findings, cost information, and subjective judgments, with the purpose of providing clear, localized guidance for decision makers in educational resource allocation. The first article shows how to use a Bayesian hierarchical model to capture the uncertainty of the effectiveness-cost ratio. The uncertainty information produced by the model can inform decision makers of the best- and worst-case scenarios for the program's efficiency if it is replicated. The second article introduces Bayesian decision theory to address a subset of methodological barriers that hamper the influence of research on educational decision-making, including how to generalize or extrapolate effectiveness and cost information from the evaluation site(s) to a specific context, how to incorporate information from multiple sources, and how to aggregate the multiple consequences of an intervention into one framework. The purpose of this article is to generate evidence on program comparison that applies to a specific school facing a decision problem by incorporating the decision makers' subjective judgments and modeling their specific preferences over multiple consequences. The third article proposes a randomized controlled trial to detect whether principals and practitioners update their beliefs about the effectiveness and cost of educational programs in light of uncertainty information and localized evidence. Supplemented by a pilot qualitative study that guides decision makers through self-defined decision problems, the pilot testing of the experiment provides some evidence on the plausibility of using an experiment to identify the causal impact of research evidence on decision-making.
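The first article's idea of reporting best- and worst-case program efficiency can be illustrated with a minimal Monte Carlo sketch. The posteriors and all numbers below are hypothetical stand-ins, not the dissertation's actual hierarchical model or data: draws from assumed posteriors for the effect size and per-student cost are combined into a ratio, whose credible interval conveys the uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws for a program's effect size and per-student
# cost. In practice these would come from a fitted Bayesian hierarchical
# model; here they are illustrative normal posteriors.
effect = rng.normal(loc=0.25, scale=0.08, size=10_000)  # effect size (SD units)
cost = rng.normal(loc=600.0, scale=90.0, size=10_000)   # cost per student ($)

# Effectiveness-cost ratio: effect size per $1,000 spent per student.
ratio = effect / (cost / 1000.0)

lo, hi = np.percentile(ratio, [2.5, 97.5])
print(f"median ratio: {np.median(ratio):.3f} per $1,000/student")
print(f"95% interval: [{lo:.3f}, {hi:.3f}] (worst- to best-case efficiency)")
```

The interval endpoints are what a decision maker would read as the worst- and best-case efficiency of the program under replication.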
Bayesian interpolation
Although Bayesian analysis has been in use since Laplace, the Bayesian method of model comparison has only recently been developed in depth. In this paper, the Bayesian approach to regularization and model comparison is demonstrated by studying the inference problem of interpolating noisy data. The concepts and methods described are quite general and can be applied to many other data modeling problems. Regularizing constants are set by examining their posterior probability distribution. Alternative regularizers (priors) and alternative basis sets are objectively compared by evaluating the evidence for them. “Occam's razor” is automatically embodied by this process. The way in which Bayes infers the values of regularizing constants and noise levels has an elegant interpretation in terms of the effective number of parameters determined by the data set. This framework is due to Gull and Skilling.
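The evidence-based comparison of basis sets described above can be sketched in the linear-Gaussian setting: for weights with a Gaussian prior and Gaussian noise, the evidence (marginal likelihood) is available in closed form. The polynomial bases, the precisions `alpha` and `beta`, and the synthetic data below are illustrative assumptions, not the paper's actual examples.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

# Noisy samples of a smooth function: the interpolation problem.
x = np.linspace(-1.0, 1.0, 25)
y = np.sin(2.0 * x) + rng.normal(scale=0.1, size=x.size)

alpha, beta = 1.0, 100.0  # assumed prior precision and noise precision

def log_evidence(degree):
    """log p(y | basis): marginal likelihood of a polynomial basis.

    With weights w ~ N(0, I/alpha) and noise ~ N(0, I/beta), the marginal
    of y = Phi w + noise is N(0, Phi Phi^T / alpha + I / beta).
    """
    Phi = np.vander(x, degree + 1, increasing=True)      # design matrix
    cov = Phi @ Phi.T / alpha + np.eye(x.size) / beta
    return multivariate_normal(mean=np.zeros(x.size), cov=cov).logpdf(y)

for d in (1, 3, 9):
    print(f"degree {d}: log evidence = {log_evidence(d):.1f}")
```

Comparing the printed values is the automatic Occam's razor: a basis too small fits the data poorly, while an unnecessarily large one spreads its prior over many implausible data sets, and both are penalized relative to a well-matched basis.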
Bayesian Updating, Model Class Selection and Robust Stochastic Predictions of Structural Response
A fundamental issue when predicting structural response using mathematical models is how to treat both modeling and excitation uncertainty. A general framework for this is presented which uses probability as a multi-valued conditional logic for quantitative plausible reasoning in the presence of uncertainty due to incomplete information. The fundamental probability models that represent the structure's uncertain behavior are specified by the choice of a stochastic system model class: a set of input-output probability models for the structure and a prior probability distribution over this set that quantifies the relative plausibility of each model. A model class can be constructed from a parameterized deterministic structural model by stochastic embedding utilizing Jaynes' Principle of Maximum Information Entropy. Robust predictive analyses use the entire model class, with the probabilistic predictions of each model weighted by its prior probability or, if structural response data are available, by its posterior probability from Bayes' Theorem for the model class. Additional robustness to modeling uncertainty comes from combining the robust predictions of each model class in a set of competing candidates, weighted by the prior or posterior probability of the model class, the latter being computed from Bayes' Theorem. This higher-level application of Bayes' Theorem automatically applies a quantitative Ockham's razor that penalizes the data fit of more complex model classes that extract more information from the data. Robust predictive analyses involve integrals over high-dimensional spaces that usually must be evaluated numerically. Published applications have used Laplace's method of asymptotic approximation or Markov chain Monte Carlo algorithms.
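The higher-level application of Bayes' Theorem over competing model classes reduces to a small normalization once each class's evidence is in hand. The log-evidence values below are made-up placeholders (in practice they would come from, e.g., a Laplace approximation or MCMC for each class), and the equal prior is an assumption.

```python
import numpy as np

# Hypothetical log evidences log p(D | M_j) for three competing model
# classes; the values are illustrative, not from the paper.
log_ev = np.array([-105.2, -101.7, -103.4])
prior = np.array([1.0, 1.0, 1.0]) / 3.0  # equal prior plausibility

# Bayes' Theorem at the model-class level, computed in log space so that
# the exponentials do not underflow.
log_post = np.log(prior) + log_ev
log_post -= log_post.max()
post = np.exp(log_post)
post /= post.sum()

for j, p in enumerate(post):
    print(f"model class {j}: posterior probability {p:.3f}")
```

A hyper-robust prediction then averages each class's robust prediction using these posterior weights, which is how the framework folds model-class uncertainty into the final forecast.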