The Jeffreys-Lindley Paradox and Discovery Criteria in High Energy Physics
The Jeffreys-Lindley paradox displays how the use of a p-value (or number of
standard deviations z) in a frequentist hypothesis test can lead to an
inference that is radically different from that of a Bayesian hypothesis test
in the form advocated by Harold Jeffreys in the 1930s and common today. The
setting is the test of a well-specified null hypothesis (such as the Standard
Model of elementary particle physics, possibly with "nuisance parameters")
versus a composite alternative (such as the Standard Model plus a new force of
nature of unknown strength). The p-value, as well as the ratio of the
likelihood under the null hypothesis to the maximized likelihood under the
alternative, can strongly disfavor the null hypothesis, while the Bayesian
posterior probability for the null hypothesis can be arbitrarily large. The
academic statistics literature contains many impassioned comments on this
paradox, yet there is no consensus either on its relevance to scientific
communication or on its correct resolution. The paradox is quite relevant to
frontier research in high energy physics. This paper is an attempt to explain
the situation to both physicists and statisticians, in the hope that further
progress can be made.
Comment: v4: continued editing for clarity; figure added. v5: minor fixes to
the bibliography; same as the published version, Synthese (2014), except for
minor copy-edits. v6: typos fixed and a garbled sentence at the beginning of
Sec. 4 restored.
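The paradox can be seen with a few lines of arithmetic. The following sketch uses purely illustrative numbers (not taken from the paper): a normal mean test with known variance, H0: mu = 0 versus H1: mu ~ N(0, tau^2), with the sample mean sitting exactly three standard errors from zero.

```python
import math

# Illustrative setup (not from the paper): n i.i.d. observations
# x_i ~ N(mu, sigma^2) with sigma known; test H0: mu = 0 vs H1: mu ~ N(0, tau^2).
n, sigma, tau = 1_000_000, 1.0, 1.0
xbar = 3.0 * sigma / math.sqrt(n)     # sample mean exactly 3 standard errors out

z = xbar / (sigma / math.sqrt(n))     # frequentist test statistic (z = 3)
p = math.erfc(z / math.sqrt(2))       # two-sided p-value, about 0.0027

# Marginal variances of xbar under each hypothesis.
v0 = sigma**2 / n
v1 = tau**2 + sigma**2 / n
log_bf01 = 0.5 * math.log(v1 / v0) - 0.5 * xbar**2 * (1.0 / v0 - 1.0 / v1)
bf01 = math.exp(log_bf01)             # Bayes factor in favour of H0

print(f"z = {z:.1f}, p = {p:.4f}, BF01 = {bf01:.1f}")
```

With these numbers the p-value (about 0.0027) rejects H0 at the conventional 3-sigma level, while the Bayes factor comes out above 10 in favour of H0: the large sample size concentrates the likelihood under the diffuse alternative, which is the mechanism behind the paradox.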
Inverse Uncertainty Quantification using the Modular Bayesian Approach based on Gaussian Process, Part 2: Application to TRACE
Inverse Uncertainty Quantification (UQ) is the process of quantifying the
uncertainties in random input parameters while achieving consistency between
code simulations and physical observations. In this paper, we performed inverse
UQ using an improved modular Bayesian approach based on Gaussian Process (GP)
for TRACE physical model parameters using the BWR Full-size Fine-Mesh Bundle
Tests (BFBT) benchmark steady-state void fraction data. The model discrepancy
is described with a GP emulator. Numerical tests have demonstrated that such
treatment of model discrepancy can avoid over-fitting. Furthermore, we
constructed a fast-running and accurate GP emulator to replace the full TRACE
model during Markov Chain Monte Carlo (MCMC) sampling. The computational cost was
demonstrated to be reduced by several orders of magnitude.
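The emulator idea above can be sketched in a few lines: fit a GP to a handful of expensive simulator runs, then evaluate its cheap posterior mean inside the MCMC loop instead of the full code. Everything below is a hypothetical stand-in (a toy one-dimensional "simulator", not TRACE, with arbitrary kernel settings):

```python
import numpy as np

def rbf_kernel(a, b, ell=0.5, var=1.0):
    """Squared-exponential covariance between two 1-D input arrays."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / ell**2)

def simulator(theta):
    """Toy stand-in for an expensive code run such as TRACE."""
    return np.sin(3.0 * theta) + 0.5 * theta

# A small design of expensive training runs.
X = np.linspace(0.0, 2.0, 8)
y = simulator(X)

K = rbf_kernel(X, X) + 1e-8 * np.eye(len(X))   # jitter for numerical stability
alpha = np.linalg.solve(K, y)                  # precompute K^{-1} y once

def emulate(theta_new):
    """Cheap GP posterior-mean prediction used in place of the full model."""
    k_star = rbf_kernel(np.atleast_1d(theta_new), X)
    return float(k_star @ alpha)

print(emulate(1.0), float(simulator(1.0)))
```

Each `emulate` call costs a single dot product, which is what makes replacing the full model inside an MCMC sampler pay off by orders of magnitude.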
A sequential approach was also developed for efficient test source allocation
(TSA) for inverse UQ and validation. This sequential TSA methodology first
selects experimental tests for validation that fully cover the test domain,
so that the model discrepancy term need not be extrapolated when evaluated at
the input settings of the tests used for inverse UQ. It then selects tests
that tend to reside in the unfilled zones of the test domain for inverse UQ,
so that the most information can be extracted for the posterior probability
distributions of the calibration parameters using only a relatively small
number of tests. This research
addresses the "lack of input uncertainty information" issue for TRACE physical
input parameters, which was usually ignored or described using expert opinion
or user self-assessment in previous work. The resulting posterior probability
distributions of TRACE parameters can be used in future uncertainty,
sensitivity, and validation studies of the TRACE code for nuclear reactor
system design and safety analysis.
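The two-stage selection described above can be sketched with a greedy maximin rule: each new test is the one farthest from every test already chosen, so picks land in the largest unfilled zone. The test conditions and counts below are hypothetical, not the BFBT benchmark data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 2-D test conditions (e.g. two operating variables scaled to [0, 1]).
tests = rng.uniform(0.0, 1.0, size=(20, 2))

def fill_gaps(points, n_new, already):
    """Greedily pick tests that maximize distance to those already selected,
    so each new pick lands in the largest unfilled zone of the domain."""
    picked = list(already)
    for _ in range(n_new):
        dists = np.linalg.norm(points[:, None, :] - points[picked][None, :, :], axis=-1)
        nearest = dists.min(axis=1)     # distance to the closest selected test
        nearest[picked] = -1.0          # never re-pick an already selected test
        picked.append(int(nearest.argmax()))
    return picked[len(already):]

# Stage 1: validation tests covering the whole domain (seeded with test 0).
validation = [0] + fill_gaps(tests, 4, [0])
# Stage 2: inverse-UQ tests placed in the zones the validation set left open.
inverse_uq = fill_gaps(tests, 5, validation)
print(validation, inverse_uq)
```

Seeding stage 2 with the validation set is what makes the procedure sequential: the inverse-UQ tests automatically avoid conditions already covered, mirroring the coverage-then-infill ordering in the abstract.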