Global sensitivity analysis of computer models with functional inputs
Global sensitivity analysis is used to quantify the influence of uncertain
input parameters on the response variability of a numerical model. The common
quantitative methods are applicable to computer codes with scalar input
variables. This paper aims to illustrate different variance-based sensitivity
analysis techniques, based on the so-called Sobol indices, when some input
variables are functional, such as stochastic processes or random spatial
fields. In this work, we focus on computer codes with large CPU times, which need a
preliminary meta-modeling step before performing the sensitivity analysis. We
propose the use of the joint modeling approach, i.e., modeling simultaneously
the mean and the dispersion of the code outputs using two interlinked
Generalized Linear Models (GLM) or Generalized Additive Models (GAM). The
``mean'' model makes it possible to estimate the sensitivity indices of each
scalar input variable, while the ``dispersion'' model yields the total
sensitivity index of the functional input variables. The proposed approach is
compared to some classical SA methodologies on an analytical function. Lastly,
the proposed methodology is applied to a concrete industrial computer code that
simulates nuclear fuel irradiation.
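As background for the variance-based indices discussed above, a minimal sketch of first-order Sobol index estimation via the pick-freeze Monte Carlo scheme; the toy model Y = 4*X1 + X2 and the sample size are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Pick-freeze (Saltelli-style) estimator of first-order Sobol indices
# for an illustrative linear model Y = 4*X1 + X2 with X1, X2 ~ U(0, 1).
rng = np.random.default_rng(0)

def model(x):
    return 4.0 * x[:, 0] + x[:, 1]

n = 100_000
A = rng.uniform(0.0, 1.0, size=(n, 2))   # two independent sample blocks
B = rng.uniform(0.0, 1.0, size=(n, 2))
yA, yB = model(A), model(B)
varY = np.var(np.concatenate([yA, yB]))

S = []
for i in range(2):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # swap only column i of A with B
    # First-order index S_i = Var(E[Y | X_i]) / Var(Y)
    S.append(np.mean(yB * (model(ABi) - yA)) / varY)
# Analytically S1 = 16/17 ~ 0.94 and S2 = 1/17 ~ 0.06 for this model.
```

For this linear model the indices are known in closed form, which makes the Monte Carlo estimates easy to check.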
Approximate Models and Robust Decisions
Decisions based partly or solely on predictions from probabilistic models may
be sensitive to model misspecification. Statisticians are taught from an early
stage that "all models are wrong", but little formal guidance exists on how to
assess the impact of model approximation on decision making, or how to proceed
when optimal actions appear sensitive to model fidelity. This article presents
an overview of recent developments across different disciplines to address
this. We review diagnostic techniques, including graphical approaches and
summary statistics, to help highlight decisions made through minimised expected
loss that are sensitive to model misspecification. We then consider formal
methods for decision making under model misspecification by quantifying
stability of optimal actions to perturbations to the model within a
neighbourhood of model space. This neighbourhood is defined in one of two ways:
either in a strong sense, via an information (Kullback-Leibler) divergence
around the approximating model, or via a nonparametric model extension, again
centred at the approximating model, used to `average out' over possible
misspecifications. This is presented in the context of recent
work in the robust control, macroeconomics and financial mathematics
literature. We adopt a Bayesian approach throughout although the methods are
agnostic to this position.
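The Kullback-Leibler neighbourhood idea above can be made concrete with a small numerical sketch; the base model, loss, and radius below are illustrative assumptions, not the article's setup:

```python
import numpy as np

# Stability of an optimal action inside a KL ball around the base model.
# Base model: Y ~ N(0, 1); quadratic loss (Y - a)^2; illustrative numbers.

def kl_gauss(mu0, s0, mu1, s1):
    # Closed form KL( N(mu0, s0^2) || N(mu1, s1^2) )
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2.0 * s1**2) - 0.5

def expected_loss(a, mu, s):
    # E[(Y - a)^2] when Y ~ N(mu, s^2)
    return s**2 + (mu - a)**2

a_opt = 0.0                                 # minimises loss under the base model
base_loss = expected_loss(a_opt, 0.0, 1.0)  # = 1.0

# Worst-case loss over mean-shifted alternatives with KL(alt || base) <= eps
eps = 0.05
mus = np.linspace(-1.0, 1.0, 201)
inside = [m for m in mus if kl_gauss(m, 1.0, 0.0, 1.0) <= eps]
worst_loss = max(expected_loss(a_opt, m, 1.0) for m in inside)
# The gap worst_loss - base_loss quantifies sensitivity to misspecification.
```

A small gap indicates the chosen action is robust within the stated neighbourhood; a large gap flags a decision that is sensitive to model fidelity.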
Global Sensitivity Analysis of Stochastic Computer Models with joint metamodels
The global sensitivity analysis method, used to quantify the influence of
uncertain input variables on the response variability of a numerical model, is
applicable to deterministic computer codes (for which the same set of input
variables always gives the same output value). This paper proposes a global
sensitivity analysis methodology for stochastic computer code (having a
variability induced by some uncontrollable variables). The framework of the
joint modeling of the mean and dispersion of heteroscedastic data is used. To
deal with the complexity of computer experiment outputs, nonparametric joint
models (based on Generalized Additive Models and Gaussian processes) are
discussed. The relevance of these new models is analyzed in terms of the
obtained variance-based sensitivity indices with two case studies. Results show
that the joint modeling approach leads to accurate sensitivity index estimates
even when clear heteroscedasticity is present.
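The joint mean/dispersion idea can be sketched with plain least squares standing in for the paper's GAM and Gaussian-process components; the data-generating model and sample size below are illustrative assumptions:

```python
import numpy as np

# Joint modeling sketch on heteroscedastic data: a "mean" model for E[Y|x]
# and a "dispersion" model for Var(Y|x), fitted from squared residuals.
rng = np.random.default_rng(1)
n = 20_000
x = rng.uniform(0.0, 1.0, n)
sigma = 0.2 + 0.8 * x                 # dispersion grows with x
y = 2.0 * x + sigma * rng.standard_normal(n)

# "Mean" model: linear least squares y ~ 1 + x
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# "Dispersion" model: regress squared residuals on x to recover sigma^2(x);
# a positive slope reveals the input's contribution through the dispersion.
gamma, *_ = np.linalg.lstsq(X, resid**2, rcond=None)
```

In the papers' setting the two components are interlinked nonparametric models rather than straight lines, but the division of labour (mean model for controllable inputs, dispersion model for the uncontrollable variability) is the same.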
Global Sensitivity Analysis: An Approach Based on the Contribution to the Sample Mean Plot
The contribution to the sample mean plot, originally proposed by Sinclair (1993), is revived and further developed as a practical tool for global sensitivity analysis. The potential of this simple and versatile graphical tool is discussed. Beyond the qualitative assessment provided by this approach, a statistical test is proposed for sensitivity analysis. A case study that simulates the transport of radionuclides through the geosphere from an underground disposal vault containing nuclear waste (OECD 1993) is considered as a benchmark. The new approach is tested against a very efficient sensitivity analysis method based on state-dependent parameter meta-modelling (Ratto et al. 2007).
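A contribution-to-the-sample-mean curve is simple to compute: sort the runs by one input and plot the cumulative share of the output sum against that input's empirical quantile. The toy model below is an illustrative assumption, not the benchmark case study:

```python
import numpy as np

# CSM curves for a toy model where x1 dominates the output and x2 is weak.
# An uninfluential input's curve hugs the diagonal; an influential one departs.
rng = np.random.default_rng(2)
n = 50_000
x1 = rng.uniform(0.0, 1.0, n)
x2 = rng.uniform(0.0, 1.0, n)
y = 5.0 * x1 + 0.1 * x2 + 1.0         # offset keeps the output positive

def csm_curve(x, y):
    order = np.argsort(x)              # sort runs by the chosen input
    return np.cumsum(y[order]) / y.sum()   # cumulative share of the sum

q = np.arange(1, n + 1) / n            # empirical quantiles of the input
dev1 = np.max(np.abs(csm_curve(x1, y) - q))   # large: x1 is influential
dev2 = np.max(np.abs(csm_curve(x2, y) - q))   # small: x2 is nearly inert
```

The maximum deviation from the diagonal is a natural summary statistic of the kind the proposed test formalizes.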
A New Approach to a Global Fit of the CKM Matrix
We report on a global CKM matrix analysis taking into account most recent
experimental and theoretical results. The statistical framework (Rfit)
developed in this paper advocates formal frequentist statistics. Other
approaches, such as Bayesian statistics or the 95% CL scan method are also
discussed. We emphasize the distinction between a model-testing phase and a
model-dependent, metrological phase in which the various parameters of the theory are
determined. Measurements and theoretical parameters entering the global fit are
thoroughly discussed, in particular with respect to their theoretical
uncertainties. Graphical results for confidence levels are drawn in various one
and two-dimensional parameter spaces. Numerical results are provided for all
relevant CKM parameterizations, the CKM elements and theoretical input
parameters. Predictions for branching ratios of rare K and B meson decays are
obtained. A simple, predictive SUSY extension of the Standard Model is
discussed.
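As one concrete example of a CKM parameterization mentioned above, a minimal sketch of the Wolfenstein form, which is unitary up to O(lambda^4); the numerical values are typical illustrative magnitudes, not the paper's fitted results:

```python
import numpy as np

# Wolfenstein parameterization of the CKM matrix to O(lambda^3).
# Parameter values are illustrative, not quoted fit results.
lam, A, rho, eta = 0.2250, 0.82, 0.16, 0.35

V = np.array([
    [1 - lam**2 / 2,                    lam,             A * lam**3 * (rho - 1j * eta)],
    [-lam,                              1 - lam**2 / 2,  A * lam**2],
    [A * lam**3 * (1 - rho - 1j * eta), -A * lam**2,     1.0],
], dtype=complex)

# Unitarity holds only approximately: V V^dagger deviates from the
# identity at order lambda^4 ~ 2.6e-3.
dev = np.max(np.abs(V @ V.conj().T - np.eye(3)))
```

This truncation is what makes (rhobar, etabar) a convenient apex for the Unitarity Triangle in global fits.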
CP Violation and the CKM Matrix: Assessing the Impact of the Asymmetric B Factories
We update the profile of the CKM matrix. The apex (rhobar,etabar) of the
Unitarity Triangle is given by means of a global fit. We propose to include
therein sin2alpha from the CP-violating asymmetries in B0->rho+rho-, using
isospin to discriminate the penguin contribution. The constraint from
epsilon'/epsilon is briefly discussed. We study the impact from the measurement
of the rare decay K+->pi+nunu-bar, and from a future observation of
KL->pi0nunubar. The B system is investigated in detail, beginning with
2beta+gamma and gamma from B0->D(*)+-pi-+ and B+->D(*)0K+. A significant part
of this paper is dedicated to B decays into pipi, Kpi, rhopi and rhorho.
Various phenomenological and theoretical approaches are studied. Within QCD
Factorization we find a remarkable agreement of the pipi and Kpi data with the
other UT constraints. A fit of QCD FA to all pipi and Kpi data leads to precise
predictions of the related observables. We analyze separately the B->Kpi
decays, and in particular the impact of electroweak penguins in response to
recent phenomenological discussions. We find no significant constraint on
either electroweak or hadronic parameters. We do not observe any unambiguous sign of
New Physics, whereas there is some evidence for potentially large rescattering
effects. Finally we use a model-independent description of a large class of New
Physics effects in both BBbar mixing and B decays, namely in the b->d and b->s
gluonic penguin amplitudes, to perform a new numerical analysis. Significant
non-standard corrections cannot be excluded yet, however standard solutions are
favored in most cases. (Final version accepted for publication in EPJ C; updated
results and plots are available at http://ckmfitter.in2p3.fr or at the mirror
http://www.slac.stanford.edu/xorg/ckmfitter/.)
Identification of quasi-optimal regions in the design space using surrogate modeling
The use of Surrogate Based Optimization (SBO) is widespread in engineering design to find optimal performance characteristics of expensive simulations (forward analysis: from input to optimal output). However, often the practitioner knows a priori the desired performance and is interested in finding the associated input parameters (reverse analysis: from desired output to input). A popular method to solve such reverse (inverse) problems is to minimize the error between the simulated performance and the desired goal. However, there might be multiple quasi-optimal solutions to the problem. In this paper, the authors propose a novel method to efficiently solve inverse problems and to sample Quasi-Optimal Regions (QORs) in the input (design) space more densely. The development of this technique, based on the probability of improvement criterion and kriging models, is driven by a real-life problem from bio-mechanics, i.e., determining the elasticity of the (rabbit) tympanic membrane, a membrane that converts acoustic sound waves into vibrations of the middle ear ossicular bones.
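The probability-of-improvement criterion on a kriging model can be sketched with a tiny hand-rolled Gaussian process; the test function, design points, and kernel length-scale below are illustrative assumptions, not the paper's bio-mechanics setup:

```python
import numpy as np
from math import erf

# Kriging (noiseless GP with RBF kernel) plus the probability-of-improvement
# (PI) acquisition for a 1-D inverse problem: minimize |f(x) - target|.

def rbf(a, b, ls=0.15):
    # Squared-exponential kernel with unit prior variance
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

f = lambda x: np.abs(np.sin(6 * x) - 0.5)   # error to a desired output of 0.5

X = np.array([0.05, 0.3, 0.55, 0.8, 0.95])  # initial design
y = f(X)

K = rbf(X, X) + 1e-10 * np.eye(len(X))      # jitter for numerical stability
Kinv = np.linalg.inv(K)

xs = np.linspace(0.0, 1.0, 401)
ks = rbf(xs, X)
mu = ks @ Kinv @ y                           # kriging predictor
var = np.clip(1.0 - np.einsum('ij,jk,ik->i', ks, Kinv, ks), 1e-12, None)
sd = np.sqrt(var)

# PI for minimization: Phi((best_so_far - mu) / sd)
best = y.min()
z = (best - mu) / sd
pi = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
x_next = xs[np.argmax(pi)]                   # candidate to evaluate next
```

Sampling where PI stays high, rather than only at its single maximizer, is what lets quasi-optimal regions be filled in more densely.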