A Bayesian approach to calibrating hydrogen flame kinetics using many experiments and parameters
First-principles Markov Chain Monte Carlo sampling is used to investigate
uncertainty quantification and uncertainty propagation in parameters describing
hydrogen kinetics. Specifically, we sample the posterior distribution of
thirty-one parameters, focusing on the H2O2 and HO2 reactions, obtained by
conditioning on ninety-one experiments. Established literature values are used
for the remaining parameters in the mechanism. The samples are computed using
an affine invariant sampler starting with broad, noninformative priors.
Autocorrelation analysis shows that O(1M) samples are sufficient to obtain a
reasonable sampling of the posterior. The resulting distribution identifies
strong positive and negative correlations and several non-Gaussian
characteristics. Using samples drawn from the posterior, we investigate the
impact of parameter uncertainty on the prediction of two more complex flames: a
2D premixed flame kernel and the ignition of a hydrogen jet issuing into a
heated chamber. The former represents a combustion regime similar to the target
experiments used to calibrate the mechanism and the latter represents a
different combustion regime. For the premixed flame, the net amount of product
after a given time interval has a standard deviation of less than 2% whereas
the standard deviation of the ignition time for the jet is more than 10%. The
samples used for these studies are posted online. These results indicate the
degree to which parameters consistent with the target experiments constrain
predicted behavior in different combustion regimes. This process provides a
framework both for identifying reactions for further study from candidate
mechanisms and for combining uncertainty quantification and propagation to,
ultimately, tie uncertainty in laboratory flame experiments to uncertainty in
end-use numerical predictions of more complicated scenarios.
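The sampling machinery described above (an affine-invariant ensemble sampler started from broad priors, with autocorrelation analysis used to judge how many samples are enough) can be sketched in a few lines. The snippet below is a minimal illustration using the emcee sampler with a cheap synthetic forward model; the parameter values, prior bounds, and likelihood are placeholders standing in for the kinetics mechanism and the ninety-one experiments used in the paper.

```python
# Illustrative sketch only: affine-invariant ensemble MCMC (emcee) over a few
# hypothetical parameters, conditioned on synthetic data. The paper's forward
# model is a detailed hydrogen-kinetics simulation; a cheap stand-in is used
# here so the sampling workflow itself is runnable end to end.
import numpy as np
import emcee

rng = np.random.default_rng(0)
true_theta = np.array([1.0, -0.5, 0.3])        # hypothetical "rate" parameters
x = np.linspace(0.0, 1.0, 91)                  # 91 synthetic "experiments"
y_obs = (true_theta[0] * np.exp(true_theta[1] * x) + true_theta[2]
         + rng.normal(0.0, 0.05, x.size))

def log_posterior(theta):
    """Broad, noninformative uniform prior times a Gaussian likelihood."""
    if np.any(np.abs(theta) > 10.0):           # broad prior bounds
        return -np.inf
    y_model = theta[0] * np.exp(theta[1] * x) + theta[2]
    return -0.5 * np.sum(((y_obs - y_model) / 0.05) ** 2)

ndim, nwalkers = 3, 32
p0 = true_theta + 1e-2 * rng.normal(size=(nwalkers, ndim))   # start near a guess
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, 5000, progress=False)

# Autocorrelation analysis: how many effectively independent samples do we have?
tau = sampler.get_autocorr_time(quiet=True)
flat = sampler.get_chain(discard=int(3 * tau.max()),
                         thin=max(1, int(tau.max() / 2)), flat=True)
print("integrated autocorrelation times:", tau)
print("posterior means:", flat.mean(axis=0))
print("posterior correlation matrix:\n", np.corrcoef(flat, rowvar=False))
```

The flattened chain plays the role of the posted posterior samples: drawing rows from it and re-running a forward model is how parameter uncertainty would be propagated to downstream predictions.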
Do Deep Generative Models Know What They Don't Know?
A neural network deployed in the wild may be asked to make predictions for
inputs that were drawn from a different distribution than that of the training
data. A plethora of work has demonstrated that it is easy to find or synthesize
inputs for which a neural network is highly confident yet wrong. Generative
models are widely believed to be robust to such mistaken confidence, since
modeling the density of the input features can, in principle, be used to detect
novel, out-of-distribution inputs. In this paper we challenge this assumption. We find
that the density learned by flow-based models, VAEs, and PixelCNNs cannot
distinguish images of common objects such as dogs, trucks, and horses (i.e.
CIFAR-10) from those of house numbers (i.e. SVHN), assigning a higher
likelihood to the latter when the model is trained on the former. Moreover, we
find evidence of this phenomenon when pairing several popular image data sets:
FashionMNIST vs MNIST, CelebA vs SVHN, ImageNet vs CIFAR-10 / CIFAR-100 / SVHN.
To investigate this curious behavior, we focus analysis on flow-based
generative models in particular since they are trained and evaluated via the
exact marginal likelihood. We find such behavior persists even when we restrict
the flows to constant-volume transformations. These transformations admit some
theoretical analysis, and we show that the difference in likelihoods can be
explained by the location and variances of the data and the model curvature.
Our results caution against using the density estimates from deep generative
models to identify inputs similar to the training distribution until their
behavior for out-of-distribution inputs is better understood.
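The "location and variances" explanation admits a very simple demonstration: a density model fit on a broad training distribution assigns higher likelihood to a narrower, never-seen distribution concentrated near the same mode. The sketch below uses a plain Gaussian density on synthetic data (not a normalizing flow, and not CIFAR-10 or SVHN) purely to illustrate that effect.

```python
# Toy illustration (not the paper's experiments): a density model fit on a
# broad "training" distribution assigns higher likelihood to a narrower,
# never-seen distribution centred in the same region -- mirroring the
# CIFAR-10 (broad) vs SVHN (concentrated) likelihood reversal.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
d = 16                                                     # toy "pixel" dimension
train = rng.normal(loc=0.0, scale=1.0, size=(5000, d))     # in-distribution, high variance
ood = rng.normal(loc=0.0, scale=0.4, size=(5000, d))       # out-of-distribution, low variance

# "Train" a simple Gaussian density model on the in-distribution data.
model = multivariate_normal(mean=train.mean(axis=0), cov=np.cov(train, rowvar=False))

print("mean log-likelihood, in-distribution :", model.logpdf(train).mean())
print("mean log-likelihood, out-of-distribution:", model.logpdf(ood).mean())
# The OOD set scores *higher*: its points sit nearer the mode of the learned
# density, so likelihood alone cannot flag it as novel.
```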
A Stochastic Volatility Model With Realized Measures for Option Pricing
Based on the fact that realized measures of volatility are affected by measurement errors, we introduce a new family of discrete-time stochastic volatility models having two measurement equations relating both observed returns and realized measures to the latent conditional variance. A semi-analytical option pricing framework is developed for this class of models. In addition, we provide analytical filtering and smoothing recursions for the basic specification of the model, and an effective MCMC algorithm for its richer variants. The empirical analysis shows the effectiveness of filtering and smoothing realized measures in inflating the latent volatility persistence, the crucial parameter in pricing Standard & Poor's 500 Index options.
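The two-measurement-equation structure can be made concrete with a small simulation: a latent log-variance follows a persistent AR(1) state equation, while observed returns and a noisy realized measure each supply a measurement equation. The parameter values and the exact functional forms below are illustrative assumptions, not the paper's specification.

```python
# Illustrative state-space sketch (not the paper's exact specification):
# a latent log-variance h_t follows an AR(1), and two measurement equations
# link it to the observed return r_t and a noisy realized measure x_t.
import numpy as np

rng = np.random.default_rng(0)
T = 1000
mu, phi, sigma_eta = -1.0, 0.97, 0.15     # hypothetical latent-volatility parameters
xi, sigma_u = 0.0, 0.30                   # hypothetical realized-measure bias and noise

h = np.empty(T)
h[0] = mu
for t in range(1, T):
    # State equation: highly persistent latent log-variance.
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()

# Measurement equation 1: returns with conditional variance exp(h_t).
r = np.exp(h / 2) * rng.normal(size=T)
# Measurement equation 2: realized (log) measure = latent log-variance + error.
x = xi + h + sigma_u * rng.normal(size=T)

print("sample kurtosis of returns:", ((r - r.mean()) ** 4).mean() / r.var() ** 2)
print("corr(latent log-variance, realized measure):", np.corrcoef(h, x)[0, 1])
```

Filtering and smoothing in such a model amount to inferring h_t from both r_t and x_t; the second, more informative measurement is what sharpens the persistence estimate emphasized in the abstract.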