9,093 research outputs found
Improved Sequential Stopping Rule for Monte Carlo Simulation
This paper presents an improved result on the negative-binomial Monte Carlo
technique analyzed in a previous paper for the estimation of an unknown
probability p. Specifically, the confidence level associated with a relative
interval [p/\mu_2, p\mu_1], with \mu_1, \mu_2 > 1, is proved to exceed its
asymptotic value for a broader range of intervals than that given in the
referred paper, and for any value of p. This extends the applicability of the
estimator, relaxing the conditions that guarantee a given confidence level.
Comment: 2 figures. Paper accepted in IEEE Transactions on Communications.
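As a purely illustrative sketch of the negative-binomial (inverse binomial) sampling scheme this line of work builds on, the snippet below keeps drawing Bernoulli trials until a fixed number of successes k is reached and empirically checks how often the resulting estimate lands inside the relative interval around the true p. Function names, the point estimator k/N, and all parameter choices are assumptions for illustration, not taken from the paper.

```python
import random

def neg_binomial_estimate(p_true, k, rng):
    """Draw Bernoulli(p_true) trials until k successes; return k / N."""
    n, successes = 0, 0
    while successes < k:
        n += 1
        if rng.random() < p_true:
            successes += 1
    return k / n

def coverage(p_true, k, mu1, mu2, trials=500, seed=0):
    """Fraction of runs where the estimate lies in [p/mu2, p*mu1]."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        p_hat = neg_binomial_estimate(p_true, k, rng)
        if p_true / mu2 <= p_hat <= p_true * mu1:
            hits += 1
    return hits / trials
```

With k = 100 successes the relative error of the estimate is roughly 1/sqrt(k), so a 1.5x relative interval is covered with probability close to one, in line with the confidence-level guarantees the abstract describes.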
Sequential Monte Carlo pricing of American-style options under stochastic volatility models
We introduce a new method to price American-style options on underlying
investments governed by stochastic volatility (SV) models. The method does not
require the volatility process to be observed. Instead, it exploits the fact
that the optimal decision functions in the corresponding dynamic programming
problem can be expressed as functions of conditional distributions of
volatility, given observed data. By constructing statistics summarizing
information about these conditional distributions, one can obtain high quality
approximate solutions. Although the required conditional distributions are in
general intractable, they can be arbitrarily precisely approximated using
sequential Monte Carlo schemes. The drawback, as with many Monte Carlo schemes,
is potentially heavy computational demand. We present two variants of the
algorithm, one closely related to the well-known least-squares Monte Carlo
algorithm of Longstaff and Schwartz [The Review of Financial Studies 14 (2001)
113-147], and the other solving the same problem using a "brute force" gridding
approach. We estimate an illustrative SV model using Markov chain Monte Carlo
(MCMC) methods for three equities. We also demonstrate the use of our algorithm
by estimating the posterior distribution of the market price of volatility risk
for each of the three equities.
Comment: Published at http://dx.doi.org/10.1214/09-AOAS286 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
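The least-squares Monte Carlo algorithm of Longstaff and Schwartz, which the first variant above builds on, can be sketched as follows. For brevity this toy version assumes constant (rather than stochastic) volatility and a quadratic polynomial regression basis; all parameter values are illustrative.

```python
import numpy as np

def lsm_american_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                     steps=50, paths=20_000, seed=0):
    """Price an American put by least-squares Monte Carlo (LSM)."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    disc = np.exp(-r * dt)
    # Simulate geometric Brownian motion paths under the risk-neutral measure.
    z = rng.standard_normal((paths, steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    # Backward induction: start from the payoff at maturity.
    cash = np.maximum(K - S[:, -1], 0.0)
    for t in range(steps - 2, -1, -1):
        cash *= disc  # discount one step back, to time t
        itm = K - S[:, t] > 0.0
        if itm.sum() >= 3:
            # Regress continuation values on a quadratic basis in the spot,
            # using in-the-money paths only, as in Longstaff-Schwartz.
            coeffs = np.polyfit(S[itm, t], cash[itm], 2)
            continuation = np.polyval(coeffs, S[itm, t])
            exercise = K - S[itm, t]
            ex_now = exercise > continuation
            cash[np.flatnonzero(itm)[ex_now]] = exercise[ex_now]
    return disc * cash.mean()  # discount the first step back to time 0
```

The stochastic-volatility variant in the paper replaces the spot price in the regression basis with statistics of the filtered volatility distribution produced by the sequential Monte Carlo step.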
Bayesian subset simulation
We consider the problem of estimating a probability of failure,
defined as the volume of the excursion set of a function above a given threshold, under a given
probability measure on the input space. In this article, we combine the popular
subset simulation algorithm (Au and Beck, Probab. Eng. Mech. 2001) and our
sequential Bayesian approach for the estimation of a probability of failure
(Bect, Ginsbourger, Li, Picheny and Vazquez, Stat. Comput. 2012). This makes it
possible to estimate this probability when the number of evaluations of the function is very
limited and the probability itself is very small. The resulting algorithm is called Bayesian
subset simulation (BSS). A key idea, as in the subset simulation algorithm, is
to estimate the probabilities of a sequence of excursion sets of the function above
intermediate thresholds, using a sequential Monte Carlo (SMC) approach. A
Gaussian process prior on the function is used to define the sequence of densities
targeted by the SMC algorithm, and to drive the selection of evaluation points of
the function in order to estimate the intermediate probabilities. Adaptive procedures are
proposed to determine the intermediate thresholds and the number of evaluations
to be carried out at each stage of the algorithm. Numerical experiments
illustrate that BSS achieves significant savings in the number of function
evaluations with respect to other Monte Carlo approaches.
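The classical subset simulation algorithm of Au and Beck, which BSS builds on, can be sketched as follows, here with a standard Gaussian input and a preconditioned Crank-Nicolson (pCN) move for the conditional sampling step. The function names, the pCN correlation parameter, and the level probability p0 = 0.1 are illustrative choices, not taken from either paper.

```python
import numpy as np

def subset_simulation(f, dim, u, n=2000, p0=0.1, rho=0.8, seed=0):
    """Estimate P(f(X) > u) for X ~ N(0, I_dim) via subset simulation."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, dim))
    y = np.apply_along_axis(f, 1, x)
    prob = 1.0
    for _ in range(50):  # safety cap on the number of levels
        thresh = np.quantile(y, 1.0 - p0)
        if thresh >= u:
            return prob * np.mean(y > u)
        prob *= p0
        # Seeds: the samples already above the intermediate threshold.
        cx, cy = x[y > thresh].copy(), y[y > thresh].copy()
        steps = max(1, n // len(cx))
        xs, ys = [], []
        for _ in range(steps):
            # The pCN proposal leaves N(0, I) invariant, so the Metropolis
            # acceptance test reduces to the exceedance constraint.
            cand = rho * cx + np.sqrt(1 - rho**2) * rng.standard_normal(cx.shape)
            cand_y = np.apply_along_axis(f, 1, cand)
            acc = cand_y > thresh
            cx[acc], cy[acc] = cand[acc], cand_y[acc]
            xs.append(cx.copy())
            ys.append(cy.copy())
        x, y = np.vstack(xs)[:n], np.concatenate(ys)[:n]
    return prob * np.mean(y > u)
```

Each level multiplies the running estimate by the conditional exceedance probability p0, so a very small failure probability is reached as a product of moderate ones; BSS replaces the expensive evaluations of the function with a Gaussian process emulator.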
Markov Chain Monte Carlo: Can We Trust the Third Significant Figure?
Current reporting of results based on Markov chain Monte Carlo computations
could be improved. In particular, a measure of the accuracy of the resulting
estimates is rarely reported. Thus we have little ability to objectively assess
the quality of the reported estimates. We address this issue by discussing
why Monte Carlo standard errors are important, how they can be easily
calculated in Markov chain Monte Carlo settings, and how they can be used to
decide when to stop the simulation. We compare their use to a popular alternative in the
context of two examples.
Comment: Published at http://dx.doi.org/10.1214/08-STS257 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
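A standard way to compute the kind of Monte Carlo standard error the authors advocate is the batch means method; the sketch below uses the common sqrt(n) batch-size choice, and the AR(1) test chain is an illustrative stand-in for real MCMC output.

```python
import numpy as np

def batch_means_se(chain):
    """Monte Carlo standard error of the sample mean via batch means."""
    n = len(chain)
    b = int(np.sqrt(n))              # common default: ~sqrt(n) batch size
    a = n // b                       # number of batches
    means = chain[:a * b].reshape(a, b).mean(axis=1)
    var_hat = b * means.var(ddof=1)  # estimate of the asymptotic variance
    return np.sqrt(var_hat / n)

# Illustrative chain: AR(1) with autocorrelation 0.5, so a naive i.i.d.
# standard error would understate the true Monte Carlo error.
rng = np.random.default_rng(1)
chain = np.empty(100_000)
chain[0] = 0.0
for t in range(1, len(chain)):
    chain[t] = 0.5 * chain[t - 1] + rng.standard_normal()
se = batch_means_se(chain)
```

Because the batch means are nearly independent for batches much longer than the chain's autocorrelation time, their variability recovers the asymptotic variance that the naive i.i.d. formula misses; comparing the reported estimate to a few such standard errors shows whether the third significant figure is trustworthy.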
Nonanticipating estimation applied to sequential analysis and changepoint detection
Suppose a process yields independent observations whose distributions belong
to a family parameterized by \theta\in\Theta. When the process is in control,
the observations are i.i.d. with a known parameter value \theta_0. When the
process is out of control, the parameter changes. We apply an idea of Robbins
and Siegmund [Proc. Sixth Berkeley Symp. Math. Statist. Probab. 4 (1972) 37-41]
to construct a class of sequential tests and detection schemes whereby the
unknown post-change parameters are estimated. This approach is especially
useful in situations where the parametric space is intricate and mixture-type
rules are operationally or conceptually difficult to formulate. We exemplify
our approach by applying it to the problem of detecting a change in the shape
parameter of a Gamma distribution, in both a univariate and a multivariate
setting.
Comment: Published at http://dx.doi.org/10.1214/009053605000000183 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
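The nonanticipating-estimation idea can be illustrated with a toy CUSUM-type scheme for an upward shift in a normal mean, where each log-likelihood-ratio term plugs in a post-change mean estimate computed from earlier observations only. This is a simplified stand-in (the paper treats the harder Gamma shape-parameter case), and the threshold, clipping constant, and reset rule are all illustrative assumptions.

```python
import numpy as np

def nonanticipating_cusum(xs, theta0=0.0, threshold=8.0, theta_min=0.2):
    """CUSUM-type detector for a normal mean shift, where each
    log-likelihood-ratio term uses a post-change mean estimate built
    from earlier observations only (the nonanticipating idea)."""
    stat, n_since, sum_since = 0.0, 0, 0.0
    for t, x in enumerate(xs):
        # Post-change mean estimate from past data only, clipped away
        # from the known pre-change value theta0.
        theta_hat = max(sum_since / n_since, theta_min) if n_since else theta_min
        stat += theta_hat * (x - theta0) - 0.5 * theta_hat**2
        if stat < 0.0:  # CUSUM reset: restart the estimate as well
            stat, n_since, sum_since = 0.0, 0, 0.0
        else:
            n_since += 1
            sum_since += x - theta0
        if stat > threshold:
            return t  # alarm time (index into xs)
    return None
```

Because theta_hat never depends on the current observation, the summands retain a martingale-type property under the in-control law, which is what makes the false-alarm behavior of such schemes tractable without mixture rules.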
New Insights into History Matching via Sequential Monte Carlo
The aim of the history matching method is to locate non-implausible regions
of the parameter space of complex deterministic or stochastic models by
matching model outputs with data. It does this via a series of waves where at
each wave an emulator is fitted to a small number of training samples. An
implausibility measure is defined which takes into account the closeness of
simulated and observed outputs as well as emulator uncertainty. As the waves
progress, the emulator becomes more accurate so that training samples are more
concentrated on promising regions of the space and poorer parts of the space
are rejected with more confidence. Whilst history matching has proved to be
useful, existing implementations are not fully automated: ad hoc choices made
during the process require user intervention and are time consuming. This is
especially the case when the non-implausible region becomes
small and it is difficult to sample this space uniformly to generate new
training points. In this article we develop a sequential Monte Carlo (SMC)
algorithm for implementation which is semi-automated. Our novel SMC approach
reveals that the history matching method yields a non-implausible distribution
that can be multi-modal, highly irregular and very difficult to sample
uniformly. Our SMC approach offers much more reliable sampling of the
non-implausible space, at the cost of additional computation compared with other
approaches used in the literature.
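The implausibility measure at the heart of history matching is simple to state: the distance between the emulator prediction and the observation, scaled by all sources of uncertainty. A minimal sketch, where the variable names and the conventional cutoff of 3 are illustrative:

```python
import numpy as np

def implausibility(em_mean, em_var, z, obs_var, disc_var=0.0):
    """Univariate implausibility: distance from the emulator mean to the
    observation z, scaled by the combined emulator, observation and
    model-discrepancy variances."""
    return np.abs(z - em_mean) / np.sqrt(em_var + obs_var + disc_var)

# Two candidate parameter settings with emulator predictions (mean, var);
# settings below the conventional cutoff of 3 are kept as non-implausible.
I = implausibility(np.array([1.8, 9.0]), np.array([0.5, 0.5]), z=2.0, obs_var=0.5)
non_implausible = I < 3.0
```

Each wave refits the emulator on the surviving region, shrinking the emulator variance in the denominator, so points that were borderline in earlier waves can be rejected with more confidence later.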
Patterns of Scalable Bayesian Inference
Datasets are growing not just in size but in complexity, creating a demand
for rich models and quantification of uncertainty. Bayesian methods are an
excellent fit for this demand, but scaling Bayesian inference is a challenge.
In response to this challenge, there has been considerable recent work based on
varying assumptions about model structure, underlying computational resources,
and the importance of asymptotic correctness. As a result, there is a zoo of
ideas with few clear overarching principles.
In this paper, we seek to identify unifying principles, patterns, and
intuitions for scaling Bayesian inference. We review existing work on utilizing
modern computing resources with both MCMC and variational approximation
techniques. From this taxonomy of ideas, we characterize the general principles
that have proven successful for designing scalable inference procedures and
comment on the path forward.
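One representative pattern from the MCMC side of this literature is stochastic gradient Langevin dynamics, which scales MCMC by replacing the full-data gradient with a minibatch estimate plus injected noise. The sketch below targets the posterior of a Gaussian mean under a flat prior; the step size, batch size, and burn-in choice are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sgld_gaussian_mean(data, n_iters=5000, batch=32, step=1e-4, seed=0):
    """SGLD samples for the mean of N(theta, 1) data with a flat prior."""
    rng = np.random.default_rng(seed)
    n, theta, samples = len(data), 0.0, []
    for _ in range(n_iters):
        mb = rng.choice(data, size=batch, replace=False)
        grad = n * np.mean(mb - theta)  # minibatch estimate of the full gradient
        # Langevin update: half-step gradient move plus Gaussian noise.
        theta += 0.5 * step * grad + np.sqrt(step) * rng.standard_normal()
        samples.append(theta)
    return np.array(samples[n_iters // 2:])  # discard the first half as burn-in
```

Each iteration touches only a minibatch rather than the full dataset, which is the scaling win; the price, as the survey's taxonomy highlights, is a trade-off with asymptotic correctness, since a fixed step size leaves a discretization and minibatch-noise bias in the stationary distribution.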