    Chaoticity for multi-class systems and exchangeability within classes

    Classical results for exchangeable systems of random variables are extended to multi-class systems satisfying a natural partial exchangeability assumption. It is proved that the conditional law of a finite multi-class system, given the value of the vector of the empirical measures of its classes, corresponds to independent uniform orderings of the samples within each class, and that a family of such systems converges in law if and only if the corresponding empirical measure vectors converge in law. As a corollary, convergence within each class to an infinite i.i.d. system implies asymptotic independence between different classes. A result implying the Hewitt-Savage 0-1 Law is also extended.
    Comment: Third revision, v4. The paper is similar to the second revision v3, with several improvements.
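
    As a rough sketch of the setting (the notation below is ours, not the paper's): a system with K classes, class k holding n_k samples, is partially exchangeable in this sense when its law is invariant under permutations acting separately within each class, and the empirical measure vector collects one empirical measure per class,
\[
\big(X^k_i\big)_{1\le k\le K,\;1\le i\le n_k} \;\overset{d}{=}\; \big(X^k_{\sigma_k(i)}\big)_{1\le k\le K,\;1\le i\le n_k}
\quad\text{for all permutations }\sigma_1,\dots,\sigma_K,
\qquad
\mu^k = \frac{1}{n_k}\sum_{i=1}^{n_k}\delta_{X^k_i}.
\]
    In this notation, the first result above states that, conditionally on (\mu^1,\dots,\mu^K), the samples within each class appear in uniformly random order, independently across classes.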

    Iterative Updating of Model Error for Bayesian Inversion

    In computational inverse problems, it is common that a detailed and accurate forward model is approximated by a computationally less challenging substitute. The model reduction may be necessary to meet constraints in computing time when optimization algorithms are used to find a single estimate, or to speed up Markov chain Monte Carlo (MCMC) calculations in the Bayesian framework. The use of an approximate model introduces a discrepancy, or modeling error, that may have a detrimental effect on the solution of the ill-posed inverse problem, or it may severely distort the estimate of the posterior distribution. In the Bayesian paradigm, the modeling error can be considered as a random variable, and by using an estimate of the probability distribution of the unknown, one may estimate the probability distribution of the modeling error and incorporate it into the inversion. We introduce an algorithm which iterates this idea to update the distribution of the model error, leading to a sequence of posterior distributions that are demonstrated empirically to capture the underlying truth with increasing accuracy. Since the algorithm is not based on rejections, it requires only a limited number of full model evaluations. We show analytically that, in the linear Gaussian case, the algorithm converges geometrically fast with respect to the number of iterations. For more general models, we introduce particle approximations of the iteratively generated sequence of distributions; we also prove that each element of the sequence converges in the large particle limit. We show numerically that, as in the linear case, rapid convergence occurs with respect to the number of iterations. Additionally, we show through computed examples that point estimates obtained from this iterative algorithm are superior to those obtained by neglecting the model error.
    Comment: 39 pages, 9 figures.
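
    A minimal sketch of the iterative idea in the linear Gaussian case, written in Python with NumPy. Everything below (the operators A and A_r, dimensions, toy data and the specific update) is our illustration under the stated assumptions, not the paper's code or example:

import numpy as np

rng = np.random.default_rng(0)

# Toy linear Gaussian setup: accurate operator A, cheap reduced operator A_r (illustrative).
n, m = 20, 15
A = rng.normal(size=(m, n))                  # "accurate" forward operator
A_r = A + 0.05 * rng.normal(size=(m, n))     # computationally cheaper substitute
Gamma = 0.01 * np.eye(m)                     # observation noise covariance
C0 = np.eye(n)                               # zero-mean Gaussian prior covariance
x_true = rng.normal(size=n)
y = A @ x_true + rng.multivariate_normal(np.zeros(m), Gamma)

def gaussian_posterior(y_eff, H, noise_cov, prior_cov):
    """Posterior mean and covariance for y_eff = H x + noise, x ~ N(0, prior_cov)."""
    S = H @ prior_cov @ H.T + noise_cov
    K = prior_cov @ H.T @ np.linalg.solve(S, np.eye(len(y_eff)))
    return K @ y_eff, prior_cov - K @ H @ prior_cov

# Start by ignoring the modeling error, then iteratively re-estimate its distribution.
err_mean, err_cov = np.zeros(m), np.zeros((m, m))
for it in range(10):
    # Invert with the reduced model, folding the current error estimate into the
    # effective data and the effective noise covariance.
    mean, cov = gaussian_posterior(y - err_mean, A_r, Gamma + err_cov, C0)
    # Modeling error e = (A - A_r) x; under the current Gaussian posterior for x,
    # its law is Gaussian with the moments below.
    D = A - A_r
    err_mean, err_cov = D @ mean, D @ cov @ D.T
    print(it, np.linalg.norm(mean - x_true))

    Each pass solves the inverse problem with the cheap model and the current modeling-error statistics, then re-estimates those statistics from the new posterior; this back-and-forth is the pattern to which the abstract's geometric-convergence result refers.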

    Sequential Monte Carlo Methods for Option Pricing

    In this paper we provide a review and development of sequential Monte Carlo (SMC) methods for option pricing. SMC methods are a class of Monte Carlo-based algorithms designed to approximate expectations with respect to a sequence of related probability measures. These approaches have been used successfully in a wide range of applications in engineering, statistics, physics and operations research. SMC methods are highly suited to many option pricing problems and sensitivity/Greek calculations due to the sequential nature of the simulation. However, such ideas are seldom used explicitly in the option pricing literature. This article provides an up-to-date review of SMC methods appropriate for option pricing. In addition, it is illustrated how a number of existing approaches to option pricing can be enhanced via SMC. Specifically, when pricing an arithmetic Asian option under a complex stochastic volatility model, it is shown that SMC methods provide additional strategies to improve estimation.
    Comment: 37 pages, 2 figures.
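
    As a hedged illustration of the kind of strategy meant here, the sketch below prices an arithmetic Asian call under a simple Heston-type stochastic volatility model with a basic SMC scheme: particles carry price paths, are weighted at each step by an exponential tilting potential that favours rising prices, are resampled, and the tilting is undone at the end through the accumulated potentials and the estimated normalizing constant. The model, parameters and choice of potential are our own illustrative assumptions, not the scheme developed in the paper:

import numpy as np

rng = np.random.default_rng(1)

# Illustrative Heston-type parameters (ours, not the paper's).
S0, V0 = 100.0, 0.04
r, kappa, theta_v, xi, rho = 0.02, 1.5, 0.04, 0.3, -0.7
K_strike, T, n_steps = 120.0, 1.0, 50
dt = T / n_steps
N = 5000            # number of particles
theta = 0.02        # tilting parameter for the SMC potentials (a tuning choice)

def step(S, V, z1, z2):
    """One Euler step of the stochastic volatility dynamics (full truncation)."""
    Vp = np.maximum(V, 0.0)
    S_new = S * np.exp((r - 0.5 * Vp) * dt + np.sqrt(Vp * dt) * z1)
    V_new = V + kappa * (theta_v - Vp) * dt + xi * np.sqrt(Vp * dt) * (rho * z1 + np.sqrt(1 - rho ** 2) * z2)
    return S_new, V_new

S, V = np.full(N, S0), np.full(N, V0)
run_sum = np.zeros(N)        # running sum of prices for the arithmetic average
log_norm = 0.0               # log of the product of average potentials
cum_log_G = np.zeros(N)      # accumulated log-potentials along each particle's genealogy

for _ in range(n_steps):
    z1, z2 = rng.normal(size=N), rng.normal(size=N)
    S_new, V = step(S, V, z1, z2)
    log_G = theta * (S_new - S)                  # potential tilts particles toward rising prices
    S = S_new
    run_sum += S
    w = np.exp(log_G - log_G.max())
    log_norm += log_G.max() + np.log(w.mean())
    idx = rng.choice(N, size=N, p=w / w.sum())   # multinomial resampling
    S, V, run_sum = S[idx], V[idx], run_sum[idx]
    cum_log_G = cum_log_G[idx] + log_G[idx]

payoff = np.maximum(run_sum / n_steps - K_strike, 0.0)
# Undo the tilting along each genealogy and restore the normalizing constant.
price = np.exp(-r * T) * np.exp(log_norm) * np.mean(payoff * np.exp(-cum_log_G))
print("SMC estimate of the arithmetic Asian call price:", price)

    With theta set to zero the potentials are constant and the scheme collapses to plain Monte Carlo; the tilting is one simple way of steering particles toward the region where the payoff is nonzero, at the cost of tuning the parameter against weight degeneracy.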