
    Sequential Monte Carlo pricing of American-style options under stochastic volatility models

    We introduce a new method to price American-style options on underlying investments governed by stochastic volatility (SV) models. The method does not require the volatility process to be observed. Instead, it exploits the fact that the optimal decision functions in the corresponding dynamic programming problem can be expressed as functions of conditional distributions of volatility, given observed data. By constructing statistics summarizing information about these conditional distributions, one can obtain high-quality approximate solutions. Although the required conditional distributions are in general intractable, they can be approximated to arbitrary precision using sequential Monte Carlo schemes. The drawback, as with many Monte Carlo schemes, is potentially heavy computational demand. We present two variants of the algorithm, one closely related to the well-known least-squares Monte Carlo algorithm of Longstaff and Schwartz [The Review of Financial Studies 14 (2001) 113-147], and the other solving the same problem using a "brute force" gridding approach. We estimate an illustrative SV model using Markov chain Monte Carlo (MCMC) methods for three equities. We also demonstrate the use of our algorithm by estimating the posterior distribution of the market price of volatility risk for each of the three equities. Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics; DOI: http://dx.doi.org/10.1214/09-AOAS286
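The least-squares Monte Carlo regression step that this paper builds on can be sketched as follows. This is a minimal illustration under constant (not stochastic) volatility; the GBM dynamics, strike, path counts, and polynomial basis are all choices made for the example, not values from the paper.

```python
import numpy as np

# Illustrative Bermudan put under geometric Brownian motion (parameters assumed).
rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 20_000
dt = T / n_steps
disc = np.exp(-r * dt)

# Simulate GBM paths.
z = rng.standard_normal((n_paths, n_steps))
log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))

payoff = lambda s: np.maximum(K - s, 0.0)

# Backward induction: at each exercise date, regress the discounted future
# cashflow on basis functions of the spot over in-the-money paths, and
# exercise where the immediate payoff exceeds the estimated continuation value.
cash = payoff(S[:, -1])
for t in range(n_steps - 1, 0, -1):
    cash *= disc
    itm = payoff(S[:, t]) > 0
    if itm.any():
        x = S[itm, t]
        A = np.column_stack([np.ones_like(x), x, x**2])  # quadratic basis
        beta, *_ = np.linalg.lstsq(A, cash[itm], rcond=None)
        continuation = A @ beta
        exercise = payoff(x)
        ex_now = exercise > continuation
        cash[np.flatnonzero(itm)[ex_now]] = exercise[ex_now]

price = disc * cash.mean()
```

With these assumed parameters the estimate lands near the American put value, slightly above the Black-Scholes European price of about 5.57.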

    The vector floor and ceiling model

    This paper motivates and develops a nonlinear extension of the Vector Autoregressive model which we call the Vector Floor and Ceiling model. Bayesian and classical methods for estimation and testing are developed and compared in the context of an application involving U.S. macroeconomic data. In terms of statistical significance, both classical and Bayesian methods indicate that the (Gaussian) linear model is inadequate. Using impulse response functions, we investigate the economic significance of the statistical analysis. We find evidence of strong nonlinearities in the contemporaneous relationships between the variables and milder evidence of nonlinearity in the conditional mean.
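The floor-and-ceiling idea can be illustrated with a univariate analogue: alongside the usual autoregressive term, the regression includes terms that activate only when the lagged series crosses a floor or ceiling level. This is my own one-variable sketch with assumed thresholds and coefficients, not the paper's multivariate specification.

```python
import numpy as np

rng = np.random.default_rng(2)
f, c = -1.0, 1.0  # assumed floor and ceiling thresholds

def regressors(y_lag):
    # Floor/ceiling terms are zero inside the band [f, c] and kick in outside it.
    floor_term = np.minimum(y_lag - f, 0.0)   # active only below the floor
    ceil_term = np.maximum(y_lag - c, 0.0)    # active only above the ceiling
    return np.column_stack([np.ones_like(y_lag), y_lag, floor_term, ceil_term])

# Simulate from such a process, then recover the coefficients by least squares.
n = 5000
true_beta = np.array([0.0, 0.8, -0.5, -0.5])  # excursions are pulled back to the band
y = np.zeros(n)
for t in range(1, n):
    y[t] = (regressors(y[t - 1:t]) @ true_beta)[0] + 0.3 * rng.standard_normal()

X, target = regressors(y[:-1]), y[1:]
beta_hat, *_ = np.linalg.lstsq(X, target, rcond=None)
```

The floor and ceiling coefficients are identified only by the (relatively rare) excursions outside the band, so their estimates are noisier than the plain autoregressive coefficient.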

    Sequential Design for Optimal Stopping Problems

    We propose a new approach to solve optimal stopping problems via simulation. Working within the backward dynamic programming/Snell envelope framework, we augment the methodology of Longstaff-Schwartz that focuses on approximating the stopping strategy. Namely, we introduce adaptive generation of the stochastic grids anchoring the simulated sample paths of the underlying state process. This allows for active learning of the classifiers partitioning the state space into the continuation and stopping regions. To this end, we examine sequential design schemes that adaptively place new design points close to the stopping boundaries. We then discuss dynamic regression algorithms that can implement such recursive estimation and local refinement of the classifiers. The new algorithm is illustrated with a variety of numerical experiments, showing that an order of magnitude savings in terms of design size can be achieved. We also compare with existing benchmarks in the context of pricing multi-dimensional Bermudan options. Comment: 24 pages.
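The core idea of placing new design points near the stopping boundary can be shown in a toy two-stage example: fit a continuation-value surrogate on a coarse uniform design, then add points where the estimated "timing value" |q(x) - h(x)| is smallest. The payoff, the stand-in continuation function, and the design sizes below are my own constructions for illustration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

K = 40.0
h = lambda x: np.maximum(K - x, 0.0)       # immediate exercise payoff
true_q = lambda x: 0.8 * h(x) + 1.0        # stand-in "true" continuation value

def fit_q(x, y, deg=3):
    # Polynomial surrogate for the continuation value.
    return np.polynomial.Polynomial.fit(x, y, deg)

# Stage 1: a coarse uniform design with noisy continuation-value samples.
x1 = rng.uniform(25.0, 55.0, 50)
y1 = true_q(x1) + 0.5 * rng.standard_normal(x1.size)
q1 = fit_q(x1, y1)

# Stage 2: among many candidates, add points where the classification between
# stopping and continuation is most ambiguous (smallest timing value).
cand = rng.uniform(25.0, 55.0, 2000)
timing = np.abs(q1(cand) - h(cand))
x_new = cand[np.argsort(timing)[:50]]      # 50 candidates closest to the boundary
y_new = true_q(x_new) + 0.5 * rng.standard_normal(x_new.size)
q2 = fit_q(np.concatenate([x1, x_new]), np.concatenate([y1, y_new]))

# Estimated stopping region: states where immediate exercise beats continuation.
stop_region = cand[h(cand) > q2(cand)]
```

Relative to a single uniform design of the same total size, the second-stage points concentrate the regression effort where the exercise boundary actually lies.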

    Large eddy simulation of a lifted ethylene flame using a dynamic nonequilibrium model for subfilter scalar variance and dissipation rate

    Accurate prediction of nonpremixed turbulent combustion using large eddy simulation (LES) requires detailed modeling of the mixing between fuel and oxidizer at scales finer than the LES filter resolution. In conserved scalar combustion models, the small-scale mixing process is quantified by two parameters, the subfilter scalar variance and the subfilter scalar dissipation rate. The most commonly used models for these quantities assume a local equilibrium exists between production and dissipation of variance. Such an assumption has limited validity in realistic, technically relevant flow configurations. However, nonequilibrium models for variance and dissipation rate typically contain a model coefficient whose optimal value is unknown a priori for a given simulation. Furthermore, conventional dynamic procedures are not useful for estimating the value of this coefficient. In this work, an alternative dynamic procedure based on the transport equation for subfilter scalar variance is presented, along with a robust conditional averaging approach for evaluation of the model coefficient. This dynamic nonequilibrium modeling approach is used for simulation of a turbulent lifted ethylene flame, previously studied using DNS by Yoo et al. (Proc. Comb. Inst., 2011). The predictions of the new model are compared to those of a static nonequilibrium modeling approach using an assumed model coefficient, as well as those of the equilibrium modeling approach. The equilibrium models are found to systematically underpredict both subfilter scalar variance and dissipation rate. Use of the dynamic procedure is shown to increase the accuracy of the nonequilibrium modeling approach. However, numerical errors that arise as a consequence of grid-based implicit filtering appear to degrade the accuracy of all three modeling options. Thus, while these results confirm the usefulness of the new dynamic model, they also show that the quality of subfilter model predictions depends on several factors extrinsic to the formulation of the subfilter model itself.
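The local-equilibrium closure that the paper uses as a baseline can be sketched in one dimension: the subfilter scalar variance is estimated algebraically from the resolved scalar gradient as var_sf = C_v * Δ² |dZ/dx|². The scalar profile, grid, filter width Δ, and constant C_v below are assumptions for the sketch, not values from the paper.

```python
import numpy as np

# Resolved mixture-fraction profile across an assumed 1-D mixing layer.
x = np.linspace(0.0, 1.0, 256)
dx = x[1] - x[0]
Z = 0.5 * (1.0 + np.tanh((x - 0.5) / 0.05))

delta = 4.0 * dx  # assumed filter width
C_v = 0.1         # assumed equilibrium model constant

# Equilibrium closure: subfilter variance from the resolved gradient.
grad_Z = np.gradient(Z, dx)
var_sf = C_v * delta**2 * grad_Z**2
```

By construction the equilibrium estimate peaks where the resolved gradient is steepest (the center of the mixing layer) and vanishes in the uniform streams; it carries no memory of variance production upstream, which is exactly the limitation the nonequilibrium transport-equation approach addresses.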