
    Array-RQMC for option pricing under stochastic volatility models

    Array-RQMC has been proposed as a way to effectively apply randomized quasi-Monte Carlo (RQMC) when simulating a Markov chain over a large number of steps to estimate an expected cost or reward. The method can be very effective when the state of the chain has low dimension. For pricing an Asian option under an ordinary geometric Brownian motion model, for example, Array-RQMC reduces the variance by huge factors. In this paper, we show how to apply this method and we study its effectiveness when the underlying process has stochastic volatility. We show that Array-RQMC can also work very well for these models, even though it requires RQMC points of larger dimension. We examine in particular the variance-gamma, Heston, and Ornstein-Uhlenbeck stochastic volatility models, and we provide numerical results.
    Comment: 12 pages, 2 figures, 3 tables

    Variance Reduction with Array-RQMC for Tau-Leaping Simulation of Stochastic Biological and Chemical Reaction Networks

    We explore the use of Array-RQMC, a randomized quasi-Monte Carlo method designed for the simulation of Markov chains, to reduce the variance when simulating stochastic biological or chemical reaction networks with τ-leaping. The task is to estimate the expectation of a function of molecule copy numbers at a given future time T by the sample average over n sample paths, and the goal is to reduce the variance of this sample-average estimator. We find that when the method is properly applied, variance reductions by factors in the thousands can be obtained. These factors are much larger than those observed previously by other authors who tried RQMC methods for the same examples. Array-RQMC simulates an array of realizations of the Markov chain and requires a sorting function to reorder these chains according to their states after each step. The choice of sorting function is a key ingredient for the efficiency of the method, although in our experiments, Array-RQMC was never worse than ordinary Monte Carlo, regardless of the sorting method. The expected number of reactions of each type per step also has an impact on the efficiency gain.
    Comment: 27 pages, 3 figures, 6 tables. We want to thank the anonymous referees who raised very relevant questions that helped us to improve the paper.
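For readers unfamiliar with τ-leaping itself, the following is a minimal plain Monte Carlo sketch of the baseline the paper improves upon: a single decay reaction S → ∅ with rate c·X, advanced in leaps of size τ by drawing Poisson-distributed reaction counts. The reaction network, rate, and leap size are illustrative assumptions; Array-RQMC would replace the independent Poisson draws with sorted chains driven by RQMC points, which is not shown here.

```python
import numpy as np

def tau_leap_decay(x0=1000, c=1.0, tau=0.01, T=1.0, n_paths=1000, rng=None):
    """Plain MC tau-leaping for the single reaction S -> 0 with
    propensity c*X; returns the sample-average estimate of E[X(T)]."""
    rng = np.random.default_rng(rng)
    x = np.full(n_paths, x0, dtype=np.int64)
    t = 0.0
    while t < T:
        fires = rng.poisson(c * x * tau)  # leap: reaction firings in [t, t+tau)
        x = np.maximum(x - fires, 0)      # copy numbers cannot go negative
        t += tau
    return x.mean()                       # roughly x0 * exp(-c*T) here
```

The variance of this estimator is what the sorted-array RQMC construction attacks; the paper's examples use multi-reaction networks, where the per-step dimension equals the number of reaction channels.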

    Randomized quasi-Monte Carlo methods with applications to quantitative risk management

    We use randomized quasi-Monte Carlo (RQMC) techniques to construct computational tools for working with normal mixture models, which include automatic integration routines for density and distribution function evaluation, as well as fitting algorithms. We also provide open source software with all our methods implemented. In many practical problems, combining RQMC with importance sampling (IS) gives further variance reduction. However, the optimal IS density is typically not known, nor can it be sampled from. We solve this problem in the setting of single index models by finding a near-optimal location-scale transform of the original density that approximates the optimal IS density for the univariate index. Sampling from complicated multivariate models, such as generalized inverse Gaussian mixtures, often involves sampling from a multivariate normal by inversion and from another univariate distribution, say W, whose quantile function is neither known nor easily approximated. We explore how we can still use RQMC in this setting and propose several methods when sampling W is only possible via a black-box random variate generator. We also study different ways to feed acceptance-rejection (AR) algorithms for W with quasi-random numbers. RQMC methods on triangles have recently been developed by K. Basu and A. Owen. We show that one of the proposed sequences has suboptimal projection properties and address this issue by proposing to use their sequence to construct a stratified sampling scheme. Furthermore, we provide an extensible lattice construction for triangles and perform a simulation study.
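The basic RQMC-by-inversion pattern underlying the tools described above can be sketched with SciPy's `qmc` module. This is a generic illustration, not the thesis's software: the scrambled Sobol' generator, the bivariate normal, and the tail event being estimated are all assumptions chosen for brevity.

```python
import numpy as np
from scipy.stats import norm, qmc

def rqmc_tail_prob(n=2**12, seed=1):
    """RQMC sketch: estimate P(Z1 + Z2 > 2) for Z ~ N(0, I_2) by
    inverting a scrambled Sobol' point set through the normal quantile."""
    sob = qmc.Sobol(d=2, scramble=True, seed=seed)
    u = sob.random(n)                 # randomized low-discrepancy points in (0,1)^2
    z = norm.ppf(u)                   # inversion: uniforms -> standard normals
    return np.mean(z.sum(axis=1) > 2.0)
```

This is exactly the step that breaks down when the quantile function of some component (the W of the abstract) is unavailable, which motivates the black-box and acceptance-rejection strategies studied in the thesis.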

    Constructions and applications of quasi-random point sets with negative dependence

    Randomized Quasi-Monte Carlo (RQMC) methods are used as an alternative to the Monte Carlo (MC) method when performing numerical integration, by replacing the random point set of MC with a randomized low-discrepancy sequence (LDS). Although RQMC methods have been shown to have better convergence rates than MC, especially for smooth functions, it does not hold in general that the RQMC method has lower variance than the MC method. Using the framework of negative dependence, a quasi-monotone function integrated using an LDS with the property of negative dependence has been shown to have variance no larger than that of the MC estimator. We show by numerical examples how to use the framework of negative dependence to evaluate the quality of various point sets, including Sobol' and Faure sequences. We show, in a similar vein, how scrambled Halton sequences also have a form of negative dependence that is desirable for the purpose of improving upon the MC method for multivariate integration. The scrambling methods with such properties are based on either the nested uniform permutations of Owen or the random linear scrambling of Matousek. The framework of negative dependence is also used to develop new criteria for assessing the quality of generalized Halton sequences, in such a way that they can be analyzed for finite (potentially small) point set sizes and be compared to digital net constructions. Using this type of criteria, parameters for a new generalized Halton sequence are derived. Numerical results are presented to compare different generalized Halton sequences and their randomizations. Applications of these point sets include mapping them onto surfaces that are not the unit hypercube. K. Basu and A. Owen have recently developed RQMC methods on the triangle based on the van der Corput sequence. We improve upon the poor one-dimensional projections of this deterministic triangular van der Corput sequence. Rather than using scrambling directly to address this issue, we show how to modify the triangular van der Corput sequence to construct a stratified sampling scheme. More precisely, we show that nested scrambling is a way to implement an extensible stratified estimator based on a stochastic but balanced allocation. We also perform a numerical study to compare the different constructions.
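The plain and scrambled Halton sequences discussed above are both available in SciPy, which makes it easy to probe their uniformity empirically. The sketch below compares them with SciPy's centered L2 discrepancy; this is a generic uniformity criterion, not the negative-dependence criteria developed in the thesis, and the dimension and sample size are illustrative assumptions.

```python
from scipy.stats import qmc

def halton_uniformity_demo(n=512, seed=7):
    """Generate n plain and n scrambled 2-D Halton points and return
    their centered L2 discrepancies (lower means more uniform)."""
    plain = qmc.Halton(d=2, scramble=False).random(n)
    scrambled = qmc.Halton(d=2, scramble=True, seed=seed).random(n)
    return qmc.discrepancy(plain), qmc.discrepancy(scrambled)
```

In higher dimensions the plain Halton sequence develops the well-known correlated striping in later coordinates, which is where scrambling (and the generalized constructions of the thesis) earns its keep.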

    A Study of Adaptation Mechanisms for Simulation Algorithms

    The performance of a program can sometimes greatly improve if the features of the input it is supposed to process, the actual operating parameters it is supposed to work with, or the specific environment it is to run on were known in advance. However, this information is typically not available until too late in the program’s operation to take advantage of it. This is especially true for simulation algorithms, which are sensitive to this late-arriving information, and whose role in the solution of decision-making, inference and valuation problems is crucial. To overcome this limitation we need to provide the flexibility for a program to adapt its behaviour to late-arriving information once it becomes available. In this thesis, I study three adaptation mechanisms: run-time code generation, model-specific (quasi) Monte Carlo sampling, and dynamic computation offloading, and evaluate their benefits on Monte Carlo algorithms. First, run-time code generation is studied in the context of Monte Carlo algorithms for time-series filtering, in the form of the Input-Adaptive Kalman filter, a dynamically generated state estimator for non-linear, non-Gaussian dynamic systems. The second adaptation mechanism consists of the application of the functional-ANOVA decomposition to generate model-specific QMC samplers, which can then be used to improve Monte Carlo-based integration. The third adaptive mechanism treated here, dynamic computation offloading, is applied to wireless communication management, where network conditions are assessed via option valuation techniques to determine whether a program should offload computations or carry them out locally in order to achieve higher run-time (and correspondingly battery-usage) efficiency. This ability makes the program well suited for operation in mobile environments. At their core, all these applications carry out or make use of (quasi) Monte Carlo simulations on dynamic Bayesian networks (DBNs). The DBN formalism and its associated simulation-based algorithms are of great value in the solution of problems with a large uncertainty component. This characteristic makes adaptation techniques like those studied here likely to gain relevance in a world where computers are endowed with perception capabilities and are expected to deal with an ever-increasing stream of sensor and time-series data.