
    Efficient pricing of barrier options with the variance-gamma model

    We develop an efficient Monte Carlo algorithm for pricing barrier options under the variance-gamma model [fMAD98a]. After generalizing the double-gamma bridge sampling algorithm of [fAVR03a], we develop conditional bounds on the process paths and exploit these bounds to price barrier options. The algorithm's efficiency stems from sampling the process paths up to a random resolution that is usually much coarser than the original path resolution. We obtain unbiased estimators, including the case of continuous-time monitoring of the barrier crossing. Our numerical examples show large efficiency gains relative to full-dimensional path sampling.
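
    As a point of reference for the method above, the sketch below prices a down-and-out call under the variance-gamma model by plain full-dimensional path sampling, the baseline the paper improves on. It is a minimal illustration, not the paper's bridge-based algorithm; all parameter values are hypothetical and the barrier is monitored only on the discrete grid.

        import numpy as np

        def vg_barrier_price(S0=100.0, K=100.0, B=85.0, r=0.05, T=1.0,
                             sigma=0.2, theta=-0.1, nu=0.3,
                             n_steps=252, n_paths=100_000, seed=42):
            """Down-and-out call under the VG model, plain full-path Monte Carlo."""
            rng = np.random.default_rng(seed)
            dt = T / n_steps
            # Martingale correction so that E[S_T] = S0 * exp(r*T).
            omega = np.log(1.0 - theta * nu - 0.5 * sigma**2 * nu) / nu
            # Gamma time-change increments: mean dt, variance nu*dt.
            dG = rng.gamma(shape=dt / nu, scale=nu, size=(n_paths, n_steps))
            # VG increment = Brownian motion with drift evaluated at the gamma clock.
            dX = theta * dG + sigma * np.sqrt(dG) * rng.standard_normal((n_paths, n_steps))
            logS = np.log(S0) + np.cumsum((r + omega) * dt + dX, axis=1)
            alive = logS.min(axis=1) > np.log(B)    # discrete barrier monitoring
            payoff = np.where(alive, np.maximum(np.exp(logS[:, -1]) - K, 0.0), 0.0)
            return np.exp(-r * T) * payoff.mean()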

    Importance sampling for multimodal functions and application to pricing exotic options.

    We consider importance sampling (IS) to increase the efficiency of Monte Carlo integration, especially for pricing exotic options where the random input is multivariate Normal. When the importance function (the product of integrand and original density) is multimodal, determining a good IS density is a difficult task. We propose an Automated Importance Sampling DEnsity selection procedure (AISDE). AISDE selects an IS density as a mixture of multivariate Normal densities with modes at certain local maxima of the importance function. When the simulation input is multivariate Normal, we use principal component analysis to obtain a reduced-dimension, approximate importance function, which allows AISDE to identify a good IS density efficiently in original problem dimensions exceeding 100. We present Monte Carlo experimental results on randomly generated option-pricing problems (including path-dependent options), demonstrating large and consistent efficiency improvements.
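
    The following toy sketch shows the general idea of importance sampling with a mixture-of-Normals density for a multimodal importance function, in one dimension. It is an illustration of the technique only, not the AISDE procedure; the integrand, mode locations, and mixture weights are all invented for the example.

        import numpy as np
        from scipy.stats import norm

        def is_estimate(n=100_000, seed=0):
            """Estimate E[h(Z)], Z ~ N(0,1), with a two-mode mixture IS density."""
            rng = np.random.default_rng(seed)
            h = lambda z: np.exp(-(z - 3.0)**2) + np.exp(-(z + 3.0)**2)  # bimodal
            # IS density: equal-weight Normal mixture centered near the two modes
            # of the importance function h(z) * phi(z).
            means, weights = np.array([-2.5, 2.5]), np.array([0.5, 0.5])
            comp = rng.choice(2, size=n, p=weights)
            z = rng.normal(means[comp], 1.0)
            q = weights @ np.array([norm.pdf(z, m, 1.0) for m in means])
            w = norm.pdf(z) / q                     # likelihood ratio
            return np.mean(h(z) * w)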

    Efficient simulation of gamma and variance-gamma processes

    We study algorithms for sampling discrete-time paths of a gamma process and a variance-gamma process, the latter defined as a Brownian motion with a random time change given by a gamma process. The attractive feature of the algorithms is that increments of the processes over longer time scales are assigned to the first sampling coordinates. The algorithms are based on the availability of the processes' conditional distributions in explicit form, are similar in spirit to the Brownian bridge sampling algorithms proposed for financial Monte Carlo, and synergize with quasi-Monte Carlo techniques for efficiency improvement. We compare the variance and efficiency of ordinary Monte Carlo and quasi-Monte Carlo for an example of financial option pricing with the variance-gamma model, taken from [fMAD98a].
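
    A minimal sketch of the bridge idea for the gamma process: conditional on the endpoint values, the mid-point increment of a gamma process is a Beta-distributed fraction of the total increment, so a path can be filled in coarse-to-fine on a dyadic grid, assigning the long-time-scale increments to the first sampling coordinates. This is a plain gamma-bridge illustration with hypothetical parameter values, not the paper's full variance-gamma construction, which also handles the Brownian part.

        import numpy as np

        def gamma_bridge_path(T=1.0, a=2.0, b=1.0, levels=8, seed=0):
            """Gamma process (shape rate a, rate b) on a dyadic grid, coarse to fine."""
            rng = np.random.default_rng(seed)
            n = 2**levels
            t = np.linspace(0.0, T, n + 1)
            G = np.empty(n + 1)
            G[0] = 0.0
            G[n] = rng.gamma(a * T, 1.0 / b)        # terminal increment sampled first
            step = n
            while step > 1:                          # fill mid-points level by level
                half = step // 2
                for i in range(0, n, step):
                    # Given the endpoints, the mid-point splits the increment by a
                    # Beta(a*dt_left, a*dt_right) fraction.
                    y = rng.beta(a * (t[i + half] - t[i]), a * (t[i + step] - t[i + half]))
                    G[i + half] = G[i] + (G[i + step] - G[i]) * y
                step = half
            return t, G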

    Learning in revenue management: Exploiting estimation of arrival rate and price response

    The paper first studies dynamic pricing to maximize the expected revenue of a fixed inventory of a single product under Poisson arrivals of random rate, given a Bayesian prior and a known distribution of reservation prices. For a single unit having a salvage value, we show there exists a unique revenue-maximizing price, which increases in the salvage value, provided the reservation-price hazard function is increasing. For multiple units, a discrete-time dynamic program is studied. Empirically, the optimal price increases in uncertainty and is sensitive to the choice of prior. The paper then considers a seller that knows no parameter values; all he knows is that sales arise from Poisson arrivals, where a Bernoulli random variable, independent of everything else, converts any arrival into a sale. This can represent any demand function as in [ymGAL94a] and [ymBES09a], but additional independence conditions are present here. Observing arrivals and sales at each price during part of the sale horizon, we construct estimators of the arrival rate and purchase probabilities; we refer to this process as learning. We derive the bias and mean squared error of the resulting demand-function estimator. Relative to the sale-count-only estimator of [ymBES12a], the summed mean squared error (across all prices) is consistently reduced, empirically. Exploitation methods based on these estimators are proposed, where the time spent learning is as [ymBES12a] prescribes. Empirically, the methods' loss against the full-information optimum is competitive with the benchmark of [ymBES12a].
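
    A minimal sketch of the estimators described above, under the stated model (Poisson arrivals thinned by an independent Bernoulli purchase decision at each posted price). The data layout and function name are hypothetical, not taken from the paper.

        import numpy as np

        def estimate_demand(arrivals, sales, exposure):
            """arrivals[i], sales[i]: counts observed while price i was posted for
            exposure[i] time units; returns arrival-rate and demand estimates."""
            arrivals, sales, exposure = map(np.asarray, (arrivals, sales, exposure))
            lam_hat = arrivals.sum() / exposure.sum()          # pooled Poisson rate
            p_hat = np.divide(sales, arrivals,
                              out=np.zeros(len(sales), dtype=float),
                              where=arrivals > 0)              # purchase probability
            return lam_hat, p_hat, lam_hat * p_hat             # demand rate per price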

    Integrated variance reduction strategies for simulation

    We develop strategies for integrated use of certain well-known variance reduction techniques to estimate a mean response in a finite-horizon simulation experiment. The building blocks for these integrated variance reduction strategies are the techniques of conditional expectation, correlation induction (including antithetic variates and Latin hypercube sampling), and control variates; all pairings of these techniques are examined. For each integrated strategy, we establish sufficient conditions under which that strategy will yield a smaller response variance than its constituent variance reduction techniques will yield individually. We also provide asymptotic variance comparisons between many of the methods discussed, with emphasis on integrated strategies that incorporate Latin hypercube sampling. An experimental performance evaluation reveals that in the simulation of stochastic activity networks, substantial variance reductions can be achieved with these integrated strategies. Both the theoretical and experimental results indicate that superior performance is obtained via joint application of the techniques of conditional expectation and Latin hypercube sampling.
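
    To make the Latin hypercube sampling ingredient concrete, the sketch below estimates the mean duration of a toy stochastic activity network with plain Monte Carlo inputs and with LHS inputs. The network (two paths of two exponential activities each) and all parameter values are invented for illustration; the paper's integrated strategies layer conditional expectation and control variates on top of this.

        import numpy as np

        def lhs_uniform(n, d, rng):
            """Latin hypercube sample: one point in each of n strata per coordinate."""
            perms = np.column_stack([rng.permutation(n) for _ in range(d)])
            return (perms + rng.random((n, d))) / n

        def san_duration(u, means=(4.0, 3.0, 5.0, 2.0)):
            """Toy activity network: duration = longest of two serial paths."""
            x = -np.log(1.0 - u) * np.asarray(means)   # inverse-CDF exponentials
            return np.maximum(x[:, 0] + x[:, 1], x[:, 2] + x[:, 3])

        rng = np.random.default_rng(1)
        n, d = 10_000, 4
        plain = san_duration(rng.random((n, d))).mean()
        strat = san_duration(lhs_uniform(n, d, rng)).mean()
        # Across independent replications, the LHS estimate of the mean duration
        # fluctuates less than the plain Monte Carlo estimate.
        print(plain, strat)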

    Control of a batch-processing machine: a computational approach

    Batch processing machines, where a number of jobs are processed simultaneously as a batch, occur frequently in semiconductor manufacturing environments, particularly at diffusion in wafer fabrication and at burn-in in final test. In this paper we consider a batch-processing machine subject to uncertain (Poisson) job arrivals. Two different cases are studied: (1) the processing times of batches are independent and identically distributed (IID), corresponding to a diffusion tube; and (2) the processing time of each batch is the maximum of the processing times of its constituent jobs, where the processing times of jobs are IID, modelling a burn-in oven. We develop computational procedures to minimize the expected long-run-average number of jobs in the system under a particular family of control policies. The control policies considered are threshold policies, where processing of a batch is initiated once a certain number of jobs have accumulated in the system. We present numerical examples of our methods and verify their accuracy using simulation.
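
    A minimal discrete-event sketch of case (1) above: Poisson arrivals to a batch machine with IID batch processing times, operated under a threshold policy, with the long-run-average number in system estimated as a time average. This is a simulation of the policy, not the paper's computational optimization procedure; the parameter values and the exponential processing-time choice are hypothetical.

        import numpy as np

        def avg_jobs(lam=0.8, cap=10, threshold=4, horizon=200_000.0, seed=0):
            """Time-average number in system under a threshold policy (case 1)."""
            rng = np.random.default_rng(seed)
            t, queue, in_proc, area = 0.0, 0, 0, 0.0
            t_arr = rng.exponential(1.0 / lam)       # next arrival epoch
            t_done = np.inf                          # next batch completion epoch
            while t < horizon:
                t_next = min(t_arr, t_done, horizon)
                area += (queue + in_proc) * (t_next - t)
                t = t_next
                if t == t_done:                      # batch completes
                    in_proc, t_done = 0, np.inf
                elif t == t_arr:                     # job arrives
                    queue += 1
                    t_arr = t + rng.exponential(1.0 / lam)
                if in_proc == 0 and queue >= threshold:
                    in_proc = min(queue, cap)        # load a batch and start it
                    queue -= in_proc
                    t_done = t + rng.exponential(5.0)  # IID batch processing time
            return area / t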