Comparing optimal convergence rate of stochastic mesh and least squares method for Bermudan option pricing
We analyze the stochastic mesh method (SMM) and the least squares method (LSM), both commonly
used for pricing Bermudan options within the standard two-phase methodology. For both methods, we
determine the decay rate of the mean square error of the estimator as a function of the computational budget
allocated to the two phases, and we ascertain the order of the optimal allocation between them. We conclude
that, as the computational budget increases, the SMM estimator converges at a slower rate than the
LSM estimator but converges to the true option value, whereas the LSM estimator, with a fixed number of basis
functions, usually converges to a biased value.
On the rates of convergence of simulation based optimization algorithms for optimal stopping problems
In this paper we study simulation-based optimization algorithms for solving
discrete-time optimal stopping problems. Algorithms of this type have become popular
among practitioners working in quantitative finance. Using large
deviation theory for the increments of empirical processes, we derive optimal
convergence rates and show that they cannot be improved in general. The
derived rates provide a guide to the choice of the number of simulated paths needed
in the optimization step, which is crucial for the good performance of any
simulation-based optimization algorithm. Finally, we present a numerical
example of solving an optimal stopping problem arising in option pricing that
illustrates our theoretical findings.
Pricing path-dependent Bermudan options using Wiener chaos expansion: an embarrassingly parallel approach
In this work, we propose a new policy iteration algorithm for pricing
Bermudan options when the payoff process cannot be written as a function of a
lifted Markov process. Our approach is based on a modification of the
well-known Longstaff-Schwartz algorithm, in which we essentially replace the
standard least squares regression by a Wiener chaos expansion. Not only does this
allow us to deal with a non-Markovian setting, but it also breaks the
bottleneck induced by the least squares regression, as the coefficients of the
chaos expansion are given by scalar products on the L^2 space and can therefore
be approximated by independent Monte Carlo computations. This key feature
enables us to provide an embarrassingly parallel algorithm.
Comment: The Journal of Computational Finance, Incisive Media, in press.
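The key observation, that expansion coefficients are scalar products which can be estimated by independent Monte Carlo averages rather than by a joint regression, can be illustrated with a one-dimensional Hermite expansion. This is a toy stand-in for the Wiener chaos of the paper; the function, sample size, and truncation degree below are illustrative choices, not the authors':

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

def chaos_coefficients(f, max_degree, n_samples=200_000, seed=0):
    """Estimate Hermite (chaos) coefficients c_k = E[f(G) He_k(G)] / k!
    for a standard Gaussian G by plain Monte Carlo. Each c_k is a
    separate expectation, so, unlike a least squares regression, the
    estimates share no linear system and can run in parallel."""
    rng = np.random.default_rng(seed)
    g = rng.standard_normal(n_samples)
    fg = f(g)
    coeffs = []
    for k in range(max_degree + 1):
        basis = np.zeros(k + 1)
        basis[k] = 1.0  # select the probabilists' Hermite polynomial He_k
        # E[He_k(G)^2] = k!, hence the normalization below
        coeffs.append(np.mean(fg * hermeval(g, basis)) / factorial(k))
    return np.array(coeffs)

# Example: f(x) = x^2 = He_0(x) + He_2(x), so c ≈ [1, 0, 1, 0]
c = chaos_coefficients(lambda x: x**2, 3)
```

Because each coefficient is an independent average, the loop over `k` (and the sample batches inside each average) can be distributed across workers with no communication, which is the sense in which the algorithm is embarrassingly parallel.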
Number of paths versus number of basis functions in American option pricing
An American option grants the holder the right to select the time at which to
exercise the option, so pricing an American option entails solving an optimal
stopping problem. Difficulties in applying standard numerical methods to
complex pricing problems have motivated the development of techniques that
combine Monte Carlo simulation with dynamic programming. One class of methods
approximates the option value at each time using a linear combination of basis
functions, and combines Monte Carlo with backward induction to estimate optimal
coefficients in each approximation. We analyze the convergence of such a method
as both the number of basis functions and the number of simulated paths
increase. We get explicit results when the basis functions are polynomials and
the underlying process is either Brownian motion or geometric Brownian motion.
We show that the number of paths required for worst-case convergence grows
exponentially in the degree of the approximating polynomials in the case of
Brownian motion, and faster in the case of geometric Brownian motion.
Comment: Published at http://dx.doi.org/10.1214/105051604000000846 in the
Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute
of Mathematical Statistics (http://www.imstat.org).
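The regression-plus-backward-induction scheme this abstract analyzes can be sketched for the simplest case, a Bermudan put under geometric Brownian motion with a polynomial basis (the Longstaff-Schwartz variant of the method). The parameters, polynomial degree, and path count below are illustrative choices, not values from the paper:

```python
import numpy as np

def lsm_bermudan_put(s0=100.0, strike=100.0, r=0.05, sigma=0.2,
                     maturity=1.0, n_steps=10, n_paths=20_000,
                     degree=3, seed=42):
    """Least squares Monte Carlo for a Bermudan put: approximate the
    continuation value at each exercise date by regressing discounted
    future cash flows on polynomials of the asset price."""
    rng = np.random.default_rng(seed)
    dt = maturity / n_steps
    disc = np.exp(-r * dt)
    # Simulate GBM paths, shape (n_paths, n_steps + 1)
    z = rng.standard_normal((n_paths, n_steps))
    log_inc = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    s = s0 * np.exp(np.cumsum(log_inc, axis=1))
    s = np.hstack([np.full((n_paths, 1), s0), s])

    # Backward induction from the maturity payoff
    cash = np.maximum(strike - s[:, -1], 0.0)
    for t in range(n_steps - 1, 0, -1):
        cash *= disc  # discount cash flows back to date t
        payoff = np.maximum(strike - s[:, t], 0.0)
        itm = payoff > 0  # regress only on in-the-money paths
        if itm.sum() > degree:
            coeffs = np.polyfit(s[itm, t], cash[itm], degree)
            cont = np.polyval(coeffs, s[itm, t])
            exercise = payoff[itm] > cont
            idx = np.where(itm)[0][exercise]
            cash[idx] = payoff[itm][exercise]
    return disc * cash.mean()

price = lsm_bermudan_put()
```

The abstract's result concerns exactly the interplay visible here: `degree` fixes the size of the basis, and the number of paths needed for the regression to converge in the worst case grows exponentially in that degree, faster still under geometric Brownian motion.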
The least squares method for option pricing revisited
It is shown that the popular least squares method of option pricing
converges even under very general assumptions. This substantially increases the
freedom to create different implementations of the method, with varying
levels of computational complexity and a flexible approach to regression. It is
also argued that in many practical applications even modest non-linear
extensions of standard regression may produce satisfactory results. This claim
is illustrated with examples.