176 research outputs found

    Numerical methods for Lévy processes

    We survey the use and limitations of some numerical methods for pricing derivative contracts in multidimensional geometric Lévy models.

    Stochastic time-changed LĂ©vy processes with their implementation

    We focus on the implementation details for Lévy processes and their extension to stochastic volatility models for pricing European vanilla and exotic options. We calibrated five models to European options on the S&P 500 and used the calibrated models to price a cliquet option by Monte Carlo simulation. We provide the algorithms required to value these options under Lévy processes. We found that the models were able to closely reproduce market option prices across many strikes and maturities. We also found that the models produced different prices for the cliquet option even though they all produced the same prices for vanilla options, highlighting the model uncertainty involved in valuing a cliquet option. Further research is required to develop tools to understand and manage this model uncertainty; we recommend proceeding by studying the cliquet option's sensitivity to the model parameters.
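
    As context for the cliquet payoff discussed above, here is a minimal Monte Carlo sketch. It assumes plain geometric Brownian motion as a simple stand-in for the calibrated Lévy and stochastic volatility models of the paper, and all parameter values (caps, floors, rates) are hypothetical.

        import numpy as np

        def cliquet_price_mc(r=0.02, sigma=0.2, T=3.0, n_periods=12,
                             local_floor=0.0, local_cap=0.08,
                             n_paths=100_000, seed=0):
            # Monte Carlo price of a cliquet paying the sum of locally
            # capped/floored period returns, discounted at the risk-free
            # rate r. GBM dynamics are assumed here as a stand-in for the
            # calibrated Levy / stochastic volatility models in the paper.
            rng = np.random.default_rng(seed)
            dt = T / n_periods
            z = rng.standard_normal((n_paths, n_periods))
            # Period returns under the risk-neutral measure.
            period_returns = np.exp((r - 0.5 * sigma**2) * dt
                                    + sigma * np.sqrt(dt) * z) - 1.0
            payoff = np.clip(period_returns, local_floor, local_cap).sum(axis=1)
            disc = np.exp(-r * T)
            return disc * payoff.mean(), disc * payoff.std(ddof=1) / np.sqrt(n_paths)

        price, stderr = cliquet_price_mc()
        print(f"cliquet price ~ {price:.4f} +/- {stderr:.4f}")

    The standard error returned alongside the price is what makes the model-uncertainty comparison meaningful: price differences between models matter only when they exceed the Monte Carlo noise.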

    Stochastic modelling in volatility and its applications in derivatives

    This thesis consists of three articles on modelling stochastic volatility in commodity and equity markets and on applying stochastic volatility models to the valuation of financial derivatives and real options. First, we introduce the general background and the motivation for considering stochastic volatility models. In Chapter 2 we derive tractable analytic solutions for futures and option prices in a linear-quadratic jump-diffusion model with seasonal adjustments in stochastic volatility and convenience yield. We then calibrate our model to data from the fish pool futures market, using the extended Kalman filter with a quasi-maximum likelihood estimator and, alternatively, an implied-state quasi-maximum likelihood estimator. We find no statistical evidence of jumps. However, we do find evidence of a positive correlation between salmon spot prices and volatility, and of seasonality in volatility and convenience yield. In addition, we observe a positive relationship between the seasonal risk premium and uncertainty in EU salmon demand. We further show that our model produces option prices consistent with the observed implied volatility smiles and skews. In Chapter 3, we introduce a linear-quadratic volatility model with co-jumps and show how to calibrate it to a rich dataset. We apply the generalized method of moments (GMM), specifically matching the moments of realized power and multi-power variations obtained from high-frequency stock market data. Our model incorporates two salient features: simultaneous jumps in both the return process and the volatility process, and the superposition of a continuous linear-quadratic volatility process and a Lévy-driven Ornstein-Uhlenbeck process. We compare the quality of fit for several models and show that our model outperforms the conventional jump-diffusion and Bates models. Moreover, we find evidence that the jump sizes are not normally distributed and that our model performs best when the distribution of jump sizes is specified only through certain (co-)moment conditions. A Monte Carlo experiment confirms this. Finally, in Chapter 4, we study optimal stopping problems, in the context of American options under stochastic volatility models and of the optimal fish-harvesting decision under stochastic convenience yield models, in the presence of drift ambiguity. From the perspective of an ambiguity-averse agent, we transform the problem into the solution of a reflected backward stochastic differential equation (RBSDE) and prove the uniform Lipschitz continuity of the generator. We then propose a numerical algorithm based on the theory of RBSDEs and a general stratification technique, and an alternative algorithm that does not use the theory of RBSDEs. We test the accuracy and convergence of the numerical schemes. By comparison with the one-dimensional case, we highlight the importance of the dynamic structure of the agent's worst-case belief. Results also show that the numerical RBSDE algorithm with stratification is more efficient when the optimal generator is attainable.
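
    To illustrate the co-jump feature described in Chapter 3, here is a minimal Euler-scheme sketch. It uses a Bates-style square-root variance process rather than the linear-quadratic specification of the thesis, so it is a simplified illustration only; all parameter names and values are hypothetical.

        import numpy as np

        def simulate_cojump_paths(s0=100.0, v0=0.04, mu=0.0, kappa=2.0,
                                  theta=0.04, xi=0.3, rho=-0.5,
                                  jump_intensity=0.5, jump_mean_ret=-0.05,
                                  jump_std_ret=0.1, jump_mean_var=0.02,
                                  T=1.0, n_steps=252, n_paths=10_000, seed=0):
            # Euler scheme where a single Poisson clock triggers simultaneous
            # (co-)jumps in the return and in the variance, the salient
            # feature the abstract compares against the Bates benchmark.
            rng = np.random.default_rng(seed)
            dt = T / n_steps
            s = np.full(n_paths, np.log(s0))
            v = np.full(n_paths, v0)
            for _ in range(n_steps):
                z1 = rng.standard_normal(n_paths)
                z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
                # Shared jump counter (at most one jump per step, to good
                # approximation, for small dt).
                jumps = rng.poisson(jump_intensity * dt, n_paths)
                ret_jump = jumps * rng.normal(jump_mean_ret, jump_std_ret, n_paths)
                var_jump = jumps * rng.exponential(jump_mean_var, n_paths)
                vp = np.maximum(v, 0.0)  # full truncation for the diffusion part
                s += (mu - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1 + ret_jump
                v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2 + var_jump
            return np.exp(s), np.maximum(v, 0.0)

        prices, variances = simulate_cojump_paths()
        print(f"mean terminal price {prices.mean():.2f}, mean variance {variances.mean():.4f}")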

    Implementation of variance reduction techniques applied to the pricing of investment certificates

    Certificates are structured financial instruments that aim to provide investors with investment solutions tailored to their needs. A certificate can be modeled as a bond component plus a derivative component, typically an options strategy. Certificates are typically priced by Monte Carlo simulation, which projects the underlying using series of random numbers. The resulting estimates carry an error (standard deviation) that depends on the number of simulations used and on the specific characteristics of the structured product. This work aims to minimize that error, and consequently to accelerate convergence, using statistical techniques known in the literature as variance reduction methods. The most popular stochastic dynamics are analyzed: the classical Black-Scholes model, the local volatility model and the Heston model. Three certificates with different payoffs are analyzed in the paper. The variance reduction techniques, implemented in different programming languages (Python, Matlab and R), are: Latin hypercube sampling, stratified sampling, antithetic variables, importance sampling, moment matching and control variates.
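
    As a minimal sketch of one of the listed techniques, antithetic variables, here is a vanilla-call example under Black-Scholes (the simplest of the three dynamics above, rather than one of the paper's certificate payoffs); all parameter values are hypothetical.

        import numpy as np

        def bs_call_mc(s0=100.0, k=100.0, r=0.02, sigma=0.2, T=1.0,
                       n_paths=100_000, antithetic=True, seed=0):
            # Plain vs antithetic-variables Monte Carlo for a European call
            # under Black-Scholes; returns (price, standard error).
            rng = np.random.default_rng(seed)
            z = rng.standard_normal(n_paths)
            drift = (r - 0.5 * sigma**2) * T
            vol = sigma * np.sqrt(T)
            payoff = np.maximum(s0 * np.exp(drift + vol * z) - k, 0.0)
            if antithetic:
                # Reuse each draw with flipped sign and average the paired
                # payoffs; the negative correlation within each pair is
                # what reduces the variance of the estimator.
                payoff_anti = np.maximum(s0 * np.exp(drift - vol * z) - k, 0.0)
                payoff = 0.5 * (payoff + payoff_anti)
            disc = np.exp(-r * T)
            return disc * payoff.mean(), disc * payoff.std(ddof=1) / np.sqrt(n_paths)

        for flag in (False, True):
            p, se = bs_call_mc(antithetic=flag)
            print(f"antithetic={flag}: price={p:.4f}, stderr={se:.4f}")

    Running both branches with the same seed makes the variance reduction directly visible in the reported standard errors.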

    Importance sampling for option pricing with feedforward neural networks

    We study the problem of reducing the variance of Monte Carlo estimators by performing suitable changes of the sampling measure induced by feedforward neural networks. To this end, building on the concept of vector stochastic integration, we characterize the Cameron-Martin spaces of a large class of Gaussian measures induced by vector-valued continuous local martingales with deterministic covariation. We prove that feedforward neural networks enjoy, up to an isometry, the universal approximation property in these topological spaces. We then prove that sampling measures generated by feedforward neural networks can approximate the optimal sampling measure arbitrarily well. We conclude with a comprehensive numerical study pricing path-dependent European options for asset price models that incorporate factors such as changing business activity, knock-out barriers, dynamic correlations, and high-dimensional baskets.
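
    For background, here is a minimal sketch of the classical construction this paper generalizes: importance sampling with a constant drift shift (a one-dimensional Cameron-Martin change of measure), not the neural-network-induced measures of the paper. The out-of-the-money call setting and all parameter values are hypothetical.

        import numpy as np

        def call_price_importance_sampling(s0=100.0, k=150.0, r=0.02, sigma=0.2,
                                           T=1.0, shift=None,
                                           n_paths=100_000, seed=0):
            # Importance sampling for a deep out-of-the-money call: sample
            # under a shifted Gaussian measure and reweight each payoff by
            # the likelihood ratio dP/dQ for N(0,1) vs N(shift,1).
            rng = np.random.default_rng(seed)
            drift = (r - 0.5 * sigma**2) * T
            vol = sigma * np.sqrt(T)
            if shift is None:
                # Heuristic: shift so the mean terminal price lands at the strike.
                shift = (np.log(k / s0) - drift) / vol
            z = rng.standard_normal(n_paths) + shift
            payoff = np.maximum(s0 * np.exp(drift + vol * z) - k, 0.0)
            weights = np.exp(-shift * z + 0.5 * shift**2)
            est = payoff * weights
            disc = np.exp(-r * T)
            return disc * est.mean(), disc * est.std(ddof=1) / np.sqrt(n_paths)

        p, se = call_price_importance_sampling()
        print(f"price={p:.4f}, stderr={se:.4f}")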

    Essays in Statistics

    This thesis comprises several contributions to the field of mathematical statistics, particularly with regard to computational issues in Bayesian statistics and functional data analysis. The first two chapters are concerned with computational Bayesian approaches that allow one to generate samples from an approximation to the posterior distribution in settings where the likelihood function of the statistical model of interest is unknown. This has led to a class of Approximate Bayesian Computation (ABC) methods whose performance depends on the ability to effectively summarize the information content of the data sample by a lower-dimensional vector of summary statistics. Ideally, these statistics are sufficient for the parameter of interest. However, it is difficult to establish sufficiency in a straightforward way if the likelihood of the model is unavailable. In Chapter 1 we propose an indirect approach to selecting sufficient summary statistics for ABC methods that borrows its intuition from the indirect estimation literature in econometrics. More precisely, we introduce an auxiliary statistical model that is large enough to contain the structural model of interest. Summary statistics are then identified in this auxiliary model and mapped to the structural model of interest. We show sufficiency of these statistics for Indirect ABC methods based on parameter estimates (ABC-IP), likelihood functions (ABC-IL) and scores (ABC-IS) of the auxiliary model. A detailed simulation study investigates the performance of each proposal and compares it to a traditional, moment-based ABC approach (a minimal rejection-ABC sketch appears after this abstract). In particular, the ABC-IL and ABC-IS algorithms are shown to perform better than both standard ABC and the ABC-IP method. In Chapter 2 we extend the notion of Indirect ABC methods by proposing an efficient way of weighting the individual entries of the vector of summary statistics obtained from the score-based Indirect ABC approach (ABC-IS). In particular, the weighting matrix is given by the inverse of the asymptotic covariance matrix of the score vector of the auxiliary model and allows us to appropriately assess the distance between the true posterior distribution and the approximation based on the ABC-IS method. We illustrate the performance gain in a simulation study. An empirical application then applies the weighted ABC-IS method to the problem of estimating a continuous-time stochastic volatility model based on non-Gaussian Ornstein-Uhlenbeck processes. We show how a suitable auxiliary model can be constructed and confirm estimation results from concurring Bayesian estimation approaches suggested in the literature. In Chapter 3 we consider the problem of sampling from high-dimensional probability distributions that exhibit multiple, well-separated modes. Such distributions arise frequently, for instance, in the Bayesian estimation of macroeconomic DSGE models. Standard Markov chain Monte Carlo (MCMC) methods, such as the Metropolis-Hastings algorithm, are prone to getting trapped in local neighborhoods of the target distribution, severely limiting the use of these methods in more complex models. We suggest the use of a Sequential Markov Chain Monte Carlo approach to overcome these difficulties and investigate its finite-sample properties. The results show that Sequential MCMC methods clearly outperform standard MCMC approaches in a multimodal setting and can recover both the locations and the mixture weights in a 12-dimensional mixture model.
    Moreover, we provide a detailed comparison of the effects that different choices of tuning parameters have on the approximation to the true sampling distribution. These results can serve as valuable guidelines when applying the method to more complex economic models, such as the (Bayesian) estimation of Dynamic Stochastic General Equilibrium models. Chapters 4 and 5 study the statistical problem of prediction from a functional perspective. In many statistical applications, data are becoming available at ever increasing frequencies, and it has thus become natural to think of discrete observations as realizations of a continuous function, say over the course of one day. However, as functions are, generally speaking, infinite-dimensional objects, the statistical analysis of such functional data is intrinsically different from standard multivariate techniques. In Chapter 4 we consider prediction in functional additive models of first-order autoregressive type for a time series of functional observations. This is a generalization of the functional linear models commonly considered in the literature and has two advantages in a functional time series setting. First, it allows us to introduce a very general notion of time dependence for functional data in this modeling framework. In particular, it is rooted in the correlation structure of the functional principal component scores and even allows for long-memory behavior in the score series across the time dimension. Second, prediction in this modeling framework is straightforward to implement, as it only concerns conditional means of scalar random variables, and we suggest a k-nearest neighbors classification scheme. The theoretical contributions of this paper are twofold. In a first step, we verify the applicability of functional principal components analysis under our notion of time dependence and obtain precise rates of convergence for the mean function and the covariance operator associated with the observed sample of functions. In a second step, we derive precise rates of convergence of the mean squared error of the proposed predictor, taking into account both the effect of truncating the infinite series expansion at some finite integer L and the effect of estimating the covariance operator and associated eigenelements from a sample of N curves. In Chapter 5 we investigate the performance of functional models in a forecasting study of ground-level ozone concentration surfaces over the geographical domain of Germany. Our perspective thus differs from the literature on spatially distributed functional processes (which are considered to be univariate functions of time that show spatial dependence) in that we consider smooth surfaces defined over a spatial domain that are sampled consecutively over time. In particular, we treat discrete observations sampled both over a spatial domain and over time as noisy realizations of a time series of smooth bivariate functions. In a first step we therefore discuss how smooth functions can be reconstructed from such noisy measurements by a finite element spline smoother defined over a triangulation of the spatial domain. In a second step we consider two forecasting approaches for functional time series: the first is a functional linear model of first-order autoregressive type, whereas the second is the non-parametric extension to functional additive models discussed in Chapter 4. Both approaches are applied to predicting ground-level ozone concentrations measured over the spatial domain of Germany and are shown to yield similar predictions.
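
    The rejection-ABC sketch referenced above: a minimal baseline, not the indirect ABC-IP/IL/IS constructions of the thesis. It infers the mean of a Gaussian sample using the sample mean as summary statistic; the toy model, prior and tolerance are all hypothetical choices.

        import numpy as np

        def rejection_abc(data, prior_sampler, simulate, summary, eps,
                          n_draws=50_000, seed=0):
            # Basic rejection ABC: draw theta from the prior, simulate a
            # dataset, and keep theta when the simulated summary lies
            # within eps of the observed summary.
            rng = np.random.default_rng(seed)
            s_obs = summary(data)
            accepted = []
            for _ in range(n_draws):
                theta = prior_sampler(rng)
                s_sim = summary(simulate(theta, rng, len(data)))
                if abs(s_sim - s_obs) < eps:
                    accepted.append(theta)
            return np.array(accepted)

        # Toy example: infer the mean of a N(mu, 1) sample with a N(0, 10)
        # prior; the sample mean is a sufficient summary statistic here.
        rng = np.random.default_rng(1)
        data = rng.normal(2.0, 1.0, size=100)
        post = rejection_abc(
            data,
            prior_sampler=lambda r: r.normal(0.0, 10.0),
            simulate=lambda theta, r, n: r.normal(theta, 1.0, size=n),
            summary=np.mean,
            eps=0.05,
        )
        print(f"accepted {post.size} draws; posterior mean ~ {post.mean():.3f}")

    The low acceptance rate of this baseline is precisely what motivates choosing informative, low-dimensional summary statistics, the problem the indirect approach of Chapters 1 and 2 addresses.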

    Stochastic Transport in Upper Ocean Dynamics

    This open access proceedings volume brings together selected, peer-reviewed contributions presented at the Stochastic Transport in Upper Ocean Dynamics (STUOD) 2021 Workshop, held virtually and in person at Imperial College London, UK, September 20–23, 2021. The STUOD project is supported by an ERC Synergy Grant and led by Imperial College London, the National Institute for Research in Computer Science and Automatic Control (INRIA) and the French Research Institute for Exploitation of the Sea (IFREMER). The project aims to deliver new capabilities for assessing variability and uncertainty in upper ocean dynamics. It will provide decision makers with a means of quantifying the effects of local patterns of sea level rise, heat uptake, carbon storage and change of oxygen content and pH in the ocean. Its multimodal monitoring will enhance the scientific understanding of marine debris transport, tracking of oil spills and accumulation of plastic in the sea. All topics of these proceedings are essential to the scientific foundations of oceanography, which has a vital role in climate science. Studies convened in this volume focus on a range of fundamental areas, including: observations at high resolution of upper ocean properties such as temperature, salinity, topography, wind, waves and velocity; large-scale numerical simulations; data-based stochastic equations for upper ocean dynamics that quantify simulation error; and stochastic data assimilation to reduce uncertainty. These fundamental subjects in modern science and technology are urgently required to meet the challenges of climate change faced today by human society. This proceedings volume represents a lasting legacy of crucial scientific expertise for meeting this ongoing challenge, for the benefit of academics and professionals in pure and applied mathematics, computational science, data analysis, data assimilation and oceanography.

    Inference for stochastic volatility models based on Lévy processes

    The standard Black-Scholes model is a continuous-time model for predicting asset movement. In the standard model the volatility is constant, but the model is frequently generalised to allow for stochastic volatility (SV). As the Black-Scholes model is a continuous-time model, it is attractive to have a continuous-time stochastic volatility model, and recently there has been a lot of research into such models. One of the most popular models was proposed by Barndorff-Nielsen and Shephard (2001b) (BNS), where the volatility follows an Ornstein-Uhlenbeck (OU) equation driven by a background driving Lévy process (BDLP). The correlation in the volatility decays exponentially, so the model is able to explain the volatility clustering present in many financial time series. This model is studied in detail, with assets following the Black-Scholes equation combined with the BNS SV model. Inference for the BNS SV model is not trivial, particularly when Markov chain Monte Carlo (MCMC) is used. This has been implemented in Roberts et al. (2004) and Griffin and Steel (2003), where a Gamma marginal distribution for the volatility is used. Their focus is on the difficult MCMC implementation and the performance of different proposals, mainly using training data generated from the model itself. In this thesis, the four main new contributions to the Black-Scholes equation with volatility following the BNS SV model are as follows. (1) We perform MCMC inference for generalised Inverse Gaussian and Tempered Stable marginal distributions, as well as the special cases: the Gamma, Positive Hyperbolic, Inverse Gamma and Inverse Gaussian distributions. (2) Griffin and Steel (2003) consider the superposition of several BDLPs to give quasi-long-memory in the volatility process. This is computationally problematic, so we instead allow the volatility process to be non-stationary by letting one of the parameters, which controls the correlation in the volatility process, vary over time. This allows the correlation of the volatility to be non-stationary and gives further volatility clustering. (3) The standard Black-Scholes equation is driven by Brownian motion; we consider and implement a generalisation that allows for long memory in the share equation itself (as opposed to the volatility equation), based on an approximation to fractional Brownian motion. (4) We introduce simulation methods and inference for a new class of continuous-time SV models with a more flexible correlation structure than the BNS SV model. For each of (1), (2) and (3), our focus is on the empirical performance of the different models and on whether such generalisations improve prediction of future asset movement. The models are tested using daily foreign exchange rate and share data for various countries and companies.
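
    To illustrate the BNS volatility dynamics described above, here is a minimal sketch simulating the Gamma marginal special case listed under contribution (1), via the standard compound-Poisson representation of the BDLP; all parameter values are hypothetical.

        import numpy as np

        def simulate_gamma_ou(v0=0.04, lam=1.0, nu=2.0, alpha=50.0, T=1.0,
                              n_grid=252, seed=0):
            # Exact simulation of the BNS Gamma-OU variance process
            # dv(t) = -lam * v(t) dt + dz(lam * t), where the BDLP z is
            # compound Poisson with rate nu and Exp(alpha) jump sizes, so
            # that v has a Gamma(nu, alpha) stationary marginal.
            rng = np.random.default_rng(seed)
            # Jump times of z(lam * t) on [0, T] arrive at rate lam * nu.
            n_jumps = rng.poisson(lam * nu * T)
            jump_times = np.sort(rng.uniform(0.0, T, n_jumps))
            jump_sizes = rng.exponential(1.0 / alpha, n_jumps)
            t_grid = np.linspace(0.0, T, n_grid + 1)
            v = np.empty(n_grid + 1)
            for i, t in enumerate(t_grid):
                past = jump_times <= t
                # OU solution: exponentially discounted initial value plus
                # exponentially discounted past jumps.
                v[i] = v0 * np.exp(-lam * t) + np.sum(
                    jump_sizes[past] * np.exp(-lam * (t - jump_times[past])))
            return t_grid, v

        t, v = simulate_gamma_ou()
        print(f"mean variance ~ {v.mean():.4f} (stationary mean nu/alpha = {2.0/50.0:.4f})")

    The exponential discounting of past jumps is what produces the exponentially decaying volatility autocorrelation, and hence the volatility clustering, that the abstract describes.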