
    Two adaptive rejection sampling schemes for probability density functions with log-convex tails

    Monte Carlo methods are often necessary for the implementation of optimal Bayesian estimators. A fundamental technique that can be used to generate samples from virtually any target probability distribution is the so-called rejection sampling method, which generates candidate samples from a proposal distribution and then accepts or rejects them by testing the ratio of the target and proposal densities. The class of adaptive rejection sampling (ARS) algorithms is particularly interesting because they can achieve high acceptance rates. However, the standard ARS method can only be used with log-concave target densities. For this reason, many generalizations have been proposed. In this work, we investigate two different adaptive schemes that can be used to draw exactly from a large family of univariate probability density functions (pdfs), not necessarily log-concave, possibly multimodal and with tails of arbitrary concavity. These techniques are adaptive in the sense that every time a candidate sample is rejected, the acceptance rate is improved. The two proposed algorithms can work properly when the target pdf is multimodal, with first and second derivatives analytically intractable, and when the tails are log-convex in an infinite domain. Therefore, they can be applied in a number of scenarios in which other generalizations of the standard ARS fail. Two illustrative numerical examples are shown.
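    The accept/reject step that all of these schemes build on is easy to state in code. Below is a minimal Python sketch of plain (non-adaptive) rejection sampling, not the paper's ARS algorithms; the bimodal target, Gaussian proposal, and bound M are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def rejection_sample(target, draw_proposal, proposal_pdf, M, n):
    """Plain rejection sampling: accept a proposal draw x with
    probability target(x) / (M * proposal_pdf(x)), which requires
    M * proposal_pdf(x) >= target(x) for all x."""
    out = []
    while len(out) < n:
        x = draw_proposal()
        if rng.uniform() * M * proposal_pdf(x) <= target(x):
            out.append(x)
    return np.array(out)

# Illustrative bimodal (non-log-concave) target under a wide Gaussian proposal.
target = lambda x: np.exp(-0.5 * (x - 2.0) ** 2) + np.exp(-0.5 * (x + 2.0) ** 2)
prop_pdf = lambda x: np.exp(-0.5 * (x / 3.0) ** 2) / (3.0 * np.sqrt(2.0 * np.pi))
draws = rejection_sample(target, lambda: rng.normal(0.0, 3.0), prop_pdf, M=16.0, n=1000)
```

    Adaptive schemes improve on this by refining the proposal after each rejection, so the acceptance rate approaches one as sampling proceeds.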

    Methods for generating variates from probability distributions

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Diverse probabilistic results are used in the design of random univariate generators. General methods based on these are classified and relevant theoretical properties derived. This is followed by a comparative review of specific algorithms currently available for continuous and discrete univariate distributions. A need for a Zeta generator is established, and two new methods, based on inversion and on rejection with a truncated Pareto envelope respectively, are developed and compared. The paucity of algorithms for multivariate generation motivates a classification of general methods and, in particular, a new method involving envelope rejection with a novel target distribution is proposed. A new method for generating first passage times in a Wiener process is constructed. This is based on the ratio of two random numbers, and its performance is compared to an existing method for generating inverse Gaussian variates. New "hybrid" algorithms for the Poisson and Negative Binomial distributions are constructed, using an Alias implementation together with a Geometric tail procedure. These are shown to be robust, exact and fast for a wide range of parameter values. Significant modifications are made to Atkinson's Poisson generator (PA), and the resulting algorithm is shown to be complementary to the hybrid method. A new method for Von Mises generation via a comparison of random numbers follows, and its performance is compared to that of Best and Fisher's Wrapped Cauchy rejection method. Finally, new methods are proposed for sampling from distribution tails, using optimally designed Exponential envelopes. Timings are given for Gamma and Normal tails, and in the latter case the performance is shown to be significantly better than Marsaglia's tail generation procedure.
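    The Normal-tail idea, rejection from an exponential envelope fitted to the tail, can be illustrated with the standard construction that takes envelope rate λ = (a + √(a² + 4))/2 (often attributed to Robert, 1995). This is a generic sketch of the technique, not the thesis's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def normal_tail(a, n):
    """Sample a standard normal conditioned on X >= a by rejection
    from a shifted exponential envelope with optimally chosen rate."""
    lam = 0.5 * (a + np.sqrt(a * a + 4.0))  # rate maximising acceptance
    out = np.empty(n)
    filled = 0
    while filled < n:
        x = a + rng.exponential(1.0 / lam, size=n - filled)
        u = rng.uniform(size=n - filled)
        accept = u <= np.exp(-0.5 * (x - lam) ** 2)  # exact accept probability
        k = accept.sum()
        out[filled:filled + k] = x[accept]
        filled += k
    return out

samples = normal_tail(a=3.0, n=10_000)  # all draws lie beyond 3 sigma
```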

    Simulation of asset prices using Lévy processes

    This dissertation focuses on a Lévy process driven framework for the pricing of financial instruments. Its main focus is not, however, to price these instruments; the main focus is simulation based. Simulation is a key issue in Monte Carlo pricing and risk-neutral valuation: it is the first step towards pricing and therefore must be done accurately and with care. This dissertation looks at different kinds of Lévy processes and the various approaches one can take when simulating them.
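    As one concrete example of such approaches, a variance gamma process (a popular Lévy model for log-returns) can be generated by subordinating an arithmetic Brownian motion to a gamma clock. The sketch below is a generic illustration, not the dissertation's code; sigma, theta and nu are illustrative parameter names.

```python
import numpy as np

rng = np.random.default_rng(2)

def variance_gamma_path(T, n_steps, sigma=0.2, theta=-0.1, nu=0.2):
    """Variance gamma path on [0, T] via Brownian subordination:
    X_t = theta * G_t + sigma * W(G_t) with G a gamma time change."""
    dt = T / n_steps
    # Gamma subordinator increments with mean dt and variance nu * dt.
    dG = rng.gamma(shape=dt / nu, scale=nu, size=n_steps)
    dX = theta * dG + sigma * np.sqrt(dG) * rng.normal(size=n_steps)
    return np.concatenate([[0.0], np.cumsum(dX)])

path = variance_gamma_path(T=1.0, n_steps=252)
# An asset price path would then follow as S_t = S_0 * exp(drift * t + X_t).
```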

    Complex systems in finance: Monte Carlo evaluation of first passage time density functions

    Many examples of complex systems are provided by applications in the finance and economics areas. Some of the intrinsic features of such systems lie in the fact that their parts interact in a non-trivial dynamic manner and can be subject to stochastic forces and jumps. The mathematical models for such systems are often based on stochastic differential equations, and efficient computational tools are required to solve them. Here, using an example from the credit risk analysis of multiple correlated firms, we develop a fast Monte Carlo type procedure for the analysis of complex systems such as those occurring in the financial market. Our procedure is developed by combining the fast Monte Carlo method for one-dimensional jump-diffusion processes with the generation of correlated multidimensional variates. As we demonstrate on the evaluation of first passage time density functions in credit risk analysis, this allows us to efficiently analyze multivariate and correlated jump-diffusion processes.
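    The two ingredients, one-dimensional jump-diffusion paths and correlated multidimensional variates, can be made concrete with a brute-force Euler sketch. The paper's point is a fast method that avoids fine time-stepping; the illustration below (hypothetical parameters, Gaussian jump sizes, a constant barrier) only shows how Cholesky-correlated drivers and component-wise jumps combine to yield first passage times.

```python
import numpy as np

rng = np.random.default_rng(3)

def first_passage_times(corr, T=5.0, dt=1e-3, mu=0.0, sigma=0.2,
                        jump_rate=0.5, jump_scale=0.1, barrier=-1.0):
    """Euler scheme for correlated jump-diffusions; returns the first
    time each component falls below `barrier` (np.inf if it never does)."""
    d = corr.shape[0]
    L = np.linalg.cholesky(corr)          # correlates the Gaussian drivers
    n = int(T / dt)
    x = np.zeros(d)
    tau = np.full(d, np.inf)
    for i in range(1, n + 1):
        dw = (L @ rng.normal(size=d)) * np.sqrt(dt)
        # For small dt at most one jump per step is likely, so
        # count * size is a reasonable compound-Poisson approximation.
        jumps = rng.normal(0.0, jump_scale, d) * rng.poisson(jump_rate * dt, d)
        x += mu * dt + sigma * dw + jumps
        newly = (x <= barrier) & np.isinf(tau)
        tau[newly] = i * dt
    return tau

corr = np.array([[1.0, 0.6], [0.6, 1.0]])
# Repeating this many times gives a Monte Carlo estimate of the density.
taus = np.array([first_passage_times(corr) for _ in range(200)])
```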

    From phenomenological modelling of anomalous diffusion through continuous-time random walks and fractional calculus to correlation analysis of complex systems

    This document contains more than one topic, but they are all connected by physical analogy, analytic/numerical resemblance, or because one is a building block of another. The topics are anomalous diffusion, modelling of stylised facts based on an empirical random walker diffusion model, and null-hypothesis tests in time series data analysis reusing the same diffusion model. These topics are interrupted by an introduction of new methods for the fast production of random numbers and matrices of certain types. This interruption constitutes the entire chapter on random numbers, which is purely algorithmic and was inspired by the need for fast random numbers of special types. The sequence of chapters is chronologically meaningful in the sense that fast random numbers are needed in the first topic dealing with continuous-time random walks (CTRWs) and their connection to fractional diffusion. The contents of the last four chapters were indeed produced in this sequence, but with some temporal overlap.

    While the fast Monte Carlo solution of the time and space fractional diffusion equation is a nice application that sped up hugely with our new method, we were also interested in CTRWs as a model for certain stylised facts. Without knowing it, economists [80] reinvented what physicists had subconsciously used for decades already: the so-called stylised fact, for which another word can be empirical truth. A simple example: the diffusion equation gives the probability of finding a certain diffusive particle in some position at a certain time, or indicates the concentration of a dye. It is debatable whether probability is physical reality. Most importantly, it does not describe the physical system completely. Instead, the equation describes only a certain expectation value of interest, where it does not matter whether it is grains, prices or people which diffuse away. Reality is coded and "averaged" in the diffusion constant. Interpreting a CTRW as an abstract microscopic particle motion model, it can solve the time and space fractional diffusion equation. This type of diffusion equation mimics some types of anomalous diffusion, a name usually given to effects that cannot be explained by classic stochastic models, in particular not by the classic diffusion equation. It was recognised only relatively recently, around the mid-1990s, that the random walk model used here is the abstract particle-based counterpart of the macroscopic time- and space-fractional diffusion equation, just like the "classic" random walk with regular jumps ±∆x solves the classic diffusion equation. Both equations can be solved in a Monte Carlo fashion with many realisations of walks. Interpreting the CTRW as a time series model, it can serve as a possible null-hypothesis scenario in applications with measurements that behave similarly. It may be necessary to simulate many null-hypothesis realisations of the system to give a (probabilistic) answer to what the "outcome" is under the assumption that the particles, stocks, etc. are not correlated.

    Another topic is (random) correlation matrices. These are partly built on the previously introduced continuous-time random walks and are important in null-hypothesis testing, data analysis and filtering. The main objects encountered in dealing with these matrices are eigenvalues and eigenvectors. The latter are carried over to the following topic of mode analysis and application in clustering. The presented properties of correlation matrices of correlated measurements seem to be wasted in contemporary methods of clustering with (dis-)similarity measures from time series. Most applications of spectral clustering ignore this information and are not able to distinguish between certain cases. The suggested procedure is supposed to identify and separate out clusters by using additional information coded in the eigenvectors. In addition, random matrix theory can also serve to analyse microarray data for the extraction of functional genetic groups, and it also suggests an error model. Finally, the last topic on synchronisation analysis of electroencephalogram (EEG) data resurrects the eigenvalues and eigenvectors as well as the mode analysis, but this time of matrices made of synchronisation coefficients of neurological activity.
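    The particle picture behind the CTRW solution is straightforward to simulate. The sketch below is a generic illustration with illustrative exponents, not the thesis's accelerated method: Pareto waiting times give the time-fractional part, symmetric stable jumps drawn via the Chambers-Mallows-Stuck formula give the space-fractional part, and a histogram of final positions approximates the fractional propagator.

```python
import numpy as np

rng = np.random.default_rng(4)

def ctrw_positions(t_max, n_walkers, alpha=0.8, beta=1.5):
    """CTRW with Pareto(alpha) waiting times and symmetric beta-stable
    jumps; positions at t_max approximate the solution of the
    corresponding time- and space-fractional diffusion equation."""
    pos = np.zeros(n_walkers)
    for i in range(n_walkers):
        t, x = 0.0, 0.0
        while True:
            t += rng.uniform() ** (-1.0 / alpha)  # Pareto: P(W > w) = w**-alpha, w >= 1
            if t > t_max:
                break
            # Chambers-Mallows-Stuck draw of a symmetric beta-stable jump.
            v = rng.uniform(-np.pi / 2, np.pi / 2)
            e = rng.exponential()
            x += (np.sin(beta * v) / np.cos(v) ** (1.0 / beta)
                  * (np.cos((1.0 - beta) * v) / e) ** ((1.0 - beta) / beta))
        pos[i] = x
    return pos

cloud = ctrw_positions(t_max=100.0, n_walkers=5000)
# A histogram of `cloud` approximates the fractional diffusion propagator.
```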

    Living on the Edge: An Unified Approach to Antithetic Sampling

    We identify recurrent ingredients in the antithetic sampling literature, leading to a unified sampling framework. We introduce a new class of antithetic schemes that includes the most used antithetic proposals. This perspective enables the derivation of new properties of the sampling schemes: i) optimality in the Kullback-Leibler sense; ii) closed-form multivariate Kendall's τ and Spearman's ρ; iii) ranking in concordance order; and iv) a central limit theorem that characterizes the stochastic behaviour of Monte Carlo estimators when the sample size tends to infinity. The proposed simulation framework inherits the simplicity of the standard antithetic sampling method, requiring the definition of a set of reference points in the sampling space and the generation of uniform numbers on the segments joining the points. We provide applications to Monte Carlo integration and Markov chain Monte Carlo Bayesian estimation.
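    The simplest member of this class is the textbook antithetic pair on [0, 1]: reference points 0 and 1, with each uniform draw reflected across the midpoint of the segment joining them. A minimal Python illustration of that base case (not the paper's general scheme):

```python
import numpy as np

rng = np.random.default_rng(5)

def antithetic_estimate(f, n):
    """Monte Carlo integral of f over [0, 1] using antithetic pairs
    (U, 1 - U); the negative correlation between f(U) and f(1 - U)
    reduces variance whenever f is monotone."""
    u = rng.uniform(size=n)
    return 0.5 * np.mean(f(u) + f(1.0 - u))

f = np.exp                                   # true integral: e - 1
plain = np.mean(f(rng.uniform(size=2000)))   # i.i.d. estimator
anti = antithetic_estimate(f, 1000)          # same budget, lower variance
```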

    Inference for stochastic volatility models based on Lévy processes

    The standard Black-Scholes model is a continuous time model to predict asset movement. For the standard model, the volatility is constant, but frequently this model is generalised to allow for stochastic volatility (SV). As the Black-Scholes model is a continuous time model, it is attractive to have a continuous time stochastic volatility model, and recently there has been a lot of research into such models. One of the most popular models was proposed by Barndorff-Nielsen and Shephard (2001b) (BNS), where the volatility follows an Ornstein-Uhlenbeck (OU) equation and is driven by a background driving Lévy process (BDLP). The correlation in the volatility decays exponentially, and so the model is able to explain the volatility clustering present in many financial time series. This model is studied in detail, with assets following the Black-Scholes equation with the BNS SV model. Inference for the BNS SV models is not trivial, particularly when Markov chain Monte Carlo (MCMC) is used. This has been implemented in Roberts et al. (2004) and Griffin and Steel (2003), where a Gamma marginal distribution for the volatility is used. Their focus is on the difficult MCMC implementation and the performance of different proposals, mainly using training data generated from the model itself. In this thesis, the four main new contributions to the Black-Scholes equation with volatility following the BNS SV model are as follows: (1) We perform the MCMC inference for generalised Inverse Gaussian and Tempered Stable marginal distributions, as well as the special cases: the Gamma, Positive Hyperbolic, Inverse Gamma and Inverse Gaussian distributions. (2) Griffin and Steel (2003) consider the superposition of several BDLPs to give quasi long-memory in the volatility process. This is computationally problematic, and so we allow the volatility process to be non-stationary by allowing one of the parameters, which controls the correlation in the volatility process, to vary over time. This allows the correlation of the volatility to be non-stationary and gives further volatility clustering. (3) The standard Black-Scholes equation is driven by Brownian motion, and a generalisation allowing for long-memory in the share equation itself (as opposed to the volatility equation), based on an approximation to fractional Brownian motion, is considered and implemented. (4) We introduce simulation methods and inference for a new class of continuous time SV models, with a more flexible correlation structure than the BNS SV model. For each of (1), (2) and (3), our focus is on the empirical performance of different models and whether such generalisations improve prediction of future asset movement. The models are tested using daily foreign exchange rate and share data for various countries and companies.
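    To make the Gamma-marginal setting of contribution (1) concrete: in the Gamma-OU special case of the BNS volatility, the OU equation decays deterministically between the jumps of a compound Poisson BDLP, so the process can be simulated exactly on a grid. The sketch below is a generic illustration under assumed parameter names (lam, nu, alpha), not the thesis's code.

```python
import numpy as np

rng = np.random.default_rng(6)

def gamma_ou_vol(T, n_steps, lam=1.0, nu=2.0, alpha=10.0):
    """Gamma-OU volatility path: sigma2 decays at rate lam between the
    jumps of a compound Poisson BDLP (rate lam*nu, Exp(alpha) sizes),
    giving a Gamma(nu, alpha) stationary marginal distribution."""
    dt = T / n_steps
    sigma2 = np.empty(n_steps + 1)
    sigma2[0] = rng.gamma(nu, 1.0 / alpha)     # start in stationarity
    for i in range(n_steps):
        k = rng.poisson(lam * nu * dt)         # jumps in this interval
        s = rng.uniform(0.0, dt, size=k)       # jump times within it
        jumps = rng.exponential(1.0 / alpha, size=k)
        sigma2[i + 1] = (np.exp(-lam * dt) * sigma2[i]
                         + np.sum(np.exp(-lam * (dt - s)) * jumps))
    return sigma2

vol_path = gamma_ou_vol(T=1.0, n_steps=252)
# Log-returns could then be drawn as r_i ~ N(mu * dt, vol_path[i] * dt).
```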
