1,173 research outputs found
Fast Generation of Discrete Random Variables
We describe two methods and provide C programs for generating discrete random variables with functions that are simple and fast, averaging ten times as fast as published methods and more than five times as fast as the fastest of those. We provide general procedures for implementing the two methods, as well as specific procedures for three of the most important discrete distributions: Poisson, binomial and hypergeometric.
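The paper's own condensed-table C programs are not reproduced here. As an illustration of the general idea of O(1) sampling from a discrete distribution via precomputed tables, the following is a Python sketch of Walker's classic alias method, a related and widely used technique; all function names are ours, not the paper's.

```python
import random

def build_alias(probs):
    """Precompute alias tables for O(1) sampling from a discrete
    distribution (Walker's alias method; illustrative, not the
    paper's specific condensed-table scheme)."""
    n = len(probs)
    scaled = [p * n for p in probs]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]       # donate mass to the small bin
        (small if scaled[l] < 1.0 else large).append(l)
    for leftover in small + large:         # numerical leftovers get prob 1
        prob[leftover] = 1.0
    return prob, alias

def draw(prob, alias, rng=random):
    """One O(1) draw: pick a column uniformly, then accept or alias."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]
```

Each draw costs one uniform index, one uniform comparison, and at most one table lookup, which is what makes table methods of this family fast regardless of the number of outcomes.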
A Class of Nonlinear Stochastic Volatility Models and Its Implications on Pricing Currency Options
This paper proposes a class of stochastic volatility (SV) models which offers an alternative to the one introduced in Andersen (1994). The class encompasses all standard SV models that have appeared in the literature, including the well known lognormal model, and allows us to empirically test all standard specifications in a convenient way. We develop a likelihood-based technique for analyzing the class. Daily dollar/pound exchange rate data reject all the standard models and suggest evidence of nonlinear SV. An efficient algorithm is proposed to study the implications of this nonlinear SV on pricing currency options and it is found that the lognormal model overprices options.
Keywords: Box-Cox transformations, stochastic volatility, MCMC, exchange rate volatility, option pricing
Inferential models: A framework for prior-free posterior probabilistic inference
Posterior probabilistic statistical inference without priors is an important
but so far elusive goal. Fisher's fiducial inference, Dempster-Shafer theory of
belief functions, and Bayesian inference with default priors are attempts to
achieve this goal but, to date, none has given a completely satisfactory
picture. This paper presents a new framework for probabilistic inference, based
on inferential models (IMs), which not only provides data-dependent
probabilistic measures of uncertainty about the unknown parameter, but does so
with an automatic long-run frequency calibration property. The key to this new
approach is the identification of an unobservable auxiliary variable associated
with observable data and unknown parameter, and the prediction of this
auxiliary variable with a random set before conditioning on data. Here we
present a three-step IM construction, and prove a frequency-calibration
property of the IM's belief function under mild conditions. A corresponding
optimality theory is developed, which helps to resolve the non-uniqueness
issue. Several examples are presented to illustrate this new approach.
Comment: 29 pages with 3 figures. Main text is the same as the published version. Appendix B is an addition, not in the published version, that contains some corrections and extensions of two of the main theorems.
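For the textbook normal-mean illustration of an IM (a single observation X ~ N(theta, 1) with the default symmetric predictive random set for the auxiliary variable), the plausibility of a point assertion {theta = theta0} has a simple closed form. The sketch below implements that standard example only, not the paper's general three-step construction; the function names are ours.

```python
from math import erf, sqrt

def std_normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def im_plausibility(x, theta0):
    """Plausibility of the assertion {theta = theta0} given X = x,
    for X ~ N(theta, 1) under the default symmetric predictive
    random set (the classic IM normal-mean example):
        pl(theta0) = 1 - |2 * Phi(x - theta0) - 1|."""
    return 1.0 - abs(2.0 * std_normal_cdf(x - theta0) - 1.0)
```

Plausibility is 1 when theta0 equals the observed x and decays as |x - theta0| grows, which is the data-dependent, frequency-calibrated uncertainty measure the abstract refers to.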
bqror: An R package for Bayesian Quantile Regression in Ordinal Models
This article describes an R package bqror that estimates Bayesian quantile
regression for ordinal models introduced in Rahman (2016). The paper classifies
ordinal models into two types and offers computationally efficient, yet simple,
Markov chain Monte Carlo (MCMC) algorithms for estimating ordinal quantile
regression. The generic ordinal model with 3 or more outcomes (labeled ORI
model) is estimated by a combination of Gibbs sampling and the Metropolis-Hastings
algorithm, whereas an ordinal model with exactly 3 outcomes (labeled ORII
model) is estimated using Gibbs sampling only. In line with the Bayesian
literature, we suggest using the marginal likelihood for comparing alternative
quantile regression models and explain how to compute it. The models and
their estimation procedures are illustrated via multiple simulation studies and
implemented in two applications. The article also describes several other
functions contained within the bqror package, which are necessary for
estimation, inference, and assessing model fit.
Comment: 21 pages, 4 figures, 2 algorithms
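Bayesian quantile regression of the kind estimated by bqror rests on the asymmetric Laplace distribution (ALD) as a working likelihood: its maximum-likelihood fit minimizes the quantile check loss. The sketch below shows only those two building blocks (it is illustrative, not the package's internals, and the function names are ours).

```python
import math

def check_loss(u, p):
    """Quantile check loss rho_p(u) = u * (p - 1{u < 0});
    minimizing its expectation yields the p-th quantile."""
    return u * (p - (1.0 if u < 0 else 0.0))

def ald_pdf(y, mu=0.0, sigma=1.0, p=0.5):
    """Asymmetric Laplace density
        f(y) = p(1-p)/sigma * exp(-rho_p((y - mu) / sigma)),
    the working likelihood behind Bayesian quantile regression."""
    return p * (1.0 - p) / sigma * math.exp(-check_loss((y - mu) / sigma, p))
```

At p = 0.5 the density is symmetric about mu (median regression); other values of p tilt it, which is how a single likelihood targets an arbitrary quantile.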
Revisiting consistency of a recursive estimator of mixing distributions
Estimation of the mixing distribution under a general mixture model is a very
difficult problem, especially when the mixing distribution is assumed to have a
density. Predictive recursion (PR) is a fast, recursive algorithm for
nonparametric estimation of a mixing distribution/density in general mixture
models. However, the existing PR consistency results make rather strong
assumptions, some of which fail for a class of mixture models relevant for
monotone density estimation, namely, scale mixtures of uniform kernels. In this
paper, we develop new consistency results for PR under weaker conditions. Armed
with this new theory, we prove that PR is consistent for the scale mixture of
uniforms problem, and we show that the corresponding PR mixture density
estimator has very good practical performance compared to several existing
methods for monotone density estimation.
Comment: 27 pages, 3 figures
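The PR algorithm admits a compact grid-based implementation: each observation reweights the current mixing-density estimate by the kernel likelihood, normalized by the current marginal. The sketch below assumes a generic kernel and one common weight schedule w_i proportional to i^(-0.67); the paper's specific setup (e.g. uniform scale-mixture kernels) is not reproduced.

```python
import numpy as np

def predictive_recursion(data, grid, kernel,
                         w=lambda i: 1.0 / (i + 1) ** 0.67):
    """Sketch of the predictive recursion (PR) estimate of a mixing
    density on an equally spaced grid.

    kernel(x, u) is the component density of observation x given
    latent u, vectorized over u; w is an illustrative weight
    schedule, one common choice rather than the paper's."""
    du = grid[1] - grid[0]
    f = np.full(grid.shape, 1.0 / (du * len(grid)))  # uniform initial guess
    for i, x in enumerate(data, start=1):
        k = kernel(x, grid)
        m = np.sum(k * f) * du                # current marginal density at x
        f = (1 - w(i)) * f + w(i) * k * f / m # convex one-pass update
    return f
```

Each update is a convex combination of the old estimate and its Bayes-style reweighting, so the estimate stays a proper density and the whole pass costs O(n * grid size), which is the speed the abstract emphasizes.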
From phenomenological modelling of anomalous diffusion through continuous-time random walks and fractional calculus to correlation analysis of complex systems
This document covers more than one topic, but all are connected by physical analogy, analytic/numerical resemblance, or because one is a building block of another. The topics are anomalous diffusion, modelling of stylised facts based on an empirical random-walker diffusion model, and null-hypothesis tests in time-series data analysis reusing the same diffusion model. These topics are interrupted by an introduction of new methods for the fast production of random numbers and matrices of certain types. This interruption constitutes the entire chapter on random numbers, which is purely algorithmic and was inspired by the need for fast random numbers of special types. The sequence of chapters is chronologically meaningful in the sense that fast random numbers are needed in the first topic, dealing with continuous-time random walks (CTRWs) and their connection to fractional diffusion. The contents of the last four chapters were indeed produced in this sequence, but with some temporal overlap.
While the fast Monte Carlo solution of the time- and space-fractional diffusion equation is a nice application that sped up hugely with our new method, we were also interested in CTRWs as a model for certain stylised facts. Without knowing it, economists [80] reinvented what physicists had subconsciously used for decades: the so-called stylised fact, for which another word is empirical truth. A simple example: the diffusion equation gives the probability of finding a certain diffusive particle in some position at a certain time, or indicates the concentration of a dye. It is debatable whether probability is physical reality. Most importantly, it does not describe the physical system completely. Instead, the equation describes only a certain expectation value of interest, where it does not matter whether it is grains, prices or people that diffuse away. Reality is coded and "averaged" in the diffusion constant.
Interpreted as an abstract microscopic particle-motion model, a CTRW can solve the time- and space-fractional diffusion equation. This type of diffusion equation mimics some types of anomalous diffusion, a name usually given to effects that cannot be explained by classic stochastic models, in particular not by the classic diffusion equation. It was recognised only recently, around the mid-1990s, that the random walk model used here is the abstract particle-based counterpart of the macroscopic time- and space-fractional diffusion equation, just like the "classic" random walk with regular jumps Δx solves the classic diffusion equation. Both equations can be solved in a Monte Carlo fashion with many realisations of walks.
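The Monte Carlo approach just described, many independent CTRW realisations whose end positions sample the solution, can be illustrated in a few lines. The heavy-tailed Pareto waiting times below (tail index beta) give time-fractional behaviour in the scaling limit; the Gaussian jumps and all parameter values are illustrative assumptions, not the thesis's actual setup (its space-fractional case would require alpha-stable jumps).

```python
import random

def ctrw_position(t_max, beta=0.7, rng=random):
    """One CTRW realisation observed at time t_max: Pareto(beta)
    waiting times (heavy-tailed for beta < 1) and unit Gaussian
    jumps. Illustrative parameters only."""
    t, x = 0.0, 0.0
    while True:
        t += rng.paretovariate(beta)   # heavy-tailed waiting time >= 1
        if t > t_max:
            return x                   # walker is stuck until t_max
        x += rng.gauss(0.0, 1.0)       # jump after the wait

def mc_density_sample(n_walkers, t_max, beta=0.7, rng=random):
    """Monte Carlo ensemble: walker positions at time t_max, whose
    histogram approximates the (here time-fractional) propagator."""
    return [ctrw_position(t_max, beta, rng) for _ in range(n_walkers)]
```

A histogram of the returned positions approximates the propagator at t_max; this is exactly where fast generation of the special-type random numbers mentioned above pays off, since each realisation consumes many waiting-time draws.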
Interpreted as a time-series model, the CTRW can serve as a possible null-hypothesis scenario in applications with measurements that behave similarly. It may be necessary to simulate many null-hypothesis realisations of the system to give a (probabilistic) answer to what the "outcome" is under the assumption that the particles, stocks, etc. are not correlated.
Another topic is (random) correlation matrices. These are partly built on the previously introduced continuous-time random walks and are important in null-hypothesis testing, data analysis and filtering. The main objects encountered in dealing with these matrices are eigenvalues and eigenvectors. The latter are carried over to the following topic of mode analysis and its application in clustering. The presented properties of correlation matrices of correlated measurements seem to be wasted in contemporary clustering methods based on (dis-)similarity measures from time series. Most applications of spectral clustering ignore this information and cannot distinguish between certain cases. The suggested procedure is supposed to identify and separate out clusters by using additional information coded in the eigenvectors. In addition, random matrix theory can also serve to analyse microarray data for the extraction of functional genetic groups, and it also suggests an error model. Finally, the last topic, synchronisation analysis of electroencephalogram (EEG) data, resurrects the eigenvalues and eigenvectors as well as the mode analysis, but this time for matrices made of synchronisation coefficients of neurological activity.
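The eigenvalue-based null-hypothesis testing of correlation matrices mentioned above typically compares empirical eigenvalues against the Marchenko-Pastur law, which describes the spectrum of a correlation matrix of purely uncorrelated series. The sketch below shows only that common baseline step, not the thesis's specific clustering or filtering procedure; all names are ours.

```python
import numpy as np

def mp_bounds(n_series, n_samples):
    """Marchenko-Pastur support edges for the eigenvalues of the
    correlation matrix of n_series uncorrelated series observed at
    n_samples times (q = N/T); eigenvalues above the upper edge
    indicate genuine correlation structure."""
    q = n_series / n_samples
    return (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2

def correlation_eigs(returns):
    """Eigenvalues (descending) and matching eigenvectors of the
    empirical correlation matrix of a T x N data array."""
    c = np.corrcoef(returns, rowvar=False)
    vals, vecs = np.linalg.eigh(c)        # ascending order
    return vals[::-1], vecs[:, ::-1]      # flip to descending
```

Eigenvalues outside the Marchenko-Pastur band, and their eigenvectors, carry the "additional information" the text argues is wasted by clustering methods that work only with pairwise (dis-)similarities.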
Stylized Facts and Discrete Stochastic Volatility Models
This paper highlights the ability of discrete stochastic volatility models to predict some important properties of the data, i.e. the leptokurtic distribution of returns, the slowly decaying autocorrelation function of squared returns, the Taylor effect and the asymmetric response of volatility to return shocks. Although many methods have been proposed for stochastic volatility model estimation, in this paper Markov chain Monte Carlo techniques were considered. It was found that the existing specifications in the stochastic volatility literature are consistent with the empirical properties of the data. Thus, from this point of view, discrete stochastic volatility models are reliable tools for volatility estimation.
Keywords: discrete stochastic volatility models
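One of the stylized facts listed in the abstract, the slowly decaying autocorrelation of squared returns, can be checked on any return series with a plain sample autocorrelation function. The helper below is a minimal sketch (the function name is ours); applied to squared financial returns it would show the slow decay, while for white noise all lags beyond zero stay near zero.

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation of a series at lags 0..max_lag
    (illustrative check of volatility clustering when applied
    to squared returns)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)                       # lag-0 sum of squares
    return np.array([1.0] + [np.dot(x[:-k], x[k:]) / denom
                             for k in range(1, max_lag + 1)])
```

Comparing acf(returns**2, ...) against the flat ACF of an i.i.d. benchmark is the standard empirical diagnostic behind the "slowly decaying autocorrelation" claim.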
- …