Parametric and Nonparametric Volatility Measurement
Volatility has been one of the most active areas of research in empirical finance and time series econometrics during the past decade. This chapter provides a unified continuous-time, frictionless, no-arbitrage framework for systematically categorizing the various volatility concepts, measurement procedures, and modeling procedures. We define three different volatility concepts: (i) the notional volatility corresponding to the ex-post sample-path return variability over a fixed time interval, (ii) the ex-ante expected volatility over a fixed time interval, and (iii) the instantaneous volatility corresponding to the strength of the volatility process at a point in time. The parametric procedures rely on explicit functional form assumptions regarding the expected and/or instantaneous volatility. In the discrete-time ARCH class of models, the expectations are formulated in terms of directly observable variables, while the discrete- and continuous-time stochastic volatility models involve latent state variable(s). The nonparametric procedures are generally free from such functional form assumptions and hence afford estimates of notional volatility that are flexible yet consistent (as the sampling frequency of the underlying returns increases). The nonparametric procedures include ARCH filters and smoothers designed to measure the volatility over infinitesimally short horizons, as well as the recently-popularized realized volatility measures for (non-trivial) fixed-length time intervals.
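The realized volatility measure discussed above has a simple empirical counterpart: summing squared high-frequency returns over a fixed interval, which is consistent for the notional (integrated) variance as the sampling frequency increases. A minimal sketch on synthetic data (the parameter values are illustrative, not taken from the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)

# One trading day of 5-minute log prices with constant spot volatility.
n = 78                       # number of 5-minute intervals in a 6.5-hour session
sigma = 0.2                  # annualized spot volatility (assumed constant here)
dt = 1.0 / (252 * n)         # interval length in years
log_price = np.cumsum(sigma * np.sqrt(dt) * rng.standard_normal(n))

# Realized variance: sum of squared intraday returns; realized vol is its root.
returns = np.diff(log_price, prepend=0.0)
rv = np.sum(returns**2)                     # estimates the day's integrated variance
realized_vol_annualized = np.sqrt(rv * 252)
print(realized_vol_annualized)              # close to sigma for fine sampling
```

With constant spot volatility the estimator is unbiased at any frequency; the consistency claim in the abstract is what justifies the same computation under stochastic volatility.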
Parameter inference for multivariate stochastic processes with jumps
This dissertation addresses various aspects of estimation and inference for multivariate stochastic processes with jumps.
The first chapter develops an unbiased Monte Carlo estimator of the transition density of a multivariate jump-diffusion process. The drift, volatility, jump intensity, and jump magnitude are allowed to be state-dependent and non-affine. The density estimator proposed enables efficient parametric estimation of multivariate jump-diffusion models based on discretely observed data. Under mild conditions, the resulting parameter estimates have the same asymptotic behavior as maximum likelihood estimators as the number of data points grows, even when the sampling frequency of the data is fixed. In a numerical case study of practical relevance, the density and parameter estimators are shown to be highly accurate and computationally efficient.
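The class of models in question can be simulated with a standard Euler scheme augmented with compound-Poisson jumps. The sketch below is a plain discretization of a one-dimensional state-dependent jump-diffusion, not the chapter's unbiased density estimator; all coefficient functions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_jump_diffusion(x0, T, n, mu, sigma, lam, jump):
    """Euler scheme for dX = mu(X)dt + sigma(X)dW + jumps with intensity lam(X).

    The drift mu, volatility sigma, jump intensity lam, and jump magnitude
    may all be state-dependent and non-affine, as in the chapter's setting.
    """
    dt = T / n
    x = x0
    path = [x0]
    for _ in range(n):
        dw = np.sqrt(dt) * rng.standard_normal()
        n_jumps = rng.poisson(lam(x) * dt)    # jumps arriving in [t, t + dt)
        x = x + mu(x) * dt + sigma(x) * dw + sum(jump(x) for _ in range(n_jumps))
        path.append(x)
    return np.array(path)

# Example: mean-reverting drift, level-dependent vol, state-dependent intensity.
path = simulate_jump_diffusion(
    x0=1.0, T=1.0, n=1000,
    mu=lambda x: 0.5 * (1.0 - x),
    sigma=lambda x: 0.2 * np.sqrt(abs(x)),
    lam=lambda x: 2.0 * (1.0 + abs(x)),
    jump=lambda x: rng.normal(0.0, 0.1),
)
print(path[-1])
```

Such a simulator generates data whose transition density is not available in closed form, which is exactly the situation the chapter's density estimator addresses.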
In the second chapter, I examine continuous-time stochastic volatility models with jumps in returns and volatility in which the parameters governing the jumps are allowed to switch according to a Markov chain. I estimate the parameters and the latent processes using the S&P 500 and Nasdaq indices from 1990 to 2014. The Markov-switching parameters characterize well the periods of market stress, such as those in 1997-1998, 2001 and 2007-2010. Several statistical tests favor the model with Markov-switching jump parameters. These results provide empirical evidence about the state-dependent and time-varying nature of asset price jumps, a feature of asset prices that has recently been documented using high-frequency data.
The third chapter considers Markov-switching affine stochastic volatility models with jumps in returns and volatility, where the jump parameters are not regime-switching. Estimation is performed via Markov chain Monte Carlo methods, which also yield the latent processes induced by the structure of the models. Furthermore, I propose several misspecification tests and develop a Markov-switching test based on odds ratios. The parameters and the latent processes are estimated using the S&P 500 index from 1970 to 2014. I show that S&P 500 stochastic volatility exhibits Markov-switching behavior, and that most high-volatility regimes coincide with the recessions identified ex post by the National Bureau of Economic Research.
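The data-generating side of a regime-switching volatility model is easy to illustrate: a persistent two-state Markov chain selects the volatility level driving daily returns. This sketch covers only simulation (all numbers are illustrative), not the MCMC estimation used in the chapter:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two volatility regimes (calm, stressed) driven by a persistent Markov chain.
vol = np.array([0.1, 0.4])            # annualized vol in each regime (illustrative)
P = np.array([[0.99, 0.01],           # daily transition probabilities
              [0.05, 0.95]])

n_days = 2000
state = 0
states, returns = [], []
for _ in range(n_days):
    state = rng.choice(2, p=P[state])                 # possible regime switch
    states.append(state)
    returns.append(vol[state] / np.sqrt(252) * rng.standard_normal())

states = np.array(states)
returns = np.array(returns)

# Within each regime, the sample vol tracks that regime's true level.
for s in (0, 1):
    print(s, np.std(returns[states == s]) * np.sqrt(252))
```

The inference problem the chapter solves is the reverse direction: recovering the latent `states` path and the regime parameters from `returns` alone.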
Correlation estimation using components of Japanese candlesticks
Using the wicks' difference from the classical Japanese candlestick representation of daily open, high, low, and close prices brings efficiency gains when estimating the correlation of a bivariate Brownian motion. Interpreting the correlation estimator of Rogers and Zhou (2008) in the light of the wicks' difference suggests modifications that increase both efficiency and robustness against departures from the baseline model. An empirical study on four major financial markets confirms the advantages of the modified estimator.
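Taking the wicks' difference as upper wick minus lower wick, a short identity shows it equals H + L - O - C, the candlestick functional appearing in Rogers and Zhou's analysis: (H - max(O, C)) - (min(O, C) - L) = H + L - max(O, C) - min(O, C) = H + L - O - C. A sketch on synthetic bivariate Brownian data, comparing the naive close-based correlation with the correlation of wick differences (this is an illustration of the ingredients, not the modified estimator of the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

rho = 0.6                        # true correlation of the bivariate Brownian motion
n_days, n_intra = 500, 390       # days, intraday steps per day

z1 = rng.standard_normal((n_days, n_intra))
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal((n_days, n_intra))

def daily_ohlc(increments):
    """Open/high/low/close of each day's path, with the open normalized to 0."""
    path = np.cumsum(increments, axis=1)
    o = np.zeros(len(path))
    h = np.maximum(path.max(axis=1), 0.0)   # include the open in the high
    l = np.minimum(path.min(axis=1), 0.0)   # include the open in the low
    c = path[:, -1]
    return o, h, l, c

o1, h1, l1, c1 = daily_ohlc(z1)
o2, h2, l2, c2 = daily_ohlc(z2)

# Naive estimator: correlation of open-to-close returns.
rho_close = np.corrcoef(c1 - o1, c2 - o2)[0, 1]

# Wick differences: upper wick minus lower wick, i.e. H + L - O - C per candle.
w1 = (h1 - np.maximum(o1, c1)) - (np.minimum(o1, c1) - l1)
w2 = (h2 - np.maximum(o2, c2)) - (np.minimum(o2, c2) - l2)
rho_wick = np.corrcoef(w1, w2)[0, 1]

print(rho_close, rho_wick)
```

The point of the paper is that information in the wick differences, unused by the close-based estimator, can be combined with it to lower the estimator's variance.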
A generic construction for high order approximation schemes of semigroups using random grids
Our aim is to construct high-order approximation schemes for general semigroups of linear operators. To do so, we fix a time horizon and the discretization steps, and we suppose that we have at hand short-time approximation operators of a given accuracy. We then consider random time grids built from these steps and associate with them an approximation discrete semigroup. Our main result is the following: for any approximation order, we can construct random grids and signed coefficients such that the corresponding combination of discrete semigroups, in expectation over the random grids, attains that order. Moreover, the complexity of the algorithm remains of the same order whatever the order of approximation. The standard example concerns diffusion processes, using the Euler scheme as the short-time approximation. In this particular case, and under suitable conditions, we are able to gather the terms so as to produce an estimator of the semigroup with finite variance. However, an important feature of our approach is its universality, in the sense that it works for every general semigroup and every short-time approximation. Besides, approximation schemes sharing the same accuracy lead to the same random grids and coefficients. Numerical illustrations are given for ordinary differential equations, piecewise deterministic Markov processes, and diffusions.
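The deterministic core of this idea can be illustrated with Richardson extrapolation: combining schemes with different step sizes via signed coefficients cancels the leading error term and raises the order. A minimal sketch for the semigroup exp(tA) of a linear ODE, using the Euler scheme as the short-time approximation (this is the classical deterministic-grid version of the idea, not the paper's random-grid construction; the matrix is illustrative):

```python
import numpy as np

# Semigroup P_t = exp(tA) of the linear ODE x' = A x (a damped oscillator).
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
x0 = np.array([1.0, 0.0])
T = 1.0

def euler(n):
    """n Euler steps of size T/n: a first-order approximation of exp(T*A) @ x0."""
    h = T / n
    step = np.eye(2) + h * A
    x = x0.copy()
    for _ in range(n):
        x = step @ x
    return x

def expm(M, terms=30):
    """Matrix exponential via a truncated Taylor series (fine for small norms)."""
    out = np.eye(2)
    term = np.eye(2)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

exact = expm(T * A) @ x0

err1 = np.linalg.norm(euler(100) - exact)   # Euler, step h = 0.01
err2 = np.linalg.norm(euler(200) - exact)   # Euler, step h/2

# Signed combination 2 * (fine grid) - 1 * (coarse grid): the O(h) error
# terms of the two schemes cancel, leaving an O(h^2) approximation.
rich = 2 * euler(200) - euler(100)
err_rich = np.linalg.norm(rich - exact)
print(err1, err2, err_rich)
```

The paper's contribution is, roughly, to realize such signed combinations of many grids stochastically, so that the cost stays moderate even for high target orders.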
Estimation in discretely observed diffusions killed at a threshold
Parameter estimation in diffusion processes from discrete observations up to a first-hitting time is clearly of practical relevance, but does not seem to have been studied so far. In neuroscience, many models for the membrane potential evolution involve the presence of an upper threshold. Data are modeled as discretely observed diffusions which are killed when the threshold is reached. Statistical inference is often based on a misspecified likelihood that ignores the presence of the threshold, causing severe bias: for the Ornstein-Uhlenbeck model with biologically relevant parameters, the bias incurred in the drift parameters can be up to 25-100%. We calculate or approximate the likelihood function of the killed process. When estimating from a single trajectory, considerable bias may still be present, and the distribution of the estimates can be heavily skewed and have a huge variance. Parametric bootstrap is effective in correcting the bias. Standard asymptotic results do not apply, but consistency and asymptotic normality may be recovered when multiple trajectories are observed, provided the mean first-passage time through the threshold is finite. Numerical examples illustrate the results, and an experimental data set of intracellular recordings of the membrane potential of a motoneuron is analyzed.
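The setting can be mimicked with a small simulation: Ornstein-Uhlenbeck paths are observed at discrete times, discarded once an upper threshold is crossed, and the drift is then fitted by the naive AR(1) regression that ignores the killing. This is a sketch with illustrative parameters, showing the misspecified procedure rather than the paper's corrected likelihood:

```python
import numpy as np

rng = np.random.default_rng(4)

theta, mu, sigma = 1.0, 0.0, 0.5   # true OU parameters dX = theta*(mu - X)dt + sigma dW
dt, S = 0.01, 0.8                  # sampling step and upper (killing) threshold

a_true = np.exp(-theta * dt)                          # exact AR(1) slope of the OU
sd = sigma * np.sqrt((1 - a_true**2) / (2 * theta))   # exact conditional std per step

xs, ys = [], []
for _ in range(100):               # many trajectories, each killed at the threshold
    x = 0.0
    for _ in range(1000):
        x_next = a_true * x + mu * (1 - a_true) + sd * rng.standard_normal()
        if x_next >= S:            # observation stops: the trajectory is killed
            break
        xs.append(x)
        ys.append(x_next)
        x = x_next

xs, ys = np.array(xs), np.array(ys)

# Naive AR(1) regression of X_{t+dt} on X_t, ignoring the killing.
slope, intercept = np.polyfit(xs, ys, 1)
theta_naive = -np.log(slope) / dt
print(theta_naive)   # compare with the true theta = 1.0; the killing induces bias
```

Because transitions that would have crossed the threshold are censored, the observed pairs are not draws from the unconditional OU transition, which is exactly the misspecification the abstract quantifies.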