Using CAViaR models with implied volatility for value-at-risk estimation
This paper proposes VaR estimation methods that are a synthesis of conditional autoregressive value at risk (CAViaR) time series models and implied volatility. The appeal of this proposal is that it merges information from the historical time series with the different information supplied by the market's expectation of risk. Forecast combining methods, with weights estimated using quantile regression, are considered. We also investigate plugging implied volatility into the CAViaR models, a procedure that has not previously been considered in the VaR literature. Results for daily index returns indicate that the newly proposed methods are comparable or superior to individual methods, such as the standard CAViaR models and quantiles constructed from implied volatility and the empirical distribution of standardised residuals. We find that implied volatility has more explanatory power as the focus moves further out into the left tail of the conditional distribution of S&P500 daily returns.
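The forecast-combining idea can be illustrated with a toy sketch: pick a convex weight between a historical-quantile VaR series and an implied-volatility-based one by minimising the pinball (quantile) loss at the target probability level. The grid search over a single weight and all names here are illustrative simplifications, not the paper's quantile-regression estimation.

```python
import numpy as np

def pinball_loss(y, q_pred, theta):
    """Average pinball (quantile) loss at probability level theta."""
    u = y - q_pred
    return float(np.mean(np.maximum(theta * u, (theta - 1.0) * u)))

def combine_var_forecasts(y, var_hist, var_iv, theta=0.01, grid=101):
    """Choose the convex weight w minimising the pinball loss of
    w * var_hist + (1 - w) * var_iv over a grid -- a crude stand-in
    for full quantile-regression weight estimation."""
    weights = np.linspace(0.0, 1.0, grid)
    losses = [pinball_loss(y, w * var_hist + (1 - w) * var_iv, theta)
              for w in weights]
    return float(weights[int(np.argmin(losses))])

# Toy example: synthetic daily returns and two candidate 1% VaR series.
rng = np.random.default_rng(0)
y = rng.standard_normal(500)
var_hist = np.full(500, np.quantile(y, 0.01))  # historical 1% quantile
var_iv = var_hist * 1.2                        # stand-in for an IV-based VaR
w = combine_var_forecasts(y, var_hist, var_iv)
```

In practice the weights would be estimated by quantile regression on a rolling window, as in the paper, rather than by grid search.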
Unbiased and Consistent Nested Sampling via Sequential Monte Carlo
We introduce a new class of sequential Monte Carlo methods called Nested
Sampling via Sequential Monte Carlo (NS-SMC), which reframes the Nested
Sampling method of Skilling (2006) in terms of sequential Monte Carlo
techniques. This new framework allows convergence results to be obtained in the
setting when Markov chain Monte Carlo (MCMC) is used to produce new samples. An
additional benefit is that marginal likelihood estimates are unbiased. In
contrast to NS, the analysis of NS-SMC does not require the (unrealistic)
assumption that the simulated samples be independent. As the original NS
algorithm is a special case of NS-SMC, this provides insights as to why NS
seems to produce accurate estimates despite a typical violation of its
assumptions. For applications of NS-SMC, we give advice on tuning MCMC kernels
in an automated manner via a preliminary pilot run, and present a new method
for appropriately choosing the number of MCMC repeats at each iteration.
Finally, a numerical study is conducted where the performance of NS-SMC and
temperature-annealed SMC is compared on several challenging and realistic
problems. MATLAB code for our experiments is made available at
https://github.com/LeahPrice/SMC-NS
Comment: 45 pages, some minor typographical errors fixed since last version
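For context, here is a minimal sketch of the classic nested sampling estimator that NS-SMC reframes (plain NS, not NS-SMC itself). It uses the deterministic shrinkage approximation E[log X_i] = -i/N and naive rejection sampling from the prior, which is only viable on toy problems; all function names are illustrative.

```python
import numpy as np

def nested_sampling(loglike, prior_sample, n_live=100, n_iter=800, seed=1):
    """Toy nested sampling (Skilling-style): shrink the prior volume by
    E[log X_i] = -i/n_live and accumulate Z = sum_i L_i * (X_{i-1} - X_i)."""
    rng = np.random.default_rng(seed)
    live = np.array([prior_sample(rng) for _ in range(n_live)])
    ll = np.array([loglike(x) for x in live])
    logz, log_x_prev = -np.inf, 0.0
    for i in range(1, n_iter + 1):
        worst = int(np.argmin(ll))
        log_x = -i / n_live
        log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))
        logz = np.logaddexp(logz, ll[worst] + log_w)
        # Replace the worst point by a fresh prior draw above the current
        # likelihood threshold (naive rejection: toy problems only; NS-SMC
        # uses MCMC kernels here instead).
        x = prior_sample(rng)
        while loglike(x) <= ll[worst]:
            x = prior_sample(rng)
        live[worst], ll[worst] = x, loglike(x)
        log_x_prev = log_x
    return logz

# Uniform prior on [-5, 5], standard normal likelihood:
# Z = (1/10) * integral of the normal density, so log Z ~= -2.30.
loglike = lambda x: -0.5 * x**2 - 0.5 * np.log(2.0 * np.pi)
prior = lambda rng: rng.uniform(-5.0, 5.0)
logz = nested_sampling(loglike, prior)
```

The rejection step is exactly the "independent samples" assumption the abstract calls unrealistic; replacing it with an MCMC kernel is what the SMC analysis accommodates.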
Rejoinder: Monitoring Networked Applications With Incremental Quantile Estimation
Rejoinder: Monitoring Networked Applications With Incremental Quantile Estimation [arXiv:0708.0302]
Comment: Published at http://dx.doi.org/10.1214/088342306000000592 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org)
Information-Geometric Optimization Algorithms: A Unifying Picture via Invariance Principles
We present a canonical way to turn any smooth parametric family of
probability distributions on an arbitrary search space $X$ into a
continuous-time black-box optimization method on $X$, the
\emph{information-geometric optimization} (IGO) method. Invariance as a design
principle minimizes the number of arbitrary choices. The resulting \emph{IGO
flow} conducts the natural gradient ascent of an adaptive, time-dependent,
quantile-based transformation of the objective function. It makes no
assumptions on the objective function to be optimized.
The IGO method produces explicit IGO algorithms through time discretization.
It naturally recovers versions of known algorithms and offers a systematic way
to derive new ones. The cross-entropy method is recovered in a particular case,
and can be extended into a smoothed, parametrization-independent maximum
likelihood update (IGO-ML). For Gaussian distributions on $\mathbb{R}^d$, IGO
is related to natural evolution strategies (NES) and recovers a version of the
CMA-ES algorithm. For Bernoulli distributions on $\{0,1\}^d$, we recover the
PBIL algorithm. From restricted Boltzmann machines, we obtain a novel algorithm
for optimization on $\{0,1\}^d$. All these algorithms are unified under a
single information-geometric optimization framework.
Thanks to its intrinsic formulation, the IGO method achieves invariance under
reparametrization of the search space $X$, under a change of parameters of the
probability distributions, and under increasing transformations of the
objective function.
Theory strongly suggests that IGO algorithms have minimal loss in diversity
during optimization, provided the initial diversity is high. First experiments
using restricted Boltzmann machines confirm this insight. Thus IGO seems to
provide, from information theory, an elegant way to spontaneously explore
several valleys of a fitness landscape in a single run.
Comment: Final published version
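The Bernoulli case can be made concrete: in expectation parameters, the natural-gradient IGO step moves the probability vector toward a weighted mean of the best-ranked samples, recovering a PBIL-like update. The sketch below uses simple truncation weights (top mu samples weighted equally) and is illustrative, not the paper's exact algorithm.

```python
import numpy as np

def igo_bernoulli(f, d, n=50, mu=10, eta=0.1, iters=200, seed=0):
    """IGO-style optimization for Bernoulli distributions on {0,1}^d:
    sample, rank by f, and take a natural-gradient step, which in
    expectation parameters is p <- p + eta * (elite mean - p)."""
    rng = np.random.default_rng(seed)
    p = np.full(d, 0.5)
    for _ in range(iters):
        x = (rng.random((n, d)) < p).astype(float)   # sample population
        scores = np.array([f(xi) for xi in x])
        elite = x[np.argsort(-scores)[:mu]]          # quantile-based weights:
        p += eta * (elite.mean(axis=0) - p)          # top mu share weight equally
        p = np.clip(p, 0.02, 0.98)                   # keep probabilities interior
    return p

# Maximise OneMax (number of ones in the bit string); p should drift toward 1.
p = igo_bernoulli(lambda x: x.sum(), d=20)
```

Only the ranks of f enter the update, which is how IGO achieves invariance under increasing transformations of the objective.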
On the Classification of Dynamical Data Streams Using Novel "Anti-Bayesian" Techniques
Adaptive Probabilistic Forecasting of Electricity (Net-)Load
Electricity load forecasting is a necessary capability for power system
operators and electricity market participants. The proliferation of local
generation, demand response, and electrification of heat and transport are
changing the fundamental drivers of electricity load and increasing the
complexity of load modelling and forecasting. We address this challenge in two
ways. First, our setting is adaptive; our models take into account the most
recent observations available, yielding a forecasting strategy able to
automatically respond to changes in the underlying process. Second, we consider
probabilistic rather than point forecasting; indeed, uncertainty quantification
is required to operate electricity systems efficiently and reliably. Our
methodology relies on the Kalman filter, previously used successfully for
adaptive point load forecasting. The probabilistic forecasts are obtained by
quantile regressions on the residuals of the point forecasting model. We
achieve adaptive quantile regression using online gradient descent, and we
avoid choosing a single gradient step size by considering multiple learning
rates and aggregating experts. We apply the method to two data sets: the regional
net-load in Great Britain and the demand of seven large cities in the United
States. Adaptive procedures improve forecast performance substantially in both
use cases, for both point and probabilistic forecasting.
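The adaptive quantile step can be sketched as online subgradient descent on the pinball loss of the point-forecast residuals. This toy version tracks a single quantile with one fixed learning rate, whereas the paper aggregates experts over multiple rates; all names are illustrative.

```python
import numpy as np

def online_quantile(residuals, theta=0.9, eta=0.05):
    """Track the theta-quantile of a residual stream by online
    (sub)gradient descent on the pinball loss: each step the estimate
    moves up by eta*theta if the residual exceeds it, else down by
    eta*(1 - theta)."""
    q = 0.0
    path = []
    for r in residuals:
        grad = -theta if r > q else (1.0 - theta)  # subgradient w.r.t. q
        q -= eta * grad
        path.append(q)
    return np.array(path)

# Stationary standard-normal residuals: the estimate should settle near
# the true 0.9-quantile (about 1.28).
rng = np.random.default_rng(0)
res = rng.standard_normal(5000)
path = online_quantile(res, theta=0.9)
```

The fixed step size eta trades tracking speed against noise, which is exactly the tuning choice the expert-aggregation scheme in the paper is designed to avoid.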