Higher order elicitability and Osband's principle
A statistical functional, such as the mean or the median, is called
elicitable if there is a scoring function or loss function such that the
correct forecast of the functional is the unique minimizer of the expected
score. Such scoring functions are called strictly consistent for the
functional. The elicitability of a functional opens the possibility to compare
competing forecasts and to rank them in terms of their realized scores. In this
paper, we explore the notion of elicitability for multi-dimensional functionals
and give both necessary and sufficient conditions for strictly consistent
scoring functions. We cover the case of functionals with elicitable components,
but we also show that one-dimensional functionals that are not elicitable can
be a component of a higher order elicitable functional. In the case of the
variance this is a known result. However, an important result of this paper is
that spectral risk measures with a spectral measure with finite support are
jointly elicitable if one adds the `correct' quantiles. A direct consequence of
applied interest is that the pair (Value at Risk, Expected Shortfall) is
jointly elicitable under mild conditions that are usually fulfilled in risk
management applications.
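The core notion of strict consistency can be checked numerically: under squared error the expected score is minimized by the mean, under absolute error by the median. A minimal sketch (the skewed sample and grid search are illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# Skewed sample so the mean and the median differ noticeably.
y = rng.exponential(scale=1.0, size=100_000)

grid = np.linspace(0.0, 3.0, 601)  # candidate point forecasts

# Average realized score of each candidate under the two scoring functions.
sq_scores = [np.mean((g - y) ** 2) for g in grid]
abs_scores = [np.mean(np.abs(g - y)) for g in grid]

best_sq = grid[np.argmin(sq_scores)]   # squared error elicits the mean
best_abs = grid[np.argmin(abs_scores)] # absolute error elicits the median

print(best_sq, y.mean())
print(best_abs, np.median(y))
```

Ranking competing forecasters by their average realized score, as the abstract describes, is exactly this comparison with `y` replaced by out-of-sample observations.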
Joint generalized quantile and conditional tail expectation regression for insurance risk analysis
Based on recent developments in joint regression models for quantile and expected shortfall, this paper seeks to develop models to analyse the risk in the right tail of the distribution of non-negative dependent random variables. We propose an algorithm to estimate conditional tail expectation regressions, introducing generalized risk regression models with link functions that are similar to those in generalized linear models. To preserve the natural ordering of risk measures conditional on a set of covariates, we add extra non-negative terms to the quantile regression. A case using telematics data in motor insurance illustrates the practical implementation of predictive risk models and their potential usefulness in actuarial analysis.
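The ordering device mentioned above, adding a non-negative term on top of the quantile so that the conditional tail expectation can never fall below it, can be sketched on a simple unconditional example. The Exp(1) distribution and the log-scale excess below are my own illustrative choices, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.90
# Non-negative losses; for Exp(1): q_alpha = -log(1 - alpha), CTE = q_alpha + 1.
y = rng.exponential(scale=1.0, size=200_000)

q_hat = np.quantile(y, alpha)      # empirical alpha-quantile (VaR)
cte_hat = y[y > q_hat].mean()      # empirical conditional tail expectation

# Ordering device: parameterize CTE = quantile + exp(eta), so that
# CTE >= quantile holds by construction for any linear predictor eta.
eta_hat = np.log(cte_hat - q_hat)  # hypothetical "link"-scale excess
print(q_hat, cte_hat, q_hat + np.exp(eta_hat))
```

In a regression setting both `q_hat` and `eta_hat` would be linear predictors in covariates, with the exponential acting as the link guaranteeing the natural ordering.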
Measuring the Tail Risk: An Asymptotic Approach
The risk exposure of a business line could be perceived in many ways and is sensitive to the exercise that is performed. One way is to understand the effect of some common/reference risk on the performance of the business line in question, but irrespective of the modelling exercise, the exposure is evaluated under the presence of some suitable adverse scenarios. That is, measuring the tail risk is the main aim. We choose to evaluate the performance via an expectation, which is the most acceptable risk measure amongst academics, practitioners and regulators. In contrast to the common practice where the extreme region is chosen such that only the common/reference risk is explicitly allowed to be large, we assume in this paper an extreme region where both the business line in question and the common/reference risk are explicitly allowed to be large. The advantage of this tail risk measure is that the asymptotic approximations are meaningful in all cases, especially in the asymptotic independence case, which helps in understanding the risk exposure in any possible setting. Our numerical examples illustrate these findings and provide a discussion of the sensitivity analysis of our approximations, which is a standard way of checking the importance of parameter estimation in the risk model. The numerical analysis shows strong evidence that our proposed tail risk measure has a lower sensitivity than the standard tail risk measure.
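The contrast between the two extreme regions can be seen in a toy bivariate Gaussian model: conditioning only on the reference risk being large versus conditioning on both risks being large. The correlation and threshold level below are illustrative assumptions, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
rho = 0.5
# Business line X1 and common/reference risk X2, correlated standard normals.
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
x1 = z1
x2 = rho * z1 + np.sqrt(1 - rho**2) * z2

t = np.quantile(x2, 0.99)                # adverse threshold for the reference risk
standard = x1[x2 > t].mean()             # only the reference risk forced large
joint = x1[(x1 > t) & (x2 > t)].mean()   # both risks forced large
print(standard, joint)
```

The joint-extreme conditional expectation is larger by construction, since every retained scenario already has the business line beyond the threshold; the asymptotic approximations studied in the paper quantify this in the tail limit.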
Elicitability and backtesting: Perspectives for banking regulation
Conditional forecasts of risk measures play an important role in internal
risk management of financial institutions as well as in regulatory capital
calculations. In order to assess forecasting performance of a risk measurement
procedure, risk measure forecasts are compared to the realized financial losses
over a period of time and a statistical test of correctness of the procedure is
conducted. This process is known as backtesting. Such traditional backtests are
concerned with assessing some optimality property of a set of risk measure
estimates. However, they are not suited to compare different risk estimation
procedures. We investigate the proposal of comparative backtests, which are
better suited for method comparisons on the basis of forecasting accuracy, but
necessitate an elicitable risk measure. We argue that supplementing traditional
backtests with comparative backtests will enhance the existing trading book
regulatory framework for banks by providing the correct incentive for accuracy
of risk measure forecasts. In addition, the comparative backtesting framework
could be used by banks internally as well as by researchers to guide selection
of forecasting methods. The discussion focuses on three risk measures,
Value-at-Risk, expected shortfall and expectiles, and is supported by a
simulation study and data analysis.
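A comparative backtest in the sense described above ranks two forecasting methods by their average realized scores under a strictly consistent scoring function. A minimal sketch for Value-at-Risk using the standard quantile (pinball) score; the i.i.d. normal losses and the two candidate forecasts are illustrative assumptions, not the paper's simulation design:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 0.99                       # VaR level for losses
losses = rng.standard_normal(50_000)  # realized losses (toy i.i.d. data)

def pinball(v, y, a):
    """Strictly consistent scoring function for the a-quantile (VaR_a)."""
    return (np.where(y <= v, 1.0, 0.0) - a) * (v - y)

# Method A: the correct N(0,1) quantile; Method B: a systematically low forecast.
v_a = 2.3263  # Phi^{-1}(0.99), correct under the toy model
v_b = 1.6449  # Phi^{-1}(0.95), too low for the 99% level

s_a = pinball(v_a, losses, alpha).mean()
s_b = pinball(v_b, losses, alpha).mean()
print(s_a, s_b)  # comparative backtest: the lower average score wins
```

A full comparative backtest would also test whether the score difference is significantly different from zero (a Diebold–Mariano-type test); for expected shortfall one would replace the pinball score with a jointly consistent (VaR, ES) score, since ES alone is not elicitable.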
Quasi-convexity in mixtures for generalized rank-dependent functions
Quasi-convexity in probabilistic mixtures is a common and useful property in
decision analysis. We study a general class of non-monotone mappings, called
the generalized rank-dependent functions, which include the preference models
of expected utilities, dual utilities, and rank-dependent utilities as special
cases, as well as signed Choquet integrals used in risk management. As one of
our main results, we show that the quasi-convex (in mixtures) signed Choquet
integrals consist precisely of two classes: those that are convex (in
mixtures) and the scaled quantile-spread mixtures; this result leads to a full
characterization of quasi-convexity for generalized rank-dependent functions.
Seven equivalent
conditions for quasi-convexity in mixtures are obtained for dual utilities and
signed Choquet integrals. We also illustrate a conflict between convexity in
mixtures and convexity in risk pooling among constant-additive mappings.
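For a discrete random variable, a signed Choquet integral can be computed directly from its layer representation with a distortion function h; taking h to be the identity recovers the mean, and a concave ramp recovers Expected Shortfall. A minimal sketch with outcomes and function names of my own choosing, not from the paper:

```python
import numpy as np

def signed_choquet(x, p, h):
    """Signed Choquet integral of a discrete random variable.

    x, p: outcomes and probabilities; h: distortion with h(0)=0
    (h need not be monotone, hence "signed"). Uses the layer form
    I_h(X) = sum_k x_(k) * (h(S_k) - h(S_{k-1})), with x sorted
    decreasingly and S_k the probability of the k largest outcomes.
    """
    order = np.argsort(-np.asarray(x, float))
    xs, ps = np.asarray(x, float)[order], np.asarray(p, float)[order]
    s = np.concatenate(([0.0], np.cumsum(ps)))
    return float(np.sum(xs * (h(s[1:]) - h(s[:-1]))))

x = [10.0, 2.0, -3.0]
p = [0.2, 0.5, 0.3]

mean = signed_choquet(x, p, lambda t: t)                         # h(t)=t -> E[X]
es80 = signed_choquet(x, p, lambda t: np.minimum(t / 0.2, 1.0))  # top-20% tail mean
print(mean, es80)
```

Dual utilities and the rank-dependent models in the abstract apply such distortions (possibly composed with a utility) to the whole distribution; the quasi-convexity characterization concerns how `I_h` behaves under probabilistic mixtures of the distribution of X, not under convex combinations of the outcomes.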