37 research outputs found

    A note on conditional covariance matrices for elliptical distributions

    In this short note we provide an analytical formula for the conditional covariance matrices of elliptically distributed random vectors, when the conditioning is based on the value of any linear combination of the marginal random variables. We show that one can introduce a univariate invariant that depends solely on the conditioning set, which greatly simplifies the calculations. As an application, we show that one can define uniquely determined quantile-based sets on which the conditional covariance matrices must be equal to each other if and only if the vector is multivariate normal. Similar results are obtained for the conditional correlation matrices in the general elliptical case.

    The 20-60-20 Rule

    In this paper we discuss an empirical phenomenon known as the 20-60-20 rule. It says that if we split a population into three groups according to some arbitrary benchmark criterion, then this particular ratio implies some sort of balance. From a practical point of view, this feature often leads to efficient management or control. We provide a mathematical illustration justifying the occurrence of this rule in many real-world situations. We show that for any population that can be described by a multivariate normal vector, this fixed ratio leads to a global equilibrium state when measures of dispersion and linear dependence are considered.
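    The equilibrium behind the rule can be checked numerically in the univariate Gaussian case. The sketch below is our own illustration, not the paper's multivariate derivation, and all function names are ours: it finds the symmetric quantile split at which the conditional variance of a standard normal is the same in the lower tail, the centre, and the upper tail, using closed-form truncated-normal formulas. The balancing split comes out close to 0.2, i.e. roughly 20-60-20.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_ppf(p, lo=-10.0, hi=10.0):
    # Inverse normal CDF via plain bisection (stdlib only).
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def tail_var(p):
    # Var[X | X <= q_p] for X ~ N(0,1): truncated-normal formula.
    a = norm_ppf(p)
    r = norm_pdf(a) / p
    return 1.0 - a * r - r * r

def mid_var(p):
    # Var[X | q_p <= X <= q_{1-p}]: symmetric double truncation.
    a = norm_ppf(1.0 - p)
    return 1.0 - 2.0 * a * norm_pdf(a) / (1.0 - 2.0 * p)

def balance_split(lo=0.05, hi=0.45):
    # tail_var is increasing in p and mid_var decreasing,
    # so the equalising split is found by bisection.
    for _ in range(100):
        p = 0.5 * (lo + hi)
        if tail_var(p) < mid_var(p):
            lo = p
        else:
            hi = p
    return 0.5 * (lo + hi)

p_star = balance_split()
```

At the balancing split the three conditional variances agree; its value near 0.2 is what makes the empirical 20-60-20 ratio look like an equilibrium for Gaussian populations.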

    Backtesting Expected Shortfall: a simple recipe?

    We propose a new backtesting framework for Expected Shortfall that could be used by the regulator. Instead of looking at the estimated capital reserve and the realised cash flow separately, one can bind them into the secured position, for which risk measurement is much easier. Using this simple concept combined with the monotonicity of Expected Shortfall with respect to its target confidence level, we introduce a natural and efficient backtesting framework. Our test statistic is the largest number of worst realisations of the secured position that add up to a negative total. Surprisingly, this simple quantity can be used to construct an efficient backtest for the unconditional coverage of Expected Shortfall, in a natural extension of the regulatory traffic-light approach for Value-at-Risk. While easy to calculate, the test statistic is rooted in the underlying duality between coherent risk measures and scale-invariant performance measures.
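    Assuming we read the abstract correctly, the statistic admits a one-pass implementation: sort the realised secured positions and find the largest number of worst outcomes whose running total is still negative. A minimal sketch (function name ours):

```python
def es_backtest_statistic(secured):
    """Largest k such that the k worst (smallest) realisations
    of the secured position sum to a negative total."""
    ordered = sorted(secured)          # worst realisations first
    total, best_k = 0.0, 0
    for k, x in enumerate(ordered, start=1):
        total += x                     # running sum of the k worst
        if total < 0:
            best_k = k
    return best_k
```

A large statistic means that many bad outcomes together still leave the secured position under water, which is the traffic-light-style signal the framework builds on.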

    New fat-tail normality test based on conditional second moments with applications to finance

    In this paper we introduce an efficient fat-tail measurement framework based on conditional second moments. We construct a goodness-of-fit statistic that has a direct interpretation and can be used to assess the impact of fat tails on the conditional dispersion of central data. Next, we show how to use this framework to construct a powerful normality test. In particular, we compare our methodology to various popular normality tests, including the Jarque--Bera test, which is based on the third and fourth moments, and show that in many cases our framework outperforms all of them, on both simulated and market stock data. Finally, we derive asymptotic distributions for the conditional mean and variance estimators, and use them to show the asymptotic normality of the proposed test statistic.
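    A minimal sketch of the underlying idea, under our own assumptions about the construction (this is not the paper's exact statistic): compare tail conditional variances with the central conditional variance on quantile-based groups. For Gaussian data the ratio stays near one, while heavy-tailed data (here Student-t with 3 degrees of freedom, built from stdlib primitives) inflate it.

```python
import math
import random
import statistics

def tail_to_centre_ratio(xs, p=0.2):
    """Average tail conditional variance divided by the central
    conditional variance, using empirical p / 1-p quantile splits."""
    xs = sorted(xs)
    n = len(xs)
    k = int(p * n)
    lo, mid, hi = xs[:k], xs[k:n - k], xs[n - k:]
    tail_var = 0.5 * (statistics.pvariance(lo) + statistics.pvariance(hi))
    return tail_var / statistics.pvariance(mid)

rng = random.Random(7)
n = 20000
normal = [rng.gauss(0, 1) for _ in range(n)]
# Student-t with 3 df via N / sqrt(chi2 / df); chi2 from gammavariate.
df = 3
heavy = [rng.gauss(0, 1) / math.sqrt(rng.gammavariate(df / 2, 2) / df)
         for _ in range(n)]

r_norm = tail_to_centre_ratio(normal)
r_heavy = tail_to_centre_ratio(heavy)
```

The Gaussian ratio concentrates near one, so deviations of the ratio from its Gaussian benchmark give a fat-tail diagnostic with a direct dispersion interpretation.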

    Unbiased estimation of risk

    The estimation of risk measures has recently gained a lot of attention, partly because of the backtesting issues with expected shortfall related to elicitability. In this work we shed new and fundamental light on optimal risk estimation procedures in terms of bias. We show that once the parameters of a model need to be estimated, one has to take additional care when estimating risks. The typical plug-in approach, for example, introduces a bias which leads to a systematic underestimation of risk. In this regard, we introduce a novel notion of unbiasedness for risk estimation which is motivated by economic principles. In general, the proposed concept does not coincide with the well-known statistical notion of unbiasedness. We show that an appropriate bias correction is available for many well-known estimators. In particular, we consider value-at-risk and expected shortfall (tail value-at-risk). In the special case of normal distributions, closed-form solutions for unbiased estimators can be obtained. We present a number of motivating examples which show that unbiased estimators outperform in many circumstances. Unbiasedness has a direct impact on backtesting and therefore adds a further viewpoint to established statistical properties.
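    The systematic underestimation caused by the plug-in approach can be illustrated with a short Monte Carlo sketch (ours, not the paper's estimator): fit a Gaussian 95% value-at-risk level to small samples and compute the exact probability that a fresh observation breaches it. The average breach probability lands well above the nominal 5%.

```python
import math
import random

Z_95 = -1.6449  # 5% quantile of the standard normal

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def plugin_breach_prob(rng, n=10):
    """Exact probability that a fresh N(0,1) observation falls below
    the plug-in Gaussian 5% quantile fitted to an n-sample."""
    xs = [rng.gauss(0, 1) for _ in range(n)]
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    threshold = m + Z_95 * s       # plug-in estimate of the 5% quantile
    return norm_cdf(threshold)     # true breach probability under N(0,1)

rng = random.Random(11)
sims = 4000
avg = sum(plugin_breach_prob(rng) for _ in range(sims)) / sims
```

With samples of size 10 the average breach probability is roughly 7-8% instead of 5%, which is exactly the estimation-induced underestimation of risk that motivates the bias-corrected estimators.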

    The least squares method for option pricing revisited

    It is shown that the popular least squares method of option pricing converges even under very general assumptions. This substantially increases the freedom to create different implementations of the method, with varying levels of computational complexity and a flexible approach to regression. It is also argued that in many practical applications even modest non-linear extensions of standard regression may produce satisfactory results. This claim is illustrated with examples.
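    For reference, a compact stdlib sketch of the least squares (Longstaff--Schwartz-style) method for a Bermudan put with a quadratic regression basis; all parameters and helper names are ours and purely illustrative, not the paper's implementation.

```python
import math
import random

def quad_fit(xs, ys):
    """Least squares fit of y ~ a + b x + c x^2 via normal equations."""
    pow_sums = [sum(x ** k for x in xs) for k in range(5)]
    a = [[pow_sums[i + j] for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r_: abs(a[r_][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(col + 1, 3):
            f = a[row][col] / a[col][col]
            for j in range(col, 3):
                a[row][j] -= f * a[col][j]
            b[row] -= f * b[col]
    beta = [0.0, 0.0, 0.0]
    for row in range(2, -1, -1):
        tail = sum(a[row][j] * beta[j] for j in range(row + 1, 3))
        beta[row] = (b[row] - tail) / a[row][row]
    return beta

def lsm_put_price(s0=100.0, strike=100.0, r=0.05, sigma=0.2,
                  t=1.0, steps=10, paths=4000, seed=3):
    """Bermudan put via least squares Monte Carlo, basis (1, S, S^2)."""
    rng = random.Random(seed)
    dt = t / steps
    disc = math.exp(-r * dt)
    drift = (r - 0.5 * sigma * sigma) * dt
    vol = sigma * math.sqrt(dt)
    # Simulate geometric Brownian motion paths.
    s = [[s0] for _ in range(paths)]
    for path in s:
        for _ in range(steps):
            path.append(path[-1] * math.exp(drift + vol * rng.gauss(0, 1)))
    # Backward induction on discounted cashflows.
    cash = [max(strike - path[-1], 0.0) for path in s]
    for step in range(steps - 1, 0, -1):
        cash = [c * disc for c in cash]
        itm = [i for i in range(paths) if strike - s[i][step] > 0.0]
        if len(itm) < 3:
            continue
        xs = [s[i][step] for i in itm]
        ys = [cash[i] for i in itm]
        coef = quad_fit(xs, ys)
        for i, x in zip(itm, xs):
            cont = coef[0] + coef[1] * x + coef[2] * x * x
            if strike - x > cont:      # exercise beats continuation
                cash[i] = strike - x
    return disc * sum(cash) / paths

price = lsm_put_price()
```

Swapping `quad_fit` for any other regression (splines, non-linear features) is exactly the implementation freedom the convergence result is about.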

    Dynamic Limit Growth Indices in Discrete Time

    We propose a new class of mappings, called Dynamic Limit Growth Indices, designed to measure the long-run performance of a financial portfolio in a discrete-time setup. We study various important properties of this new class of measures and, in particular, provide a necessary and sufficient condition for a Dynamic Limit Growth Index to be a dynamic assessment index. We also establish their connection with classical dynamic acceptability indices, and we show how to construct examples of Dynamic Limit Growth Indices using dynamic risk measures and dynamic certainty equivalents. Finally, we propose a new definition of time consistency suitable for these indices, and we study time consistency for the most notable representative of this class -- the dynamic analogue of the risk-sensitive criterion.

    A unified approach to time consistency of dynamic risk measures and dynamic performance measures in discrete time

    In this paper we provide a flexible framework allowing for a unified study of the time consistency of risk measures and performance measures (also known as acceptability indices). The proposed framework not only integrates existing forms of time consistency, but also provides a comprehensive toolbox for the analysis and synthesis of the concept of time consistency in decision making. In particular, it allows for an in-depth comparative analysis of (most of) the existing types of time consistency -- a feat that has not been possible before and which is carried out in the companion paper [BCP2016]. In our approach, time consistency is studied for a large class of maps that are postulated to satisfy only two properties -- monotonicity and locality. Time consistency is defined in terms of an update rule. The form of the update rule introduced here is novel and perfectly suited for developing the unifying framework worked out in this paper. As an illustration of the applicability of our approach, we show how to recover almost all concepts of weak time consistency by constructing appropriate update rules.

    A note on Multiplicative Poisson Equation: developments in the span-contraction approach

    In this paper we study the existence of bounded solutions to the Multiplicative Poisson Equation (MPE) in a generic discrete-time setting. Assuming mixing and boundedness of the risk-reward function, we investigate what conditions should be imposed on the underlying non-controlled probability kernel or on the reward function so that a bounded solution of the MPE always exists. In particular, we consolidate results based on the span-norm framework and derive an explicit sharp bound on the cost function that guarantees the existence of a bounded solution under mixing. We also study the properties the probability kernel must satisfy to ensure the existence of a bounded MPE solution for any generic risk-reward function, and characterise the behaviour of the process on the complement of the support of the invariant measure. Finally, we present numerous examples and stochastic-dominance-based arguments that help to better understand the intricacies that emerge when the ergodic risk-neutral mean operator is replaced with the ergodic risk-sensitive entropy.

    Unbiased estimation and backtesting of risk in the context of heavy tails

    While the estimation of risk is an important question in the daily business of banks and insurance companies, many existing plug-in estimation procedures suffer from an unnecessary bias. This often leads to the underestimation of risk and negatively impacts backtesting results, especially in small-sample cases. In this article we show that the link between estimation bias and backtesting can be traced back to the dual relationship between risk measures and the corresponding performance measures, and we discuss this with reference to the value-at-risk and expected shortfall frameworks. Motivated by this finding, we propose a new algorithm for bias correction and show how to apply it to generalized Pareto distributions. In particular, we consider value-at-risk and expected shortfall plug-in estimators, and show that applying our algorithm leads to a gain in efficiency when heavy tails are present in the data.
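    The heavy-tail effect can be sketched with stdlib tools. This is an illustration with the empirical plug-in estimator, not the paper's bias-correction algorithm: sample from a generalized Pareto distribution, estimate expected shortfall by averaging the worst observations, and compare with the closed-form value. Averaged over many small samples, the plug-in estimate falls short of the true expected shortfall.

```python
import random

XI, BETA, P = 0.3, 1.0, 0.9   # GPD shape/scale and ES level (illustrative)

def gpd_sample(rng):
    # Inverse-transform sampling for the generalized Pareto distribution.
    u = rng.random()
    return BETA / XI * ((1.0 - u) ** (-XI) - 1.0)

def true_var():
    # GPD quantile at level P (threshold 0).
    return BETA / XI * ((1.0 - P) ** (-XI) - 1.0)

def true_es():
    # Closed form for the GPD with threshold 0: ES = (VaR + beta) / (1 - xi).
    return (true_var() + BETA) / (1.0 - XI)

def plugin_es(xs):
    # Empirical plug-in: mean of the worst (1 - P) share of the sample.
    xs = sorted(xs)
    k = max(1, int(round((1.0 - P) * len(xs))))
    return sum(xs[-k:]) / k

rng = random.Random(5)
sims, n = 4000, 50
avg = sum(plugin_es([gpd_sample(rng) for _ in range(n)])
          for _ in range(sims)) / sims
```

The downward gap between the average plug-in estimate and the analytic value is the small-sample, heavy-tail bias that the proposed correction algorithm targets.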