844 research outputs found

    Amplifying the Diminished Voice: Designing Space for All


    A note on conditional covariance matrices for elliptical distributions

    In this short note we provide an analytical formula for the conditional covariance matrices of elliptically distributed random vectors when the conditioning is based on the values of any linear combination of the marginal random variables. We show that one can introduce a univariate invariant that depends solely on the conditioning set, which greatly simplifies the calculations. As an application, we show that one can define uniquely determined quantile-based sets on which the conditional covariance matrices must be equal to each other, provided the vector is multivariate normal. Similar results are obtained for the conditional correlation matrices in the general elliptical case.
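In the classical special case of conditioning a multivariate normal vector on the exact value of a linear combination, the conditional covariance has a well-known closed form that does not depend on the conditioning value; a minimal pure-Python sketch (my notation, not the note's):

```python
# For X ~ N(mu, Sigma), conditioning on the exact value a'X = c gives the
# well-known conditional covariance Sigma - (Sigma a)(Sigma a)' / (a' Sigma a),
# independent of c.
def conditional_covariance(sigma, a):
    """Conditional covariance of X given a'X = c (any c), X ~ N(mu, sigma)."""
    n = len(sigma)
    sa = [sum(sigma[i][j] * a[j] for j in range(n)) for i in range(n)]  # Sigma a
    asa = sum(a[i] * sa[i] for i in range(n))                           # a' Sigma a
    return [[sigma[i][j] - sa[i] * sa[j] / asa for j in range(n)] for i in range(n)]

sigma = [[2.0, 0.5], [0.5, 1.0]]
cov = conditional_covariance(sigma, [1.0, 0.0])  # condition on the first coordinate
# rows/columns involving the conditioned coordinate vanish; the remaining
# entry is the usual Schur complement 1 - 0.5**2 / 2 = 0.875
```

The note's quantile-based conditioning sets are more general than this point-conditioning case, but the formula above is the standard baseline it extends.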

    The 20-60-20 Rule

    In this paper we discuss an empirical phenomenon known as the 20-60-20 rule. It says that if we split a population into three groups according to some arbitrary benchmark criterion, then this particular ratio implies a certain balance. From a practical point of view, this feature often leads to efficient management or control. We provide a mathematical illustration that justifies the occurrence of this rule in many real-world situations. We show that for any population that can be described by a multivariate normal vector, this fixed ratio leads to a global equilibrium state when dispersion and linear dependence are measured.
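A quick numerical check of the one-dimensional balance behind the rule (the search interval, thresholds and tolerances below are my illustrative choices, not the paper's): splitting a standard normal population symmetrically into a lower tail, a middle set and an upper tail, the conditional variance in the tails matches the one in the middle at a tail probability very close to 20%.

```python
import math

def Phi(x):  # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):  # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def tail_var(b):   # Var(X | X > b) = Var(X | X < -b) by symmetry
    lam = phi(b) / (1.0 - Phi(b))
    return 1.0 + b * lam - lam * lam

def middle_var(b): # Var(X | -b < X < b)
    return 1.0 - 2.0 * b * phi(b) / (2.0 * Phi(b) - 1.0)

# bisection for the threshold b that equalises tail and middle dispersion
lo, hi = 0.5, 1.5
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if tail_var(mid) > middle_var(mid):
        lo = mid
    else:
        hi = mid
p = 1.0 - Phi(0.5 * (lo + hi))  # resulting tail probability, close to 0.2
```

This only probes one coordinate of the multivariate equilibrium stated in the abstract, but it shows where the "approximately 20%" figure comes from.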

    Backtesting Expected Shortfall: a simple recipe?

    We propose a new backtesting framework for Expected Shortfall that could be used by the regulator. Instead of looking at the estimated capital reserve and the realised cash flow separately, one can bind them into the secured position, for which risk measurement is much easier. Using this simple concept combined with the monotonicity of Expected Shortfall with respect to its target confidence level, we introduce a natural and efficient backtesting framework. Our test statistic is given by the biggest number of worst realisations of the secured position that add up to a negative total. Surprisingly, this simple quantity can be used to construct an efficient backtest for the unconditional coverage of Expected Shortfall in a natural extension of the regulatory traffic-light approach for Value-at-Risk. While being easy to calculate, the test statistic is based on the underlying duality between coherent risk measures and scale-invariant performance measures.
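My reading of the statistic described in the abstract, as a short sketch: sort the realised secured positions from worst to best and take the largest number of worst realisations whose running total is still negative.

```python
# Illustrative sketch (my reading of the abstract, not the authors' code):
# the statistic is the largest k such that the k worst realisations of the
# secured position sum to a negative total.
def es_backtest_statistic(secured):
    worst_first = sorted(secured)          # ascending: worst outcomes first
    total, k = 0.0, 0
    for i, x in enumerate(worst_first, start=1):
        total += x
        if total < 0:
            k = i                          # k worst realisations sum to < 0
    return k

# e.g. positions -3, -1, 1, 2, 5: running sums -3, -4, -3, -1, 4 -> statistic 4
stat = es_backtest_statistic([1.0, -3.0, 2.0, -1.0, 5.0])
```

A traffic-light calibration would then map values of this count to acceptance zones, analogously to the Value-at-Risk exception count.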

    New fat-tail normality test based on conditional second moments with applications to finance

    In this paper we introduce an efficient fat-tail measurement framework based on conditional second moments. We construct a goodness-of-fit statistic that has a direct interpretation and can be used to assess the impact of fat tails on the conditional dispersion of central data. Next, we show how to use this framework to construct a powerful normality test. In particular, we compare our methodology to various popular normality tests, including the Jarque–Bera test, which is based on the third and fourth moments, and show that in many cases our framework outperforms all the others, both on simulated and on market stock data. Finally, we derive asymptotic distributions for the conditional mean and variance estimators, and use them to show the asymptotic normality of the proposed test statistic.
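The effect the test exploits can be illustrated with a construction of my own (not the paper's exact statistic): compare the empirical conditional variance on the central 60% quantile-based set with its Gaussian benchmark; for fat-tailed data the central conditional dispersion is visibly smaller.

```python
import math, random

def middle_conditional_variance(sample):
    """Empirical variance on the central 60% quantile-based set."""
    xs = sorted(sample)
    n = len(xs)
    mid = xs[int(0.2 * n):int(0.8 * n)]      # central quantile-based set
    m = sum(mid) / len(mid)
    return sum((x - m) ** 2 for x in mid) / len(mid)

rng = random.Random(0)
n = 100_000
normal = [rng.gauss(0.0, 1.0) for _ in range(n)]
# standardised Laplace sample: exponential magnitude with a random sign
laplace = [rng.choice((-1.0, 1.0)) * rng.expovariate(1.0) / math.sqrt(2.0)
           for _ in range(n)]
v_normal = middle_conditional_variance(normal)    # near 0.214, the Gaussian value
v_laplace = middle_conditional_variance(laplace)  # noticeably smaller
```

The gap between the two conditional variances is what a goodness-of-fit statistic of this type can pick up, here with a fat-tailed Laplace alternative chosen for illustration.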

    Unbiased estimation of risk

    The estimation of risk measures has recently gained a lot of attention, partly because of the backtesting issues of expected shortfall related to elicitability. In this work we shed a new and fundamental light on optimal estimation procedures for risk measures in terms of bias. We show that once the parameters of a model need to be estimated, one has to take additional care when estimating risks. The typical plug-in approach, for example, introduces a bias which leads to a systematic underestimation of risk. In this regard, we introduce a novel notion of unbiasedness for the estimation of risk which is motivated by economic principles. In general, the proposed concept does not coincide with the well-known statistical notion of unbiasedness. We show that an appropriate bias correction is available for many well-known estimators. In particular, we consider value-at-risk and expected shortfall (tail value-at-risk). In the special case of normal distributions, closed-form solutions for unbiased estimators can be obtained. We present a number of motivating examples which show that unbiased estimators outperform in many circumstances. Unbiasedness has a direct impact on backtesting and therefore adds a further viewpoint to established statistical properties.
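The systematic underestimation by the plug-in approach is easy to see in a Monte Carlo sketch under Gaussian assumptions (sample size, seed and repetition count are my choices): the plug-in Value-at-Risk fitted on a small sample is breached by a fresh observation more often than the target level alpha suggests.

```python
import random
from statistics import NormalDist, mean, stdev

alpha, n, reps = 0.05, 10, 20_000
z = NormalDist().inv_cdf(alpha)           # 5% quantile, about -1.645
rng = random.Random(42)
breaches = 0
for _ in range(reps):
    sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
    plug_in_var = mean(sample) + stdev(sample) * z   # plug-in quantile estimate
    if rng.gauss(0.0, 1.0) < plug_in_var:            # next-period outcome
        breaches += 1
rate = breaches / reps   # noticeably above the nominal alpha = 0.05
```

With n = 10 the breach frequency is around 7.5% instead of 5%, which is the bias a closed-form correction in the Gaussian case removes.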

    The least squares method for option pricing revisited

    It is shown that the popular least squares method of option pricing converges even under very general assumptions. This substantially increases the freedom to create different implementations of the method, with varying levels of computational complexity and a flexible approach to regression. It is also argued that in many practical applications even modest non-linear extensions of standard regression may produce satisfactory results. This claim is illustrated with examples.
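The least squares method referred to here is the one usually associated with Longstaff and Schwartz. The abstract does not fix an implementation, so the following is a compact illustrative sketch in pure Python (model, basis {1, S, S²} and all parameters are my choices): price a Bermudan put on geometric Brownian motion by backward induction with a least-squares estimate of the continuation value.

```python
import math, random

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def lsm_put(s0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
            steps=10, paths=20_000, seed=1):
    rng = random.Random(seed)
    dt = T / steps
    disc = math.exp(-r * dt)
    drift = (r - 0.5 * sigma * sigma) * dt
    vol = sigma * math.sqrt(dt)
    S = [[s0] for _ in range(paths)]
    for _ in range(steps):
        for path in S:
            path.append(path[-1] * math.exp(drift + vol * rng.gauss(0.0, 1.0)))
    cash = [max(K - path[-1], 0.0) for path in S]       # payoff at maturity
    for t in range(steps - 1, 0, -1):
        cash = [c * disc for c in cash]                 # discount one step
        itm = [i for i in range(paths) if K - S[i][t] > 0.0]
        if not itm:
            continue
        # regress discounted continuation values on the basis {1, S, S^2}
        X = [[1.0, S[i][t], S[i][t] ** 2] for i in itm]
        y = [cash[i] for i in itm]
        A = [[sum(x[a] * x[b] for x in X) for b in range(3)] for a in range(3)]
        rhs = [sum(x[a] * yi for x, yi in zip(X, y)) for a in range(3)]
        beta = solve3(A, rhs)
        for i in itm:
            exercise = K - S[i][t]
            cont = beta[0] + beta[1] * S[i][t] + beta[2] * S[i][t] ** 2
            if exercise > cont:
                cash[i] = exercise      # early exercise beats continuation
    return disc * sum(cash) / paths

price = lsm_put()
```

Swapping the quadratic basis for another regression scheme only changes the block building `X`, which is the implementation freedom the abstract emphasises.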

    Dynamic Limit Growth Indices in Discrete Time

    We propose a new class of mappings, called Dynamic Limit Growth Indices, that are designed to measure the long-run performance of a financial portfolio in a discrete-time setup. We study various important properties of this new class of measures and, in particular, provide a necessary and sufficient condition for a Dynamic Limit Growth Index to be a dynamic assessment index. We also establish their connection with classical dynamic acceptability indices, and we show how to construct examples of Dynamic Limit Growth Indices using dynamic risk measures and dynamic certainty equivalents. Finally, we propose a new definition of time consistency, suitable for these indices, and we study time consistency for the most notable representative of this class -- the dynamic analogue of the risk-sensitive criterion.
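A rough Monte Carlo sketch of the risk-sensitive criterion mentioned at the end (i.i.d. log-normal growth; the parameters, horizon and tolerance are my choices): estimate (1/(γT)) log E[V_T^γ] and compare it with the closed-form value μ + γσ²/2 available in this toy model.

```python
import math, random

# Toy model: log V_T is a sum of T i.i.d. N(mu, sigma^2) log-returns, so
# E[V_T^gamma] = exp(T * (gamma*mu + gamma^2 * sigma^2 / 2)) and the
# risk-sensitive criterion equals mu + gamma * sigma^2 / 2 exactly.
mu, sigma, gamma, T, reps = 0.05, 0.1, -2.0, 50, 50_000
rng = random.Random(7)
acc = 0.0
for _ in range(reps):
    log_vt = sum(mu + sigma * rng.gauss(0.0, 1.0) for _ in range(T))
    acc += math.exp(gamma * log_vt)         # V_T ** gamma
index = math.log(acc / reps) / (gamma * T)  # close to mu + gamma*sigma**2/2 = 0.04
```

With γ < 0 the criterion penalises volatility, which is the risk-sensitive flavour the dynamic analogue inherits.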

    A unified approach to time consistency of dynamic risk measures and dynamic performance measures in discrete time

    In this paper we provide a flexible framework allowing for a unified study of the time consistency of risk measures and performance measures (also known as acceptability indices). The proposed framework not only integrates existing forms of time consistency, but also provides a comprehensive toolbox for the analysis and synthesis of the concept of time consistency in decision making. In particular, it allows for an in-depth comparative analysis of (most of) the existing types of time consistency -- a feat that has not been possible before and which is carried out in the companion paper [BCP2016]. In our approach, time consistency is studied for a large class of maps that are postulated to satisfy only two properties -- monotonicity and locality. Time consistency is defined in terms of an update rule. The form of the update rule introduced here is novel and is perfectly suited for developing the unifying framework worked out in this paper. As an illustration of the applicability of our approach, we show how to recover almost all concepts of weak time consistency by constructing appropriate update rules.

    Assessing the reliability of selected discriminant methods in evaluating the financial condition of an enterprise

    The article aims to comprehensively assess the predictive ability of the discriminant methods used in studying the financial standing of enterprises. To achieve this goal, empirical data on 50 enterprises were analysed using 10 discriminant models. The sample consisted of 25 entities against which liquidation bankruptcy was declared during 2007–2015, together with their "healthy" counterparts. The research revealed the reliability of the individual models and their usefulness in studying financial situations, both for the scientific community and for practitioners who analyse the financial standing of enterprises: potential and existing investors, auditors, members of supervisory boards and experts. Based on the results, the discriminant models were classified for the last period of the study according to the accuracy of their forecasts. The publication is part of a series dealing with the credibility assessment of early-warning methods.
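The accuracy measurement used in such studies can be sketched as follows. The coefficients below are those of Altman's original (1968) Z-score, one of the classic discriminant models of this type; the firms and their ratios are invented for illustration, not data from the article.

```python
def altman_z(x1, x2, x3, x4, x5):
    """Altman (1968) Z-score from the five financial ratios."""
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def classify(z, cutoff=2.675):
    """Single cut-off variant: below the threshold means predicted distress."""
    return "bankrupt" if z < cutoff else "healthy"

# (ratios, actual outcome) -- hypothetical firms, not the article's sample
firms = [
    ((0.10, 0.05, 0.02, 0.3, 0.8), "bankrupt"),
    ((0.40, 0.30, 0.15, 1.5, 1.9), "healthy"),
    ((0.05, -0.10, -0.05, 0.2, 0.6), "bankrupt"),
    ((0.30, 0.25, 0.10, 1.2, 1.5), "healthy"),
]
hits = sum(classify(altman_z(*r)) == outcome for r, outcome in firms)
accuracy = hits / len(firms)   # fraction of correct forecasts
```

Ranking the 10 models by this accuracy, separately for bankrupt and healthy firms and for each forecast horizon, is the kind of classification the article reports.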