    Risk aggregation, dependence structure and diversification benefit

    Insurance and reinsurance live and die by the diversification benefits, or lack thereof, in their risk portfolios. The new solvency regulations allow companies to include these benefits in their computation of risk-based capital (RBC). The question is how to evaluate them properly. To compute the total risk of a portfolio, it is important to establish rules for aggregating the various risks that compose it. This can only be done by modelling their dependence. It is a well-known fact among traders in financial markets that "diversification works the worst when one needs it the most". In other words, in times of crisis the dependence between risks increases. Experience has shown that very large loss events almost always affect multiple lines of business simultaneously. September 11, 2001, is an example of such an event: claims originated simultaneously from lines of business that are usually uncorrelated, such as property and life, at the same time that the assets of the company depreciated due to the crisis on the stock markets. In this paper, we explore various methods of modelling dependence and their influence on diversification benefits. We show that the latter strongly depend on the chosen method and that rank correlation grossly overestimates diversification. As a consequence, the RBC for the whole portfolio is smaller than it should be when tail correlation is correctly accounted for. The problem remains, however, to calibrate the dependence for extreme events, which are rare by definition. We analyze and propose possible ways out of this dilemma and come up with reasonable estimates.

    Keywords: Risk-Based Capital, Hierarchical Copula, Dependence, Calibration
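
    The effect the abstract describes, that the measured diversification benefit depends strongly on the dependence model, can be illustrated with a small simulation. The sketch below is illustrative rather than the paper's method: it contrasts a Gaussian copula (no tail dependence) with a Student-t copula (tail-dependent) at the same correlation; the Pareto marginals, the correlation level, and the 99.5% VaR level are assumptions.

```python
# A minimal sketch, not the paper's model: diversification benefit measured
# by VaR under two copulas with the same correlation but different tails.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, rho, level = 1_000_000, 0.5, 0.995

def diversification_benefit(u1, u2):
    # Map copula samples to identical Pareto(2) losses per line of business.
    x1, x2 = stats.pareto(b=2.0).ppf(u1), stats.pareto(b=2.0).ppf(u2)
    var_sum = np.quantile(x1 + x2, level)            # VaR of the aggregate
    standalone = np.quantile(x1, level) + np.quantile(x2, level)
    return 1.0 - var_sum / standalone                # relative capital relief

z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
u_gauss = stats.norm.cdf(z)                          # Gaussian copula

w = 3.0 / rng.chisquare(3.0, size=n)                 # t copula, 3 d.o.f.
u_t = stats.t(df=3).cdf(z * np.sqrt(w)[:, None])

print(f"Gaussian copula benefit:  {diversification_benefit(*u_gauss.T):.1%}")
print(f"Student-t copula benefit: {diversification_benefit(*u_t.T):.1%}")
```

    The tail-dependent copula yields a visibly smaller benefit, which is the direction of the paper's finding that lighter-tailed dependence models overestimate diversification.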

    Bootstrapping the economy -- a non-parametric method of generating consistent future scenarios

    The fortune and risk of a business venture depend on the future course of the economy. There is strong demand for economic forecasts and scenarios that can be applied to planning and modeling. While there is an ongoing debate on modeling economic scenarios, the bootstrapping (or resampling) approach presented here has several advantages. As a non-parametric method, it relies directly on past market behavior rather than on debatable assumptions about models and parameters. Simultaneous dependencies between economic variables are automatically captured. Some aspects of the bootstrapping method require additional modeling: the choice and transformation of the economic variables, arbitrage-free consistency, heavy tails of distributions, serial dependence, trends and mean reversion. Results of a complete economic scenario generator are presented, tested and discussed.

    Keywords: economic scenario generator (ESG); asset-liability management (ALM); bootstrapping; resampling; simulation; Monte-Carlo simulation; non-parametric model; yield curve model
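
    As a rough illustration of the resampling idea, the sketch below draws future paths by resampling whole historical time slices with replacement, which preserves the simultaneous dependence between variables. The data and variable names are hypothetical, and the refinements listed in the abstract (variable transformation, arbitrage-free consistency, heavy tails, serial dependence) are deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical history: 80 quarters of (equity, bond, FX) log-returns,
# rows = dates, columns = economic variables.
history = rng.normal(0.0, [0.08, 0.02, 0.05], size=(80, 3))

def bootstrap_scenarios(history, horizon, n_scenarios, rng):
    """Draw future paths by resampling whole historical time slices, so the
    simultaneous dependence between the variables is preserved."""
    idx = rng.integers(0, len(history), size=(n_scenarios, horizon))
    return history[idx].cumsum(axis=1)   # cumulative log-returns per path

paths = bootstrap_scenarios(history, horizon=12, n_scenarios=10_000, rng=rng)
print(paths.shape)   # (10000, 12, 3): scenario x quarter x variable
```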

    Long-term memories of developed and emerging markets: Using the scaling analysis to characterize their stage of development

    Scaling properties encompass, in a simple analysis, many of the volatility characteristics of financial markets, which is why we use them to probe the degree of market development. We empirically study the scaling properties of daily foreign exchange rates, stock market indices and fixed-income instruments using the generalized Hurst approach. We show that the scaling exponents are associated with characteristics of the specific markets and can be used to differentiate markets by their stage of development. The robustness of the results is tested both by Monte-Carlo studies and by a computation of the scaling in the frequency domain.

    Keywords: Scaling exponents; Time series analysis; Multi-fractals
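
    A minimal sketch of the generalized Hurst estimator the abstract refers to, based on the standard scaling relation E|X(t+τ) − X(t)|^q ∝ τ^(qH(q)); the synthetic random-walk input (for which H(q) ≈ 0.5) stands in for real market series.

```python
import numpy as np

def generalized_hurst(x, q=2, taus=range(2, 20)):
    """Estimate H(q) from the scaling of the q-th moment of increments of x."""
    taus = np.asarray(list(taus))
    kq = np.array([np.mean(np.abs(x[t:] - x[:-t]) ** q) for t in taus])
    slope = np.polyfit(np.log(taus), np.log(kq), 1)[0]  # K_q(tau) ~ tau^(q H(q))
    return slope / q

rng = np.random.default_rng(1)
log_price = np.cumsum(rng.normal(size=10_000))   # synthetic random walk, H ~ 0.5
print(f"H(1) = {generalized_hurst(log_price, q=1):.3f}")
print(f"H(2) = {generalized_hurst(log_price, q=2):.3f}")
```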

    From Default Probabilities To Credit Spreads: Credit Risk Models Do Explain Market Prices

    Credit risk models like Moody’s KMV are now well established in the market and give bond managers reliable estimates of default probabilities for individual firms. Until now it has been hard to relate those probabilities to the actual credit spreads observed on the market for corporate bonds. Inspired by the existence of scaling laws in financial markets, reported by Dacorogna et al. (2001) and Di Matteo et al. (2005) and deviating from Gaussian behavior, we develop a model that quantitatively links those default probabilities to credit spreads (market prices). The main input quantities to this study are merely industry yield data for different times to maturity and expected default frequencies (EDFs) from Moody’s KMV. The empirical results of this paper clearly indicate that the model can be used to calculate approximate credit spreads (market prices) from EDFs, independent of the time to maturity and the industry sector under consideration. Moreover, the model is effective in an out-of-sample setting: it produces consistent results on the European bond market, where data are scarce, and can be adequately used to approximate credit spreads at the corporate level.
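
    For orientation, the sketch below shows the textbook constant-hazard link between a one-year default probability and a credit spread, which is the benchmark that such models refine; it is not the scaling-law model developed in the paper, and the recovery rate is an assumption.

```python
import math

def credit_spread(edf_1y, recovery=0.4):
    """Continuously compounded spread implied by a 1-year default probability
    under a constant hazard rate (textbook benchmark, not the paper's model)."""
    hazard = -math.log(1.0 - edf_1y)   # constant hazard implied by the EDF
    return hazard * (1.0 - recovery)   # spread = hazard * loss given default

# A hypothetical issuer with a 2% one-year EDF and 40% recovery:
print(f"spread ~ {credit_spread(0.02) * 1e4:.0f} bp")   # about 121 bp
```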

    Is the gamma risk of options insurable?

    In this article we analyze the risk associated with hedging written call options. We introduce a way to isolate the gamma risk from other risk types and present its loss distribution, which has heavy tails. Moving to an insurance point of view, we define a loss ratio that we find to be well behaved, with a slightly negative correlation to traditional lines of insurance business, offering diversification opportunities. The tails of the loss distribution are shown to be much fatter than those of the underlying stock returns. We also show that badly estimated volatility in the Black-Scholes model leads to considerably biased values for the replicating portfolio. Operational risk, defined here as the risk caused by imperfect delta hedging, is found to be limited in today's markets, where the autocorrelation of stock returns is small.

    Keywords: Option; Insurance; Risk
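
    A minimal simulation of the setting described above, assuming Black-Scholes dynamics and illustrative parameters: a written call is delta-hedged at discrete dates, and the spread of the final P&L reflects the residual (gamma) risk. The paper's isolation of gamma risk from the other risk types is more refined than this sketch.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
S0, K, T, sigma, r = 100.0, 100.0, 0.25, 0.2, 0.0
steps, n_paths = 63, 20_000
dt = T / steps

def d1(S, tau):
    return (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))

def bs_call_price(S, tau):
    return (S * norm.cdf(d1(S, tau))
            - K * np.exp(-r * tau) * norm.cdf(d1(S, tau) - sigma * np.sqrt(tau)))

S = np.full(n_paths, S0)
delta = norm.cdf(d1(S, T))                       # initial hedge ratio
cash = bs_call_price(S0, T) - delta * S          # premium received minus hedge cost
for i in range(1, steps + 1):
    S = S * np.exp((r - 0.5 * sigma**2) * dt
                   + sigma * np.sqrt(dt) * rng.standard_normal(n_paths))
    cash *= np.exp(r * dt)
    tau = T - i * dt
    new_delta = norm.cdf(d1(S, tau)) if tau > 1e-12 else (S > K).astype(float)
    cash -= (new_delta - delta) * S              # rebalance the hedge
    delta = new_delta

pnl = cash + delta * S - np.maximum(S - K, 0.0)  # liquidate hedge, pay the claim
print(f"mean P&L {pnl.mean():.3f}, std {pnl.std():.3f}  <- residual gamma risk")
```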

    Approaches and Techniques to Validate Internal Model Results

    The development of risk models for managing the portfolios of financial institutions and insurance companies requires, from both the regulatory and management points of view, a strong validation of the quality of the results provided by internal risk models. In Solvency II, for instance, regulators ask for independent validation reports from companies that apply for the approval of their internal models. Unfortunately, the usual statistical techniques do not work for the validation of risk models, as we lack enough data to test the results significantly. We will certainly never have enough data to statistically estimate the significance of the VaR at a probability of 1 in 200 years, which is the risk measure required by Solvency II. Instead, we need to develop various strategies to test the reasonableness of the model. In this paper, we review various ways in which management and regulators can gain confidence in the quality of models. It all starts by ensuring a good calibration of the risk models and of the dependencies between the various risk drivers. Then, by applying stress tests to the model and various empirical analyses, in particular the probability integral transform, we build a full and credible framework to validate risk models.
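
    The probability integral transform check mentioned above can be sketched as follows: if the model's predictive distribution is correct, applying its CDF to realized outcomes yields Uniform(0, 1) values, which can then be tested. The miscalibrated normal model and the synthetic data below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical case: the model predicts N(0, 1) while reality is N(0, 1.3),
# i.e. the model underestimates the risk.
realized = rng.normal(0.0, 1.3, size=2_000)
pit = stats.norm(0.0, 1.0).cdf(realized)     # apply the model's predictive CDF

# Under a correct model the PIT values are Uniform(0, 1); test that.
ks = stats.kstest(pit, "uniform")
print(f"KS statistic {ks.statistic:.3f}, p-value {ks.pvalue:.4g}")
# A small p-value flags the miscalibration (too much mass near 0 and 1).
```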

    A General framework for modelling mortality to better estimate its relationship with interest rate risks

    A good knowledge of the degree of dependence between various risks is fundamental for understanding their real impact and consequences, since dependence reduces the possibility of diversifying the risks. This paper expands, in a more theoretical approach, the methodology developed in earlier work for exploring the dependence between mortality and market risks in case of stress. In particular, we investigate, using the Feller process, the relationship between mortality and interest rate risks, which are the primary sources of risk for life (re)insurance companies. We apply the Feller process to both mortality and interest rate intensities. Our study covers both short- and long-term interest rates (3m and 10y) as well as the mortality indices of ten developed countries, extending over the same time horizon. Specifically, this paper deals with the stochastic modelling of mortality. We calibrate two different specifications of the Feller process (a two-parameter and a three-parameter one) to the survival probabilities of the generation of males born in 1940 in ten developed countries. Looking simultaneously at different countries gives us the possibility to find regularities that go beyond one particular case and are general enough to give more confidence in the results. The calibration provides in most cases a very good fit to the data extrapolated from the mortality tables. On the basis of the principle of parsimony, we choose the two-parameter Feller process, namely the hypothesis with the fewest assumptions. These results provide the basis for studying the dynamics of both risks and their dependence.
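
    As an illustration of the calibration step, the sketch below fits a two-parameter Feller intensity dλ_t = aλ_t dt + σ√λ_t dW_t (plus the initial value λ_0) to a survival curve using its affine closed form. The target curve is a synthetic Gompertz survival function, not the paper's mortality-table data, and the paper's exact specifications may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def feller_survival(t, a, sigma, lam0):
    """S(t) = exp(-B(t) * lam0), the affine closed form for the intensity
    d(lam_t) = a * lam_t dt + sigma * sqrt(lam_t) dW_t (two-parameter case)."""
    h = np.sqrt(a**2 + 2.0 * sigma**2)
    e = np.exp(h * t) - 1.0
    B = 2.0 * e / (2.0 * h + (h - a) * e)
    return np.exp(-B * lam0)

# Synthetic target: Gompertz survival curve over 40 years of a cohort's life.
t = np.arange(0.0, 41.0)
target = np.exp(-0.004 / 0.09 * (np.exp(0.09 * t) - 1.0))

params, _ = curve_fit(feller_survival, t, target,
                      p0=[0.08, 0.02, 0.005],
                      bounds=([1e-4, 1e-4, 1e-5], [1.0, 1.0, 0.1]))
print(dict(zip(["a", "sigma", "lam0"], np.round(params, 4))))
```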

    The Price of Being a Systemically Important Financial Institution (SIFI)

    After reviewing the notion of a Systemically Important Financial Institution (SIFI), we propose a first-principles way to compute the price of the implicit put option that the State grants to such an institution. Our method is based on two important results from Extreme Value Theory (EVT): one for the aggregation of heavy-tailed distributions and the other for the tail behavior of the Value-at-Risk (VaR) versus the Tail-Value-at-Risk (TVaR). We show that, in practice, the value of this option is proportional to the VaR of the institution and would thus provide the wrong incentive to the banks even if it is not explicitly granted. We conclude with a proposal to make the institution pay the price of this option to a fund, whose task would be to guarantee the orderly bankruptcy of such an institution. This fund would function like an insurer selling cover to its clients.
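
    The EVT result underlying the argument can be checked numerically: for losses with a Pareto-type tail of index α, TVaR(p)/VaR(p) → α/(α − 1), so the expected shortfall beyond the VaR, a crude proxy for the implicit put, scales with the VaR itself. The tail index and confidence level below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)
alpha, p = 3.0, 0.995
losses = rng.pareto(alpha, size=2_000_000) + 1.0   # Pareto(alpha) on [1, inf)

var = np.quantile(losses, p)
tvar = losses[losses > var].mean()                 # tail value-at-risk
put_proxy = tvar - var                             # expected excess over VaR

print(f"VaR {var:.2f}, TVaR {tvar:.2f}, TVaR/VaR {tvar/var:.3f} "
      f"(theory: {alpha/(alpha-1):.3f})")
print(f"put proxy / VaR = {put_proxy/var:.3f} (theory: {1/(alpha-1):.3f})")
```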
