
    Concurrent Credit Portfolio Losses

    We consider the problem of concurrent portfolio losses in two non-overlapping credit portfolios. In order to explore the full statistical dependence structure of such portfolio losses, we estimate their empirical pairwise copulas. Instead of a Gaussian dependence, we typically find a strong asymmetry in the copulas. Concurrent large portfolio losses are much more likely than small ones. Studying the dependence of these losses as a function of portfolio size, we moreover reveal that not only large portfolios of thousands of contracts, but also medium-sized and small ones with only a few dozen contracts exhibit notable portfolio loss correlations. Anticipated idiosyncratic effects turn out to be negligible. These are troublesome insights not only for investors in structured fixed-income products, but particularly for the stability of the financial sector.
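
    The workflow behind such empirical pairwise copulas can be illustrated with a short, purely schematic example: transform two loss series to uniform pseudo-observations via ranks and compare the joint mass in the upper and lower corners of the unit square. The simulated data, the one-factor loss mechanism and the 10% corner size below are assumptions for illustration, not the paper's data or calibration.

```python
# Minimal sketch: empirical pairwise copula of two portfolio loss series and a
# crude check of upper- vs lower-corner dependence. All data here are simulated.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)

# Hypothetical losses of two non-overlapping portfolios; a common factor induces dependence.
common = rng.standard_normal(2000)
loss_a = np.exp(0.8 * common + 0.6 * rng.standard_normal(2000))
loss_b = np.exp(0.8 * common + 0.6 * rng.standard_normal(2000))

# Empirical copula: map each margin to (0, 1) pseudo-observations via ranks.
u = rankdata(loss_a) / (len(loss_a) + 1)
v = rankdata(loss_b) / (len(loss_b) + 1)

# Joint mass in the 10% upper corner (concurrent large losses) vs the lower corner.
q = 0.10
upper = np.mean((u > 1 - q) & (v > 1 - q))
lower = np.mean((u < q) & (v < q))
print(f"upper-corner mass: {upper:.4f}, lower-corner mass: {lower:.4f}")
```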

    Survival Analysis in LGD Modeling

    The paper proposes an application of survival time analysis methodology to the estimation of the Loss Given Default (LGD) parameter. The main advantage of the survival analysis approach compared to classical regression methods is that it allows exploiting partial recovery data. The model is also modified in order to improve the performance of the appropriate goodness-of-fit measures. The empirical testing shows that the Cox proportional hazards model applied to LGD modeling performs better than the linear and logistic regressions. In addition, a significant improvement is achieved with the modified “pseudo” Cox LGD model.
    Keywords: credit risk, recovery rate, loss given default, correlation, regulatory capital
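
    A hedged sketch of the survival-analysis idea: treat the workout time until full recovery as the survival time and right-censor contracts whose recovery is still partial at the end of observation, so they still contribute to the Cox partial likelihood. The third-party lifelines package, the covariates and the simulated data are assumptions, not the paper's "pseudo" Cox specification.

```python
# Minimal sketch: Cox proportional hazards fit on workout times, with partial
# recoveries entering as right-censored observations. Data and covariates are invented.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # third-party survival-analysis package

rng = np.random.default_rng(1)
n = 500
collateral = rng.uniform(0, 1, n)                       # hypothetical covariate
log_exposure = np.log(rng.lognormal(10, 1, n))          # hypothetical covariate
true_time = rng.exponential(24 / (1 + 2 * collateral))  # months until full recovery
window = rng.uniform(6, 36, n)                          # observation window per contract

df = pd.DataFrame({
    "time": np.minimum(true_time, window),
    "recovered": (true_time <= window).astype(int),     # 0 = still a partial recovery
    "collateral": collateral,
    "log_exposure": log_exposure,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="recovered")
cph.print_summary()  # hazard ratios: how covariates speed up or slow down recovery
```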

    Credit Risk Meets Random Matrices: Coping with Non-Stationary Asset Correlations

    We review recent progress in modeling credit risk for correlated assets. We start from the Merton model, in which default events and losses are derived from the asset values at maturity. To estimate the time development of the asset values, stock prices are used, whose correlations have a strong impact on the loss distribution, particularly on its tails. These correlations are non-stationary, which also influences the tails. We account for the asset fluctuations by averaging over an ensemble of random matrices that models the truly existing set of measured correlation matrices. As a most welcome side effect, this approach drastically reduces the parameter dependence of the loss distribution, allowing us to obtain very explicit results which show quantitatively that the heavy tails prevail over diversification benefits even for small correlations. We calibrate our random matrix model with market data and show how it is capable of grasping different market situations. Furthermore, we present numerical simulations for concurrent portfolio risks, i.e., for the joint probability densities of losses for two portfolios. For the convenience of the reader, we give an introduction to the Wishart random matrix model.
    Comment: Review of a new random matrix approach to credit risk
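
    The ensemble-averaging idea can be mimicked numerically, even though the review derives largely analytical results: draw correlation matrices from a Wishart ensemble centred on an average correlation and accumulate one-period Merton-type losses under each draw. The portfolio size, default probability, recovery rate and Wishart parameters below are illustrative assumptions.

```python
# Rough Monte Carlo sketch of averaging a one-period Merton-type loss distribution
# over a Wishart ensemble of correlation matrices. Parameters are illustrative only.
import numpy as np
from scipy.stats import norm, wishart

rng = np.random.default_rng(2)
K, pd_, lgd = 50, 0.02, 0.6                       # contracts, default probability, LGD
c, n_eff = 0.3, 60                                # mean asset correlation, Wishart dof

C0 = np.full((K, K), c) + (1 - c) * np.eye(K)     # average correlation matrix
threshold = norm.ppf(pd_)                         # Merton default threshold

losses = []
for _ in range(200):                              # ensemble of random correlation matrices
    W = wishart.rvs(df=n_eff, scale=C0 / n_eff, random_state=rng)
    d = np.sqrt(np.diag(W))
    C = W / np.outer(d, d)                        # renormalize to a correlation matrix
    L = np.linalg.cholesky(C + 1e-10 * np.eye(K))
    for _ in range(200):                          # asset-value scenarios per matrix
        defaults = (L @ rng.standard_normal(K)) < threshold
        losses.append(lgd * defaults.mean())      # fractional portfolio loss

losses = np.array(losses)
print("mean loss:", losses.mean(), " 99.9% quantile:", np.quantile(losses, 0.999))
```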

    Multi-Factor Bottom-Up Model for Pricing Credit Derivatives

    In this note we continue the study of the stress event model, a simple and intuitive dynamic model for credit risky portfolios, proposed by Duffie and Singleton (1999). The model is a bottom-up version of the multi-factor portfolio credit model proposed by Longstaff and Rajan (2008). By a novel identification of independence conditions, we are able to decompose the loss distribution into a series expansion which not only provides a clear picture of the characteristics of the loss distribution but also suggests a fast and accurate approximation for it. Our approach has three important features: (i) it is able to match the standard CDS index tranche prices and the underlying CDS spreads, (ii) the computational speed of the loss distribution is very fast, comparable to that of the Gaussian copula, and (iii) the computational cost for additional factors is mild, allowing for more flexibility for calibrations and opening the possibility of studying multi-factor default dependence of a portfolio via a bottom-up approach. We demonstrate the tractability and efficiency of our approach by calibrating it to investment grade CDS index tranches.
    Keywords: credit derivatives, CDO, bottom-up approach, multi-name, intensity-based, risk and portfolio
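
    A toy Monte Carlo can convey the bottom-up, multi-factor structure, although it replaces the paper's series expansion with brute-force simulation: common stress events arrive as Poisson processes and each arrival hits every surviving name with a factor-specific default probability, on top of idiosyncratic defaults. All intensities and hit probabilities below are made up for illustration.

```python
# Toy sketch of a multi-factor bottom-up default model with common stress events.
# Brute-force simulation stands in for the paper's series-expansion approximation.
import numpy as np

rng = np.random.default_rng(3)
K, horizon, lgd = 125, 5.0, 0.6             # names (CDS-index-sized), years, LGD
lam_idio = np.full(K, 0.01)                 # idiosyncratic default intensities
factor_rates = np.array([0.10, 0.02])       # arrival intensities of two stress factors
hit_prob = np.array([[0.03], [0.30]]) * np.ones((2, K))  # per-name default prob per event

losses = []
for _ in range(20000):
    alive = np.ones(K, dtype=bool)
    # survive idiosyncratic default over the horizon with probability exp(-lambda * T)
    alive &= rng.random(K) < np.exp(-lam_idio * horizon)
    # each arrival of stress factor j defaults each surviving name with prob hit_prob[j]
    for j, rate in enumerate(factor_rates):
        for _ in range(rng.poisson(rate * horizon)):
            alive &= rng.random(K) > hit_prob[j]
    losses.append(lgd * (1 - alive.mean()))

losses = np.array(losses)
print("expected loss:", losses.mean(), " P(loss > 10%):", np.mean(losses > 0.10))
```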

    Basel II and Operational Risk: Implications for risk measurement and management in the financial sector

    This paper proposes a methodology to analyze the implications of the Advanced Measurement Approach (AMA) for the assessment of operational risk put forward by the Basel II Accord. The methodology relies on an integrated procedure for the construction of the distribution of aggregate losses, using internal and external loss data. It is illustrated on a 2x2 matrix of two selected business lines and two event types, drawn from a database of 3000 losses obtained from a large European banking institution. For each cell, the method calibrates three truncated distribution functions for the body of internal data, the tail of internal data, and external data. When the dependence structure between aggregate losses and the non-linear adjustment of external data are explicitly taken into account, the regulatory capital computed with the AMA method proves to be substantially lower than with less sophisticated approaches allowed by the Basel II Accord, although the effect is not uniform across business lines and event types. In a second phase, our models are used to estimate the effects of operational risk management actions on bank profitability, through a measure of RAROC adapted to operational risk. The results suggest that substantial savings can be achieved through active management techniques, although the estimated effect of a reduction of the number, frequency or severity of operational losses crucially depends on the calibration of the aggregate loss distributions.
    Keywords: operational risk management, Basel II, advanced measurement approach, copulae, external data, EVT, RAROC, cost-benefit analysis
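
    A compressed sketch of the aggregate-loss construction for a single business-line/event-type cell: a Poisson frequency combined with a severity made of a body below a threshold and a heavier tail above it, with capital read off as a high quantile of the simulated annual total. The lognormal body, Pareto tail and all parameters are placeholders for the three calibrated truncated distributions (internal body, internal tail, external data) used in the paper.

```python
# Minimal sketch: compound frequency/severity aggregate operational loss for one cell,
# with a body/tail split in the severity. All distributions and parameters are invented.
import numpy as np

rng = np.random.default_rng(4)
lam = 40                    # expected number of losses per year in this cell
threshold = 1e5             # body/tail split of the severity
p_tail = 0.05               # probability that a loss exceeds the threshold
years = 50000

def severity(n):
    """Draw n loss amounts: lognormal body capped at the threshold, Pareto tail above it."""
    tail = rng.random(n) < p_tail
    body = np.minimum(rng.lognormal(mean=9.5, sigma=1.2, size=n), threshold)
    excess = threshold * (1 - rng.random(n)) ** (-1 / 1.8)   # Pareto(alpha = 1.8) tail
    return np.where(tail, excess, body)

annual = np.array([severity(rng.poisson(lam)).sum() for _ in range(years)])
capital = np.quantile(annual, 0.999)   # AMA-style capital: high quantile of aggregate loss
print(f"expected annual loss: {annual.mean():,.0f}   99.9% quantile: {capital:,.0f}")
```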

    Deposit Insurance and Risk Management of the U.S. Banking System: How Much? How Safe? Who Pays?

    We examine the question of deposit insurance through the lens of risk management by addressing three key issues: 1) how big should the fund be; 2) how should coverage be priced; and 3) who pays in the event of loss. We propose a risk-based premium system that is explicitly based on the loss distribution faced by the FDIC. The loss distribution can be used to determine the appropriate level of fund adequacy and reserving in terms of a stated confidence interval and to identify risk-based pricing options. We explicitly estimate that distribution using two different approaches and find that reserves are sufficient to cover roughly 99.85% of the loss distribution, corresponding to about a BBB+ rating. We then identify three risk-sharing alternatives addressing who is responsible for funding losses in different parts of the loss distribution. We show in an example that expected-loss-based pricing, while appropriately penalizing riskier banks, also penalizes smaller banks. By contrast, unexpected-loss-contribution-based pricing significantly penalizes very large banks because large exposures contribute disproportionately to overall (FDIC) portfolio risk.
    Keywords: deposit insurance pricing, loss distribution, risk-based premiums
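
    The contrast between the two pricing schemes can be sketched with a stylized one-factor simulation of the insurer's loss distribution: expected-loss premiums charge each bank its stand-alone mean loss, while unexpected-loss-contribution premiums allocate portfolio risk via each bank's covariance with the total loss. The bank sizes, failure probabilities and the Gaussian one-factor model are assumptions, not the paper's estimates.

```python
# Minimal sketch: expected-loss pricing vs unexpected-loss-contribution pricing for a
# stylized deposit-insurance portfolio under a one-factor Gaussian default model.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
exposure = np.array([1.0, 1.0, 5.0, 50.0])   # insured deposits; the last bank is very large
pd_ = np.array([0.03, 0.02, 0.01, 0.005])    # annual failure probabilities
rho, lgd, n_sim = 0.20, 0.25, 200000

thr = norm.ppf(pd_)
z = rng.standard_normal(n_sim)[:, None]                     # common systematic factor
eps = rng.standard_normal((n_sim, len(exposure)))           # idiosyncratic factors
default = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps < thr
loss_i = default * lgd * exposure                           # per-bank losses
loss_p = loss_i.sum(axis=1)                                 # insurer (portfolio) loss

el_premium = loss_i.mean(axis=0)                            # expected-loss pricing
cov_i = np.array([np.cov(loss_i[:, i], loss_p)[0, 1] for i in range(len(exposure))])
ul_premium = cov_i / loss_p.std()                           # std-deviation contribution pricing

print("EL-based premiums     :", np.round(el_premium, 4))
print("UL-contribution-based :", np.round(ul_premium, 4))
```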

    Fitting the Generalized Pareto Distribution to Commercial Fire Loss Severity: Evidence from Taiwan

    This paper focuses on modeling and estimating tail parameters of loss distributions from Taiwanese commercial fire loss severity. Using extreme value theory, we employ the generalized Pareto distribution (GPD) and compare it with standard parametric modeling based on lognormal, exponential, gamma and Weibull distributions. In an empirical study, we determine the thresholds of the GPD using mean excess plots and Hill plots. Kolmogorov–Smirnov and likelihood ratio goodness-of-fit tests are conducted, and value-at-risk and expected shortfall are calculated. We also construct confidence intervals for the estimates using the bootstrap method.
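
    The peaks-over-threshold workflow described above can be sketched in a few lines: fit a GPD to exceedances over a chosen threshold and read off value-at-risk and expected shortfall from the standard POT formulas. The simulated severities and the fixed 90th-percentile threshold are assumptions; in the paper the threshold is chosen from mean excess and Hill plots and the estimates are bootstrapped for confidence intervals.

```python
# Minimal sketch: fit a generalized Pareto distribution to threshold exceedances and
# compute VaR and expected shortfall. The loss data and threshold choice are illustrative.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(6)
losses = rng.lognormal(mean=12, sigma=1.5, size=5000)   # stand-in fire-loss severities

u = np.quantile(losses, 0.90)                           # threshold (90th percentile here)
excess = losses[losses > u] - u
xi, _, beta = genpareto.fit(excess, floc=0)             # GPD shape xi and scale beta

# POT estimators: VaR_p = u + (beta / xi) * (((1 - p) / zeta_u)**(-xi) - 1),
# ES_p = VaR_p / (1 - xi) + (beta - xi * u) / (1 - xi), with zeta_u = P(X > u), xi < 1.
p, zeta_u = 0.995, np.mean(losses > u)
var_p = u + beta / xi * (((1 - p) / zeta_u) ** (-xi) - 1)
es_p = var_p / (1 - xi) + (beta - xi * u) / (1 - xi)
print(f"xi = {xi:.3f}, beta = {beta:,.0f}, VaR_99.5 = {var_p:,.0f}, ES_99.5 = {es_p:,.0f}")
```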