
    Importance Sampling for Portfolio Credit Risk in Factor Copula Models

    This work considers the problem of estimating Value at Risk contributions in a portfolio of credits. Each risk contribution is the conditional expected loss of an obligor, given a large loss of the full portfolio. This rare-event setting makes it difficult to obtain accurate and stable estimates via standard Monte Carlo methods. The factor copula models employed to capture the dependence among obligors pose an additional challenge. By suitably modifying the algorithm introduced by Glasserman and Li (2005), this work develops importance sampling schemes that lead to significant variance reduction in both single- and multi-factor models.
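    To make the flavour of such a scheme concrete, the following is a minimal sketch assuming a one-factor Gaussian copula with homogeneous, illustrative parameters; it applies only the inner, conditional exponential twist in the spirit of Glasserman and Li (2005), omits the outer shift of the systematic factor, and all names and values are placeholders rather than the paper's actual algorithm.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

rng = np.random.default_rng(0)

# Illustrative single-factor Gaussian copula portfolio (all values are assumptions).
m = 100                               # number of obligors
p = np.full(m, 0.01)                  # unconditional default probabilities
c = np.full(m, 1.0)                   # exposures (loss given default)
rho = np.full(m, 0.3)                 # factor loadings
x = 10.0                              # large-loss threshold defining the rare event

def conditional_pd(z):
    """P(default | systematic factor Z = z) in the one-factor Gaussian copula."""
    return norm.cdf((norm.ppf(p) - rho * z) / np.sqrt(1.0 - rho ** 2))

def twist_parameter(pz):
    """theta such that the twisted conditional mean loss equals the threshold x
    (zero when the untwisted conditional mean already exceeds x)."""
    if np.sum(c * pz) >= x:
        return 0.0
    psi_prime = lambda t: np.sum(c * pz * np.exp(t * c) /
                                 (1.0 - pz + pz * np.exp(t * c))) - x
    return brentq(psi_prime, 0.0, 50.0)

n = 20000
losses, weights = np.zeros(n), np.zeros(n)
contrib_num = np.zeros((n, m))
for k in range(n):
    z = rng.standard_normal()
    pz = conditional_pd(z)
    theta = twist_parameter(pz)
    q = pz * np.exp(theta * c) / (1.0 - pz + pz * np.exp(theta * c))  # twisted PDs
    d = rng.random(m) < q                                             # defaults under IS
    loss = np.sum(c * d)
    psi = np.sum(np.log(1.0 - pz + pz * np.exp(theta * c)))           # conditional CGF
    lr = np.exp(-theta * loss + psi)                                  # likelihood ratio
    losses[k], weights[k] = loss, lr
    contrib_num[k] = c * d * lr

tail = losses > x
prob = np.mean(weights * tail)                                  # estimate of P(L > x)
contrib = contrib_num[tail].sum(axis=0) / weights[tail].sum()   # E[c_i 1{default_i} | L > x]
print(prob, contrib[:5])
```

    The last two lines are the usual ratio estimators: a weighted tail frequency for the loss probability and a weighted tail average for the per-obligor contributions.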

    Quantile estimation with adaptive importance sampling

    We introduce new quantile estimators with adaptive importance sampling. The adaptive estimators are based on weighted samples that are neither independent nor identically distributed. Using a new law of the iterated logarithm for martingales, we prove the convergence of the adaptive quantile estimators for general distributions with nonunique quantiles, thereby extending the work of Feldman and Tucker [Ann. Math. Statist. 37 (1966) 451--457]. We illustrate the algorithm with an example from credit portfolio risk analysis. Comment: Published in the Annals of Statistics (http://dx.doi.org/10.1214/09-AOS745, http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
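    The adaptive scheme of the paper updates the sampling law as observations arrive; its non-adaptive building block is just a quantile read off a self-normalised weighted empirical distribution. A small sketch under an illustrative Gaussian exponential tilt (the proposal mean and sample size are assumptions):

```python
import numpy as np

def weighted_quantile(x, w, alpha):
    """Estimate the alpha-quantile of the target law from samples x drawn under an
    importance-sampling law, with likelihood-ratio weights w, by inverting the
    self-normalised weighted empirical distribution function."""
    order = np.argsort(x)
    x, w = x[order], w[order]
    cdf = np.cumsum(w) / np.sum(w)
    return x[min(np.searchsorted(cdf, alpha), len(x) - 1)]

# Illustrative check: a N(mu, 1) exponential tilt of a N(0, 1) target (mu is an assumption).
rng = np.random.default_rng(1)
mu = 2.0
x = rng.normal(mu, 1.0, 50000)                # samples from the N(mu, 1) proposal
w = np.exp(-mu * x + 0.5 * mu**2)             # N(0,1) density over N(mu,1) density
print(weighted_quantile(x, w, 0.99))          # close to 2.326, the N(0,1) 0.99-quantile
```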

    Reducing Asset Weights' Volatility by Importance Sampling in Stochastic Credit Portfolio Optimization

    The objective of this paper is to study the effect of importance sampling (IS) techniques on stochastic credit portfolio optimization methods. I introduce a framework that leads to a reduction of the volatility of the resulting optimal portfolio asset weights. The performance of the method is documented in terms of implementation simplicity and accuracy. It is shown that the incorporated methods make solutions more precise under limited computing resources, by reducing the size of the initially required optimization model. In the presented example, the variance of both the risk measures and the asset weights was reduced by a factor of at least 350. I finally outline how the results can be carried into business practice by using readily available software such as RiskMetrics' CreditManager as the basis for a portfolio optimization model that is enhanced by means of IS. Keywords: credit risk; stochastic portfolio optimization; variance reduction; importance sampling; CVaR; CreditMetrics; CreditManager
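    One way importance sampling can enter such an optimization is through scenario probabilities in the Rockafellar-Uryasev linear-programming formulation of CVaR. The sketch below is a hedged illustration of that mechanism, with a synthetic loss matrix and synthetic likelihood ratios standing in for the CreditManager-based model of the paper; all function and variable names are invented for the example.

```python
import numpy as np
from scipy.optimize import linprog

def min_cvar_weights(L, w, beta=0.99):
    """Rockafellar-Uryasev CVaR minimisation over long-only, fully invested portfolios,
    with importance-sampling scenario weights w.  L is a (K scenarios x n assets) loss
    matrix; this is a sketch of the mechanism, not the paper's optimization model."""
    K, n = L.shape
    p = w / w.sum()                                   # self-normalised scenario probabilities
    # Decision vector: [x_1..x_n, t, z_1..z_K] (portfolio weights, VaR level, excess losses)
    c = np.concatenate([np.zeros(n), [1.0], p / (1.0 - beta)])
    # Scenario constraints:  L_k . x - t - z_k <= 0
    A_ub = np.hstack([L, -np.ones((K, 1)), -np.eye(K)])
    b_ub = np.zeros(K)
    # Full investment: sum_i x_i = 1
    A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(K)]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * n + [(None, None)] + [(0.0, None)] * K
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:n], res.x[n]                        # optimal weights and the VaR level

# Illustrative use with synthetic importance-sampled scenarios (the loss matrix and the
# likelihood ratios are placeholders, not output of a real credit model).
rng = np.random.default_rng(2)
K, n = 1000, 5
L = rng.gamma(shape=2.0, scale=0.01, size=(K, n))     # stand-in credit loss scenarios
w = rng.uniform(0.5, 1.5, size=K)                     # stand-in likelihood ratios
x_opt, var_level = min_cvar_weights(L, w)
print(np.round(x_opt, 3), round(var_level, 4))
```

    The point of the scenario weights is that well-chosen IS scenarios resolve the tail with a much smaller scenario set, which is what shrinks the optimization model.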

    Efficient simulation of large deviation events for sums of random vectors using saddle-point representations

    We consider the problem of efficient simulation estimation of the density function at the tails, and of the probability of large deviations, for a sum of independent, identically distributed (i.i.d.), light-tailed and nonlattice random vectors. The latter problem, besides being of independent interest, also forms a building block for more complex rare-event problems that arise, for instance, in queuing and financial credit risk modeling. It has been extensively studied in the literature, where state-independent, exponential-twisting-based importance sampling has been shown to be asymptotically efficient and a more nuanced state-dependent exponential twisting has been shown to have the stronger bounded relative error property. We exploit the saddle-point-based representations that exist for these rare quantities, which rely on inverting the characteristic functions of the underlying random vectors. These representations reduce the rare-event estimation problem to evaluating certain integrals, which may, via importance sampling, be represented as expectations. Furthermore, it is easy to identify and approximate the zero-variance importance sampling distribution for estimating these integrals. We identify such importance sampling measures and show that they possess the asymptotically vanishing relative error property, which is stronger than the bounded relative error property. To illustrate the broader applicability of the proposed methodology, we extend it to develop an asymptotically vanishing relative error estimator for the practically important expected overshoot of sums of i.i.d. random variables.
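    The state-independent exponential twisting that serves as the abstract's baseline (not the paper's saddle-point estimator itself) is easy to illustrate in one dimension; the increment distribution, threshold and sample counts below are assumptions chosen for concreteness.

```python
import numpy as np
from scipy.optimize import brentq

# Classical state-independent exponential twisting for P(S_n >= n*a) with i.i.d.
# standard-exponential increments; the paper's saddle-point/characteristic-function
# estimators are not reproduced here.
n, a = 30, 2.0                                       # illustrative values
# For X ~ Exp(1): Lambda(t) = -log(1 - t) for t < 1, so Lambda'(t) = 1/(1 - t) and the
# root of Lambda'(theta) = a is theta = 1 - 1/a (solved numerically to show the step).
theta = brentq(lambda t: 1.0 / (1.0 - t) - a, 0.0, 0.999)

rng = np.random.default_rng(3)
trials = 100000
# Under the theta-tilt an Exp(1) increment becomes Exp(1 - theta), i.e. scale 1/(1 - theta).
x = rng.exponential(scale=1.0 / (1.0 - theta), size=(trials, n))
s = x.sum(axis=1)
log_lr = -theta * s + n * (-np.log(1.0 - theta))     # f/g = exp(-theta*S + n*Lambda(theta))
estimate = np.mean((s >= n * a) * np.exp(log_lr))    # unbiased estimate of P(S_n >= n*a)
print(theta, estimate)
```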

    Credit Risk Monte Carlo Simulation Using a Simplified CreditMetrics Model: the joint use of importance sampling and descriptive sampling

    Monte Carlo simulation is implemented in some of the main models for estimating portfolio credit risk, such as CreditMetrics, developed by Gupton, Finger and Bhatia (1997). As in any Monte Carlo application, credit risk simulation under this model produces imprecise estimates. In order to improve precision, simulation sampling techniques beyond traditional Simple Random Sampling become indispensable. Importance Sampling (IS) has already been successfully implemented by Glasserman and Li (2005) on a simplified version of CreditMetrics in which only default risk is considered. This paper seeks to further improve the precision gains obtained by IS on the same simplified CreditMetrics model. To this end, IS is combined with Descriptive Sampling (DS), another simulation technique that has proved to be a powerful variance reduction procedure. IS combined with DS yielded more precise credit risk estimates than IS in its standard form.
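    Descriptive Sampling replaces the pseudo-random uniforms driving a simulation with a randomly permuted deterministic grid, and it can be layered on top of an importance-sampling change of measure. The sketch below applies the combination to a toy Gaussian tail probability rather than to the simplified CreditMetrics model; the proposal, threshold and sample size are assumptions.

```python
import numpy as np
from scipy.stats import norm

def descriptive_uniforms(n_samples, n_dims, rng):
    """Descriptive Sampling: each dimension uses the fixed grid (i + 0.5)/N,
    independently shuffled, instead of i.i.d. pseudo-random uniforms."""
    grid = (np.arange(n_samples) + 0.5) / n_samples
    return np.column_stack([rng.permutation(grid) for _ in range(n_dims)])

mu, n = 3.0, 10000           # N(mu, 1) proposal for the tail event {X > 3}, X ~ N(0, 1)
rng = np.random.default_rng(4)

def is_estimate(u):
    """Importance sampling estimate of P(X > 3) driven by the uniforms u."""
    u = np.clip(u, 1e-12, 1.0 - 1e-12)
    x = mu + norm.ppf(u[:, 0])                 # inverse-CDF draw from the N(mu, 1) proposal
    lr = np.exp(-mu * x + 0.5 * mu ** 2)       # N(0,1) density over N(mu,1) density
    return np.mean((x > 3.0) * lr)

print("plain IS:", is_estimate(rng.random((n, 1))))
print("IS + DS :", is_estimate(descriptive_uniforms(n, 1, rng)))
print("exact   :", 1.0 - norm.cdf(3.0))
```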

    Factor models and the credit risk of a loan portfolio

    Factor models for portfolio credit risk assume that defaults are independent conditional on a small number of systematic factors. This paper shows that the conditional independence assumption may be violated in one-factor models with constant default thresholds, as conditional defaults become independent only when a set of observable (time-lagged) risk factors is included. This result is confirmed both when we consider semi-annual default rates and when we focus on small firms. Maximum likelihood estimates of the sensitivity of default rates to systematic risk factors are obtained, showing that they may vary substantially across industry sectors. Finally, individual risk contributions are derived through Monte Carlo simulation. Keywords: asset correlation; factor models; loss distribution; portfolio credit risk; risk contributions
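    A minimal one-factor Gaussian model makes the conditional-independence mechanism and the Monte Carlo risk contributions concrete. All parameters below are illustrative, and the time-lagged observable factors discussed in the paper are not modelled.

```python
import numpy as np
from scipy.stats import norm

# Minimal one-factor Gaussian model with illustrative parameters: simulate the
# portfolio loss distribution and derive individual risk contributions by plain
# Monte Carlo, averaging each obligor's loss over scenarios close to the VaR level.
rng = np.random.default_rng(5)
m, n = 50, 100000
p = rng.uniform(0.005, 0.03, m)            # unconditional default probabilities (assumed)
w = rng.uniform(0.2, 0.5, m)               # factor sensitivities (assumed)
e = rng.uniform(0.5, 2.0, m)               # exposures (assumed)

z = rng.standard_normal((n, 1))                                # systematic factor
eps = rng.standard_normal((n, m))                              # idiosyncratic terms
defaults = (w * z + np.sqrt(1.0 - w**2) * eps) < norm.ppf(p)   # independent given z
losses = defaults.astype(float) @ e                            # portfolio loss per scenario

alpha = 0.999
var_level = np.quantile(losses, alpha)
band = np.abs(losses - var_level) <= 0.25                  # scenarios near the VaR level
contrib = (defaults[band] * e).mean(axis=0)                # E[e_i 1{default_i} | L ~ VaR]
print(var_level, contrib.sum())                            # contributions sum to about VaR
```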

    Capital allocation for credit portfolios with kernel estimators

    Determining the contributions of sub-portfolios or single exposures to portfolio-wide economic capital for credit risk is an important risk measurement task. Often economic capital is measured as the Value-at-Risk (VaR) of the portfolio loss distribution. For many of the credit portfolio risk models used in practice, the VaR contributions then have to be estimated from Monte Carlo samples. In the context of a partly continuous loss distribution (i.e. continuous except for a positive point mass at zero), we investigate how to combine kernel estimation methods with importance sampling to achieve more efficient (i.e. less volatile) estimation of VaR contributions. Comment: 22 pages, 12 tables, 1 figure, some amendments.
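    The idea can be sketched as a Nadaraya-Watson style smoother: each Monte Carlo scenario is weighted both by a kernel in the distance of its portfolio loss from the VaR level and by its importance-sampling likelihood ratio. The data, bandwidth and unit weights below are placeholders, not the paper's estimator or portfolio.

```python
import numpy as np

def kernel_var_contributions(losses, obligor_losses, weights, var_level, h):
    """Nadaraya-Watson style estimate of E[L_i | L = VaR] from (possibly importance-
    sampled) scenarios: each scenario is weighted by a Gaussian kernel in the distance
    of its portfolio loss from the VaR level and by its likelihood ratio."""
    k = np.exp(-0.5 * ((losses - var_level) / h) ** 2)      # Gaussian kernel weights
    w = weights * k
    return (obligor_losses * w[:, None]).sum(axis=0) / w.sum()

# Placeholder scenarios: obligor_losses is (K scenarios x m obligors), losses its row
# sums, and weights the importance-sampling likelihood ratios (unit weights here, i.e.
# plain Monte Carlo; with IS one would pass the actual ratios).
rng = np.random.default_rng(6)
K, m = 50000, 20
obligor_losses = rng.exponential(0.1, (K, m)) * (rng.random((K, m)) < 0.05)
losses = obligor_losses.sum(axis=1)
weights = np.ones(K)
var_level = np.quantile(losses, 0.99)
contrib = kernel_var_contributions(losses, obligor_losses, weights, var_level, h=0.02)
print(var_level, contrib.sum())          # kernel contributions roughly add up to the VaR
```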
