
    Quantitative Validation: An Overview and Framework for PD Backtesting and Benchmarking.

    The aim of credit risk models is to identify and quantify future outcomes of a set of risk measurements. In other words, the model's purpose is to provide as good an approximation as possible of the true underlying risk relationship between a set of inputs and a target variable. These risk parameters are used in regulatory capital calculations to determine the capital needed to serve as a buffer that protects depositors in adverse economic conditions. In order to manage model risk, financial institutions need to set up validation processes so as to monitor the quality of their models on an ongoing basis. Validation is important to inform all stakeholders (e.g. board of directors, senior management, regulators, investors, borrowers, …) and thereby allow them to make better decisions. Validation can be considered from both a quantitative and a qualitative point of view. Backtesting and benchmarking are key quantitative validation tools. In backtesting, the predicted risk measurements (PD, LGD, CCF) are contrasted with observed measurements using a workbench of available test statistics to evaluate the calibration, discrimination and stability of the model. Timely detection of reduced performance is crucial since it directly impacts profitability and risk management strategies. The aim of benchmarking is to compare internal risk measurements with external risk measurements so as to better gauge the quality of the internal rating system. This paper focuses on the quantitative PD validation process within a Basel II context. We set forth a traffic light indicator approach that employs all relevant statistical tests to quantitatively validate the PD model in use, and document this complete approach with a real-life case study.
    Keywords: Framework; Benchmarking; Credit; Credit scoring; Control
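    As a rough illustration of the kind of test a traffic light indicator approach can build on (the paper's full workbench of statistics is not reproduced here), the sketch below applies a one-sided binomial test per rating grade and maps the resulting p-value to a green/yellow/red signal; the grades, PD estimates, obligor counts and thresholds are invented for illustration.

```python
# Minimal sketch of a traffic-light PD backtest based on a one-sided binomial test.
# Grades, PDs, counts and thresholds are illustrative, not taken from the paper.
from scipy.stats import binom

def traffic_light(pd_est, n_obligors, n_defaults, yellow=0.05, red=0.01):
    """Return 'green', 'yellow' or 'red' for one rating grade."""
    # p-value of observing at least n_defaults defaults if the true PD were pd_est
    p_value = binom.sf(n_defaults - 1, n_obligors, pd_est)  # P(X >= n_defaults)
    if p_value < red:
        return "red"      # observed defaults far above what the estimated PD implies
    if p_value < yellow:
        return "yellow"   # borderline underestimation, investigate further
    return "green"        # calibration not rejected

portfolio = [  # (grade, estimated PD, obligors, observed defaults): made-up numbers
    ("A", 0.005, 2000, 12),
    ("B", 0.020, 1500, 41),
    ("C", 0.060, 800, 46),
]
for grade, pd_est, n, d in portfolio:
    print(grade, traffic_light(pd_est, n, d))
```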

    The performance of credit rating systems in the assessment of collateral used in Eurosystem monetary policy operations

    The aims of this paper are twofold: first, we attempt to express the threshold of a single “A” rating as issued by major international rating agencies in terms of annualised probabilities of default. We use data from Standard & Poor’s and Moody’s publicly available rating histories to construct confidence intervals for the level of probability of default to be associated with the single “A” rating. The focus on the single “A” rating level is not accidental, as this is the credit quality level at which the Eurosystem considers financial assets to be eligible collateral for its monetary policy operations. The second aim is to review various existing validation models for the probability of default which enable the analyst to check the ability of credit assessment systems to forecast future default events. Within this context the paper proposes a simple mechanism for the comparison of the performance of major rating agencies and that of other credit assessment systems, such as the internal ratings-based systems of commercial banks under the Basel II regime. This is done to provide a simple validation yardstick to help in the monitoring of the performance of the different credit assessment systems participating in the assessment of eligible collateral underlying Eurosystem monetary policy operations. Contrary to the widely used confidence interval approach, our proposal, based on an interpretation of p-values as frequencies, guarantees a convergence to an ex ante fixed probability of default (PD) value. Given the general characteristics of the problem considered, we consider this simple mechanism to also be applicable in other contexts.
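    To make the confidence-interval side of such a comparison concrete, the sketch below computes a Clopper-Pearson (exact binomial) interval for a one-year default rate from a cohort size and a default count; the figures are made up, and the paper's own p-values-as-frequencies mechanism is not reproduced here.

```python
# Clopper-Pearson (exact binomial) confidence interval for an annual default rate.
# The cohort size and default count below are illustrative, not agency data.
from scipy.stats import beta

def clopper_pearson(defaults, cohort, alpha=0.05):
    lower = 0.0 if defaults == 0 else beta.ppf(alpha / 2, defaults, cohort - defaults + 1)
    upper = 1.0 if defaults == cohort else beta.ppf(1 - alpha / 2, defaults + 1, cohort - defaults)
    return lower, upper

# e.g. 3 defaults observed over one year in a cohort of 5000 single-"A" rated issuers
low, high = clopper_pearson(3, 5000)
print(f"95% CI for the annual PD: [{low:.5f}, {high:.5f}]")
```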

    Predicting loss given default

    The topic of credit risk modeling has arguably become more important than ever before given the recent financial turmoil. In accordance with the international Basel accords on banking supervision, financial institutions need to prove that they hold sufficient capital to protect themselves and the financial system against unforeseen losses caused by defaulters. In order to determine the required minimal capital, empirical models can be used to predict the loss given default (LGD). The main objective of this doctoral thesis is to obtain new insights into how to develop and validate predictive LGD models through regression techniques. The first part reveals how well real-life LGD can be predicted and which techniques perform best. Its particular value lies in the use of default data from six major international financial institutions and the evaluation of twenty-four different regression techniques, making this the largest LGD benchmarking study so far. It is found that the resulting models have limited predictive performance no matter what technique is employed, although non-linear techniques yield higher performance than traditional linear techniques. The results of this study strongly advocate the need for financial institutions to invest in the collection of more relevant data. The second part introduces a novel validation framework to backtest the predictive performance of LGD models. The key idea proposed is to assess the test performance relative to the performance during model development with statistical hypothesis tests based on commonly used LGD predictive performance metrics. The value of this framework lies in offering a solution to the lack of reference values for determining acceptable performance and to the possible performance bias caused by too little data. This study offers financial institutions a practical tool to prove the validity of their LGD models and corresponding predictions, as required by national regulators. The third part uncovers whether the optimal regression technique can be selected based on typical characteristics of the data. Its value lies especially in the use of the recently introduced concept of datasetoids, which allows the generation of thousands of datasets representing real-life relations, thereby circumventing the scarcity of publicly available real-life datasets and making this the largest meta-learning regression study so far. It is found that typical data-based characteristics do not play any role in the performance of a technique, whereas algorithm-based characteristics are good drivers for selecting the optimal technique. This thesis may be valuable for any financial institution implementing credit risk models to determine their minimal capital requirements compliant with the Basel accords. The new insights provided in this thesis may support financial institutions in developing and validating their own LGD models. The results of the benchmarking and meta-learning studies can help financial institutions select the appropriate regression technique to model their LGD portfolios. In addition, the proposed backtesting framework, together with the benchmarking results, can be employed to support the validation of internally developed LGD models.
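    The backtesting idea of the second part, judging test-set performance relative to the performance seen at model development, can be sketched as below with a bootstrap interval around the test-set mean absolute error; this is one possible reading under invented numbers, not the thesis's actual battery of hypothesis tests.

```python
# Sketch: flag an LGD model when its test-set mean absolute error (MAE) is
# significantly worse than the MAE observed at model development.
import numpy as np

rng = np.random.default_rng(0)

def backtest_mae(lgd_observed, lgd_predicted, mae_development, n_boot=5000, alpha=0.05):
    errors = np.abs(lgd_observed - lgd_predicted)
    boot = np.array([rng.choice(errors, size=errors.size, replace=True).mean()
                     for _ in range(n_boot)])
    low, high = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    # the model is flagged only when the development MAE lies below the whole
    # bootstrap interval, i.e. the test performance is significantly worse
    return {"test_mae": errors.mean(), "ci": (low, high),
            "deteriorated": mae_development < low}

obs = rng.beta(2, 5, size=400)                                # made-up observed LGDs
pred = np.clip(obs + rng.normal(0, 0.2, size=400), 0.0, 1.0)  # made-up predictions
print(backtest_mae(obs, pred, mae_development=0.18))
```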

    Backtesting of a credit scoring system under the current regulatory framework

    Master's thesis in Actuarial Science. Since the implementation of the current regulatory framework within the global financial system, banks are allowed to rely on a system using their own estimates of credit risk parameters as inputs for the calculation of risk weights and capital requirements. Consequently, in order to assure the stability and soundness of credit institutions, the need for a robust validation system to ensure the accuracy and consistency of internal rating systems is greater than ever before. Although several studies on validation processes already exist, a deeper understanding of and agreement on this subject are required, namely concerning the accuracy assessment of internal estimates of credit risk parameters, in order to achieve capital requirements stability. Calibration of default probabilities is one of the quantitative validation procedures underlying the backtesting exercise that must be performed on a regular basis. The present text discusses the probability of default (PD) calibration process, using a scoring model to illustrate the assessment of the predictive power of these internal estimates in a residential mortgage portfolio. To overcome the challenge of developing an adequate validation scheme in compliance with the current regulatory framework, this project takes into account the legislation from the Basel Committee on Banking Supervision (BCBS) and the European Banking Authority (EBA), relevant studies on the subject, and what are considered to be the best practices of credit risk management.
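    As a companion to the calibration discussion, the sketch below runs a Hosmer-Lemeshow-style chi-square test of predicted against observed default counts across score bands of a simulated mortgage portfolio; it is an illustration of a generic calibration check, not the validation scheme developed in the project.

```python
# Hosmer-Lemeshow-style calibration check across score bands (illustrative only).
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(pd_pred, defaulted, n_bands=10):
    order = np.argsort(pd_pred)
    stat = 0.0
    for idx in np.array_split(order, n_bands):
        n = idx.size
        expected = pd_pred[idx].sum()      # expected number of defaults in the band
        observed = defaulted[idx].sum()    # observed number of defaults in the band
        variance = expected * (1 - expected / n)
        stat += (observed - expected) ** 2 / variance
    p_value = chi2.sf(stat, df=n_bands - 2)
    return stat, p_value

rng = np.random.default_rng(1)
pd_pred = rng.uniform(0.001, 0.05, size=5000)  # made-up PDs for a mortgage book
defaulted = rng.binomial(1, pd_pred)           # simulated default outcomes
print(hosmer_lemeshow(pd_pred, defaulted))
```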

    A proposed framework for backtesting loss given default models


    Basel II implementation - retail credit risk mitigation


    Pricing and Hedging Illiquid Energy Derivatives: an Application to the JCC Index

    In this paper we discuss a simple econometric strategy for pricing and hedging illiquid financial products, such as the Japanese crude oil cocktail (JCC) index, the most popular OTC energy derivative in Japan. First, we review the existing literature on computing optimal hedge ratios (OHRs) and propose a critical classification of the existing approaches. Second, we compare the empirical performance of different econometric models (namely, regression models in price levels, price first differences and price returns, as well as error correction and autoregressive distributed lag models) in terms of their computed OHR, using monthly data on the JCC over the period January 2000-January 2006. Third, we illustrate and implement a procedure to cross-hedge and price two different swaps on the JCC: a one-month swap and a three-month swap with a variable oil volume. We explain how to compute a bid/ask spread and how to construct the hedging position for the JCC swap. Fourth, we evaluate our swap pricing scheme with backtesting and rolling regression techniques. Our empirical findings show that it is not necessary to use sophisticated econometric techniques, since the price-level regression model yields a more reliable optimal hedge ratio than its competing alternatives.
    Keywords: Hedging Models, Cross-Hedging, Energy Derivatives, Illiquid Financial Products, Commodity Markets, JCC Price Index
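    To illustrate the price-level regression that the paper finds most reliable for the optimal hedge ratio, the sketch below estimates the OHR as the slope of an OLS regression of a simulated JCC-like series on a simulated futures price; the series are stand-ins, not the monthly JCC data used in the paper.

```python
# Optimal hedge ratio from a price-level regression: spot_t = a + h * futures_t + e_t.
# Simulated series stand in for the JCC index and a liquid crude futures price.
import numpy as np

rng = np.random.default_rng(42)
futures = 60 + np.cumsum(rng.normal(0, 1.5, size=72))   # 72 months of futures prices
spot = 5 + 0.9 * futures + rng.normal(0, 2.0, size=72)  # JCC-like index levels

X = np.column_stack([np.ones_like(futures), futures])
intercept, hedge_ratio = np.linalg.lstsq(X, spot, rcond=None)[0]
print(f"estimated optimal hedge ratio: {hedge_ratio:.3f}")
# A hedger would short roughly hedge_ratio units of the futures per unit of JCC exposure.
```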