
    Quantitative Validation: An Overview and Framework for PD Backtesting and Benchmarking.

    The aim of credit risk models is to identify and quantify future outcomes of a set of risk measurements. In other words, the model's purpose is to provide as good an approximation as possible of the true underlying risk relationship between a set of inputs and a target variable. These parameters are used in regulatory capital calculations to determine the capital needed as a buffer to protect depositors in adverse economic conditions. In order to manage model risk, financial institutions need to set up validation processes to monitor the quality of their models on an ongoing basis. Validation is important because it informs all stakeholders (e.g. board of directors, senior management, regulators, investors, borrowers, …) and thereby allows them to make better decisions. Validation can be considered from both a quantitative and a qualitative point of view. Backtesting and benchmarking are key quantitative validation tools. In backtesting, the predicted risk measurements (PD, LGD, CCF) are contrasted with observed measurements using a workbench of available test statistics to evaluate the calibration, discrimination and stability of the model. Timely detection of reduced performance is crucial since it directly impacts profitability and risk management strategies. The aim of benchmarking is to compare internal risk measurements with external risk measurements so as to better gauge the quality of the internal rating system. This paper focuses on the quantitative PD validation process within a Basel II context. We set forth a traffic light indicator approach that employs all relevant statistical tests to quantitatively validate the PD model in use, and document this complete approach with a real-life case study.

    Keywords: Framework; Benchmarking; Credit; Credit scoring; Control
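    In a backtesting workbench of this kind, grade-level PD calibration is often checked with a one-sided binomial test whose p-value is mapped onto traffic light zones. The sketch below is illustrative only: the exact tests in the paper may differ, and the 5%/1% cut-offs are assumed conventions, not values taken from the paper.

```python
from math import comb

def binom_sf(k, n, p):
    """Exact upper tail P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i))
               for i in range(k, n + 1))

def traffic_light(n_obligors, n_defaults, pd_predicted, yellow=0.05, red=0.01):
    """One-sided binomial backtest of a rating grade's predicted PD.

    Green: observed defaults are consistent with the predicted PD.
    Yellow/red: increasingly strong evidence the PD is underestimated.
    """
    p_value = binom_sf(n_defaults, n_obligors, pd_predicted)
    if p_value <= red:
        return "red"
    if p_value <= yellow:
        return "yellow"
    return "green"
```

    For a grade with a predicted PD of 1% and 1,000 obligors, 10 observed defaults is in line with the forecast (green), whereas 25 defaults yields a p-value far below 1% (red).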

    A New Approximation Method for the Shapley Value Applied to the WTC 9/11 Terrorist Attack

    The Shapley value (Shapley (1953)) is one of the most prominent one-point solution concepts in cooperative game theory that divides the revenues (or cost, power) that can be obtained by cooperation of the players in the game. The Shapley value is mathematically characterized by properties that have appealing real-world interpretations, and hence its use in practical settings is easily justified. The downside is that its computational complexity increases exponentially with the number of players in the game. Therefore, in practical problems with more than 25 players, calculating the Shapley value is usually too computationally expensive. Among other applications, the Shapley value is used in the analysis of terrorist networks (cf. Lindelauf et al. (2013)), which generally extend beyond 25 players. In this paper we therefore present a new method to approximate the Shapley value by refining the random sampling method introduced by Castro et al. (2009). We show that our method outperforms the random sampling method, reducing the average error in the Shapley value approximation by almost 30%. Moreover, our new method enables us to analyze the extended WTC 9/11 network of Krebs (2002), which consists of 69 members. This is in contrast to the restricted WTC 9/11 network considered in Lindelauf et al. (2013), which covered only the operational cells consisting of the 19 hijackers that conducted the attack.
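    The random sampling baseline of Castro et al. (2009), which the paper refines, draws random orderings of the players and averages each player's marginal contribution over them. A minimal sketch of that baseline (not the paper's refined method):

```python
import random

def shapley_sampling(players, v, n_perms, seed=0):
    """Approximate Shapley values by averaging marginal contributions
    over n_perms uniformly random player orderings (Castro et al. 2009)."""
    rng = random.Random(seed)
    estimate = {p: 0.0 for p in players}
    for _ in range(n_perms):
        order = list(players)
        rng.shuffle(order)
        coalition = set()
        prev_value = v(frozenset(coalition))
        for p in order:
            coalition.add(p)
            cur_value = v(frozenset(coalition))
            estimate[p] += cur_value - prev_value  # marginal contribution of p
            prev_value = cur_value
    return {p: estimate[p] / n_perms for p in players}

# three-player majority game: a coalition has value 1 with at least 2 players
v_majority = lambda S: 1.0 if len(S) >= 2 else 0.0
approx = shapley_sampling([1, 2, 3], v_majority, n_perms=2000)
```

    By symmetry the exact Shapley value of the majority game is 1/3 per player, and because each sampled permutation distributes exactly v(N), the three estimates always sum to v({1, 2, 3}) = 1 regardless of the number of permutations drawn.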


    Identification of major dioxin-like compounds and androgen receptor antagonist in acid-treated tissue extracts of high trophic-level animals

    We evaluated the applicability of combining in vitro bioassays with instrumental analyses to identify potential endocrine-disrupting pollutants in sulfuric acid-treated extracts of liver and/or blubber of high trophic-level animals. Dioxin-like and androgen receptor (AR) antagonistic activities were observed in Baikal seals, common cormorants, raccoon dogs, and finless porpoises by using a panel of rat and human cell-based chemical-activated luciferase gene expression (CALUX) reporter gene bioassays. On the other hand, no activity was detected in estrogen receptor α (ERα)-, glucocorticoid receptor (GR)-, progesterone receptor (PR)-, and peroxisome proliferator-activated receptor γ2 (PPARγ2)-CALUX assays with the sample amounts applied. All individual samples (n = 66) showed dioxin-like activity, with values ranging from 21 to 5500 pg CALUX-2,3,7,8-tetrachlorodibenzo-p-dioxin equivalent (TEQ)/g-lipid. Because dioxins are expected to be strong contributors to CALUX-TEQs, the median theoretical contribution of dioxins, calculated from the chemical analysis results, to the experimental CALUX-TEQs was estimated to explain up to 130% for all tested samples (n = 54). Baikal seal extracts (n = 31), but not the other extracts, induced AR antagonistic activities of 8-150 μg CALUX-flutamide equivalent (FluEQ)/g-lipid. p,p′-DDE was identified as an important causative compound for this activity, and its median theoretical contribution to the experimental CALUX-FluEQs was 59% for the tested Baikal seal tissues (n = 25). Our results demonstrate that combining in vitro CALUX assays with instrumental analysis is useful for identifying persistent organic pollutant-like compounds in the tissues of wild animals on the basis of in vitro endocrine disruption toxicity. © 2011 American Chemical Society
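    The "theoretical contribution" compared here follows the standard toxic-equivalent arithmetic: a chemical-analysis TEQ is the TEF-weighted sum of congener concentrations, and its ratio to the bioassay CALUX-TEQ gives the contribution percentage. A minimal sketch; the congener concentrations below are made up for illustration, and only the WHO convention that 2,3,7,8-TCDD has TEF = 1 by definition is relied on:

```python
def teq(concentrations_pg_g, tefs):
    """Chemical-analysis TEQ (pg-TEQ/g) as the TEF-weighted sum of
    congener concentrations (pg/g)."""
    return sum(concentrations_pg_g[c] * tefs[c] for c in concentrations_pg_g)

def contribution_pct(chemical_teq, bioassay_teq):
    """Theoretical contribution (%) of the analysed congeners to the
    experimentally measured bioassay TEQ (e.g. CALUX-TEQ)."""
    return 100.0 * chemical_teq / bioassay_teq

# illustrative (fabricated) congener profile; TCDD's TEF of 1.0 is the
# WHO reference value, the other entries are stand-ins
tefs = {"2,3,7,8-TCDD": 1.0, "1,2,3,7,8-PeCDD": 1.0, "PCB-126": 0.1}
conc = {"2,3,7,8-TCDD": 12.0, "1,2,3,7,8-PeCDD": 8.0, "PCB-126": 150.0}
chem_teq = teq(conc, tefs)  # 12*1.0 + 8*1.0 + 150*0.1 = 35.0 pg-TEQ/g
```

    A contribution above 100%, as reported for some samples, simply means the congener-based TEQ exceeds the bioassay response, e.g. because TEFs overstate potency in that cell system.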

    An overview and framework for PD backtesting and benchmarking

    In order to manage model risk, financial institutions need to set up validation processes to monitor the quality of their models on an ongoing basis. Validation can be considered from both a quantitative and a qualitative point of view. Backtesting and benchmarking are key quantitative validation tools, and the focus of this paper. In backtesting, the predicted risk measurements (PD, LGD, EAD) are contrasted with observed measurements using a workbench of available test statistics to evaluate the calibration, discrimination and stability of the model. Timely detection of reduced performance is crucial since it directly impacts profitability and risk management strategies. The aim of benchmarking is to compare internal risk measurements with external risk measurements so as to better gauge the quality of the internal rating system. This paper focuses on the quantitative PD validation process within a Basel II context. We set forth a traffic light indicator approach that employs all relevant statistical tests to quantitatively validate the PD model in use, and document this approach with a real-life case study. The methodology and tests set forth summarize the authors' statistical expertise and their experience of business practices observed worldwide.
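    A common stability statistic in such a workbench is the population stability index (PSI), which compares the rating or score distribution at model development with the current portfolio. A minimal sketch; the 0.10/0.25 thresholds are conventional rules of thumb, not values taken from the paper:

```python
from math import log

def psi(expected_shares, actual_shares):
    """Population stability index between two distributions over rating
    buckets; both inputs are lists of bucket proportions summing to 1."""
    return sum((a - e) * log(a / e)
               for e, a in zip(expected_shares, actual_shares))

def stability_zone(value, minor=0.10, major=0.25):
    """Rule of thumb: < 0.10 stable, 0.10-0.25 minor shift,
    >= 0.25 major shift warranting investigation."""
    if value < minor:
        return "stable"
    if value < major:
        return "minor shift"
    return "major shift"
```

    An unchanged distribution gives PSI = 0, while a portfolio drifting from uniform shares (0.25, 0.25, 0.25, 0.25) to (0.40, 0.30, 0.20, 0.10) gives PSI ≈ 0.23, a minor but borderline shift.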

    Coupled Transductive Ensemble Learning of Kernel Models

    In this paper we propose the concept of coupling for ensemble learning. In the existing literature, all submodels considered within an ensemble are trained independently from each other. Here we study the effect of coupling the individual training processes within an ensemble of regularization networks. The considered coupling scheme also gives the opportunity to work with a transductive set for both regression and classification problems.
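    As background, a single regularization network submodel (kernel ridge form) and the independently trained ensemble that the paper's coupled scheme modifies can be sketched as follows. This is an illustrative baseline with an assumed RBF kernel; the coupled training itself is not reproduced here:

```python
import numpy as np

def rbf_kernel(A, B, gamma=10.0):
    """Gaussian RBF kernel matrix between row-wise point sets A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def fit_regularization_network(X, y, lam=1e-6, gamma=10.0):
    """Solve (K + lam*I) alpha = y and return the resulting predictor."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: rbf_kernel(Xq, X, gamma) @ alpha

def ensemble_predict(submodels, Xq):
    """Independently trained ensemble: average the submodel predictions."""
    return np.mean([m(Xq) for m in submodels], axis=0)

# toy regression problem: two submodels differing in regularization strength
X = np.linspace(0.0, 1.0, 25).reshape(-1, 1)
y = np.sin(2.0 * np.pi * X).ravel()
models = [fit_regularization_network(X, y, lam=l) for l in (1e-6, 1e-4)]
pred = ensemble_predict(models, X)
```

    Coupling, in contrast, would tie the submodels' training problems together rather than solving each (K + lam*I) system in isolation.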