
    Empirical Implementation of a 2-Factor Structural Model for Loss-Given-Default

    In this study we develop a theoretical model for ultimate loss-given-default (LGD) in the Merton (1974) structural credit risk framework, deriving compound option formulae to model differential seniority of instruments and incorporating an optimal foreclosure threshold. We also consider an extension that allows for an independent recovery rate process with a stochastic drift, representing undiversifiable recovery risk. We analyze and compare the comparative statics of these models, and in the empirical exercise we calibrate them to observed LGDs on bonds and loans having trading prices both at default and at resolution of default, using an extensive sample of losses on defaulted firms (Moody’s Ultimate Recovery Database™): 800 defaults in the period 1987-2008 that are largely representative of the U.S. large corporate loss experience, for which we have complete capital structures and can track recoveries on all instruments from the time of default to the time of resolution. We find that parameter estimates vary significantly across recovery segments and that the estimated volatilities of recovery rates and of their drifts are increasing in seniority (bank loans versus bonds). We also find that the component of total recovery volatility attributable to the LGD-side (as opposed to the PD-side) systematic factor is greater for higher-ranked instruments, and that more senior instruments have lower default risk, higher recovery rate return and volatility, and greater correlation between PD and LGD. Analyzing the implications of our model for the quantification of downturn LGD, we find the ratio of the latter to expected LGD (ELGD), the “LGD markup”, to be declining in ELGD but uniformly higher for lower-ranked instruments and for higher PD-LGD correlation. Finally, we validate the model in an out-of-sample bootstrap exercise, comparing it to a high-dimensional regression model and to a non-parametric benchmark based on the same data, and find that our model compares favorably. We conclude that our model is worthy of consideration by risk managers, as well as by supervisors concerned with the advanced IRB approach under the Basel II capital accord.
    Keywords: LGD; credit risk; default; structural model
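
    As a rough illustration of the downturn-LGD quantity discussed above, the following sketch is a toy two-factor Monte Carlo, not the paper's calibrated compound-option model: a PD-side systematic factor drives defaults, a correlated LGD-side factor drives loss severity, and the "LGD markup" is read off as the ratio of downturn LGD to ELGD. Every parameter value (barrier, factor loadings, correlation, LGD level and volatility) is a hypothetical placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical parameters (not calibrated to the paper's data)
pd_barrier = -2.0        # default when the asset factor return falls below this
rho_pd_lgd = 0.4         # assumed correlation between PD-side and LGD-side factors
w_pd, w_lgd = 0.7, 0.5   # assumed systematic factor loadings
elgd_level, lgd_vol = 0.45, 0.25

# Systematic factors: z_pd drives defaults, z_lgd drives recoveries
z_pd = rng.standard_normal(n)
z_lgd = rho_pd_lgd * z_pd + np.sqrt(1 - rho_pd_lgd**2) * rng.standard_normal(n)

# Obligor asset return and instrument LGD (LGD rises when z_lgd is low, i.e. in bad states)
asset = w_pd * z_pd + np.sqrt(1 - w_pd**2) * rng.standard_normal(n)
lgd = np.clip(elgd_level - lgd_vol * (w_lgd * z_lgd +
              np.sqrt(1 - w_lgd**2) * rng.standard_normal(n)), 0.0, 1.0)

default = asset < pd_barrier
elgd = lgd[default].mean()

# "Downturn": condition defaults on a stressed PD-side factor (worst decile)
stress = z_pd < np.quantile(z_pd, 0.10)
downturn_lgd = lgd[default & stress].mean()

print(f"ELGD         : {elgd:.3f}")
print(f"Downturn LGD : {downturn_lgd:.3f}")
print(f"LGD markup   : {downturn_lgd / elgd:.3f}")
```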

    Empirical Analysis and Trading Strategies for Defaulted Debt Securities with Models for Risk and Investment Management

    This study empirically analyzes the historical performance of defaulted debt from Moody’s Ultimate Recovery Database (1987-2010). Motivated by a stylized structural model of credit risk with systematic recovery risk, we argue and find evidence that returns on defaulted debt co-vary with determinants of the market risk premium and with firm-specific and structural factors. Defaulted debt returns in our sample are increasing in the collateral quality or debt cushion of the issue. Returns are also increasing for issuers having superior ratings at origination, more leverage at default, higher cumulative abnormal returns on equity prior to default, or greater market-implied loss severity at default. Considering systematic factors, returns on defaulted debt are positively related to equity market indices and industry default rates; they decrease with short-term interest rates. In a rolling out-of-time and out-of-sample resampling experiment we show that our leading model exhibits superior performance. We also document the economic significance of these results by implementing a hypothetical trading strategy that earns excess abnormal returns of around 5-6% (2-3%) assuming zero (1 bp per month) round-trip transaction costs. These results are of practical relevance to investors and risk managers in this segment of the fixed income market.
    Keywords: Distressed Debt; Recoveries; Default; Credit Risk
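
    To make the return measure and the regression setup above concrete, the sketch below computes annualized returns on defaulted debt from a price observed at default and a value received at resolution, then runs an ordinary least-squares regression on stand-ins for the drivers named in the abstract (debt cushion, leverage, an equity market index, industry default rates, short-term rates). All data are synthetic placeholders; the variable names and the regression are illustrative, not the paper's sample, leading model, or estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic issue-level data (placeholders, not the Moody's sample)
price_at_default    = rng.uniform(10, 80, n)                      # per 100 of face value
value_at_resolution = price_at_default * rng.lognormal(0.05, 0.4, n)
years_to_resolution = rng.uniform(0.5, 3.0, n)

# Annualized return on defaulted debt over the default-to-resolution period
ret = (value_at_resolution / price_at_default) ** (1.0 / years_to_resolution) - 1.0

# Synthetic proxies for the drivers named in the abstract
debt_cushion = rng.uniform(0.0, 1.0, n)    # share of the capital structure ranked below the issue
leverage     = rng.uniform(0.3, 0.95, n)   # issuer leverage at default
equity_index = rng.normal(0.05, 0.15, n)   # equity market return over the holding period
industry_dr  = rng.uniform(0.0, 0.12, n)   # industry default rate
short_rate   = rng.uniform(0.0, 0.06, n)   # short-term interest rate

X = np.column_stack([np.ones(n), debt_cushion, leverage,
                     equity_index, industry_dr, short_rate])
beta, *_ = np.linalg.lstsq(X, ret, rcond=None)   # OLS via least squares

names = ["const", "debt_cushion", "leverage",
         "equity_index", "industry_default_rate", "short_rate"]
for name, b in zip(names, beta):
    print(f"{name:>22s}: {b:+.3f}")
```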

    Design and implementation of WCET analyses: including a case study on multi-core processors with shared buses

    For safety-critical real-time embedded systems, the worst-case execution time (WCET) analysis, which determines an upper bound on the possible execution times of a program, is an important part of system verification. Multi-core processors share resources (e.g. buses and caches) between multiple processor cores and thus complicate the WCET analysis, because the execution time of a program on one processor core depends significantly on the programs executed in parallel on the other cores. We refer to this phenomenon as shared-resource interference. This thesis proposes a novel way of modeling shared-resource interference during WCET analysis. It enables an efficient analysis, since it considers only one processor core at a time, and it is sound for hardware platforms exhibiting timing anomalies. Moreover, this thesis demonstrates how to realize a timing-compositional verification on top of the proposed modeling scheme, thereby closing the gap between modern hardware platforms, which exhibit timing anomalies, and existing schedulability analyses, which rely on timing compositionality. In addition, this thesis proposes a novel method for calculating an upper bound on the amount of interference that a given processor core can generate in any time interval of at most a given length. Our experiments demonstrate that this method is more precise than existing methods.
    Funding: Deutsche Forschungsgemeinschaft (DFG) as part of the Transregional Collaborative Research Centre SFB/TR 14 (AVACS).
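
    To make the bounded quantity concrete, the sketch below computes a coarse upper bound on the shared-bus interference that the tasks on one core can generate in any time window of at most a given length, using a standard event-bound argument over periodic tasks. This is an illustrative stand-in, not the thesis's (more precise) method, and the task parameters and per-access bus latency are assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Task:
    period: int             # minimum inter-arrival time, in cycles
    accesses_per_job: int   # worst-case number of shared-bus accesses per job

BUS_ACCESS_LATENCY = 10     # assumed cycles that one access blocks the shared bus

def interference_bound(tasks: list[Task], window: int) -> int:
    """Upper bound on the bus-blocking cycles the given core's tasks can
    generate in any time interval of length at most `window` cycles."""
    total_accesses = 0
    for t in tasks:
        # One carry-in job plus the releases that can fit inside the window.
        jobs = math.ceil(window / t.period) + 1
        total_accesses += jobs * t.accesses_per_job
    return total_accesses * BUS_ACCESS_LATENCY

# Two hypothetical tasks mapped to a concurrent core
core = [Task(period=1_000, accesses_per_job=8),
        Task(period=5_000, accesses_per_job=40)]
print(interference_bound(core, window=10_000))   # worst-case interference cycles
```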

    Personalized anticoagulant management using reinforcement learning.

    Introduction: There are many problems with current state-of-the-art protocols for maintenance dosing of the oral anticoagulant warfarin used in clinical practice. The two key challenges are the lack of personalized dose adjustment and the high cost of monitoring the efficacy of the therapy through International Normalized Ratio (INR) measurements. A new dosing algorithm based on the principles of Reinforcement Learning (RL), specifically Q-Learning with functional policy approximation, was created to personalize maintenance dosing of warfarin based on observed INR and to optimize the length of time between INR measurements. This new method is intended to improve patients' INR time in therapeutic range (TTR) and to minimize the cost associated with monitoring INR compared to the current standard of care.
    Procedure: Using the principles of Reinforcement Learning, an algorithm to control warfarin dosing was created. The algorithm uses 9 controllers corresponding to 9 levels of warfarin sensitivity. It switches between controllers until it selects the one that most closely resembles the individual patient's response, so that the dose change (ΔDose) and the time between INR measurements (ΔTime) are personalized for each patient based on the observed INR. Three simulations were performed, each using data from 100 artificial patients generated from real patient data. The first simulation was an ideal-case scenario (a clean simulation in which the coefficient of variation (CV) of the noise added to the model output was 0) using only the warfarin RL algorithm, to demonstrate efficacy. The second simulation used the current standard of care with CV = 25% to simulate intra-patient variability. The third simulation used the warfarin RL algorithm with CV = 25%. For each patient in each simulation, 180 days were simulated, and the measures used to benchmark the efficacy of the therapy were INR time in therapeutic range (TTR) and the number of INR measurements taken during the simulation.
    Results: The first simulation yielded a mean TTR of 92.1% with a standard deviation of 4.2% and a mean of 7.94 INR measurements per patient. The second simulation yielded a mean TTR of 45.3% with a standard deviation of 16.4% and a mean of 12.3 INR measurements per patient. The third simulation yielded a mean TTR of 51.8% with a standard deviation of 10.8% and a mean of 8.05 INR measurements per patient. A p-value < .001 indicates a statistically significant difference between the two algorithms.
    Conclusion: The simulation results indicate that the warfarin RL algorithm performed better than the standard of care at keeping patients' INR in the therapeutic range while also reducing the number of INR measurements required. This algorithm could help improve patient safety by increasing INR TTR in the presence of intra-patient variability, and could reduce the heavy cost associated with the therapy by minimizing the number of INR measurements needed.
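
    As a simplified picture of the idea described above, the sketch below runs tabular Q-learning over discretized INR bands against a toy linear dose-response model, learning which warfarin dose change to prefer in each band. It is not the thesis's algorithm (which uses Q-learning with functional policy approximation, nine sensitivity-matched controllers, and personalized measurement intervals); the response model, reward, INR bands, and all constants are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

INR_BINS     = np.array([1.5, 2.0, 3.0, 3.5])         # INR band edges; band 2 is therapeutic
DOSE_CHANGES = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # candidate dose adjustments, mg/day
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1                     # learning rate, discount, exploration

q = np.zeros((len(INR_BINS) + 1, len(DOSE_CHANGES)))  # tabular Q-values: band x action

def simulate_inr(dose, sensitivity=0.35, noise_sd=0.1):
    """Toy steady-state INR response: roughly linear in dose, with noise."""
    return 1.0 + sensitivity * dose + rng.normal(0.0, noise_sd)

def reward(inr):
    """Reward being inside the 2.0-3.0 therapeutic range, penalize distance otherwise."""
    return 1.0 if 2.0 <= inr <= 3.0 else -abs(inr - 2.5)

dose = 5.0
state = int(np.digitize(simulate_inr(dose), INR_BINS))

for _ in range(5_000):
    # Epsilon-greedy selection of a dose change
    if rng.random() < EPS:
        action = int(rng.integers(len(DOSE_CHANGES)))
    else:
        action = int(np.argmax(q[state]))
    dose = float(np.clip(dose + DOSE_CHANGES[action], 0.5, 15.0))
    inr = simulate_inr(dose)
    next_state = int(np.digitize(inr, INR_BINS))
    # One-step Q-learning update
    q[state, action] += ALPHA * (reward(inr) + GAMMA * q[next_state].max()
                                 - q[state, action])
    state = next_state

for s, band in enumerate(["<1.5", "1.5-2.0", "2.0-3.0", "3.0-3.5", ">3.5"]):
    print(f"INR {band:>8s}: preferred dose change {DOSE_CHANGES[int(np.argmax(q[s]))]:+.1f} mg/day")
```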

    Thatcherite mythology: eight Tory leadership candidates in search of an economic policy

    Michael Jacobs discusses the alternative versions of Thatcherite economics being offered by the Conservative leadership candidates, which do not reflect the actual economic needs of the country today.