
    A factor model for joint default probabilities. Pricing of CDS, index swaps and index tranches

    A factor model is proposed for the valuation of credit default swaps, credit indices and CDO contracts. The model of default is based on the first-passage distribution of a Brownian motion whose time is modified by a continuous time change. Various model specifications fall under this general approach, which defines the credit-quality process as a time-changed standard Brownian motion whose volatility is a mean-reverting Lévy-driven OU-type process. Our models are bottom-up and can account for sudden moves in the level of CDS spreads, representing the so-called credit gap risk. We develop FFT computational tools for calculating the distribution of losses and we show how to apply them to several specifications of the time-changed Brownian motion. Our line of modelling is flexible enough to facilitate the derivation of analytical formulae for conditional probabilities of default and for prices of credit derivatives.
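
A possible worked illustration of the first-passage mechanism described above: conditional on the value of the time change, the default probability of the time-changed Brownian motion reduces to the standard Brownian first-passage formula. The sketch below is only a minimal illustration; the gamma-distributed time change is a placeholder and is not the Lévy-driven OU construction used in the paper.

```python
import numpy as np
from scipy.stats import norm

def default_prob_given_time_change(barrier, tau):
    """First-passage probability of a standard Brownian motion below a negative
    barrier b, conditional on the time change taking the value tau:
    P(min_{s<=t} W_s <= b | T_t = tau) = 2 * Phi(b / sqrt(tau))."""
    return 2.0 * norm.cdf(barrier / np.sqrt(tau))

# Average over simulated time-change values to get the unconditional default
# probability. The gamma draw below is purely a stand-in for the paper's
# mean-reverting Lévy-driven OU-type volatility/time-change specification.
rng = np.random.default_rng(0)
tau_samples = rng.gamma(shape=2.0, scale=0.5, size=100_000)
pd_estimate = default_prob_given_time_change(-2.0, tau_samples).mean()
print(f"unconditional default probability ~ {pd_estimate:.4f}")
```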

    Levy subordinator model: A two parameter model of default dependency

    The May 2005 crisis and the recent credit crisis have indicated to us that any realistic model of default dependency needs to account for at least two risk factors, firm-specific and catastrophic. Unfortunately, the popular Gaussian copula model has no identifiable support for either of these. In this article, a two-parameter model of default dependency based on the Lévy subordinator is presented, accounting for these two risk factors. Subordinators are Lévy processes with non-decreasing sample paths. They help ensure that the loss process is non-decreasing, leading to a promising class of dynamic models. The simplest subordinator is the Lévy subordinator, a maximally skewed stable process with index of stability 1/2. Interestingly, this simplest subordinator turns out to be the appropriate choice as the basic process for modeling default dependency. Its attractive feature is that it admits a closed-form expression for its distribution function. This allows automatic calibration to individual hazard rate curves and efficient pricing with Fast Fourier Transform techniques. The model is structured similarly to the one-factor Gaussian copula model and can easily be implemented within the framework of the existing infrastructure. As it turns out, the Gaussian copula model can itself be recast into this framework, highlighting its limitations. The model can also be investigated numerically with a Monte Carlo simulation algorithm. It admits a tractable framework of random recovery. The model is investigated numerically and the implied base correlations are presented over a wide range of its parameters. The investigation also demonstrates its ability to generate reasonable hedge ratios.
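
Since the abstract highlights the closed-form distribution function of the stable-1/2 (Lévy) subordinator, a minimal sketch of that formula may help; the normalisation below (Laplace exponent sqrt(2*lambda)) is an assumption, as the paper's exact parameterisation is not given here.

```python
import numpy as np
from scipy.special import erfc

def levy_subordinator_cdf(x, t):
    """P(S_t <= x) for the stable-1/2 subordinator normalised so that
    E[exp(-lam * S_t)] = exp(-t * sqrt(2 * lam)); then P(S_t <= x) = erfc(t / sqrt(2 x))."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0.0, erfc(t / np.sqrt(2.0 * np.maximum(x, 1e-300))), 0.0)

# The closed form turns calibration to a hazard-rate curve into simple
# root finding in x; hypothetical arguments below.
print(levy_subordinator_cdf([0.5, 1.0, 5.0], t=1.0))
```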

    Measuring Financial Cash Flow and Term Structure Dynamics

    Financial turbulence is a phenomenon occurring in anti-persistent markets. In contrast, financial crises occur in persistent markets. A relationship can be established between these two extreme phenomena of long-term market dependence and the older financial concept of financial (il)liquidity. The measurement of the degree of market persistence and the measurement of the degree of market liquidity are related. To accomplish the two research objectives of measuring and simulating different degrees of financial liquidity, I propose to boldly reformulate and reinterpret the classical laws of fluid mechanics into cash flow mechanics. At first this approach may appear contrived and artificial, but the end results of these reformulations and reinterpretations are useful quantifiable financial quantities, which will assist us with the measurement, analysis and proper characterization of modern dynamic financial markets in ways that classical comparative static financial-economic analyses do not allow.
    Keywords: Financial Cash Flow, Term Structure
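
The abstract's central measurement objective is the degree of market persistence. One common, though here purely illustrative, way to quantify persistence is a rescaled-range (R/S) estimate of the Hurst exponent; the paper does not prescribe this particular estimator, so the sketch below is only an assumption about how such a measurement could be done.

```python
import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    """Rescaled-range (R/S) estimate of the Hurst exponent of a series.
    H > 0.5 suggests persistence, H < 0.5 anti-persistence."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            chunk = x[start:start + n]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviations
            r = dev.max() - dev.min()               # range of the deviations
            s = chunk.std(ddof=1)                   # sample standard deviation
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_n, log_rs, 1)[0]          # slope of log(R/S) vs log(n)

rng = np.random.default_rng(1)
print(hurst_rs(rng.standard_normal(4096)))  # roughly 0.5 for i.i.d. noise
```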

    A Survey on Quantum Computational Finance for Derivatives Pricing and VaR

    We review the state of the art and recent advances in quantum computing applied to derivative pricing and the computation of risk estimators such as Value at Risk. After a brief description of financial derivatives, we first review the main models and numerical techniques employed to assess their value and risk on classical computers. We then describe some of the most popular quantum algorithms for pricing and VaR. Finally, we discuss the main remaining challenges for the quantum algorithms to achieve their potential advantages.
    Acknowledgements: All authors acknowledge the European Project NExt ApplicationS of Quantum Computing (NEASQC), funded by the Horizon 2020 Programme under call H2020-FETFLAG-2020-01 (Grant Agreement 951821). Á. Leitao, A. Manzano and C. Vázquez wish to acknowledge the support received from the Centro de Investigación de Galicia “CITIC”, funded by Xunta de Galicia and the European Union (European Regional Development Fund, Galicia 2014-2020 Programme), through Grant ED431G 2019/01.
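
As context for the classical baseline the survey starts from, a minimal Monte Carlo Value-at-Risk sketch is given below; the lognormal single-asset model and all parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mc_var(s0, mu, sigma, horizon, alpha=0.99, n_paths=200_000, seed=0):
    """Classical Monte Carlo VaR for one lognormal asset: simulate the P&L
    over the horizon and return the alpha-quantile of the loss distribution."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    s_t = s0 * np.exp((mu - 0.5 * sigma**2) * horizon + sigma * np.sqrt(horizon) * z)
    loss = s0 - s_t
    return np.quantile(loss, alpha)

# Quantum amplitude estimation targets exactly this kind of quantile/expectation
# estimation, with a potential quadratic speed-up over plain Monte Carlo.
print(f"99% 10-day VaR ~ {mc_var(100.0, 0.0, 0.2, horizon=10/252):.2f}")
```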

    Market and Counterparty Credit Risk: Selected Computational and Managerial Aspects

    The thesis can be placed within the literature on market and counterparty credit risk, contributing along the following three dimensions: 1. Interest rate risk (IRR) management: The thesis starts with an overview of asset liability management (ALM) in general and IRR management in particular. It then gives a novel procedure for structuring swap overlays for pension funds, allowing for optimal hedging of IRR without affecting the strategic asset allocation (SAA). The thesis also offers an extension of the analysis of the Cairns (2004) stochastic interest rate model, deriving the respective model-based sensitivity measures (Cairns deltas). It finally applies the model to a practical application, analysing its behaviour for long-term contracts. 2. Pricing and managing counterparty credit risk: A compact overview of counterparty credit risk (CCR) and credit valuation adjustment (CVA) is given. This is followed by a unique analysis of valuation, relevant accounting and regulatory requirements, as well as pricing and mitigation, especially an illustration of how the CVA capital charge reveals the tautology behind the discussions around regulatory requirements. The thesis also contributes to the discourse on debt valuation adjustment (DVA), e.g. showing that some aspects are not as unintuitive as presumed (e.g. DVA is being priced). 3. CVA modeling and wrong-way risk: The thesis gives an overview of credit risk modeling in general and credit spreads in particular. It especially revisits the CVA for CDS model introduced by Brigo and Capponi (2010), giving, e.g., A) a step-by-step implementation guide, especially with respect to parts Brigo and Capponi (2010) left open; B) a computational tune-up, including a demonstration of its robustness across a variety of scenarios, and a realistic case study; C) a novel analysis of the Brigo and Capponi (2010) model in particular and CVA modeling in general.
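
For readers unfamiliar with CVA, a minimal sketch of the textbook unilateral CVA approximation (not the Brigo and Capponi (2010) model with wrong-way risk that the thesis revisits) is given below; the flat hazard rate and the input numbers are illustrative assumptions.

```python
import numpy as np

def unilateral_cva(times, expected_exposure, discount_factors, hazard_rate, recovery=0.4):
    """Unilateral CVA on a time grid:
    CVA ~ (1 - R) * sum_i DF(t_i) * EE(t_i) * [S(t_{i-1}) - S(t_i)],
    where S(t) = exp(-hazard_rate * t) is the counterparty survival curve."""
    times = np.asarray(times, dtype=float)
    survival = np.exp(-hazard_rate * times)
    default_prob = np.concatenate(([1.0], survival[:-1])) - survival
    return (1.0 - recovery) * np.sum(discount_factors * expected_exposure * default_prob)

# Hypothetical quarterly grid with flat expected exposure and curves.
t = np.arange(0.25, 5.01, 0.25)
print(unilateral_cva(t, expected_exposure=np.full_like(t, 1.0e6),
                     discount_factors=np.exp(-0.02 * t), hazard_rate=0.03))
```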

    Essays in Quantitative Risk Management for Financial Regulation of Operational Risk Models

    An extensive amount of evolving guidance and rules is provided to banks by financial regulators. A particular set of instructions outlines the requirement to calculate and set aside loss-absorbing regulatory capital to ensure the solvency of a bank. Mathematical models are typically used by banks to quantify sufficient amounts of capital. In this thesis, we explore areas that advance our knowledge in regulatory risk management. In the first essay, we explore an aspect of operational risk loss modeling using scenario analysis. An actuarial modeling method is typically used to quantify a baseline capital value, which is then layered with a judgemental component in order to account for and integrate what-if future potential losses into the model. We propose a method from digital signal processing that uses the convolution operator and views the problem as the blending of two signals: a baseline loss distribution obtained from modeling the frequency and severity of internal losses is combined with a probability distribution obtained from scenario responses to yield a final output that integrates both sets of information. In the second essay, we revisit scenario analysis and the potential impact of catastrophic events at the enterprise level of a bank. We generalize an algorithm to account for multiple levels of event intensity together with unique loss profiles depending on the business units affected. In the third essay, we investigate the problem of allocating aggregate capital across sub-portfolios in a fair manner when there are various forms of interdependencies. Relevant to the areas of market, credit and operational risk, the multivariate shortfall allocation problem quantifies the optimal amount of capital needed to ensure that the expected loss under a convex loss penalty function remains bounded by a threshold. We first provide an application of the existing methodology to a subset of high-frequency loss cells. Lastly, we provide an extension using copula models, which allows for the modeling of joint fat-tailed events or asymmetries in the underlying process.
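
The first essay's convolution idea can be made concrete with a small sketch: two loss distributions discretised on the same equally spaced grid are combined via discrete convolution, which yields the distribution of their sum. The grid and probability masses below are hypothetical.

```python
import numpy as np

def blend_losses(baseline_pmf, scenario_pmf):
    """Convolve two discretised loss distributions defined on the same
    equally spaced loss grid; the result is the distribution of the combined loss."""
    combined = np.convolve(baseline_pmf, scenario_pmf)
    return combined / combined.sum()  # renormalise against rounding error

# Hypothetical probability masses on a unit loss grid (0, 1, 2, ...).
baseline = np.array([0.50, 0.30, 0.15, 0.05])   # from frequency/severity model
scenario = np.array([0.80, 0.15, 0.05])         # from scenario responses
print(blend_losses(baseline, scenario))         # support now runs from 0 to 5
```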

    Haar Wavelets-Based Methods for Credit Risk Portfolio Modeling

    In this dissertation we have investigated the credit risk measurement of a credit portfolio by means of wavelet theory. Banks became subject to regulatory capital requirements under the Basel Accords and also to the supervisory review process of capital adequacy, that is, economic capital. Concentration risks in credit portfolios arise from an unequal distribution of loans to single borrowers (name concentration) or to different industry or regional sectors (sector concentration) and may lead banks to face bankruptcy. The Merton model is the basis of the Basel II approach; it is a Gaussian one-factor model in which default events are driven by a latent common factor that is assumed to follow the Gaussian distribution. Under this model, loss only occurs when an obligor defaults within a fixed time horizon. If we assume certain homogeneity conditions, this one-factor model leads to a simple analytical asymptotic approximation of the loss distribution function and the VaR. The VaR at a high confidence level is the measure chosen in Basel II to calculate regulatory capital. This approximation, usually called the Asymptotic Single Risk Factor (ASRF) model, works well for a large number of small exposures but can underestimate risk in the presence of exposure concentrations. Hence, the ASRF model does not provide an appropriate quantitative framework for the computation of economic capital. Monte Carlo simulation is a standard method for measuring credit portfolio risk that can deal with concentration risks. However, this method is very time-consuming when the size of the portfolio increases, making the computation unworkable in many situations. In summary, credit risk managers are interested in how concentration risk can be quantified in a short time and how the contributions of individual transactions to the total risk can be computed. Since the loss variable can take only a finite number of discrete values, the cumulative distribution function (CDF) is discontinuous, and the Haar wavelets are particularly well suited to such stepped-shape functions. For this reason, we have developed a new method for numerically inverting the Laplace transform of the density function, once we have approximated the CDF by a finite sum of Haar wavelet basis functions. Wavelets are used in mathematical analysis to denote a kind of orthonormal basis with remarkable approximation properties. The difference between the usual sine wave and a wavelet can be described by the localization property: while the sine wave is localized in the frequency domain but not in the time domain, a wavelet is localized in both the frequency and the time domain. Once the CDF has been computed, we are able to calculate the VaR at a high loss level. Furthermore, we have also computed the Expected Shortfall (ES), since VaR is not a coherent risk measure in the sense that it is not sub-additive. We have shown that, for a wide variety of portfolios, these measures are computed quickly and accurately, with a relative error lower than 1% when compared with Monte Carlo. We have also extended this methodology to the estimation of the risk contributions to the VaR and the ES, by taking partial derivatives with respect to the exposures, again obtaining high accuracy. Some technical improvements have also been implemented in the computation of the Gauss-Hermite integration formula used to obtain the coefficients of the approximation, making the method faster while preserving its accuracy. Finally, we have extended the wavelet approximation method to the multi-factor setting by means of Monte Carlo and quasi-Monte Carlo methods.
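
The analytical ASRF approximation mentioned in the abstract is the Vasicek large-portfolio formula, sketched below for a homogeneous portfolio; the default probability and asset correlation used in the example are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def asrf_var(pd, rho, alpha=0.999, lgd=1.0):
    """ASRF (Vasicek) loss quantile for a large homogeneous portfolio:
    alpha-quantile of the loss fraction =
    LGD * Phi((Phi^{-1}(PD) + sqrt(rho) * Phi^{-1}(alpha)) / sqrt(1 - rho))."""
    num = norm.ppf(pd) + np.sqrt(rho) * norm.ppf(alpha)
    return lgd * norm.cdf(num / np.sqrt(1.0 - rho))

# Hypothetical inputs: 1% default probability, 12% asset correlation.
print(f"99.9% loss quantile ~ {asrf_var(0.01, 0.12):.4f}")
```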

    Univariate Potential Output Estimations for Hungary

    Potential output figures are important ingredients of many macroeconomic models and are routinely applied by policy makers and global agencies. Despite its widespread use, the estimation of potential output is at best uncertain and depends heavily on the model. The task of estimating potential output is an even more dubious exercise for countries experiencing huge structural changes, such as transition countries. In this paper we apply univariate methods to estimate and evaluate Hungarian potential output, paying special attention to structural breaks. In addition to statistical evaluation, we also assess the appropriateness of the various methods by expert judgement of the results, since we argue that mechanical adoption of univariate techniques might lead to erroneous interpretation of the business cycle. As all methods have strengths and weaknesses, we derive a single measure of potential output by weighting those methods that pass both the statistical and the expert criteria. As standard errors, which might be used for deriving weights, are not available for some of the methods, we base our weights on similar but computable statistics, namely on revisions of the output gap for all dates obtained by recursively estimating the models. Finally, we compare our estimated gaps with the only published Hungarian output gap measure, that of Darvas-Simon (2000b), which is based on an economic model.
    Keywords: combination, detrending, output gap, revision.
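
One standard univariate detrending method of the kind compared in the paper is the Hodrick-Prescott filter; the sketch below uses simulated data and does not reproduce the paper's actual method set or results.

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

# Simulated quarterly log-GDP series (placeholder for actual Hungarian data).
rng = np.random.default_rng(2)
log_gdp = np.cumsum(0.005 + 0.01 * rng.standard_normal(120))

cycle, trend = hpfilter(log_gdp, lamb=1600)  # lambda = 1600 is conventional for quarterly data
output_gap = cycle  # deviation of actual (log) output from the estimated potential (trend)
print(output_gap[-4:])
```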