
    Estimating the Counterparty Risk Exposure by using the Brownian Motion Local Time

    In recent years, the counterparty credit risk measure, namely the default risk in \emph{Over The Counter} (OTC) derivatives contracts, has received great attention from banking regulators, specifically within the frameworks of \emph{Basel II} and \emph{Basel III}. More explicitly, to obtain the related risk figures, one must first compute intermediate output functionals related to the \emph{Mark-to-Market} (MtM) position at a given time $t \in [0, T]$, $T$ being a finite, positive time horizon. This requires an enormous amount of computational effort, with highly time-consuming procedures that translate into significant costs. To overcome this issue, we propose a smart exploitation of the properties of the (local) time spent by the Brownian motion close to a given value.
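    As a rough illustration of the central quantity in this approach (not the paper's algorithm), the sketch below estimates by Monte Carlo the time a standard Brownian motion spends in a band of half-width eps around a level a; rescaled by 1/(2*eps), this occupation time approximates the local time L_T^a. All parameter values are made up for the example.

```python
# Illustrative sketch (not the paper's algorithm): Monte Carlo estimate of the
# time a standard Brownian motion spends within a band of half-width eps around
# a level a; (1 / 2eps) times this occupation time tends to the local time L_T^a.
import numpy as np

def occupation_time_near_level(a=0.0, eps=0.05, T=1.0,
                               n_steps=1_000, n_paths=10_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Simulate standard Brownian paths W_t on [0, T]
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    paths = np.cumsum(increments, axis=1)
    # Occupation time of the band (a - eps, a + eps), path by path
    in_band = np.abs(paths - a) < eps
    occ_time = in_band.sum(axis=1) * dt
    # Rescale: (1 / 2eps) * occupation time -> local time L_T^a as eps -> 0
    return occ_time.mean() / (2 * eps)

print(f"Estimated E[L_T^0] ~ {occupation_time_near_level():.4f}")
```

    For the level a = 0 and T = 1, the estimate can be checked against the known closed form E[L_T^0] = sqrt(2T/pi) ≈ 0.7979, up to discretisation and band-width bias.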

    Using M-quantile models as an alternative to random effects to model the contextual value-added of schools in London

    The measurement of school performance for secondary schools in England has developed from simple measures of marginal performance at age 16 to more complex contextual value-added measures that account for pupil prior attainment and background. These models have been developed within the multilevel modelling environment (pupils within schools), but in this paper we propose an alternative using a more robust approach based on M-quantile modelling of individual pupil efficiency. These efficiency measures condition on a pupil's ability and background, as do the current contextual value-added models, but as they are measured at the pupil level, a variety of performance measures can be readily produced at the school and higher (local authority) levels. Standard errors for the performance measures are provided via a bootstrap approach, which is validated using a model-based simulation.

    Keywords: school performance, contextual value-added, M-quantile models, pupil efficiency, London
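    To make the pupil-efficiency idea concrete, here is a minimal stand-in sketch that uses ordinary quantile regression (statsmodels) in place of the paper's M-quantile regression: each pupil is assigned the quantile whose fitted line, given prior attainment, lies closest to the pupil's observed score, and school performance is the average of its pupils' quantiles. The synthetic data frame and column names (score, prior, school) are hypothetical.

```python
# Stand-in sketch of the pupil-efficiency idea using ordinary quantile
# regression in place of M-quantile regression; data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2_000
df = pd.DataFrame({
    "prior": rng.normal(0, 1, n),             # prior attainment
    "school": rng.integers(0, 20, n),         # school identifier
})
df["score"] = 0.8 * df["prior"] + rng.normal(0, 1, n)  # age-16 outcome

# Fit a grid of quantile regressions of attainment on prior attainment
qs = np.linspace(0.05, 0.95, 19)
fitted = np.column_stack([
    smf.quantreg("score ~ prior", df).fit(q=q).predict(df) for q in qs
])

# Each pupil's "efficiency" is the quantile whose fitted line lies closest
# to the pupil's observed score, given their prior attainment
df["q_i"] = qs[np.abs(fitted - df["score"].values[:, None]).argmin(axis=1)]

# Aggregate pupil efficiencies into a school-level performance measure
school_perf = df.groupby("school")["q_i"].mean()
print(school_perf.head())
```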

    Efficient estimation of sensitivities for counterparty credit risk with the finite difference Monte Carlo method

    According to Basel III, financial institutions have to charge a credit valuation adjustment (CVA) to account for a possible counterparty default. Calculating this measure and its sensitivities is one of the biggest challenges in risk management. Here, we introduce an efficient method for the estimation of CVA and its sensitivities for a portfolio of financial derivatives. We use the finite difference Monte Carlo (FDMC) method to measure exposure profiles and consider the computationally challenging case of foreign exchange barrier options in the context of the Black–Scholes as well as the Heston stochastic volatility model, with and without stochastic domestic interest rate, for a wide range of parameters. In the case of a fixed domestic interest rate, our results show that FDMC is an accurate method compared with the semi-analytic COS method and, advantageously, can compute multiple options on one grid. In the more general case of a stochastic domestic interest rate, we show that we can accurately compute exposures of discontinuous one-touch options by using a linear interpolation technique, as well as sensitivities with respect to the initial interest rate and variance. This paves the way for real portfolio-level risk analysis.
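    The paper's contribution is the efficient FDMC computation of the exposure profiles themselves; for context, the sketch below shows only the standard final aggregation step from a discounted expected-exposure profile EE(t_i) to CVA, together with a bump-and-revalue sensitivity. The exposure profile and hazard rate are made up for illustration.

```python
# Hedged sketch of the final CVA aggregation step, given discounted expected
# exposure profiles EE(t_i) (which the paper obtains with FDMC); the toy
# exposure profile and hazard rate below are made up for illustration.
import numpy as np

def cva(ee, times, hazard_rate, recovery=0.4):
    """CVA ~ (1 - R) * sum_i EE(t_i) * [P(t_{i-1}) - P(t_i)],
    with survival probability P(t) = exp(-hazard_rate * t)."""
    surv = np.exp(-hazard_rate * np.asarray(times))
    default_prob = -np.diff(np.concatenate(([1.0], surv)))  # marginal default probs
    return (1.0 - recovery) * np.dot(ee, default_prob)

times = np.linspace(0.25, 5.0, 20)                   # quarterly grid out to 5y
ee = 100.0 * np.sqrt(times) * np.exp(-0.3 * times)   # toy discounted EE profile

base = cva(ee, times, hazard_rate=0.02)
# Bump-and-revalue sensitivity to the hazard rate (finite difference)
bumped = cva(ee, times, hazard_rate=0.02 + 1e-4)
print(f"CVA = {base:.4f}, dCVA/dh ~ {(bumped - base) / 1e-4:.4f}")
```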

    A subordinated CIR intensity model with application to Wrong-Way risk CVA

    Credit Valuation Adjustment (CVA) pricing models need to be both flexible and tractable. The survival probability has to be known in closed form (for calibration purposes), the model should be able to fit any valid Credit Default Swap (CDS) curve, should lead to large volatilities (in line with CDS options) and finally should be able to feature a significant Wrong-Way Risk (WWR) impact. The Cox-Ingersoll-Ross model (CIR) combined with independent positive jumps and a deterministic shift (JCIR++) is a very good candidate: the variance (and thus the covariance with exposure, i.e. WWR) can be increased with the jumps, whereas the calibration constraint is achieved via the shift. In practice, however, there is a strong limit on the model parameters that can be chosen, and thus on the resulting WWR impact. This is because only non-negative shifts are allowed for consistency reasons, whereas the upward jumps of the JCIR++ need to be compensated by a downward shift. To limit this problem, we consider the two-sided jump model recently introduced by Mendoza-Arriaga & Linetsky, built by time-changing CIR intensities. In a multivariate setup like CVA, time-changing the intensity partly kills the potential correlation with the exposure process and destroys the WWR impact. Moreover, it can introduce a forward-looking effect that can lead to arbitrage opportunities. In this paper, we use the time-changed CIR process in a way that avoids the above issues. We show that the resulting process allows us to introduce a large WWR effect compared to the JCIR++ model. The computational cost of the resulting Monte Carlo framework is reduced by using an adaptive control variate procedure.
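    As a hedged sketch of the kind of intensity model discussed here (an illustration, not the authors' calibrated setup), the code below simulates a JCIR++-style intensity (CIR dynamics plus compound-Poisson positive jumps and a deterministic shift) with a full-truncation Euler scheme, and estimates the survival probability Q(tau > T) = E[exp(-integral of (lambda_t + shift) dt)]. All parameter values are arbitrary.

```python
# Illustrative sketch (not the paper's calibrated model): Euler simulation of a
# JCIR++-style intensity (CIR + compound-Poisson positive jumps + deterministic
# shift) and a Monte Carlo survival probability; all parameters are made up.
import numpy as np

def simulate_jcirpp(kappa=0.5, theta=0.02, sigma=0.1, lam0=0.02,
                    jump_rate=0.3, jump_mean=0.01, shift=0.005,
                    T=5.0, n_steps=500, n_paths=20_000, seed=42):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    lam = np.full(n_paths, lam0)
    integral = np.zeros(n_paths)  # accumulates int_0^T (lambda_t + shift) dt
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        # Bernoulli(jump_rate * dt) approximation of compound-Poisson jumps
        # with exponential sizes of mean jump_mean
        jumps = rng.exponential(jump_mean, n_paths) * (rng.random(n_paths) < jump_rate * dt)
        # Full-truncation Euler keeps the CIR diffusion part well-defined
        lam_pos = np.maximum(lam, 0.0)
        lam = lam + kappa * (theta - lam_pos) * dt + sigma * np.sqrt(lam_pos) * dw + jumps
        integral += (np.maximum(lam, 0.0) + shift) * dt
    # Survival probability Q(tau > T) = E[exp(-integral)]
    return np.exp(-integral).mean()

print(f"Q(tau > 5y) ~ {simulate_jcirpp():.4f}")
```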