127 research outputs found

    Capital allocation for credit portfolios with kernel estimators

    Determining contributions by sub-portfolios or single exposures to portfolio-wide economic capital for credit risk is an important risk measurement task. Often economic capital is measured as the Value-at-Risk (VaR) of the portfolio loss distribution. For many of the credit portfolio risk models used in practice, the VaR contributions then have to be estimated from Monte Carlo samples. In the context of a partly continuous loss distribution (i.e. continuous except for a positive point mass at zero), we investigate how to combine kernel estimation methods with importance sampling to achieve more efficient (i.e. less volatile) estimation of VaR contributions. Comment: 22 pages, 12 tables, 1 figure, some amendments.
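    The kernel idea can be sketched in a few lines: instead of conditioning on the single Monte Carlo scenario that hits the VaR level, every scenario near that level is allowed to inform the contribution estimate via a kernel weight. The toy gamma-loss portfolio, Gaussian kernel and rule-of-thumb bandwidth below are illustrative assumptions, not the paper's estimator, which additionally employs importance sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Monte Carlo sample: losses of 3 sub-portfolios over 100,000 scenarios.
n = 100_000
losses = rng.gamma(shape=[2.0, 1.0, 0.5], scale=1.0, size=(n, 3))
total = losses.sum(axis=1)

alpha = 0.99
var = np.quantile(total, alpha)      # portfolio-wide economic capital as VaR

# Nadaraya-Watson kernel estimate of the VaR contribution E[L_i | L = VaR]:
# weight each scenario by a Gaussian kernel centred at the VaR level.
h = total.std() * n ** (-0.2)        # rule-of-thumb bandwidth (illustrative)
w = np.exp(-0.5 * ((total - var) / h) ** 2)
contrib = (w[:, None] * losses).sum(axis=0) / w.sum()

# Full allocation: kernel contributions sum approximately to portfolio VaR.
print(var, contrib.sum())
```

    The kernel weights smooth the conditioning event {L = VaR}, which is what makes the estimator far less volatile than one based on the handful of scenarios falling exactly at the VaR level.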

    Estimating Portfolio Risk for Tail Risk Protection Strategies

    We forecast portfolio risk for managing dynamic tail risk protection strategies, based on extreme value theory, expectile regression, Copula-GARCH and dynamic GAS models. Utilizing a loss function that overcomes the lack of elicitability of Expected Shortfall, we propose a novel Expected Shortfall (and Value-at-Risk) forecast combination approach, which dominates simple and sophisticated standalone models, as well as a simple average combination approach, in modelling the tail of the portfolio return distribution. While both the associated dynamic risk targeting and portfolio insurance strategies provide effective downside protection, the latter suffer less from inferior risk forecasts, given the defensive portfolio insurance mechanics.
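    A minimal sketch of loss-based forecast combination, assuming the zero-degree-homogeneous Fissler-Ziegel (FZ0) joint loss, which is consistent for the (VaR, ES) pair despite ES not being elicitable on its own. The two constant "standalone" forecasters and the convex-weight grid are hypothetical placeholders for the paper's model set.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.025

# Toy daily-return sample: standardised Student-t (heavy tails).
nu = 5.0
x = rng.standard_t(nu, size=5000) / np.sqrt(nu / (nu - 2.0))

def fz0_loss(v, e, x, alpha):
    """FZ0 joint consistent loss for a (VaR, ES) forecast pair (v, e), v, e < 0."""
    hit = (x <= v).astype(float)
    return -hit * (v - x) / (alpha * e) + v / e + np.log(-e) - 1.0

# Two hypothetical standalone forecasters (constants for illustration):
# model A uses Gaussian tail quantiles, model B assumes fatter tails.
vA, eA = -1.96, -2.34
vB, eB = -2.30, -3.00

# Choose the convex combination weight minimising the average FZ0 loss.
grid = np.linspace(0.0, 1.0, 101)
avg = [fz0_loss(w * vA + (1 - w) * vB, w * eA + (1 - w) * eB, x, alpha).mean()
       for w in grid]
w_star = grid[int(np.argmin(avg))]
```

    Because the grid contains the weights 0 and 1, the selected combination can never do worse in-sample than either standalone forecaster, which is the basic appeal of loss-based combination.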

    Non-smooth optimization methods for computation of the conditional value-at-risk and portfolio optimization

    We examine the numerical performance of various methods of calculating the Conditional Value-at-Risk (CVaR), and of portfolio optimization with respect to this risk measure. We concentrate on the method proposed by Rockafellar and Uryasev (Rockafellar, R.T. and Uryasev, S., 2000, Optimization of conditional value-at-risk. Journal of Risk, 2, 21-41), which converts this problem to one of convex optimization. We compare the use of linear programming techniques against the discrete gradient non-smooth optimization method, and establish the superiority of the latter. We show that non-smooth optimization can be used efficiently for large portfolio optimization, and we also examine parallel execution of this method on computer clusters.
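    The Rockafellar-Uryasev device replaces CVaR by the auxiliary function F(c) = c + E[max(L - c, 0)] / (1 - alpha), whose minimum over c equals CVaR and whose minimiser is a VaR. The sketch below solves the sample problem by direct search (F is piecewise linear with kinks at the sample points); it illustrates the objective only, not the LP or discrete-gradient solvers compared in the paper.

```python
import numpy as np

def cvar_ru(losses, alpha):
    """Sample CVaR via the Rockafellar-Uryasev auxiliary function
    F(c) = c + E[max(L - c, 0)] / (1 - alpha), minimised over c.
    F is piecewise linear with kinks at the sample points, so it is
    enough to evaluate it at the sorted losses."""
    c = np.sort(np.asarray(losses, dtype=float))
    F = c + np.maximum(c[None, :] - c[:, None], 0.0).mean(axis=1) / (1.0 - alpha)
    i = int(np.argmin(F))
    return c[i], F[i]              # (minimiser ~ VaR, minimum = CVaR)

rng = np.random.default_rng(2)
sample = rng.normal(size=2000)     # toy loss sample
var_hat, cvar_hat = cvar_ru(sample, 0.95)

# Cross-check: CVaR should equal the average loss beyond the 95% quantile.
tail_mean = sample[sample >= np.quantile(sample, 0.95)].mean()
```

    In the optimization setting the same objective, with L a linear function of portfolio weights, stays convex in the weights, which is exactly what makes both the LP reformulation and non-smooth methods applicable.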

    OGLE-2013-BLG-0102LA,B: Microlensing binary with components at star/brown-dwarf and brown-dwarf/planet boundaries

    We present the analysis of the gravitational microlensing event OGLE-2013-BLG-0102. The light curve of the event is characterized by a strong short-term anomaly superposed on a smoothly varying lensing curve with a moderate magnification $A_{\rm max}\sim 1.5$. It is found that the event was produced by a binary lens with a mass ratio between the components of $q = 0.13$, and the anomaly was caused by the passage of the source trajectory over a caustic located away from the barycenter of the binary. From the analysis of the effects on the light curve due to the finite size of the source and the parallactic motion of the Earth, the physical parameters of the lens system are determined. The measured masses of the lens components are $M_{1} = 0.096 \pm 0.013~M_{\odot}$ and $M_{2} = 0.012 \pm 0.002~M_{\odot}$, which place them near the hydrogen-burning and deuterium-burning mass limits, respectively. The distance to the lens is $3.04 \pm 0.31~{\rm kpc}$ and the projected separation between the lens components is $0.80 \pm 0.08~{\rm AU}$. Comment: 6 figures, 2 tables, ApJ submitted.
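    As a quick arithmetic check of the quoted numbers, the mass ratio implied by the measured component masses reproduces the modelled $q = 0.13$ within the stated uncertainties:

```python
# Consistency check of the reported lens parameters: the mass ratio
# implied by the measured component masses should reproduce the
# modelled q = 0.13 within the quoted errors.
M1, M2 = 0.096, 0.012       # lens component masses in solar masses
q_implied = M2 / M1
print(round(q_implied, 3))  # -> 0.125
```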

    Processing second-order stochastic dominance models using cutting-plane representations

    This is the post-print version of the article. Copyright @ 2011 Springer-Verlag. Second-order stochastic dominance (SSD) is widely recognised as an important decision criterion in portfolio selection. Unfortunately, stochastic dominance models are known to be very demanding from a computational point of view. In this paper we consider two classes of models which use SSD as a choice criterion. The first, proposed by Dentcheva and Ruszczyński (J Bank Finance 30:433–451, 2006), uses an SSD constraint, which can be expressed as integrated chance constraints (ICCs). The second, proposed by Roman et al. (Math Program, Ser B 108:541–569, 2006), uses SSD through a multi-objective formulation with CVaR objectives. Cutting-plane representations and algorithms were proposed by Klein Haneveld and Van der Vlerk (Comput Manage Sci 3:245–269, 2006) for ICCs, and by Künzi-Bay and Mayer (Comput Manage Sci 3:3–27, 2006) for CVaR minimization. These concepts are taken into consideration to propose representations and solution methods for the above class of SSD-based models. We describe a cutting-plane based solution algorithm and outline implementation details. A computational study is presented, which demonstrates the effectiveness and the scale-up properties of the solution algorithm, as applied to the SSD model of Roman et al. (Math Program, Ser B 108:541–569, 2006). This study was funded by OTKA, Hungarian National Fund for Scientific Research, project 47340; by Mobile Innovation Centre, Budapest University of Technology, project 2.2; by Optirisk Systems, Uxbridge, UK; and by BRIEF (Brunel University Research Innovation and Enterprise Fund).
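    The shortfall form behind the ICC representation can be illustrated directly: a return distribution X dominates Y in the SSD sense exactly when the expected shortfall E[max(t - X, 0)] below every target level t is no larger for X than for Y. A minimal empirical sketch (the function name and test data are illustrative; this is the dominance check, not the paper's cutting-plane algorithm):

```python
import numpy as np

def ssd_dominates(x, y):
    """Return True if return sample x dominates sample y in the SSD sense,
    i.e. E[max(t - x, 0)] <= E[max(t - y, 0)] for every target level t.
    The empirical shortfall functions are piecewise linear with kinks at
    the sample points, so checking t at the pooled points suffices."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    t = np.union1d(x, y)
    short_x = np.maximum(t[:, None] - x[None, :], 0.0).mean(axis=1)
    short_y = np.maximum(t[:, None] - y[None, :], 0.0).mean(axis=1)
    return bool(np.all(short_x <= short_y + 1e-12))

rng = np.random.default_rng(3)
a = rng.normal(0.05, 0.10, size=1000)   # hypothetical portfolio returns
b = a - 0.02                            # uniformly worse portfolio
```

    Each target level t yields one linear shortfall inequality, which is what makes the constraint amenable to the cutting-plane treatment: violated targets generate cuts one at a time instead of imposing all inequalities up front.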

    OGLE-2012-BLG-0455/MOA-2012-BLG-206: Microlensing event with ambiguity in planetary interpretations caused by incomplete coverage of planetary signal

    Characterizing a microlensing planet is done by modeling the observed lensing light curve. In this process, it often happens that solutions with different lensing parameters result in similar light curves, causing difficulties in uniquely interpreting the lens system; understanding the causes of the different types of degeneracy is therefore important. In this work, we show that incomplete coverage of a planetary perturbation can result in degenerate solutions even for events where the planetary signal is detected with a high level of statistical significance. We demonstrate the degeneracy for the observed event OGLE-2012-BLG-0455/MOA-2012-BLG-206. The peak of this high-magnification event ($A_{\rm max}\sim 400$) exhibits a very strong deviation from a point-lens model, with $\Delta\chi^{2}\gtrsim 4000$ for data sets with a total of 6963 measurements. From detailed modeling of the light curve, we find that the deviation can be explained by four distinct solutions, i.e., two very different sets of solutions, each with a two-fold degeneracy. While the two-fold (so-called "close/wide") degeneracy is well understood, the degeneracy between the radically different solutions was not previously known. The model light curves of this degeneracy differ substantially in the parts that were not covered by observation, indicating that the degeneracy is caused by the incomplete coverage of the perturbation. It is expected that the frequency of the degeneracy introduced in this work will be greatly reduced with improvements to current lensing surveys and follow-up experiments and the advent of new surveys. Comment: 5 pages, 3 figures, ApJ accepted.