
    Pricing index-linked catastrophe bonds via Monte Carlo simulation

    The pricing framework used in this dissertation allows for the specification of catastrophe risk under the real-world measure. This gives the user a great deal of freedom in the assumptions made about the underlying catastrophe risk process (referred to in this dissertation as the aggregate loss process). This dissertation therefore aims to shed light on the effect of various assumptions and considerations on the prices of index-linked CAT bonds based on the Property Claims Services (PCS) index. Given the lack of a closed-form solution to the pricing formulae used and the lack of a liquidly traded secondary market, this dissertation also compares two approximation methods for evaluating expressions involving the aggregate loss process: Monte Carlo simulation and a mixed-approximation method. The two price-approximation methods are largely consistent and agree particularly well in the upper quantiles of the distribution of the aggregate loss process. Another key consideration is that PCS, the third party that estimates catastrophe losses in North America, only records catastrophe losses above $25 million. This dissertation therefore also explores the issue of left-truncated data and its effect on estimating the parameters of the aggregate loss process. For this purpose, it introduces a non-parametric approach to compare, in sample, the results of ignoring the threshold with those of taking it into account. In both these exercises, it becomes apparent that very heavy-tailed distributions need to be used with caution. In the former case, the use of very heavy-tailed distributions places restrictions on the distributions that can be used for the mixed-approximation method. Finally, as a more realistic avenue, this dissertation proposes a simple stochastic intensity model to compare with the deterministic intensity model and finds that, by parsimony, the deterministic intensity seems to provide a reasonable model for the upper quantiles of the aggregate loss process. 
The key results of this dissertation are that the pricing of CAT bonds depends on the quantiles of the aggregate loss process, as is evident both when comparing the approximation methods and when comparing the deterministic and stochastic intensity functions, and that left-truncation should be taken into account when valuing index-linked CAT bonds using data from PCS.
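To make the mechanics concrete, the following sketch prices a simple index-linked CAT bond by Monte Carlo under hypothetical assumptions: a compound Poisson aggregate loss process with lognormal severities and a binary trigger that pays only a fixed recovery fraction once the index breaches a threshold. The frequency, severity, and contract parameters here are illustrative, not those of the dissertation.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_aggregate_loss(n_paths, lam=30.0, mu=2.0, sigma=1.5, T=1.0):
    """Compound Poisson aggregate loss: Poisson(lam*T) events per path,
    each with an independent lognormal(mu, sigma) severity."""
    counts = rng.poisson(lam * T, size=n_paths)
    return np.array([rng.lognormal(mu, sigma, size=k).sum() for k in counts])

def cat_bond_price(losses, threshold, principal=1.0, recovery=0.5,
                   r=0.03, T=1.0):
    """Zero-coupon CAT bond: the full principal is repaid if the simulated
    index stays below the trigger threshold, a fixed recovery otherwise."""
    payoff = np.where(losses < threshold, principal, recovery * principal)
    return np.exp(-r * T) * payoff.mean()

losses = simulate_aggregate_loss(100_000)
# setting the trigger at the 95% quantile makes the price depend directly
# on the upper tail of the simulated aggregate loss distribution
price = cat_bond_price(losses, threshold=np.quantile(losses, 0.95))
```

Note that the price is driven entirely by the upper quantiles of the aggregate loss distribution, which is precisely the sensitivity the dissertation highlights.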

    The Estimation of Stochastic Models in Finance with Volatility and Jump Intensity

    This thesis covers the parametric estimation of models with stochastic volatility, jumps, and stochastic jump intensity, by FFT. The first primary contribution is a parametric minimum-relative-entropy optimal Q-measure for affine stochastic volatility jump-diffusion (ASVJD) models. Other attempts in the literature have minimized the relative entropy of Q given P either by nonparametric methods, or by numerical PDEs. These methods are often difficult to implement. We construct the relative entropy of Q given P from the Lebesgue densities under P and Q, respectively, where these can be retrieved by FFT from the closed-form log-price characteristic function of any ASVJD model. We proceed by first estimating the fixed parameters of the P-measure by the Approximate Maximum Likelihood (AML) method of Bates (2006), and prove that the integrability conditions required for Fourier inversion are satisfied. Then, using a structure-preserving parametric model under the Q-measure, we minimize the relative entropy of Q given P with respect to the model parameters under Q. AML can be used to estimate P within the ASVJD class. Since AML is much faster than MCMC, our main supporting contributions are to the theory of AML. The second main contribution of this thesis is a non-affine model for time-changed jumps with stochastic jump intensity called the Leveraged Jump Intensity (LJI) model. The jump intensity in the LJI model is modeled by the CIR process. Leverage occurs in the LJI model, since the Brownian motion driving the CIR process also appears in the log-price with a negative coefficient. Models with a leverage effect of this type are usually affine, but model the intensity with an Ornstein-Uhlenbeck process. The conditional characteristic function of the LJI log-price given the intensity is known in closed form. Thus, we price LJI call options by conditional Monte Carlo, using the Carr and Madan (1999) FFT formula for conditional pricing.
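As a small illustration of the Fourier machinery underlying such methods, the sketch below recovers a density from a closed-form characteristic function by numerical inversion, using the Gaussian case (where the answer is known) as a sanity check. This is a generic sketch of characteristic-function inversion, not the AML implementation of Bates (2006).

```python
import numpy as np

def cf_normal(u, mu=0.0, sigma=1.0):
    """Closed-form characteristic function of N(mu, sigma^2)."""
    return np.exp(1j * mu * u - 0.5 * (sigma * u) ** 2)

def density_by_inversion(x, cf, u_max=40.0, n=4096):
    """Recover the density at points x by discretizing the Fourier
    inversion f(x) = (1/(2*pi)) * integral of exp(-i*u*x) * cf(u) du."""
    u = np.linspace(-u_max, u_max, n)
    du = u[1] - u[0]
    integrand = np.exp(-1j * np.outer(x, u)) * cf(u)
    return np.real(integrand.sum(axis=1)) * du / (2 * np.pi)

x = np.array([-1.0, 0.0, 1.0])
f = density_by_inversion(x, cf_normal)  # should match the N(0, 1) pdf
```

The same inversion applies whenever the log-price characteristic function is available in closed form, which is what makes the ASVJD class tractable.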

    Modeling and quasi-Monte Carlo simulation of risk in credit portfolios

    Credit risk is the risk of losing contractually obligated cash flows promised by a counterparty such as a corporation, financial institution, or government due to default on its debt obligations. The need for accurate pricing and hedging of complex credit derivatives and for active management of large credit portfolios calls for an accurate assessment of the risk inherent in the underlying credit portfolios. An important challenge in modeling a credit portfolio is to capture the correlations within it. For very large and homogeneous portfolios, analytic and semi-analytic approaches can be used to derive limiting distributions. However, for portfolios with inhomogeneous default probabilities, default correlations, recovery values, or position sizes, Monte Carlo methods are necessary to capture their underlying dynamic evolutions. Since the feasibility of Monte Carlo methods is limited by their relatively slow convergence rate, methods to improve the efficiency of simulations for credit portfolios are highly desired. In this dissertation, we first compare the commonly employed single-step models for credit portfolios, referred to as the copula-based default time approach, with our novel applications of multi-step models. Comparison of simulation results indicates that the dependency structure may be better incorporated by the multi-step models, since the default time models can introduce substantially skewed correlations within credit portfolios, a shortcoming which has become more evident in the recent subprime crisis. Next, to improve the efficiency of simulations, quasi-random sequences were introduced into both the single-step and multi-step models by devising several new algorithms involving the Brownian bridge construction and principal component analysis. 
The simulation results from tests under various scenarios suggest that quasi-Monte Carlo methods can substantially improve simulation effectiveness not only for the problems of computing integrals but also for those of order statistics, indicating a significant advantage when calculating a number of risk quantities such as Value at Risk (VaR). Finally, the performance of the simulations based on the above credit portfolio models and the quasi-Monte Carlo methods was examined in the context of modeling and valuation of credit portfolio derivatives. The results suggest that these methods can considerably improve the simulation of complex financial instruments involving portfolio credit risk.
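The quasi-Monte Carlo idea can be sketched on a toy credit-style problem: scrambled Sobol points are mapped to correlated Gaussian risk factors and a 99% quantile (VaR) of the portfolio loss is estimated, which can then be checked against the closed form available for this toy model. The two-factor Gaussian setup is purely illustrative; the dissertation's models and Brownian-bridge constructions are richer.

```python
import numpy as np
from scipy.stats import norm, qmc

def var_qmc(rho=0.3, alpha=0.99, m=16, seed=7):
    """Estimate the alpha-VaR of a toy two-factor Gaussian loss using
    2**m scrambled Sobol points mapped through the inverse normal cdf."""
    u = qmc.Sobol(d=2, scramble=True, seed=seed).random_base2(m)
    z = norm.ppf(u)                                     # independent N(0, 1)
    z2 = rho * z[:, 0] + np.sqrt(1 - rho**2) * z[:, 1]  # correlated factor
    loss = z[:, 0] + z2
    return np.quantile(loss, alpha)

est = var_qmc()
# for this toy model the loss is N(0, 2 + 2*rho), so the VaR is known
exact = np.sqrt(2 + 2 * 0.3) * norm.ppf(0.99)
```

The low-discrepancy points fill the unit square more evenly than pseudo-random draws, which is what yields the variance reduction for tail quantities like VaR.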

    CAT Bonds

    As part of their risk management, insurance companies increasingly complement the traditional practice of intra-industry reinsurance with alternative risk transfer methods, through which underwriting risks are transferred to the capital markets. One form of this transfer is catastrophe bonds (CAT bonds), which, as Insurance Linked Securities (ILS), are securities whose payment terms are tied to the loss experience of one or more catastrophes. If the contractually specified catastrophe event occurs, the bond investor loses part or all of the investment. In view of the rising trend in losses from natural catastrophes, CAT bonds appear well suited, as instruments of efficient risk allocation between the insurance industry and the capital markets, to ease the resulting risk concentrations and capacity bottlenecks. In addition, CAT bond issues appeal to investors in search of innovative investment strategies. The assessment of CAT bonds depends substantially on the design of their structural features; in particular, the choice of trigger mechanism is of decisive importance. This thesis therefore examines CAT bonds, after a general presentation of their characteristics and mechanics, with respect to their market penetration. In the course of analyzing their risk profile, CAT bonds are considered both as a risk management instrument for insurance companies and as an asset class in portfolio management. The analysis of the risk underlying a CAT bond in a securitization transaction shows that probability distributions of CAT bond returns deviate strongly from the normal distribution, exhibiting high kurtosis and left skewness; traditional risk measures thus provide no adequate description of the risk associated with CAT bonds.
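A minimal sketch of the principal write-down mechanics described above, assuming a hypothetical index trigger with a linear write-down between an attachment and an exhaustion point (real CAT bond indentures specify these terms contractually):

```python
def cat_bond_payout(principal, index_loss, attachment, exhaustion):
    """Principal repaid under a hypothetical index trigger: full repayment
    below the attachment point, linear write-down in between, and total
    loss of principal at or beyond the exhaustion point."""
    if index_loss <= attachment:
        return principal
    if index_loss >= exhaustion:
        return 0.0
    return principal * (exhaustion - index_loss) / (exhaustion - attachment)
```

For example, with an attachment of 100 and an exhaustion of 200, an index loss of 150 returns half the principal, while any loss beyond 200 wipes out the investment entirely; this kink in the payoff is what produces the left-skewed, heavy-tailed return distributions the abstract describes.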

    Hawks' Herald -- April 29, 2011


    Complexity, Emergent Systems and Complex Biological Systems: Complex Systems Theory and Biodynamics. [Edited book by I.C. Baianu, with listed contributors (2011)]

    An overview is presented of system dynamics (the study of the behaviour of complex systems), dynamical systems in mathematics, dynamic programming in computer science and control theory, complex systems biology, neurodynamics, and psychodynamics.

    Dependence: From classical copula modeling to neural networks

    The development of tools to measure and to model dependence in high-dimensional data is of great interest in a wide range of applications including finance, risk management, bioinformatics and environmental sciences. The copula framework, which allows us to extricate the underlying dependence structure of any multivariate distribution from its univariate marginals, has garnered growing popularity over the past few decades. Within the broader context of this framework, we develop several novel statistical methods and tools for analyzing, interpreting and modeling dependence. In the first half of this thesis, we advance classical copula modeling by introducing new dependence measures and parametric dependence models. To that end, we propose a framework for quantifying dependence between random vectors. Using the notion of a collapsing function, we summarize random vectors by single random variables, referred to as collapsed random variables. In the context of this collapsing function framework, we develop various tools to characterize the dependence between random vectors including new measures of association computed from the collapsed random variables, asymptotic results required to construct confidence intervals for these measures, collapsed copulas to analytically summarize the dependence for certain collapsing functions and a graphical assessment of independence between groups of random variables. We explore several suitable collapsing functions in theoretical and empirical settings. To showcase tools derived from our framework, we present data applications in bioinformatics and finance. Furthermore, we contribute to the growing literature on parametric copula modeling by generalizing the class of Archimax copulas (AXCs) to hierarchical Archimax copulas (HAXCs). AXCs are typically used to model the dependence at non-extreme levels while accounting for any asymptotic dependence between extremes. 
HAXCs then enhance the flexibility of AXCs by their ability to model partial asymmetries. We explore two ways of inducing hierarchies. Furthermore, we present various examples of HAXCs along with their stochastic representations, which are used to establish corresponding sampling algorithms. While the burgeoning research on the construction of parametric copulas has yielded some powerful tools for modeling dependence, the flexibility of these models is already limited in moderately high dimensions and they can often fail to adequately characterize complex dependence structures that arise in real datasets. In the second half of this thesis, we explore utilizing generative neural networks instead of parametric dependence models. In particular, we investigate the use of a type of generative neural network known as the generative moment matching network (GMMN) for two critical dependence modeling tasks. First, we demonstrate how GMMNs can be utilized to generate quasi-random samples from a large variety of multivariate distributions. These GMMN quasi-random samples can then be used to obtain low-variance estimates of quantities of interest. Compared to classical parametric copula methods for multivariate quasi-random sampling, GMMNs provide a more flexible and universal approach. Moreover, we theoretically and numerically corroborate the variance reduction capabilities of GMMN randomized quasi-Monte Carlo estimators. Second, we propose a GMMN--GARCH approach for modeling dependent multivariate time series, where ARMA--GARCH models are utilized to capture the temporal dependence within each univariate marginal time series and GMMNs are used to model the underlying cross-sectional dependence. If the number of marginal time series is large, we embed an intermediate dimension reduction step within our framework. The primary objective of our proposed approach is to produce empirical predictive distributions (EPDs), also known as probabilistic forecasts. 
In turn, these EPDs are also used to forecast certain risk measures, such as value-at-risk. Furthermore, in the context of modeling yield curves and foreign exchange rate returns, we show that the flexibility of our GMMN--GARCH models leads to better EPDs and risk-measure forecasts, compared to classical copula--GARCH models.
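As a small illustration of the copula framework this thesis builds on, the sketch below samples from a bivariate Gaussian copula and checks the sampled Kendall's tau against the known relation tau = (2/pi) * arcsin(rho). This is a generic textbook construction, not one of the thesis's collapsing-function or GMMN methods.

```python
import numpy as np
from scipy.stats import norm, kendalltau

def sample_gauss_copula(n, rho, rng):
    """Sample n pairs from a bivariate Gaussian copula: correlate two
    standard normals, then push them through the normal cdf so that both
    margins are uniform on (0, 1) while the dependence is preserved."""
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    return norm.cdf(np.column_stack([z1, z2]))

rng = np.random.default_rng(0)
u = sample_gauss_copula(100_000, 0.6, rng)
tau, _ = kendalltau(u[:, 0], u[:, 1])
# for the Gaussian copula, Kendall's tau equals (2/pi) * arcsin(rho)
```

The separation of margins from dependence shown here is exactly what lets copula-based (and, in the thesis, GMMN-based) models be paired with arbitrary marginal models such as ARMA--GARCH.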

    Essays in financial asset pricing

    Three essays in financial asset pricing are given. The first concerns the partial differential equation (PDE) pricing and hedging of a class of continuous/generalized power mean Asian options via their (optimal) Lie point symmetry groups, leading to practical pricing formulas. The second presents high-frequency predictions of S&P 500 returns via several machine learning models, demonstrating statistically significant short-horizon market predictability and economically significant trading profits beyond transaction costs. The third compares profitability between these [(mean) ensemble] strategies and Asian option Δ-hedging, using results of the first. Interpreting bounds on arithmetic Asian option prices as ask and bid values, hedging profitability depends largely on securing prices closer to the bid; settling midway between the bid and ask, significant profits are consistently accumulated during the years 2004-2016. Ensemble predictive trading of the S&P 500 yields comparatively very small returns, despite trading much more frequently. The pricing and hedging of (arithmetic) Asian options are difficult and have spurred several solution approaches, differing in theoretical insight and practicality. Multiple families of exact solutions to relaxed power mean Asian option pricing boundary-value problems are explicitly established; these approximately satisfy the full pricing problem and, in one case, converge to exact solutions under certain parametric restrictions. Corresponding hedging parameters/Greeks are derived. This family consists of (optimal) invariant solutions constructed for the corresponding pricing PDEs. Numerical experiments explore this family behaviorally, achieving reliably accurate pricing. The second chapter studies intraday market return predictability. Regularized linear and nonlinear tree-based models exhibit significant predictability. 
Ensemble models perform best across time, and their return predictability realizes economically significant profits with Sharpe ratios of 0.98 after transaction costs. These results provide strong evidence that intraday market returns are predictable over short time horizons, beyond what is explainable by transaction costs. The lagged constituent returns are shown to hold significant predictive information not contained in lagged market returns or in price trend and liquidity characteristics. Consistent with the hypothesis that predictability is driven by slow-moving trader capital, predictability decreased post-decimalization, and market returns are more predictable midday, on days with high volatility or illiquidity, and during financial crises.
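For context, a plain Monte Carlo benchmark for the arithmetic-average Asian call discussed above can be sketched as follows, under geometric Brownian motion dynamics with illustrative parameters. The thesis itself derives symmetry-based pricing formulas rather than simulating; this sketch only shows the payoff being priced.

```python
import numpy as np

def asian_call_mc(s0=100.0, k=100.0, r=0.05, sigma=0.2, T=1.0,
                  steps=50, n_paths=100_000, seed=1):
    """Arithmetic-average Asian call under geometric Brownian motion,
    priced by plain Monte Carlo over discretely monitored averages."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    z = rng.standard_normal((n_paths, steps))
    drift = (r - 0.5 * sigma**2) * dt
    log_paths = np.cumsum(drift + sigma * np.sqrt(dt) * z, axis=1)
    avg = s0 * np.exp(log_paths).mean(axis=1)   # arithmetic average price
    return np.exp(-r * T) * np.maximum(avg - k, 0.0).mean()

price = asian_call_mc()
```

Because the arithmetic average has no closed-form distribution under GBM, such simulation benchmarks are the usual yardstick against which analytical approximations like the invariant-solution families are judged.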

    Outils et modèles pour l'étude de quelques risques spatiaux et en réseaux : application aux extrêmes climatiques et à la contagion en finance

    This thesis aims at developing tools and models that are relevant for the study of some spatial risks and risks in networks. The thesis is divided into five chapters. The first is a general introduction containing the state of the art related to each study as well as the main results. Chapter 2 develops a new multi-site precipitation generator. It is crucial to have models able to produce statistically realistic precipitation series. Whereas previously introduced models in the literature deal with daily precipitation, we develop an hourly model. The latter involves only one equation and thus introduces dependence between occurrence and intensity; the aforementioned literature assumes that these processes are independent. Our model contains a common factor taking large-scale atmospheric conditions into account and a multivariate autoregressive contagion term accounting for local propagation of rainfall. Despite its relative simplicity, this model shows an impressive ability to reproduce real intensities, lengths of dry periods, and the spatial dependence structure, in the case of Northern Brittany. In Chapter 3, we propose an estimation method for max-stable processes, based on simulated likelihood techniques. Max-stable processes are ideally suited for the statistical modeling of spatial extremes, but their inference is difficult: the multivariate density function is not available, so standard likelihood-based estimation methods cannot be applied. Under appropriate assumptions, our estimator is efficient as both the temporal dimension and the number of simulation draws tend towards infinity. This approach by simulation can be used for many classes of max-stable processes and can provide better results than composite-likelihood methods, especially when only a few temporal observations are available and the spatial dependence is high.
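The "single equation" idea behind the hourly generator can be illustrated with a censored autoregressive sketch: one latent Gaussian AR(1) state drives both whether an hour is wet (state above zero) and how intense the rain is (the value of the state), so occurrence and intensity are dependent by construction. The actual model also includes a large-scale common factor and multivariate contagion terms, which are omitted here; all parameters are illustrative.

```python
import numpy as np

def simulate_hourly_rain(n_hours, phi=0.7, mu=-1.0, sigma=1.0, seed=3):
    """Censored AR(1) rainfall sketch: the latent Gaussian state x drives
    both occurrence (wet when x > 0) and intensity (the value of x), so
    the two are dependent through a single equation."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_hours)
    x[0] = mu
    for t in range(1, n_hours):
        x[t] = mu + phi * (x[t - 1] - mu) + sigma * rng.standard_normal()
    return np.maximum(x, 0.0)  # dry hour whenever the latent state is <= 0

rain = simulate_hourly_rain(10_000)
wet_frac = (rain > 0).mean()   # fraction of wet hours
```

The autoregression in the latent state makes wet hours cluster into spells and dry periods persist, which is why a single-equation formulation can reproduce dry-spell lengths as well as intensities.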