
    Semi-nonparametric Estimation of Operational Risk Capital with Extreme Loss Events

    Bank operational risk capital modeling using the Basel II advanced measurement approach (AMA) often leads to a counter-intuitive capital estimate of value at risk at 99.9% due to extreme loss events. To address this issue, a flexible semi-nonparametric (SNP) model is introduced using the change-of-variables technique to enrich the family of distributions that can handle extreme loss events. The SNP models are proved to have the same maximum domain of attraction (MDA) as their parametric kernels, and it follows that the SNP models are consistent with the extreme value theory peaks-over-threshold method but with shape and scale parameters different from those of the kernels. Using a simulated dataset generated from a mixture of distributions with both light and heavy tails, the SNP models in the Fréchet and Gumbel MDAs are shown to fit the tail data satisfactorily as the number of model parameters increases. The SNP model quantile estimates at 99.9% are not overly sensitive to changes in the body-tail threshold, in sharp contrast to the parametric models. When applied to a bank operational risk dataset with three Basel event types, the SNP model provides a significant improvement in the goodness of fit for the two event types with heavy tails, yielding an intuitive capital estimate of the same order of magnitude as the event type's total loss. Since the third event type does not have a heavy tail, the parametric model already yields an intuitive capital estimate, and the SNP model provides no additional improvement. This research suggests that the SNP model may enable banks to continue with the AMA, or its partial use, to obtain an intuitive operational risk capital estimate when the simple non-model-based Basic Indicator Approach or Standardized Approach is not suitable per Basel Committee on Banking Supervision OPE10 (2019).
    Comment: 32 pages, including tables, figures, appendix and references. The research was presented at the MATLAB Annual Computational Finance Conference, September 27-30, 202
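    As a rough illustration of the peaks-over-threshold baseline this abstract compares against, the sketch below fits a generalized Pareto distribution to exceedances of a threshold in synthetic loss data and reads off a 99.9% value-at-risk estimate. The mixture used to generate the losses, the 90% body-tail threshold, and all parameter values are illustrative assumptions, not the paper's SNP model or data.

```python
# Illustrative POT (peaks-over-threshold) sketch on synthetic heavy-tailed losses.
# Threshold choice and data-generating parameters are assumptions, not from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic losses: light-tailed body plus a heavy-tailed component.
body = rng.lognormal(mean=8.0, sigma=1.0, size=9_000)
tail = stats.pareto.rvs(b=1.5, scale=5e4, size=1_000, random_state=rng)
losses = np.concatenate([body, tail])

u = np.quantile(losses, 0.90)              # body-tail threshold (assumed, not tuned)
excesses = losses[losses > u] - u
# Fit a generalized Pareto distribution to the exceedances (shape xi, scale beta).
xi, _, beta = stats.genpareto.fit(excesses, floc=0.0)

# POT estimate of the 99.9% quantile (value at risk) of the loss distribution.
p, n, n_u = 0.999, losses.size, excesses.size
var_999 = u + (beta / xi) * ((n / n_u * (1 - p)) ** (-xi) - 1)
print(f"xi={xi:.3f}, beta={beta:.1f}, VaR(99.9%)={var_999:,.0f}")
```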

    Bayesian threshold selection for extremal models using measures of surprise

    Statistical extreme value theory is concerned with the use of asymptotically motivated models to describe the extreme values of a process. A number of commonly used models are valid for observed data that exceed some high threshold. However, in practice a suitable threshold is unknown and must be determined for each analysis. While there are many threshold selection methods for univariate extremes, there are relatively few that can be applied in the multivariate setting. In addition, there are only a few Bayesian methods, which are naturally attractive in the modelling of extremes due to data scarcity. The use of Bayesian measures of surprise to determine suitable thresholds for extreme value models is proposed. Such measures quantify the level of support for the proposed extremal model and threshold, without the need to specify any model alternatives. This approach is easily implemented for both univariate and multivariate extremes.
    Comment: To appear in Computational Statistics and Data Analysis
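    The following sketch illustrates the general idea of scoring candidate thresholds with a predictive p-value style measure of surprise, using a plug-in maximum-likelihood fit in place of the fully Bayesian calculation described in the abstract. The synthetic data, the candidate threshold grid, and the discrepancy statistic are assumptions made for illustration only.

```python
# Plug-in "measure of surprise" sketch: for each candidate threshold, fit a GPD to the
# exceedances and compare an observed discrepancy to replicates simulated from the fit.
import numpy as np
from scipy import stats

def surprise(losses, u, n_rep=500, rng=None):
    rng = rng or np.random.default_rng(1)
    exc = losses[losses > u] - u
    xi, _, beta = stats.genpareto.fit(exc, floc=0.0)

    def disc(x):
        # Discrepancy statistic: mean of the largest 10% of exceedances (assumed choice).
        k = max(1, int(0.1 * x.size))
        return np.sort(x)[-k:].mean()

    obs = disc(exc)
    reps = [disc(stats.genpareto.rvs(xi, scale=beta, size=exc.size, random_state=rng))
            for _ in range(n_rep)]
    # A predictive p-value near 0 or 1 signals surprise (poor GPD fit above u).
    return np.mean(np.array(reps) >= obs)

losses = stats.lognorm.rvs(s=1.2, scale=2e3, size=5_000, random_state=0)
for q in (0.80, 0.90, 0.95, 0.99):
    u = np.quantile(losses, q)
    print(f"threshold quantile {q:.2f}: p-value {surprise(losses, u):.2f}")
```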

    Modeling for seasonal marked point processes: An analysis of evolving hurricane occurrences

    Seasonal point processes refer to stochastic models for random events that are only observed in a given season. We develop nonparametric Bayesian methodology to study the dynamic evolution of a seasonal marked point process intensity. We assume the point process is a nonhomogeneous Poisson process and propose a nonparametric mixture of beta densities to model dynamically evolving temporal Poisson process intensities. Dependence structure is built through a dependent Dirichlet process prior for the seasonally varying mixing distributions. We extend the nonparametric model to incorporate time-varying marks, resulting in flexible inference for both the seasonal point process intensity and the conditional mark distribution. The motivating application involves the analysis of hurricane landfalls with reported damages along the U.S. Gulf and Atlantic coasts from 1900 to 2010. We focus on studying the evolution of the intensity of the process of hurricane landfall occurrences, along with the associated maximum wind speeds and damages. Our results indicate an increase in the number of hurricane landfall occurrences and a decrease in the median maximum wind speed at the peak of the season. Introducing standardized damage as a mark, so that reported damages are comparable both in time and space, we find no significant rising trend in hurricane damages over time.
    Comment: Published at http://dx.doi.org/10.1214/14-AOAS796 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
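    The core modeling construct, a nonhomogeneous Poisson process whose intensity over a season rescaled to [0, 1] is a mixture of beta densities, can be sketched as below, with one season simulated by thinning. The mixture weights, beta shapes, and total rate are illustrative assumptions; the paper's dependent Dirichlet process prior and mark model are not reproduced here.

```python
# Beta-mixture intensity for a nonhomogeneous Poisson process over a season in [0, 1],
# with a single season simulated by thinning. All numerical choices are assumptions.
import numpy as np
from scipy import stats

weights = np.array([0.3, 0.7])           # mixing weights (assumed)
shapes = [(2.0, 5.0), (8.0, 3.0)]        # beta (a, b) shape pairs (assumed)
total_rate = 25.0                        # expected number of events per season (assumed)

def intensity(t):
    dens = sum(w * stats.beta.pdf(t, a, b) for w, (a, b) in zip(weights, shapes))
    return total_rate * dens

rng = np.random.default_rng(2)
grid = np.linspace(0, 1, 1_000)
lam_max = intensity(grid).max()          # dominating rate for thinning
n_cand = rng.poisson(lam_max)            # candidate points from the homogeneous process
cand = np.sort(rng.uniform(0, 1, n_cand))
# Keep each candidate with probability intensity(t) / lam_max.
events = cand[rng.uniform(0, lam_max, n_cand) < intensity(cand)]
print(f"{events.size} events kept out of {n_cand} candidates")
```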

    Statistical methods for weather-related insurance claims

    Severe weather events, for instance heavy rainfall, snow-melt or droughts, cause large losses of lives and money every year. Insurance companies offer some form of protection against such undesirable outcomes, and decision makers want to take precautions to prevent future catastrophes. Both decision makers and insurance companies are therefore interested in understanding which weather events induce a high risk. This information then allows the insurance companies to set premiums for their policies by predicting future losses. Further, the relationship between damages and weather is also important for assessing the impact of climate change. Several aspects have to be considered in the statistical modelling of this relationship. For instance, some regions of the world are more accustomed to severe rainfall events than others and hence presumably less vulnerable to small amounts of rainfall. Spatial statistics provides a framework that allows for a spatially varying relationship while accounting for similarities between areas that are geographically close. Further, damages, especially large losses, are rather rare, and the statistical analysis is hence usually based on a small number of observations. Methods from extreme value theory address the modelling of such events and may hence be beneficial. This thesis aims to develop statistical models for the relationship between damages, in particular property insurance claims, and weather events, based on daily Norwegian insurance and weather data. To improve existing models, new methodology is introduced which allows for substantial flexibility of the statistical model. The risk induced by certain weather events is assumed to vary spatially across Norway, with neighbouring regions exhibiting similar vulnerability. To account for certain non-linear effects, the class of monotonic regression functions is considered. Specifically, this work is the first to define flexible dependence structures for such functions: the first approach considers a Bayesian framework, with estimates obtained by Markov chain Monte Carlo algorithms, while the second approach is optimization-based. The last part of the thesis derives extreme value models for discrete data and estimates them in a Bayesian framework. In particular, a mixture model which allows for flexible tail behaviour is motivated by an exploratory analysis of the highest claims in the data. Additionally, the data are restructured based on spatial and temporal patterns and then combined with the proposed extreme value mixture model. Both approaches, monotonic regression and extreme value analysis, lead to an improved model fit and a better understanding of the relationship between insurance claims and weather events.
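    As a minimal stand-in for the monotone claims-versus-weather relationship described above, the sketch below fits an isotonic regression of synthetic daily claim counts on synthetic rainfall using scikit-learn; the thesis's Bayesian monotonic regression with spatial dependence and its extreme value mixture model are not reproduced. All data-generating choices are assumptions.

```python
# Isotonic (monotone) regression of synthetic daily claim counts on synthetic rainfall,
# as a simple stand-in for the flexible monotonic regression developed in the thesis.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(3)
rainfall = rng.gamma(shape=2.0, scale=10.0, size=2_000)        # daily rainfall in mm (synthetic)
expected = 0.05 * np.maximum(rainfall - 20.0, 0.0) ** 1.3      # assumed monotone risk effect
claims = rng.poisson(expected)                                 # daily claim counts (synthetic)

iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
iso.fit(rainfall, claims)
for r in (10, 30, 60):
    print(f"rainfall {r} mm -> fitted expected claims {iso.predict([r])[0]:.2f}")
```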

    Convex mixture regression for quantitative risk assessment

    There is wide interest in studying how the distribution of a continuous response changes with a predictor. We are motivated by environmental applications in which the predictor is the dose of an exposure and the response is a health outcome. A main focus in these studies is inference on dose levels associated with a given increase in risk relative to a baseline. In addressing this goal, popular methods either dichotomize the continuous response or focus on modeling changes with the dose in the expectation of the outcome. Such choices may lead to information loss and provide inaccurate inference on dose-response relationships. We instead propose a Bayesian convex mixture regression model that allows the entire distribution of the health outcome to be unknown and changing with the dose. To balance flexibility and parsimony, we rely on a mixture model for the density at the extreme doses, and express the conditional density at each intermediate dose via a convex combination of these extremal densities. This representation generalizes classical dose-response models for quantitative outcomes, and provides a more parsimonious, but still powerful, formulation compared to nonparametric methods, thereby improving interpretability and efficiency in inference on risk functions. A Markov chain Monte Carlo algorithm for posterior inference is developed, and the benefits of our methods are outlined in simulations, along with a study on the impact of DDE exposure on gestational age.
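    The convex-combination construct can be written down directly: the conditional outcome density at dose x is a dose-dependent convex mixture of the densities at the extreme doses, and excess risk relative to baseline follows immediately. In the sketch below, the extremal densities, the weight function w(x), and the risk cutoff are illustrative assumptions, not the fitted Bayesian model.

```python
# Conditional density at dose x as a convex mixture of the two extremal densities,
# with risk of a low outcome computed relative to the zero-dose baseline.
import numpy as np
from scipy import stats

f_low = stats.norm(loc=39.0, scale=1.5)      # outcome density at the lowest dose (assumed)
f_high = stats.norm(loc=36.5, scale=2.5)     # outcome density at the highest dose (assumed)

def w(x, x_max=100.0):
    """Monotone interpolation weight in [0, 1] across the dose range (assumed form)."""
    return np.clip(x / x_max, 0.0, 1.0)

def risk(x, cutoff=37.0):
    """P(outcome < cutoff | dose = x) under the convex mixture of extremal densities."""
    return (1 - w(x)) * f_low.cdf(cutoff) + w(x) * f_high.cdf(cutoff)

baseline = risk(0.0)
for dose in (0, 25, 50, 100):
    print(f"dose {dose:3d}: risk {risk(dose):.3f}, excess over baseline {risk(dose) - baseline:.3f}")
```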