
    Chain ladder method: Bayesian bootstrap versus classical bootstrap

    The intention of this paper is to estimate a Bayesian distribution-free chain ladder (DFCL) model using approximate Bayesian computation (ABC) methodology. We demonstrate how to estimate quantities of interest in claims reserving and compare the estimates to those obtained from classical and credibility approaches. In this context, we develop a novel numerical procedure utilising Markov chain Monte Carlo (MCMC), ABC and a Bayesian bootstrap procedure in a truly distribution-free setting. The ABC methodology arises because we work in a distribution-free setting in which we make no parametric assumptions, meaning we cannot evaluate the likelihood point-wise or, in this case, simulate directly from the likelihood model. The use of a bootstrap procedure allows us to generate samples from the intractable likelihood without distributional assumptions, which is crucial to the ABC framework. The developed methodology is used to obtain the empirical distribution of the DFCL model parameters and the predictive distribution of the outstanding loss liabilities conditional on the observed claims. We then obtain predictive Bayesian capital estimates, the Value at Risk (VaR) and the mean square error of prediction (MSEP). The latter is compared with the classical bootstrap and credibility methods.
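
    The combination of ABC with a bootstrap simulator described above can be given a rough shape in code. The following is a minimal, hypothetical Python sketch (not the paper's actual algorithm or data): age-to-age factors of a cumulative claims triangle are resampled with replacement in place of simulating from an intractable likelihood, and a draw is accepted when its summary statistics fall within an ABC tolerance of the observed ones. The function name, tolerance and choice of summary statistics are all illustrative assumptions.

```python
import numpy as np

def abc_bootstrap_chain_ladder(triangle, n_draws=5000, tol=0.05, rng=None):
    """Hypothetical sketch: ABC with a bootstrap simulator for a
    distribution-free chain ladder model.  `triangle` is a cumulative
    claims triangle (entries below the anti-diagonal may be NaN)."""
    rng = np.random.default_rng(rng)
    n = triangle.shape[0]
    # Observed age-to-age (development) factors for each development period
    obs_factors = [triangle[: n - j - 1, j + 1] / triangle[: n - j - 1, j]
                   for j in range(n - 1)]
    obs_summary = np.array([f.mean() for f in obs_factors])

    accepted = []
    for _ in range(n_draws):
        # Bootstrap simulator: resample the observed factors with replacement
        # instead of sampling from an (unavailable) parametric likelihood.
        sim_factors = [rng.choice(f, size=f.size, replace=True) for f in obs_factors]
        sim_summary = np.array([f.mean() for f in sim_factors])
        # ABC accept/reject step on a relative distance between summaries
        if np.linalg.norm(sim_summary - obs_summary) / np.linalg.norm(obs_summary) < tol:
            accepted.append(sim_summary)
    return np.array(accepted)   # empirical distribution of the DFCL factors
```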

    Model Assessment Tools for a Model False World

    A standard goal of model evaluation and selection is to find a model that approximates the truth well while at the same time is as parsimonious as possible. In this paper we emphasize the point of view that the models under consideration are almost always false, if viewed realistically, and so we should analyze model adequacy from that point of view. We investigate this issue in large samples by looking at a model credibility index, which is designed to serve as a one-number summary measure of model adequacy. We define the index to be the maximum sample size at which samples from the model and those from the true data generating mechanism are nearly indistinguishable. We use standard notions from hypothesis testing to make this definition precise. We use data subsampling to estimate the index. We show that the definition leads us to some new ways of viewing models as flawed but useful. The concept is an extension of the work of Davies [Statist. Neerlandica 49 (1995) 185--245]. Published in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org), http://dx.doi.org/10.1214/09-STS302.
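
    One way such a credibility index could be estimated by subsampling: repeatedly draw subsamples of increasing size, test each against the fitted model, and report the largest size at which the model is still not rejected more often than the nominal level. The sketch below is a hypothetical illustration under those assumptions (a Kolmogorov-Smirnov test, a geometric grid of subsample sizes), not the paper's exact procedure.

```python
import numpy as np
from scipy import stats

def credibility_index(data, model_cdf, alpha=0.05, n_rep=200, rng=None):
    """Hypothetical sketch of a model credibility index: the largest
    subsample size at which subsamples of the data are, on average, not
    rejected against the fitted model by a KS test at level alpha."""
    rng = np.random.default_rng(rng)
    data = np.asarray(data, dtype=float)
    best = 0
    for m in np.unique(np.geomspace(10, data.size, 20).astype(int)):
        rejections = 0
        for _ in range(n_rep):
            sub = rng.choice(data, size=m, replace=False)
            if stats.kstest(sub, model_cdf).pvalue < alpha:
                rejections += 1
        if rejections / n_rep <= alpha:   # model still "credible" at size m
            best = m
        else:
            break
    return best

# Example usage with an illustrative fitted model:
# index = credibility_index(data, stats.norm(loc=0.0, scale=1.0).cdf)
```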

    A Simple Multiple Variance-Ratio Test Based on Ranks

    Using Chow and Denning's arguments applied to the individual hypothesis test methodology of Wright (2000), I propose a multiple variance-ratio test based on ranks to investigate the hypothesis of no serial correlation. This rank joint test can be exact if data are i.i.d. Some Monte Carlo simulations show that its size distortions are small for observations obeying the martingale hypothesis while not being an i.i.d. process. Also, regarding size and power, it compares favorably with other popular tests. Keywords: random walk hypothesis; nonparametric test; variance-ratio test.
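
    As a hypothetical illustration of the kind of statistic involved (not the paper's exact formulation), the sketch below computes Wright-style rank-based variance-ratio statistics for several holding periods and combines them, in the spirit of Chow and Denning, by taking the maximum absolute value as the joint statistic. The holding periods, standardization and names are assumptions.

```python
import numpy as np
from scipy.stats import rankdata

def rank_vr_statistics(x, ks=(2, 4, 8, 16)):
    """Hypothetical sketch of rank-based variance-ratio statistics with a
    Chow-Denning-style joint statistic (maximum absolute value over ks)."""
    x = np.asarray(x, dtype=float)
    T = x.size
    # Standardized ranks (mean 0, variance approximately 1 under i.i.d. data)
    r = (rankdata(x) - (T + 1) / 2) / np.sqrt((T - 1) * (T + 1) / 12)
    stats_k = []
    for k in ks:
        # Variance ratio of k-period sums of the standardized ranks
        vr = np.var([r[i:i + k].sum() for i in range(T - k + 1)]) / (k * np.var(r))
        # Usual asymptotic standardization of the individual VR statistic
        phi = 2 * (2 * k - 1) * (k - 1) / (3 * k * T)
        stats_k.append((vr - 1) / np.sqrt(phi))
    return np.array(stats_k), np.abs(stats_k).max()
```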

    A Manifesto for the Equifinality Thesis.

    This essay discusses some of the issues involved in the identification and prediction of hydrological models given some calibration data. The reasons for the incompleteness of traditional calibration methods are discussed. The argument is made that the potential for multiple acceptable models as representations of hydrological and other environmental systems (the equifinality thesis) should be given more serious consideration than hitherto. It proposes some techniques for an extended GLUE methodology to make it more rigorous and outlines some of the research issues still to be resolved.
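
    The core GLUE idea referred to here, sampling many parameter sets, scoring each simulation with an informal likelihood measure and retaining the "behavioural" ones as an equifinal set, can be sketched as follows. This is a generic, hypothetical illustration (Nash-Sutcliffe efficiency as the likelihood measure, a fixed behavioural threshold, user-supplied `run_model` and `sample_params` callables), not the extended methodology proposed in the essay.

```python
import numpy as np

def glue(run_model, sample_params, observed, n_samples=10000,
         threshold=0.6, rng=None):
    """Hypothetical GLUE-style sketch: keep all parameter sets whose
    Nash-Sutcliffe efficiency exceeds a behavioural threshold and use the
    normalized efficiencies as likelihood weights for prediction."""
    rng = np.random.default_rng(rng)
    observed = np.asarray(observed, dtype=float)
    denom = np.sum((observed - observed.mean()) ** 2)
    behavioural, weights = [], []
    for _ in range(n_samples):
        theta = sample_params(rng)          # draw a candidate parameter set
        sim = run_model(theta)              # simulated series, same length as observed
        nse = 1.0 - np.sum((observed - sim) ** 2) / denom
        if nse > threshold:                 # "behavioural" model
            behavioural.append(theta)
            weights.append(nse)
    if not behavioural:                     # no acceptable models found
        return [], np.array([])
    weights = np.array(weights)
    return behavioural, weights / weights.sum()   # equifinal set + likelihood weights
```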

    Bootstrapping financial time series

    It is well known that time series of returns are characterized by volatility clustering and excess kurtosis. Therefore, when modelling the dynamic behavior of returns, inference and prediction methods based on independent and/or Gaussian observations may be inadequate. As bootstrap methods are not, in general, based on any particular assumption about the distribution of the data, they are well suited for the analysis of returns. This paper reviews the application of bootstrap procedures for inference and prediction of financial time series. In relation to inference, bootstrap techniques have been applied to obtain the sample distribution of statistics for testing, for example, autoregressive dynamics in the conditional mean and variance, unit roots in the mean, fractional integration in volatility and the predictive ability of technical trading rules. On the other hand, bootstrap procedures have been used to estimate the distribution of returns, which is of interest, for example, for Value at Risk (VaR) models or for prediction purposes. Although the application of bootstrap techniques to the empirical analysis of financial time series is very broad, there are few analytical results on the statistical properties of these techniques when applied to heteroscedastic time series. Furthermore, there are quite a few papers where the bootstrap procedures used are not adequate.
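
    For the prediction/VaR use mentioned above, one simple dependence-preserving resampling scheme is the block bootstrap. The sketch below is a hypothetical illustration: a circular block bootstrap of a return series, used to build a bootstrap distribution of the empirical Value at Risk. Block length, confidence level and names are illustrative choices, not recommendations from the paper.

```python
import numpy as np

def block_bootstrap_var(returns, block_len=20, n_boot=2000, alpha=0.01, rng=None):
    """Hypothetical sketch: circular block bootstrap of a return series
    (to preserve volatility clustering) and the resulting bootstrap
    distribution of the alpha-level Value at Risk."""
    rng = np.random.default_rng(rng)
    returns = np.asarray(returns, dtype=float)
    T = returns.size
    n_blocks = int(np.ceil(T / block_len))
    var_boot = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, T, size=n_blocks)
        idx = (starts[:, None] + np.arange(block_len)) % T   # circular blocks
        resampled = returns[idx].ravel()[:T]
        var_boot[b] = -np.quantile(resampled, alpha)         # VaR as a positive loss
    return var_boot
```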

    Open TURNS: An industrial software for uncertainty quantification in simulation

    The needs to assess robust performances for complex systems and to answer tighter regulatory processes (security, safety, environmental control, health impacts, etc.) have led to the emergence of a new industrial simulation challenge: to take uncertainties into account when dealing with complex numerical simulation frameworks. Therefore, a generic methodology has emerged from the joint effort of several industrial companies and academic institutions. EDF R&D, Airbus Group and Phimeca Engineering started a collaboration at the beginning of 2005, joined by IMACS in 2014, for the development of an open source software platform dedicated to uncertainty propagation by probabilistic methods, named OpenTURNS for Open source Treatment of Uncertainty, Risk 'N Statistics. OpenTURNS addresses the specific industrial challenges attached to uncertainties, which are transparency, genericity, modularity and multi-accessibility. This paper focuses on OpenTURNS and presents its main features: OpenTURNS is an open source software under the LGPL license, which presents itself as a C++ library and a Python TUI, and which works under Linux and Windows environments. All the methodological tools are described in the different sections of this paper: uncertainty quantification, uncertainty propagation, sensitivity analysis and metamodeling. A section also explains the generic wrapper mechanism used to link OpenTURNS to any external code. The paper illustrates the methodological tools as much as possible on an educational example that simulates the height of a river and compares it to the height of a dyke that protects industrial facilities. Finally, it gives an overview of the main developments planned for the next few years.
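
    A minimal sketch of what uncertainty propagation through the OpenTURNS Python interface can look like, loosely inspired by the river/dyke example mentioned in the abstract. The input distributions, their parameters, the channel geometry and the dyke crest level used below are illustrative assumptions, not the paper's exact specification.

```python
# Hypothetical sketch assuming the OpenTURNS Python interface; all numbers are illustrative.
import openturns as ot

# Step 1 -- uncertainty quantification: probabilistic model of the inputs
Q  = ot.Uniform(500.0, 3000.0)                  # river flow rate (m3/s), illustrative range
Ks = ot.TruncatedNormal(30.0, 7.5, 15.0, 45.0)  # Strickler friction coefficient
Zv = ot.Uniform(49.0, 51.0)                     # downstream river bed level (m)
Zm = ot.Uniform(54.0, 56.0)                     # upstream river bed level (m)
inputs = ot.ComposedDistribution([Q, Ks, Zv, Zm])   # independent inputs by default

# Step 2 -- deterministic simulator: water level minus an assumed dyke crest at 55.5 m
def overflow(x):
    q, ks, zv, zm = x
    h = (q / (ks * 300.0 * ((zm - zv) / 5000.0) ** 0.5)) ** 0.6  # water height
    return [h + zv - 55.5]        # a positive value means the dyke is overtopped

g = ot.PythonFunction(4, 1, overflow)

# Step 3 -- uncertainty propagation by simple Monte Carlo sampling
X = ot.RandomVector(inputs)
Y = ot.CompositeRandomVector(g, X)
sample = Y.getSample(10000)
p_overflow = sum(1 for y in sample if y[0] > 0.0) / 10000.0
print("Estimated overflow probability:", p_overflow)
```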