
    Functional laws for trimmed Lévy processes

    Two different ways of trimming the sample path of a stochastic process in [0, 1], global ('trim as you go') trimming and record-time ('lookback') trimming, are analysed to find conditions for the corresponding operators to be continuous with respect to the (strong) J1 topology. A key condition is that there should be no ties among the largest ordered jumps of the limit process. As an application of the theory, via the continuous mapping theorem, we prove limit theorems for trimmed Lévy processes, using the functional convergence of the underlying process to a stable process. The results are applied to a reinsurance ruin time problem. B. Buchmann and R. Maller's research was partially funded by the Australian Research Council (ARC) (grant numbers DP1092502 and DP160104737). Y. Ipsen was formerly at the ARC Centre of Excellence for Mathematical and Statistical Frontiers, School of Mathematics and Statistics, University of Melbourne; she acknowledges support from the ARC.
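
    On a discretized path, magnitude trimming can be sketched as removing the r largest increments. The following is a finite-sample illustration with Cauchy increments as a stand-in for a heavy-tailed process; the function name and parameters are hypothetical, and this is not the paper's operator-level construction on the J1 space.

```python
import numpy as np

def trim_largest_jumps(increments, r):
    """Remove the r largest increments (by modulus) from a discretized
    sample path and return the trimmed cumulative path.

    A finite-sample analogue of deleting a process's r largest jumps;
    the discretization and name are illustrative only."""
    idx = np.argsort(np.abs(increments))[-r:]   # indices of the r largest jumps
    trimmed = increments.copy()
    trimmed[idx] = 0.0                          # delete those jumps
    return np.cumsum(trimmed)

# Heavy-tailed (Cauchy) increments as a stand-in for a stable process.
rng = np.random.default_rng(0)
inc = rng.standard_cauchy(1000)
trimmed_path = trim_largest_jumps(inc, r=5)
# The trimmed path has no move larger than the biggest surviving jump.
assert np.max(np.abs(np.diff(trimmed_path))) <= np.max(np.abs(inc))
```

    Note that when two jumps tie in modulus, `argsort` breaks the tie arbitrarily, which is a discrete echo of the no-ties condition the paper needs for continuity of the trimming operators.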

    Robust and efficient estimation of the shape parameter of alpha-stable distributions

    In this paper we consider robust and efficient estimators of the shape parameter of symmetric alpha-stable distributions, obtained using the Minimum Density Power Divergence method introduced in Basu, Harris, Hjort and Jones (1998). We establish their high asymptotic efficiency and verify these results in simulations. The functionals corresponding to the estimators have bounded influence functions, and simulations confirm their robustness when the sample distribution is in a vicinity of the model distribution. The simulations also show that the Minimum Density Power Divergence Estimators (MDPDEs) of the shape parameter of alpha-stable distributions outperform other existing estimators. The high efficiency combined with the robustness of the MDPDEs in estimating the shape parameter of alpha-stable distributions makes them an attractive alternative to the estimation procedures previously considered in the literature.
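
    The density power divergence criterion of Basu et al. (1998) can be sketched in a few lines. The sketch below uses a Gaussian location model with known unit scale rather than the stable shape parameter (stable densities have no closed form, so the paper's actual estimator is more involved); the function name and the tuning constant a = 0.5 are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def mdpde_location(x, a=0.5):
    """Minimum density power divergence estimate (Basu et al., 1998) of a
    Gaussian location parameter with known unit scale, tuning constant a > 0.

    Minimizes  H_n(mu) = int f_mu^(1+a) dx - (1 + 1/a) * mean(f_mu(x_i)^a).
    For N(mu, 1) the integral term equals (1+a)^(-1/2) * (2*pi)^(-a/2),
    independent of mu."""
    integral = (1 + a) ** -0.5 * (2 * np.pi) ** (-a / 2)

    def objective(mu):
        return integral - (1 + 1 / a) * np.mean(norm.pdf(x, loc=mu) ** a)

    # Search a robust interval around the sample median.
    med = np.median(x)
    return minimize_scalar(objective, bounds=(med - 3, med + 3),
                           method="bounded").x

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(2.0, 1.0, 200), [50.0, 60.0]])  # two gross outliers
mu_hat = mdpde_location(x)  # stays close to 2; the sample mean is dragged upward
```

    The bounded influence function shows up directly: the outliers' density values are astronomically small, so their a-th powers contribute essentially nothing to the objective.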

    Bootstrap Methods for Heavy-Tail or Autocorrelated Distributions with an Empirical Application

    Chapter One: The Truncated Wild Bootstrap for the Asymmetric Infinite Variance Case. The wild bootstrap method proposed by Cavaliere et al. (2013) for hypothesis testing on the location parameter in the location model, with errors in the domain of attraction of an asymmetric stable law, is inappropriate. We therefore introduce a new bootstrap test procedure that overcomes the failure of Efron's (1979) resampling bootstrap. This test exploits the wild bootstrap of Cavaliere et al. (2013) and the central limit theorem for trimmed variables of Berkes et al. (2012) to deliver confidence sets with correct asymptotic coverage probabilities for asymmetric heavy-tailed data. The method entails locating cut-off values such that all data between these two values satisfy the conditions of the central limit theorem; since it takes advantage of both findings, the proposed procedure is termed the Truncated Wild Bootstrap (TWB). Simulation evidence on the quality of inference of the available bootstrap tests for this model reveals that, on most occasions, the TWB performs better than the parametric bootstrap (PB) of Cornea-Madeira & Davidson (2015). In addition, the TWB scheme is superior to the PB because it can test the location parameter when the index of stability is below one, whereas the PB has no power in that case. The TWB is also superior to the PB when the tail index is close to 1 and the distribution is heavily skewed, unless the tail index is exactly 1 and the scale parameter is very high.

    Chapter Two: A Frequency Domain Wild Bootstrap for Dependent Data. In this chapter a resampling method is proposed for a stationary dependent time series, based on Rademacher wild bootstrap draws from the Fourier transform of the data. The main distinguishing feature of our method is that the bootstrap draws share their periodogram identically with the sample, implying sound properties under dependence of arbitrary form. A drawback of the basic procedure is that the bootstrap distribution of the mean is degenerate. We show that a simple Gaussian augmentation overcomes this difficulty. Monte Carlo evidence indicates a favourable comparison with alternative methods in tests of location and significance in a regression model with autocorrelated shocks, and also of unit roots.

    Chapter Three: Frequency-Based Bootstrap Methods for DC Pension Plan Strategy Evaluation. Using conventional bootstrap methods, such as the standard bootstrap and the moving block bootstrap, to produce long-run returns and rank one strategy over another by its associated reward and risk can be misleading. In this chapter we therefore use a simple pension model, mainly concerned with long-term wealth accumulation, to assess different bootstrap methods, for the first time in the pension literature. We find that the multivariate Fourier bootstrap gives the most satisfactory result in its ability to mimic the true distribution, as measured by Cramér-von Mises statistics. We also address the disagreement in the pension literature on selecting the best pension plan strategy, presenting a comprehensive comparison of strategies using different bootstrap procedures and different cash-flow performance (CFP) measures across a range of countries. We find that the choice of bootstrap method plays a critical role in determining the optimal strategy; moreover, different CFP measures rank pension plans differently across countries and bootstrap methods.
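
    The periodogram-sharing property of the frequency-domain wild bootstrap can be sketched in a few lines (a minimal illustration, not the thesis's full procedure): multiplying each real-FFT coefficient by an independent Rademacher ±1 draw leaves every |c_k|, and hence the periodogram, unchanged, while the real ±1 factors preserve conjugate symmetry so the inverse transform stays real.

```python
import numpy as np

def fourier_wild_bootstrap(x, rng):
    """One frequency-domain wild bootstrap draw: flip the sign of each
    real-FFT coefficient with an independent Rademacher (+/-1) variable.
    Since |+/-1 * c_k| = |c_k|, the draw shares its periodogram with the
    sample exactly.  Illustrative sketch only."""
    coeffs = np.fft.rfft(x)
    signs = rng.choice([-1.0, 1.0], size=coeffs.shape)
    return np.fft.irfft(coeffs * signs, n=len(x))

rng = np.random.default_rng(0)
# An autocorrelated series (random walk component plus noise).
x = np.cumsum(rng.normal(size=256)) * 0.1 + rng.normal(size=256)
xb = fourier_wild_bootstrap(x, rng)
# Periodograms are identical by construction.
assert np.allclose(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(xb)))
```

    Note that the k = 0 coefficient can only flip sign, so each bootstrap mean is ± the sample mean: exactly the degeneracy of the mean's bootstrap distribution that the chapter resolves with a Gaussian augmentation.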

    The Effects of Largest Claim and Excess of Loss Reinsurance on a Company's Ruin Time and Valuation

    We compare two types of reinsurance: excess of loss (EOL) and largest claim reinsurance (LCR), each of which transfers the payment of part, or all, of one or more large claims from the primary insurance company (the cedant) to a reinsurer. The analysis takes the primary insurer's point of view, in terms of risk assessment and payment of the reinsurance premium. A utility indifference rationale based on the expected future dividend stream is used to value the company with and without reinsurance. Assuming the classical compound Poisson risk model with a choice of claim size distributions (classified as heavy-, medium- and light-tailed cases), simulations are used to illustrate the impact of the EOL and LCR treaties on the company's ruin probability, ruin time and value as determined by the dividend discounting model. We find that LCR is at least as effective as EOL in averting ruin in comparable finite-time-horizon settings. In instances where the ruin probability for LCR is smaller than for EOL, the dividend discount model shows that the cedant is able to pay a larger portion of the dividend under LCR than under EOL while still maintaining company value. Both methods reduce risk considerably compared with no reinsurance, in a variety of situations, as measured by the standard deviation of the company value. A further interesting finding is that heaviness of tails alone is not necessarily the decisive factor in the possible ruin of a company; small and moderate sized claims can also play a significant role.
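
    In simplified one-period form, the cedant's retained claims under the two treaties can be sketched as follows (a sketch on a single batch of hypothetical claim amounts; the paper itself works in the compound Poisson model, and LCR variants differ in exactly how much of each top claim is ceded).

```python
import numpy as np

def retained_eol(claims, retention):
    """Excess of loss: the reinsurer pays the part of each claim above
    the retention level, so the cedant keeps min(claim, retention)."""
    return np.minimum(claims, retention)

def retained_lcr(claims, r):
    """Largest claim reinsurance: the reinsurer pays the r largest
    claims in full; the cedant keeps the rest."""
    ordered = np.sort(claims)
    return ordered[:-r] if r > 0 else ordered

claims = np.array([10.0, 50.0, 200.0, 5.0])
print(retained_eol(claims, 100.0).sum())  # 165.0
print(retained_lcr(claims, 1).sum())      # 65.0
```

    With these numbers LCR removes more of the aggregate loss than EOL because the single catastrophic claim dominates; with many moderate claims and a low retention, the comparison can reverse, which is the trade-off the simulations explore.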

    Measuring productivity dispersion: a parametric approach using the Lévy alpha-stable distribution

    It is well known that value added per worker is extremely heterogeneous among firms, but relatively little has been done to characterize this heterogeneity more precisely. Here we show that the distribution of value added per worker exhibits heavy tails, a very large support, and consistently features a proportion of negative values, which prevents log transformation. We propose to model the distribution of value added per worker using the four-parameter Lévy stable distribution, a natural candidate deriving from the Generalised Central Limit Theorem, and we show that it is a better fit than key alternatives. Fitting a distribution allows us to capture dispersion through the tail exponent and the scale parameter separately. We show that these parametric measures of dispersion are at least as useful as interquantile ratios, through case studies on the evolution of dispersion in recent years and the correlation between dispersion and intangible capital intensity.
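
    The Generalised Central Limit Theorem motivation can be illustrated numerically (a sketch with an illustrative Pareto tail index of 1.5; not the paper's data or fitting procedure): normalized sums of iid infinite-variance variables settle into a heavy-tailed stable shape rather than a Gaussian one, and a symmetrized sample naturally contains negative values.

```python
import numpy as np

rng = np.random.default_rng(42)
alpha, n, reps = 1.5, 1000, 2000

# Symmetrized Pareto variables: tail index alpha < 2, so variance is infinite.
u = rng.pareto(alpha, size=(reps, n)) * rng.choice([-1.0, 1.0], size=(reps, n))

# Generalised CLT: sums scaled by n**(1/alpha), not sqrt(n), approach an
# alpha-stable law, whose tail exponent and scale are separate parameters.
s = u.sum(axis=1) / n ** (1 / alpha)

# The limit keeps heavy tails: extremes far beyond a Gaussian of similar spread.
iqr = np.subtract(*np.percentile(s, [75, 25]))
print(np.max(np.abs(s)) / iqr)  # much larger than for Gaussian sums
```

    The interquartile range here plays the role of the interquantile ratios the paper compares against: a sample-quantile dispersion measure that, unlike the variance, remains finite and estimable for alpha < 2.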

    Robust Risk Management in the Context of Solvency II Regulations

    We start by defining the general notions of “risk” and “uncertainty” and by discussing the risk management process, in particular in a financial and insurance context. We see that robustness can be derived from these definitions as a necessary property of risk management procedures. In practice, however, regulatory requirements are of the highest importance to insurance companies. We therefore discuss the upcoming Solvency II regulations for the European insurance industry, again focusing on their implications for the use of robust quantitative methods in financial risk management. Next, we consider the ingredients needed for a robust quantitative risk management process. The first ingredient is probability distances: we discuss definitions, properties, and examples, the main one being the Wasserstein metric. Probability distances are a prerequisite for obtaining many of the results in robust statistics. Before applying the robustness results, we discuss axiomatic approaches to risk measures, on probability spaces as well as on data. Finally, we combine all the ingredients into the risk management procedure. Additionally, we discuss several approaches, in particular simulation-based ones, for the computation of the Solvency II capital requirement, and set up a mathematical framework for the introduction of a new algorithm. We conduct an empirical study of its performance.
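
    The Wasserstein metric between empirical distributions is available directly in SciPy. A minimal illustration on two hypothetical loss samples (all numbers illustrative, not from the thesis): the 1-Wasserstein distance is the minimal average transport needed to turn one empirical distribution into the other, so a single extreme observation shifts it by its probability mass times its displacement.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two hypothetical empirical loss samples, each point carrying mass 1/4.
losses_model = np.array([0.0, 1.0, 2.0, 3.0])
losses_data  = np.array([0.0, 1.0, 2.0, 30.0])  # one extreme observation

d = wasserstein_distance(losses_model, losses_data)
print(d)  # 6.75: mass 1/4 transported a distance of 27
```

    This finite sensitivity to a single outlier (1/4 × 27, not infinity) is what makes Wasserstein-type distances a workable basis for the robustness results the thesis builds on.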

    Ocean currents promote rare species diversity in protists

    Oceans host communities of plankton composed of relatively few abundant species and many rare species. The number of rare protist species in these communities, as estimated in metagenomic studies, decays as a steep power law of their abundance. The ecological factors at the origin of this pattern remain elusive. We propose that chaotic advection by oceanic currents affects the biodiversity patterns of rare species. To test this hypothesis, we introduce a spatially explicit coalescence model that reconstructs the species diversity of a sample of water. In the presence of chaotic advection, our model predicts a steeper power-law decay of the species abundance distribution and a steeper increase of the number of observed species with sample size. A comparison of metagenomic studies of planktonic protist communities in oceans and in lakes quantitatively confirms our prediction. Our results suggest that oceanic currents positively affect the diversity of rare aquatic microbes.
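
    As a non-spatial stand-in for such a coalescence model, the neutral Chinese-restaurant construction sketches how a sample's species abundance distribution arises: each sampled individual either founds a new species or joins an existing one in proportion to its abundance. This is emphatically not the paper's spatially explicit model, and theta and the sample size are illustrative parameters only.

```python
import numpy as np
from collections import Counter

def sample_species(n, theta, rng):
    """Neutral sampling via the Chinese-restaurant construction: the
    i-th individual founds a new species with probability
    theta/(theta + i), else joins an existing species with probability
    proportional to its current abundance.  theta is an illustrative
    diversity parameter; not the paper's spatial model."""
    labels = []
    for i in range(n):
        if rng.random() < theta / (theta + i):
            labels.append(len(set(labels)))         # found a new species
        else:
            labels.append(labels[rng.integers(i)])  # join one, size-biased
    return Counter(labels)

rng = np.random.default_rng(0)
abund = sample_species(2000, theta=20.0, rng=rng)
counts = sorted(abund.values(), reverse=True)
# Few abundant species and many rare ones, the qualitative pattern the
# abstract describes; richness grows roughly like theta * log(sample size).
print(len(counts), counts[:5])
```

    Comparing the species-abundance tail of such neutral samples against the steeper metagenomic decay is, in spirit, the kind of contrast the paper's advection model is built to explain.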