4 research outputs found

    Bootstrap Methods for Heavy-Tail or Autocorrelated Distributions with an Empirical Application

    Get PDF
    Chapter One: The Truncated Wild Bootstrap for the Asymmetric Infinite Variance Case. The wild bootstrap method proposed by Cavaliere et al. (2013) for testing hypotheses about the location parameter in the location model, with errors in the domain of attraction of an asymmetric stable law, is inappropriate in this setting. We therefore introduce a new bootstrap test procedure that overcomes the failure of Efron's (1979) resampling bootstrap. The test combines the wild bootstrap of Cavaliere et al. (2013) with the central limit theorem for trimmed variables of Berkes et al. (2012) to deliver confidence sets with correct asymptotic coverage probabilities for asymmetric heavy-tailed data. The method locates two cut-off values such that all observations lying between them satisfy the conditions of the central limit theorem; because it exploits both results, the proposed procedure is termed the Truncated Wild Bootstrap (TWB; a minimal sketch appears after this abstract). Simulation evidence on the quality of inference of the available bootstrap tests for this model shows that the TWB usually performs better than the parametric bootstrap (PB) of Cornea-Madeira & Davidson (2015). In addition, the TWB can test the location parameter when the index of stability is below one, a case in which the PB has no power. The TWB is also superior to the PB when the tail index is close to 1 and the distribution is heavily skewed, unless the tail index is exactly 1 and the scale parameter is very high.

    Chapter Two: A Frequency Domain Wild Bootstrap for Dependent Data. This chapter proposes a resampling method for stationary dependent time series, based on Rademacher wild bootstrap draws from the Fourier transform of the data (sketched after this abstract). The main distinguishing feature of the method is that the bootstrap draws share their periodogram identically with the sample, implying sound properties under dependence of arbitrary form. A drawback of the basic procedure is that the bootstrap distribution of the mean is degenerate; we show that a simple Gaussian augmentation overcomes this difficulty. Monte Carlo evidence indicates a favourable comparison with alternative methods in tests of location, of significance in a regression model with autocorrelated shocks, and of unit roots.

    Chapter Three: Frequency-Based Bootstrap Methods for DC Pension Plan Strategy Evaluation. Using conventional bootstrap methods, such as the standard bootstrap and the moving block bootstrap, to generate long-run returns and then rank one strategy over another by its associated reward and risk can be misleading. In this chapter we therefore use a simple pension model, concerned mainly with long-term wealth accumulation, to assess different bootstrap methods for the first time in the pension literature. We find that the Multivariate Fourier Bootstrap mimics the true distribution most satisfactorily, as measured by the Cramér-von Mises statistic (illustrated after this abstract). We also address the disagreement in the pension literature over selecting the best pension plan strategy, presenting a comprehensive comparison of strategies using different bootstrap procedures and different cash-flow performance (CFP) measures across a range of countries. We find that the choice of bootstrap method plays a critical role in determining the optimal strategy, and that different CFP measures rank pension plans differently across countries and bootstrap methods.
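    To make the Chapter One procedure concrete, here is a minimal sketch in Python of one truncated wild bootstrap step, under stated assumptions: the cut-off values are taken as fixed empirical quantiles (the thesis derives them from the trimmed central limit theorem of Berkes et al. (2012)), and the trimming fraction, sample, and function name are all illustrative rather than the thesis's actual procedure.

        import numpy as np

        def truncated_wild_bootstrap_means(x, trim=0.05, n_boot=999, seed=0):
            # Locate two cut-off values; observations between them are kept
            # so that the truncated sample satisfies CLT-type conditions.
            # `trim` is a hypothetical tuning parameter for illustration.
            rng = np.random.default_rng(seed)
            lo, hi = np.quantile(x, [trim, 1.0 - trim])
            z = x[(x >= lo) & (x <= hi)]
            z = z - z.mean()                        # centre the truncated sample
            # Rademacher wild bootstrap: multiply by independent +/-1 signs.
            eps = rng.choice([-1.0, 1.0], size=(n_boot, z.size))
            return (eps * z).mean(axis=1)           # bootstrap means

        x = np.random.default_rng(1).standard_cauchy(500)   # heavy-tailed sample
        boot = truncated_wild_bootstrap_means(x)
        print(np.quantile(boot, [0.025, 0.975]))            # bootstrap interval endpoints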
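    The Chapter Two construction can be sketched in the same spirit. Multiplying each Fourier coefficient by an independent Rademacher sign leaves every periodogram ordinate |X_k|^2 unchanged, which is exactly the property the abstract highlights; working with the half-spectrum keeps the implied signs conjugate-symmetric, so the resampled series stays real-valued. The Gaussian augmentation for the degenerate mean is only flagged in a comment, since its precise form is specific to the chapter.

        import numpy as np

        def fourier_wild_bootstrap(x, seed=0):
            # One frequency-domain wild bootstrap draw (illustrative sketch).
            rng = np.random.default_rng(seed)
            coeffs = np.fft.rfft(x - x.mean())      # half-spectrum of centred data
            eps = rng.choice([-1.0, 1.0], size=coeffs.size)
            # With the centred series, the k = 0 coefficient is zero, so the
            # bootstrap mean is degenerate; the chapter's Gaussian augmentation
            # (not reproduced here) overcomes this.
            return np.fft.irfft(coeffs * eps, n=x.size) + x.mean()

        x = np.cumsum(np.random.default_rng(2).normal(size=256))  # dependent series
        x_star = fourier_wild_bootstrap(x)
        # The draw shares its periodogram with the sample:
        p = np.abs(np.fft.rfft(x - x.mean())) ** 2
        p_star = np.abs(np.fft.rfft(x_star - x_star.mean())) ** 2
        print(np.allclose(p, p_star))               # True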
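    Finally, the Chapter Three evaluation criterion reduces to measuring how closely a bootstrap-generated distribution of long-run outcomes matches the target one. Below is a hedged sketch using SciPy's two-sample Cramér-von Mises test on placeholder terminal-wealth samples; the lognormal inputs are invented and stand in for the thesis's pension model outputs.

        import numpy as np
        from scipy.stats import cramervonmises_2samp

        rng = np.random.default_rng(3)
        # Placeholder samples of terminal wealth under a given strategy.
        true_wealth = rng.lognormal(mean=4.0, sigma=0.30, size=2000)
        boot_wealth = rng.lognormal(mean=4.0, sigma=0.35, size=2000)

        # A smaller statistic means the bootstrap distribution mimics the
        # target distribution more closely.
        res = cramervonmises_2samp(true_wealth, boot_wealth)
        print(res.statistic, res.pvalue)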

    Usage of Cholesky Decomposition in order to Decrease the Nonlinear Complexities of Some Nonlinear and Diversification Models and Present a Model in Framework of Mean-Semivariance for Portfolio Performance Evaluation

    No full text
    Nonlinear models and nonlinear DEA (diversification) models are widely used to obtain the efficient frontier and to evaluate portfolio performance. One of the most fundamental problems with these models is their computational complexity. This paper therefore presents a method to reduce the nonlinear complexity and simplify the calculations of nonlinear and diversification models built on the variance-covariance matrix. For this purpose, we apply a linear transformation obtained from the Cholesky decomposition of the covariance matrix, which eliminates the linear correlation among the financial assets (a sketch of this step follows the abstract). Furthermore, variance is an appropriate risk criterion only when the distribution of stock returns is normal and symmetric, which does not occur in reality. Investors in financial markets also react unequally to positive and negative movements in stock prices, finding positive movements more desirable and being more sensitive to negative ones. We therefore present a diversification model in the mean-semivariance framework that is based on the investor's desirability of positive movements and sensitivity to negative ones, where the degree of this desirability or sensitivity can be controlled by a coefficient.
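    The decorrelation step can be illustrated with a short sketch. If the covariance matrix is factored as Sigma = L L^T (Cholesky decomposition, L lower triangular), the transformed returns y = L^{-1} r have identity covariance, so the quadratic covariance terms in the optimisation collapse to a simple sum of squares. The dimensions and numbers below are invented for illustration; the paper's actual models are not reproduced.

        import numpy as np

        rng = np.random.default_rng(0)
        # Hypothetical returns for 4 correlated assets (rows = observations).
        cov = np.array([[1.0, 0.5, 0.3, 0.2],
                        [0.5, 1.0, 0.4, 0.1],
                        [0.3, 0.4, 1.0, 0.3],
                        [0.2, 0.1, 0.3, 1.0]])
        r = rng.multivariate_normal(np.zeros(4), cov, size=1000)

        sigma = np.cov(r, rowvar=False)     # sample covariance matrix
        L = np.linalg.cholesky(sigma)       # sigma = L @ L.T, L lower triangular

        # y = L^{-1} r removes the linear correlation among the assets:
        # cov(y) = L^{-1} sigma L^{-T} = I.
        y = np.linalg.solve(L, r.T).T
        print(np.round(np.cov(y, rowvar=False), 2))   # approximately the identity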

    High-Performance Modelling and Simulation for Big Data Applications

    Get PDF
    This open access book was prepared as a final publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering, and as their level of abstraction rises to give a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. High Performance Computing, in turn, typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Proceedings of the 1991 Symposium on Systems Analysis in Forest Resources

    Full text link