
    Asymptotic refinements of bootstrap tests in a linear regression model ; A CHM bootstrap using the first four moments of the residuals

    We consider linear regression models in which the disturbances may be Gaussian or non-Gaussian. Using Edgeworth expansions, we compute the exact errors in rejection probability (ERPs) for all one-restriction tests (asymptotic and bootstrap) that can occur in these linear models. More precisely, we show that the ERP of an asymptotic test is the same as that of the classical parametric bootstrap test based on it as soon as the third cumulant is nonzero. On the other hand, the nonparametric bootstrap almost always performs better than the parametric bootstrap, with two exceptions. The first occurs when the third and fourth cumulants are null; in this case the parametric and nonparametric bootstraps give exactly the same ERPs. The second occurs when we perform a t-test, or its associated bootstrap test (parametric or not), in the models y = μ + u and y = ax + u where the disturbances have a nonzero kurtosis coefficient and a skewness coefficient equal to zero; in that case, the ERPs of all the tests considered (asymptotic or bootstrap) are of the same order. Finally, we provide a new parametric bootstrap that uses the first four moments of the distribution of the residuals and is as accurate as a nonparametric bootstrap, which uses these first four moments implicitly. We introduce it as the parametric bootstrap considering higher moments (CHM), and thus speak of the CHM parametric bootstrap.
    Keywords: nonparametric bootstrap, parametric bootstrap, cumulants, skewness, kurtosis
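As a concrete illustration of the classical parametric bootstrap t-test this abstract refers to, here is a minimal sketch (not code from the paper; the function names and setup are invented for illustration) for H0: a = 0 in the model y = ax + u, resampling the disturbances from a fitted Gaussian:

```python
import numpy as np

def t_stat(x, y):
    """OLS t statistic for the slope in y = a*x + u (no intercept)."""
    a_hat = x @ y / (x @ x)
    resid = y - a_hat * x
    s2 = resid @ resid / (len(y) - 1)
    se = np.sqrt(s2 / (x @ x))
    return a_hat / se, resid

def parametric_bootstrap_pvalue(x, y, B=999, seed=0):
    """Two-sided bootstrap p-value for H0: a = 0 with Gaussian errors."""
    rng = np.random.default_rng(seed)
    t_obs, resid = t_stat(x, y)
    sigma_hat = resid.std(ddof=1)        # error scale estimated from the fit
    t_boot = np.empty(B)
    for b in range(B):
        y_star = rng.normal(0.0, sigma_hat, size=len(y))  # impose H0: a = 0
        t_boot[b], _ = t_stat(x, y_star)
    return (1 + np.sum(np.abs(t_boot) >= abs(t_obs))) / (B + 1)
```

The CHM variant described in the abstract would instead draw the bootstrap disturbances from a distribution matched to the first four sample moments of the residuals, rather than from a Gaussian.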

    Higher-order Improvements of the Parametric Bootstrap for Markov Processes

    This paper provides bounds on the errors in coverage probabilities of maximum likelihood-based, percentile-t, parametric bootstrap confidence intervals for Markov time series processes. These bounds show that the parametric bootstrap for Markov time series provides higher-order improvements (over confidence intervals based on first-order asymptotics) that are comparable to those obtained by the parametric and nonparametric bootstrap for iid data, and better than those obtained by the block bootstrap for time series. Additional results are given for Wald-based confidence regions. The paper also shows that k-step parametric bootstrap confidence intervals achieve the same higher-order improvements as the standard parametric bootstrap for Markov processes. The k-step bootstrap confidence intervals are computationally attractive: they circumvent the need to solve a nonlinear optimization for each simulated bootstrap sample, which is necessary to implement the standard parametric bootstrap when the maximum likelihood estimator solves a nonlinear optimization problem.
    Keywords: asymptotics, Edgeworth expansion, Gauss-Newton, k-step bootstrap, maximum likelihood estimator, Newton-Raphson, parametric bootstrap, t statistic
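To make the percentile-t parametric bootstrap concrete for the simplest Markov process, here is a hedged sketch (my own illustration, not the paper's procedure) for an AR(1) model estimated by conditional least squares:

```python
import numpy as np

def ar1_fit(y):
    """Conditional least squares for y_t = rho * y_{t-1} + e_t."""
    y0, y1 = y[:-1], y[1:]
    rho = y0 @ y1 / (y0 @ y0)
    resid = y1 - rho * y0
    sigma = resid.std(ddof=1)
    se = sigma / np.sqrt(y0 @ y0)
    return rho, sigma, se

def percentile_t_ci(y, B=999, alpha=0.05, seed=0):
    """Percentile-t parametric bootstrap CI for the AR(1) coefficient."""
    rng = np.random.default_rng(seed)
    rho_hat, sigma_hat, se_hat = ar1_fit(y)
    n = len(y)
    t_boot = np.empty(B)
    for b in range(B):
        e = rng.normal(0.0, sigma_hat, size=n)
        ystar = np.empty(n)
        ystar[0] = y[0]                       # condition on the first value
        for t in range(1, n):
            ystar[t] = rho_hat * ystar[t - 1] + e[t]
        rho_b, _, se_b = ar1_fit(ystar)
        t_boot[b] = (rho_b - rho_hat) / se_b  # studentized bootstrap root
    lo_q, hi_q = np.quantile(t_boot, [alpha / 2, 1 - alpha / 2])
    return rho_hat - hi_q * se_hat, rho_hat - lo_q * se_hat
```

The k-step refinement the paper studies replaces the full re-estimation inside the loop with a fixed number of Newton (or Gauss-Newton) steps from the original estimate; in this linear example the estimator is closed-form, so the distinction does not arise.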

    A comparison of block and semi-parametric bootstrap methods for variance estimation in spatial statistics

    Efron (1979) introduced the bootstrap method for independent data, but it cannot easily be applied to spatial data because of their dependence. For spatial data that are correlated through their locations in the underlying space, the moving block bootstrap is usually used to estimate the precision of estimators. The precision of moving block bootstrap estimators depends on the block size, which is difficult to select, and the method is also known to underestimate the variance. In this paper, we first use the semi-parametric bootstrap, which exploits an estimate of the spatial correlation structure, to estimate the precision of estimators in spatial data analysis. We then compare the semi-parametric bootstrap with the moving block bootstrap for variance estimation in a simulation study. Finally, we apply the semi-parametric bootstrap to analyze the coal-ash data.
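For contrast with the semi-parametric approach, here is a minimal, illustrative sketch (not the paper's code) of moving block bootstrap variance estimation for the sample mean of a dependent series:

```python
import numpy as np

def moving_block_bootstrap_var(x, block_len, B=1000, seed=0):
    """Moving block bootstrap estimate of Var(sample mean) for a
    stationary, weakly dependent series."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # all overlapping blocks of the chosen length
    blocks = np.array([x[i:i + block_len] for i in range(n - block_len + 1)])
    k = n // block_len                    # blocks per pseudo-series
    means = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, len(blocks), size=k)
        means[b] = blocks[idx].ravel().mean()
    return means.var(ddof=1)
```

The answer depends visibly on `block_len`, which is exactly the tuning difficulty the abstract points out; the semi-parametric alternative replaces this choice with an estimated spatial correlation model.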

    Bias Reduction of Long Memory Parameter Estimators via the Pre-filtered Sieve Bootstrap

    This paper investigates bootstrap-based bias correction of semi-parametric estimators of the long memory parameter in fractionally integrated processes. The resampling method applies the sieve bootstrap to data pre-filtered by a preliminary semi-parametric estimate of the long memory parameter. Theoretical justification is provided for using these bootstrap techniques to bias-adjust log-periodogram and semi-parametric local Whittle estimators of the memory parameter. Simulation evidence comparing the performance of the bootstrap bias correction with analytical bias correction techniques is also presented. The bootstrap method is shown to produce notable bias reductions, in particular when applied to an estimator for which analytical adjustments have already been used. For reasonably large sample sizes, the empirical coverage of confidence intervals based on the bias-adjusted estimators is very close to the nominal level, more so than for the comparable analytically adjusted estimators. The precision of inference (as measured by interval length) is also greater when the bootstrap, rather than an analytical adjustment, is used for bias correction.
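The sieve step of the method can be sketched as follows. This is an illustrative plain sieve bootstrap for short-memory data; the paper additionally pre-filters the series by a preliminary estimate of the long memory parameter, which is omitted here, and the function name is invented:

```python
import numpy as np

def sieve_bootstrap(x, p, B=200, seed=0):
    """Sieve bootstrap: approximate the series by an AR(p), then resample
    the centred residuals i.i.d. to rebuild B pseudo-series."""
    rng = np.random.default_rng(seed)
    n = len(x)
    y = x[p:]
    X = np.column_stack([x[p - j:n - j] for j in range(1, p + 1)])
    phi, *_ = np.linalg.lstsq(X, y, rcond=None)   # AR coefficients
    resid = y - X @ phi
    resid = resid - resid.mean()                  # centre the residuals
    out = np.empty((B, n))
    for b in range(B):
        e = rng.choice(resid, size=n, replace=True)
        xb = np.zeros(n)
        xb[:p] = x[:p]                            # start from observed values
        for t in range(p, n):
            # lags xb[t-1], ..., xb[t-p] in the order matching phi
            xb[t] = phi @ xb[t - p:t][::-1] + e[t]
        out[b] = xb
    return out
```

In practice the sieve order p is allowed to grow with the sample size so that the AR approximation captures the dependence left after pre-filtering.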

    AIDS VERSUS THE ROTTERDAM DEMAND SYSTEM: A COX TEST WITH PARAMETRIC BOOTSTRAP

    A Cox test with parametric bootstrap is developed to select between the linearized version of the First-Difference Almost Ideal Demand System (FDAIDS) and the Rotterdam model. A Cox test with parametric bootstrap has been shown to be more powerful than encompassing tests like those used in past research. The bootstrap approach is applied to U.S. meat demand (beef, pork, chicken, fish) and compared to results obtained with an encompassing test. The Cox test with parametric bootstrap consistently indicates that the Rotterdam model is preferred to the FDAIDS, while the encompassing test sometimes fails to reject the FDAIDS.
    Subject: Research Methods/Statistical Methods

    Bootstrap tests for the error distribution in linear and nonparametric regression models

    In this paper we investigate several tests, based on empirical processes of residuals, for the hypothesis of a parametric form of the error distribution in the common linear and nonparametric regression models. It is well known that tests in this context are not asymptotically distribution-free, and the parametric bootstrap is applied to deal with this problem. The performance of the resulting bootstrap test is investigated from an asymptotic point of view and by means of a simulation study. The results demonstrate that even for moderate sample sizes the parametric bootstrap provides a reliable and easily accessible solution to the problem of goodness-of-fit testing of assumptions regarding the error distribution in linear and nonparametric regression models.
    Keywords: goodness-of-fit, residual process, parametric bootstrap, linear model, analysis of variance, M-estimation, nonparametric regression
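A minimal sketch of the parametric bootstrap recipe described here, specialized to testing Gaussian errors in a simple linear model with a Kolmogorov-Smirnov-type distance (the names and the choice of distance are my own, not the paper's):

```python
import numpy as np
from math import erf, sqrt

def ks_normal_resid(x, y):
    """Fit y = b0 + b1*x by OLS; return the KS distance between the
    empirical CDF of the residuals and the fitted N(0, sigma^2) CDF,
    plus the fitted quantities needed to re-simulate."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma = resid.std(ddof=2)
    u = np.sort(resid) / sigma
    F = np.array([0.5 * (1 + erf(v / sqrt(2))) for v in u])  # Phi(u)
    n = len(u)
    d = max(np.max(np.arange(1, n + 1) / n - F),
            np.max(F - np.arange(0, n) / n))
    return d, X, beta, sigma

def parametric_bootstrap_gof(x, y, B=499, seed=0):
    """p-value for H0: Gaussian errors, re-simulating from the fitted
    model and refitting on every bootstrap sample."""
    rng = np.random.default_rng(seed)
    d_obs, X, beta, sigma = ks_normal_resid(x, y)
    count = 0
    for _ in range(B):
        y_star = X @ beta + rng.normal(0.0, sigma, size=len(y))
        d_b, *_ = ks_normal_resid(x, y_star)
        count += d_b >= d_obs
    return (1 + count) / (B + 1)
```

Refitting inside the loop is what makes the bootstrap mimic the estimated-parameter effect that destroys the distribution-free property of the test.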

    AIDS VERSUS ROTTERDAM: A COX NONNESTED TEST WITH PARAMETRIC BOOTSTRAP

    A Cox nonnested test with parametric bootstrap is developed to select between the linearized version of the First-Difference Almost Ideal Demand System (FDAIDS) and the Rotterdam model. The Cox test with parametric bootstrap is expected to be more powerful than the various orthodox tests used in past research. The new approach is applied to U.S. meat demand (beef, pork, and chicken) and compared to results obtained with an orthodox test. The orthodox test gives inconsistent results; in contrast, under the same varied conditions, the Cox test with parametric bootstrap consistently indicates that the Rotterdam model is preferred to the FDAIDS.
    Subject: Demand and Price Analysis

    Goodness-of-fit testing based on a weighted bootstrap: A fast large-sample alternative to the parametric bootstrap

    The process comparing the empirical cumulative distribution function of the sample with a parametric estimate of the cumulative distribution function is known as the empirical process with estimated parameters and has been extensively employed in the literature for goodness-of-fit testing. The simplest way to carry out such goodness-of-fit tests, especially in a multivariate setting, is to use a parametric bootstrap. Although very easy to implement, the parametric bootstrap can become very computationally expensive as the sample size, the number of parameters, or the dimension of the data increases. An alternative resampling technique based on a fast weighted bootstrap is proposed in this paper, and is studied both theoretically and empirically. The outcome of this work is a generic and computationally efficient multiplier goodness-of-fit procedure that can be used as a large-sample alternative to the parametric bootstrap. In order to determine approximately how large the sample size needs to be for the parametric and weighted bootstraps to have roughly equivalent power, extensive Monte Carlo experiments are carried out in dimensions one, two, and three, and for models containing up to nine parameters. The computational gains resulting from the use of the proposed multiplier goodness-of-fit procedure are illustrated on trivariate financial data. A by-product of this work is a fast large-sample goodness-of-fit procedure for the bivariate and trivariate t distribution whose degrees of freedom are fixed.
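The core multiplier idea can be sketched on the simplest case of a fully specified null (testing uniformity), sidestepping the estimated-parameter refinements the paper develops; the function names are invented for this sketch:

```python
import numpy as np

def ks_stat(u):
    """sup_t |sqrt(n) (F_n(t) - t)| for a sample claimed to be U(0,1)."""
    n = len(u)
    us = np.sort(u)
    return np.sqrt(n) * max(np.max(np.arange(1, n + 1) / n - us),
                            np.max(us - np.arange(0, n) / n))

def multiplier_pvalue(u, B=1000, seed=0):
    """Approximate the null law of the KS statistic by multiplier
    replicates sup_t |n^{-1/2} sum_i xi_i (1{u_i <= t} - t)| with
    i.i.d. N(0,1) multipliers xi_i; the data are never re-sampled."""
    rng = np.random.default_rng(seed)
    n = len(u)
    t = np.sort(u)                                   # evaluate at jump points
    ind = (u[:, None] <= t[None, :]).astype(float) - t[None, :]
    d_obs = ks_stat(u)
    reps = np.empty(B)
    for b in range(B):
        xi = rng.normal(size=n)
        reps[b] = np.max(np.abs(xi @ ind)) / np.sqrt(n)
    return (1 + np.sum(reps >= d_obs)) / (B + 1)
```

Each replicate reuses the precomputed n-by-n matrix `ind`, so the per-replicate cost is one matrix-vector product and no model is ever refitted; that is the source of the speed advantage over the parametric bootstrap.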

    Bootstrap confidence sets under model misspecification

    A multiplier bootstrap procedure for the construction of likelihood-based confidence sets is considered for finite samples and possible model misspecification. Theoretical results justify the bootstrap validity for small or moderate sample sizes and make it possible to control the impact of the parameter dimension p: the bootstrap approximation works if p^3/n is small. The main result about bootstrap validity continues to apply even if the underlying parametric model is misspecified, under the so-called small modelling bias condition. In the case when the true model deviates significantly from the considered parametric family, the bootstrap procedure is still applicable but becomes somewhat conservative: the size of the constructed confidence sets is increased by the modelling bias. We illustrate the results with numerical examples for misspecified linear and logistic regressions. Published at http://dx.doi.org/10.1214/15-AOS1355 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).