63 research outputs found

    Wild bootstrap of the mean in the infinite variance case

    It is well known that the standard i.i.d. bootstrap of the mean is inconsistent in a location model with infinite variance (α-stable) innovations. This occurs because the bootstrap distribution of a normalised sum of infinite variance random variables tends to a random distribution. Consistent bootstrap algorithms based on subsampling have been proposed, but they have the drawback of delivering much wider confidence sets than those generated by the i.i.d. bootstrap, because they eliminate the dependence of the bootstrap distribution on the sample extremes. In this paper we propose sufficient conditions under which a simple modification of the bootstrap (Wu, 1986, Annals of Statistics) is consistent (in a conditional sense) while also reproducing the narrower confidence sets of the i.i.d. bootstrap. Numerical results demonstrate that the proposed bootstrap method works very well in practice, delivering coverage rates very close to the nominal level and significantly narrower confidence sets than other consistent methods.
    Keywords: bootstrap, stable distributions, random probability measures, weak convergence
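    To illustrate the mechanics only (a sketch of the general idea, not the paper's sufficient conditions), the wild bootstrap of Wu (1986) replaces i.i.d. resampling with external weights applied to centred observations, so every bootstrap draw retains the sample extremes. The Rademacher weights, sample size, and confidence level below are our choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def wild_bootstrap_mean_ci(y, n_boot=999, level=0.95):
    """Percentile confidence interval for the mean via a Wu-type wild
    bootstrap: each bootstrap sample perturbs the centred observations
    with external Rademacher weights, so every draw keeps the sample
    extremes (unlike subsampling schemes)."""
    y = np.asarray(y, dtype=float)
    n, ybar = y.size, y.mean()
    centred = y - ybar
    # external weights: +1 or -1, each with probability 1/2
    w = rng.choice([-1.0, 1.0], size=(n_boot, n))
    boot_means = ybar + (w * centred).mean(axis=1)
    alpha = 1.0 - level
    lo, hi = np.quantile(boot_means, [alpha / 2, 1.0 - alpha / 2])
    return lo, hi

# heavy-tailed sample: Cauchy draws are 1-stable, hence infinite variance
y = rng.standard_cauchy(200)
lo, hi = wild_bootstrap_mean_ci(y)
```

    Because the weights merely flip signs of the centred data, the largest observations appear in every bootstrap sample, which is what keeps the resulting confidence sets narrow relative to subsampling.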

    Exploiting infinite variance through Dummy Variables in non-stationary autoregressions

    We consider estimation and testing of infinite-order autoregressive models with a (near) unit root and infinite-variance innovations. We study the asymptotic properties of estimators obtained by dummying out large innovations, i.e., those exceeding a given threshold. These estimators reflect the common practice of dealing with large residuals by including impulse dummies in the estimated regression. Iterative versions of the dummy-variable estimator are also discussed. We provide conditions on the preliminary parameter estimator and on the threshold which ensure that (i) the dummy-based estimator is consistent at a higher rate than the OLS estimator, (ii) an asymptotically normal test statistic for the unit root hypothesis can be derived, and (iii) order-of-magnitude gains in local power are obtained.
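    The practice of dummying out large residuals can be sketched as follows. The AR(1) setting, the preliminary OLS estimator, the MAD-based scale, and the threshold multiplier c are illustrative assumptions rather than the paper's conditions; the sketch relies on the fact that, for OLS, adding an impulse dummy at an observation is equivalent to deleting it:

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_dummy_ols(y, c=3.0):
    """AR(1) coefficient estimated twice: by plain OLS, and by OLS after
    dummying out observations whose preliminary residual exceeds c times
    a robust (MAD-based) scale. For OLS, an impulse dummy at observation
    t is equivalent to dropping that observation from the regression."""
    y = np.asarray(y, dtype=float)
    x, z = y[:-1], y[1:]
    rho_ols = x @ z / (x @ x)                   # preliminary OLS estimator
    resid = z - rho_ols * x
    scale = np.median(np.abs(resid)) / 0.6745   # robust scale estimate
    keep = np.abs(resid) <= c * scale           # threshold on residuals
    rho_dummy = x[keep] @ z[keep] / (x[keep] @ x[keep])
    return rho_ols, rho_dummy

# near-unit-root AR(1) with heavy-tailed (infinite variance) shocks
eps = rng.standard_cauchy(500)
y = np.empty(501)
y[0] = 0.0
for t in range(500):
    y[t + 1] = 0.98 * y[t] + eps[t]
rho_ols, rho_dummy = ar1_dummy_ols(y)
```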

    Testing for unit roots in autoregressions with multiple level shifts

    The asymptotic distributions of Augmented Dickey-Fuller (ADF) unit root tests for autoregressive processes with a unit or near-unit root are discussed in the presence of multiple stochastic level shifts of large size occurring independently in time. The distributions depend on a Brownian motion and a Poisson-type jump process. Due to the latter, tests based on standard critical values experience power losses that increase rapidly with the number and the magnitude of the shifts. A new approach to unit root testing is suggested which requires no knowledge of either the location or the number of level shifts, and which dispenses with the assumption of independent shift occurrence. It is proposed to remove possible shifts from a time series by weighting its increments according to how likely it is, with respect to an ad hoc postulated distribution, that a shift occurred in each period. If the number of level shifts is bounded in probability, the limiting distributions of the proposed test statistics coincide with those of ADF statistics under standard conditions. A Monte Carlo experiment shows that, despite their generality, the new tests perform well in finite samples.
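    A much-simplified illustration of the increment-weighting idea, in which the "ad hoc postulated distribution" is taken to be a two-component Gaussian mixture of our own choosing; the paper's actual weighting scheme may differ substantially:

```python
import numpy as np

rng = np.random.default_rng(2)

def dejump_increments(y, shift_scale=10.0, p_shift=0.01):
    """Down-weight each increment of y by the posterior probability that
    it came from the 'shift' component of an assumed two-component
    Gaussian mixture (narrow 'no shift' vs. wide 'shift' component),
    then rebuild the series from the weighted increments."""
    d = np.diff(y)
    s = np.median(np.abs(d - np.median(d))) / 0.6745   # robust scale
    # Gaussian densities under the two assumed components
    f0 = np.exp(-0.5 * (d / s) ** 2) / s
    f1 = np.exp(-0.5 * (d / (shift_scale * s)) ** 2) / (shift_scale * s)
    post_shift = p_shift * f1 / (p_shift * f1 + (1.0 - p_shift) * f0)
    d_weighted = (1.0 - post_shift) * d
    return np.concatenate([[y[0]], y[0] + np.cumsum(d_weighted)])

# random walk plus one large level shift at t = 100
e = rng.normal(size=200)
y = np.cumsum(e)
y[100:] += 25.0
y_dj = dejump_increments(y)
```

    Ordinary increments receive weights near one, while the huge increment at the shift date receives a weight near zero, so the rebuilt series is approximately shift-free.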

    A note on unit root testing in the presence of level shifts

    In this note we discuss the properties of Augmented Dickey-Fuller (ADF) unit root tests for autoregressive processes with a unit or near-unit root in the presence of multiple level shifts of large size. Due to the presence of level shifts, the ADF tests experience severe power losses. We consider new modified ADF unit root tests which require no knowledge of either the location or the number of level shifts. The tests are based on a two-step procedure in which possible level shifts are first detected using the level-shift indicator estimators suggested by Chen and Tiao (1990, Journal of Business and Economic Statistics) and Chen and Liu (1993, Journal of the American Statistical Association), and then removed by a novel procedure denoted “de-jumping”. Using a Monte Carlo experiment we show that the new tests, although somewhat oversized in samples of moderate size, have much higher power than standard ADF tests.

    Sieve-based inference for infinite-variance linear processes

    We extend the available asymptotic theory for autoregressive sieve estimators to cover the case of stationary and invertible linear processes driven by independent identically distributed (i.i.d.) infinite variance (IV) innovations. We show that the ordinary least squares sieve estimates, together with estimates of the impulse responses derived from these, obtained from an autoregression whose order is an increasing function of the sample size, are consistent and exhibit asymptotic properties analogous to those which obtain for a finite-order autoregressive process driven by i.i.d. IV errors. As these limit distributions cannot be directly employed for inference because they either may not exist or, where they do, depend on unknown parameters, a second contribution of the paper is to investigate the usefulness of bootstrap methods in this setting. Focusing on three sieve bootstraps: the wild and permutation bootstraps, and a hybrid of the two, we show that, in contrast to the case of finite variance innovations, the wild bootstrap requires an infeasible correction to be consistent, whereas the other two bootstrap schemes are shown to be consistent (the hybrid for symmetrically distributed innovations) under general conditions
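    The sieve idea, in its simplest form, is an OLS autoregression whose order grows with the sample size, from which impulse responses are then derived. A minimal sketch, assuming a p ~ T^(1/3) rate (one common illustrative choice, not necessarily the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)

def sieve_ar_fit(y, p=None):
    """OLS fit of an autoregressive sieve: the AR order p grows with the
    sample size T (p ~ T^(1/3) here is an illustrative rate choice)."""
    y = np.asarray(y, dtype=float)
    T = y.size
    if p is None:
        p = max(1, int(T ** (1 / 3)))
    # lag matrix: column j holds y_{t-j-1}, for t = p, ..., T-1
    X = np.column_stack([y[p - 1 - j : T - 1 - j] for j in range(p)])
    z = y[p:]
    phi, *_ = np.linalg.lstsq(X, z, rcond=None)
    return phi

def impulse_responses(phi, h=10):
    """Moving-average (impulse response) coefficients implied by the
    fitted AR polynomial, via the standard recursion."""
    psi = np.zeros(h + 1)
    psi[0] = 1.0
    for k in range(1, h + 1):
        psi[k] = sum(phi[j] * psi[k - 1 - j]
                     for j in range(min(k, len(phi))))
    return psi

# AR(1) data with coefficient 0.5 (Gaussian noise, just to run the code)
e = rng.normal(size=2000)
y = np.empty(2000)
y[0] = e[0]
for t in range(1, 2000):
    y[t] = 0.5 * y[t - 1] + e[t]
phi_hat = sieve_ar_fit(y)
psi = impulse_responses(phi_hat, h=5)
```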

    Unit root inference for non-stationary linear processes driven by infinite variance innovations

    The contribution of this paper is two-fold. First, we derive the asymptotic null distribution of the familiar augmented Dickey-Fuller [ADF] statistics in the case where the shocks follow a linear process driven by infinite variance innovations. We show that these distributions are free of serial correlation nuisance parameters but depend on the tail index of the infinite variance process. These distributions are shown to coincide with the corresponding results for the case where the shocks follow a finite autoregression, provided the lag length in the ADF regression satisfies the same o(T^{1/3}) rate condition as is required in the finite variance case. In addition, we establish the rates of consistency and (where they exist) the asymptotic distributions of the ordinary least squares sieve estimates from the ADF regression. Given the dependence of their null distributions on the unknown tail index, our second contribution is to explore sieve wild bootstrap implementations of the ADF tests. Under the assumption of symmetry, we demonstrate the asymptotic validity (bootstrap consistency) of the wild bootstrap ADF tests. This is done by establishing that (conditional on the data) the wild bootstrap ADF statistics attain the same limiting distribution as that of the original ADF statistics taken conditional on the magnitude of the innovations.
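    For concreteness, the ADF statistic with a lag length satisfying the o(T^{1/3}) rate condition can be sketched as below; the regression omits deterministics for brevity, and the T^(1/4) default lag rule is our illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(5)

def adf_stat(y, p=None):
    """t-statistic on rho in the ADF regression
        dy_t = rho * y_{t-1} + b_1 dy_{t-1} + ... + b_p dy_{t-p} + e_t
    (no deterministic terms, for brevity). The default lag length grows
    like T^(1/4), which satisfies the o(T^{1/3}) rate condition."""
    y = np.asarray(y, dtype=float)
    T = y.size
    if p is None:
        p = max(1, int(T ** 0.25))
    dy = np.diff(y)
    z = dy[p:]
    # regressors: lagged level, then p lagged differences
    X = np.column_stack([y[p:T - 1]]
                        + [dy[p - j:T - 1 - j] for j in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    n, k = X.shape
    sigma2 = resid @ resid / (n - k)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[0] / np.sqrt(cov[0, 0])

# driftless random walk with infinite variance shocks (the null model)
stat = adf_stat(np.cumsum(rng.standard_cauchy(1000)))
```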

    Perceptual error optimization for Monte Carlo rendering

    Realistic image synthesis involves computing high-dimensional light transport integrals which in practice are numerically estimated using Monte Carlo integration. The error of this estimation manifests itself in the image as visually displeasing aliasing or noise. To ameliorate this, we develop a theoretical framework for optimizing screen-space error distribution. Our model is flexible and works for arbitrary target error power spectra. We focus on perceptual error optimization by leveraging models of the human visual system's (HVS) point spread function (PSF) from halftoning literature. This results in a specific optimization problem whose solution distributes the error as visually pleasing blue noise in image space. We develop a set of algorithms that provide a trade-off between quality and speed, showing substantial improvements over prior state of the art. We perform evaluations using both quantitative and perceptual error metrics to support our analysis, and provide extensive supplemental material to help evaluate the perceptual improvements achieved by our methods
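    A toy member of this family of algorithms, under the common halftoning assumption of a Gaussian HVS point spread function: a greedy pixel-swap search that redistributes the binary quantization error of a constant image so that its low-pass-filtered energy decreases, pushing the error toward blue noise (the paper's actual algorithms and error metrics are more elaborate):

```python
import numpy as np

rng = np.random.default_rng(4)

def hvs_psf(size=9, sigma=1.2):
    """Isotropic Gaussian stand-in for the HVS point spread function
    (a common halftoning simplification; the paper's model may differ)."""
    ax = np.arange(size) - size // 2
    g = np.exp(-0.5 * (ax / sigma) ** 2)
    k = np.outer(g, g)
    return k / k.sum()

def blue_noise_mask(n=32, gray=0.5, iters=1000):
    """Greedy pixel-swap optimization: starting from a random binary
    image, accept swaps that lower the PSF-filtered energy of the
    quantization error, distributing it as blue noise."""
    img = (rng.random((n, n)) < gray).astype(float)
    Fpsf = np.fft.fft2(hvs_psf(), s=(n, n))

    def energy(err):
        # || psf * err ||^2 up to a constant, by Parseval (circular conv.)
        return np.sum(np.abs(np.fft.fft2(err) * Fpsf) ** 2)

    best = energy(img - gray)
    for _ in range(iters):
        a = tuple(rng.integers(0, n, 2))
        b = tuple(rng.integers(0, n, 2))
        if img[a] == img[b]:
            continue
        img[a], img[b] = img[b], img[a]          # try the swap
        e = energy(img - gray)
        if e < best:
            best = e                             # keep the improvement
        else:
            img[a], img[b] = img[b], img[a]      # revert
    return img

mask = blue_noise_mask()
```

    Swapping pairs of pixels preserves the average intensity exactly, so the search only changes where the error lives in the image, not how much of it there is.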