
    On the Order of Magnitude of Sums of Negative Powers of Integrated Processes

    The asymptotic behavior of expressions of the form $\sum_{t=1}^{n} f(r_n x_t)$, where $x_t$ is an integrated process, $r_n$ is a sequence of norming constants, and $f$ is a measurable function, has been the subject of a number of articles in recent years. We mention Borodin and Ibragimov (1995), Park and Phillips (1999), de Jong (2004), Jeganathan (2004), Pötscher (2004), de Jong and Whang (2005), Berkes and Horvath (2006), and Christopeit (2009), which study weak convergence results for such expressions under various conditions on $x_t$ and the function $f$. Of course, these results also provide information on the order of magnitude of $\sum_{t=1}^{n} f(r_n x_t)$. However, to the best of our knowledge no result is available for the case where $f$ is non-integrable with respect to Lebesgue measure in a neighborhood of a given point, say $x=0$. In this paper we are interested in bounds on the order of magnitude of $\sum_{t=1}^{n} |x_t|^{-\alpha}$ when $\alpha \geq 1$, a case where the implied function $f$ is not integrable in any neighborhood of zero. More generally, we shall also obtain bounds on the order of magnitude of $\sum_{t=1}^{n} v_t |x_t|^{-\alpha}$ where the $v_t$ are random variables satisfying certain conditions.
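
    A minimal simulation sketch of the quantity studied above, under the illustrative assumption that $x_t$ is a Gaussian random walk; the choice of $\alpha$ and of the sample sizes is likewise only illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.0  # a non-integrable case, alpha >= 1

for n in (10**3, 10**4, 10**5):
    x = np.cumsum(rng.standard_normal(n))  # integrated process: Gaussian random walk
    s_n = np.sum(np.abs(x) ** (-alpha))    # the sum whose order of magnitude is of interest
    print(f"n = {n:>6}:  sum_t |x_t|^(-alpha) = {s_n:.2f}")
```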

    Efficient Simulation-Based Minimum Distance Estimation and Indirect Inference

    Given a random sample from a parametric model, we show how indirect inference estimators based on appropriate nonparametric density estimators (i.e., simulation-based minimum distance estimators) can be constructed that, under mild assumptions, are asymptotically normal with variance-covariance matrix equal to the Cramér-Rao bound. Comment: Minor revision, some references and remarks added.
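
    A hedged sketch of such a simulation-based minimum distance estimator in the simplest possible setting: the parameter of a normal location model is chosen so that a kernel density estimate computed from simulated data matches the one computed from the observed sample. The model, the Gaussian kernel density estimator, and the L2 distance on a grid are illustrative assumptions, not the construction analyzed in the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.0, size=500)    # observed sample
grid = np.linspace(-4.0, 8.0, 400)
dx = grid[1] - grid[0]
kde_data = gaussian_kde(data)(grid)                # nonparametric density estimate of the data

base = rng.standard_normal(5000)                   # common random numbers, reused for every theta

def distance(theta):
    sim = theta + base                             # simulate from N(theta, 1)
    kde_sim = gaussian_kde(sim)(grid)
    return np.sum((kde_sim - kde_data) ** 2) * dx  # L2 distance between the two density estimates

fit = minimize_scalar(distance, bounds=(-4.0, 8.0), method="bounded")
print("simulation-based minimum distance estimate:", fit.x)
```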

    Nonlinear Functions and Convergence to Brownian Motion: Beyond the Continuous Mapping Theorem

    Weak convergence results for sample averages of nonlinear functions of (discrete-time) stochastic processes satisfying a functional central limit theorem (e.g., integrated processes) are given. These results substantially extend recent work by Park and Phillips (1999) and de Jong (2001), in that a much wider class of functions is covered. For example, some of the results hold for the class of all locally integrable functions, thus avoiding any of the various regularity conditions imposed on the functions in Park and Phillips (1999) or de Jong (2001).
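
    A small Monte Carlo illustration of the kind of object these results concern, under assumptions of my own choosing (Gaussian innovations and a particular locally integrable function): the sample average of a nonlinear function of the rescaled integrated process, whose weak limit is a functional of Brownian motion.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x):
    return np.exp(-np.abs(x))          # an example of a nonlinear, locally integrable function

n = 100_000
x = np.cumsum(rng.standard_normal(n))  # integrated process satisfying a functional CLT
avg = np.mean(f(x / np.sqrt(n)))       # sample average of the nonlinear transform
print("(1/n) * sum_t f(x_t / sqrt(n)) =", avg)  # approximately one draw from int_0^1 f(W(r)) dr
```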

    Lower Risk Bounds and Properties of Confidence Sets For Ill-Posed Estimation Problems with Applications to Spectral Density and Persistence Estimation, Unit Roots, and Estimation of Long Memory Parameters

    Important estimation problems in econometrics, like estimating the value of a spectral density at frequency zero, which appears in the econometrics literature in the guises of heteroskedasticity and autocorrelation consistent variance estimation and long run variance estimation, are shown to be "ill-posed" estimation problems. A prototypical result obtained in the paper is that the minimax risk for estimating the value of the spectral density at frequency zero is infinite regardless of sample size, and that confidence sets are close to being uninformative. In this result the maximum risk is over commonly used specifications for the set of feasible data generating processes. The consequences for inference on unit roots and cointegration are discussed. Similar results for persistence estimation and estimation of the long memory parameter are given. All these results are obtained as special cases of a more general theory developed for abstract estimation problems, which readily also allows for the treatment of other ill-posed estimation problems such as, e.g., nonparametric regression or density estimation.
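
    To fix ideas, the estimand referred to above is, up to normalization, the long-run variance of the process. A standard Bartlett-kernel (Newey-West type) estimator of it is sketched below; the AR(1) data-generating process and the bandwidth are illustrative assumptions and are not meant to reflect the worst-case constructions used in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# illustrative AR(1) data: u_t = rho * u_{t-1} + eps_t
rho, sigma, n = 0.7, 1.0, 2000
eps = sigma * rng.standard_normal(n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + eps[t]

def long_run_variance(u, bandwidth):
    """Bartlett-kernel (Newey-West) estimator of the long-run variance,
    i.e. of 2*pi times the spectral density at frequency zero."""
    u = u - u.mean()
    n = u.size
    lrv = u @ u / n                                       # lag-0 autocovariance
    for j in range(1, bandwidth + 1):
        gamma_j = u[j:] @ u[:-j] / n                      # lag-j autocovariance
        lrv += 2.0 * (1.0 - j / (bandwidth + 1)) * gamma_j
    return lrv

print("estimated long-run variance:", long_run_variance(u, bandwidth=20))
print("true long-run variance     :", sigma**2 / (1.0 - rho) ** 2)
```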

    On the Power of Invariant Tests for Hypotheses on a Covariance Matrix

    The behavior of the power function of autocorrelation tests such as the Durbin-Watson test in time series regressions or the Cliff-Ord test in spatial regression models has been intensively studied in the literature. When the correlation becomes strong, Krämer (1985) (for the Durbin-Watson test) and Krämer (2005) (for the Cliff-Ord test) have shown that the power can be very low, in fact can converge to zero, under certain circumstances. Motivated by these results, Martellosio (2010) set out to build a general theory that would explain these findings. Unfortunately, Martellosio (2010) does not achieve this goal, as a substantial portion of his results and proofs suffer from serious flaws. The present paper now builds a theory as envisioned in Martellosio (2010) in a fairly general framework, covering general invariant tests of a hypothesis on the disturbance covariance matrix in a linear regression model. The general results are then specialized to testing for spatial correlation and to autocorrelation testing in time series regression models. We also characterize the situation where the null and the alternative hypothesis are indistinguishable by invariant tests.
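
    A Monte Carlo sketch of the object under discussion, with a design matrix, sample size, and AR(1) disturbance model chosen purely for illustration: the rejection frequency of the Durbin-Watson test against positive autocorrelation as the autocorrelation parameter grows. Whether this frequency tends to one or collapses depends on the regressors, which is precisely the question studied in the papers cited above.

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 30, 2000
X = np.column_stack([np.ones(n), np.arange(1, n + 1)])  # illustrative design: intercept and trend
M = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)       # projection onto the residual space

def dw_stat(rho):
    e = rng.standard_normal(n)
    u = np.empty(n)
    u[0] = e[0] / np.sqrt(1.0 - rho**2)                 # stationary start of the AR(1) disturbances
    for t in range(1, n):
        u[t] = rho * u[t - 1] + e[t]
    r = M @ u                                           # OLS residuals (invariant to the regression coefficients)
    return np.sum(np.diff(r) ** 2) / np.sum(r ** 2)

null = np.array([dw_stat(0.0) for _ in range(reps)])
crit = np.quantile(null, 0.05)                          # simulated lower-tail critical value
for rho in (0.5, 0.9, 0.99):
    power = np.mean([dw_stat(rho) < crit for _ in range(reps)])
    print(f"rho = {rho}: rejection frequency ~ {power:.2f}")
```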

    Can one estimate the conditional distribution of post-model-selection estimators?

    We consider the problem of estimating the conditional distribution of a post-model-selection estimator where the conditioning is on the selected model. The notion of a post-model-selection estimator here refers to the combined procedure resulting from first selecting a model (e.g., by a model selection criterion such as AIC or by a hypothesis testing procedure) and then estimating the parameters in the selected model (e.g., by least-squares or maximum likelihood), all based on the same data set. We show that it is impossible to estimate this distribution with reasonable accuracy even asymptotically. In particular, we show that no estimator for this distribution can be uniformly consistent (not even locally). This follows as a corollary to (local) minimax lower bounds on the performance of estimators for this distribution. Similar impossibility results are also obtained for the conditional distribution of linear functions (e.g., predictors) of the post-model-selection estimator. Comment: Published at http://dx.doi.org/10.1214/009053606000000821 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
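
    An illustrative simulation (the Gaussian location model, the t-test based selection rule, and all numerical choices are assumptions of mine) of the object whose estimation the paper shows to be impossible: the distribution of a post-model-selection estimator conditional on the selected model.

```python
import numpy as np

rng = np.random.default_rng(5)
n, mu, reps, crit = 50, 0.2, 100_000, 1.96

ybar = rng.normal(loc=mu, scale=1.0, size=(reps, n)).mean(axis=1)  # ML estimator in the full model
full_selected = np.abs(np.sqrt(n) * ybar) > crit                   # model selection by a t-test of mu = 0
estimator = np.where(full_selected, ybar, 0.0)                     # post-model-selection estimator

cond = estimator[full_selected]            # draws from the conditional distribution given the full model
print("P(full model selected)          :", full_selected.mean())
print("conditional mean / std deviation:", cond.mean(), cond.std())
```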

    How Reliable are Bootstrap-based Heteroskedasticity Robust Tests?

    We develop theoretical finite-sample results concerning the size of wild bootstrap-based heteroskedasticity robust tests in linear regression models. In particular, these results provide an efficient diagnostic check, which can be used to weed out tests that are unreliable for a given testing problem in the sense that they overreject substantially. This allows us to assess the reliability of a large variety of wild bootstrap-based tests in an extensive numerical study. Comment: 59 pages, 1 figure.
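
    A minimal sketch of one member of the class of procedures studied: a wild bootstrap version of a heteroskedasticity robust (HC0-based) t-test of a zero restriction in a simple linear regression. The Rademacher multipliers, the use of null-restricted residuals, and the data-generating process are common but assumed choices, not the specific schemes evaluated in the paper:

```python
import numpy as np

rng = np.random.default_rng(6)
n, B = 100, 999
x = rng.standard_normal(n)
y = np.abs(x) * rng.standard_normal(n)       # heteroskedastic errors; H0: beta_1 = 0 is true

def hc_tstat(y, x):
    X = np.column_stack([np.ones_like(x), x])
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ (X.T @ y)
    u = y - X @ beta
    V = XtX_inv @ (X.T * u**2) @ X @ XtX_inv  # HC0 covariance matrix estimate
    return beta[1] / np.sqrt(V[1, 1])

t_obs = hc_tstat(y, x)

# Wild bootstrap imposing the null: the restricted fit of y = beta_0 + beta_1*x + u
# under beta_1 = 0 is just the sample mean.
resid0 = y - y.mean()
t_boot = np.empty(B)
for b in range(B):
    w = rng.choice([-1.0, 1.0], size=n)       # Rademacher multipliers
    t_boot[b] = hc_tstat(y.mean() + resid0 * w, x)

print("wild bootstrap p-value:", np.mean(np.abs(t_boot) >= np.abs(t_obs)))
```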

    Further Results on Size and Power of Heteroskedasticity and Autocorrelation Robust Tests, with an Application to Trend Testing

    We complement the theory developed in Preinerstorfer and Pötscher (2016) with further finite-sample results on size and power of heteroskedasticity and autocorrelation robust tests. These allow us, in particular, to show that the sufficient conditions for the existence of size-controlling critical values recently obtained in Pötscher and Preinerstorfer (2018) are often also necessary. We furthermore apply the results obtained to tests for hypotheses on deterministic trends in stationary time series regressions, and find that many tests currently used are strongly size-distorted. Comment: Revised version. Some restructuring, some errors corrected, new results added.

    Confidence Sets Based on Penalized Maximum Likelihood Estimators in Gaussian Regression

    Confidence intervals based on penalized maximum likelihood estimators such as the LASSO, adaptive LASSO, and hard-thresholding are analyzed. In the known-variance case, the finite-sample coverage properties of such intervals are determined and it is shown that symmetric intervals are the shortest. The length of the shortest intervals based on the hard-thresholding estimator is larger than the length of the shortest interval based on the adaptive LASSO, which is larger than the length of the shortest interval based on the LASSO, which in turn is larger than the standard interval based on the maximum likelihood estimator. In the case where the penalized estimators are tuned to possess the `sparsity property', the intervals based on these estimators are larger than the standard interval by an order of magnitude. Furthermore, a simple asymptotic confidence interval construction in the `sparse' case, that also applies to the smoothly clipped absolute deviation estimator, is discussed. The results for the known-variance case are shown to carry over to the unknown-variance case in an appropriate asymptotic sense. Comment: Second revision: new title, some comments added, proofs moved to appendix.
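
    An illustrative simulation, in the simplest Gaussian location version of the problem, of the effect driving the length comparison: for a fixed symmetric interval centered at the estimator, the minimal coverage over the parameter space drops once a soft-thresholding (LASSO-type) estimator replaces the maximum likelihood estimator, so that intervals of equal coverage must be longer. The sample size, tuning parameter, and half-length below are assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps = 100, 20_000
sigma = 1.0 / np.sqrt(n)      # standard deviation of the ML estimator (sample mean)
lam = sigma                   # illustrative LASSO tuning parameter
c = 1.96 * sigma              # half-length of the standard ML interval

def ml(y, lam):
    return y

def lasso(y, lam):            # soft-thresholding = LASSO in this orthogonal/location setting
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

for name, est in (("ML   ", ml), ("LASSO", lasso)):
    min_cov = 1.0
    for theta in np.linspace(-0.5, 0.5, 21):
        y = theta + sigma * rng.standard_normal(reps)    # the sufficient statistic (sample mean)
        cov = np.mean(np.abs(est(y, lam) - theta) <= c)  # coverage of [estimator - c, estimator + c]
        min_cov = min(min_cov, cov)
    print(f"{name}: minimal coverage of the interval with half-length c = {c:.3f} is about {min_cov:.3f}")
```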