
    On the Order of Magnitude of Sums of Negative Powers of Integrated Processes

    The asymptotic behavior of expressions of the form $\sum_{t=1}^{n}f(r_{n}x_{t})$, where $x_{t}$ is an integrated process, $r_{n}$ is a sequence of norming constants, and $f$ is a measurable function, has been the subject of a number of articles in recent years. We mention Borodin and Ibragimov (1995), Park and Phillips (1999), de Jong (2004), Jeganathan (2004), Pötscher (2004), de Jong and Whang (2005), Berkes and Horvath (2006), and Christopeit (2009), which study weak convergence results for such expressions under various conditions on $x_{t}$ and the function $f$. Of course, these results also provide information on the order of magnitude of $\sum_{t=1}^{n}f(r_{n}x_{t})$. However, to the best of our knowledge, no result is available for the case where $f$ is non-integrable with respect to Lebesgue measure in a neighborhood of a given point, say $x=0$. In this paper we are interested in bounds on the order of magnitude of $\sum_{t=1}^{n}|x_{t}|^{-\alpha}$ when $\alpha \geq 1$, a case where the implied function $f$ is not integrable in any neighborhood of zero. More generally, we shall also obtain bounds on the order of magnitude of $\sum_{t=1}^{n}v_{t}|x_{t}|^{-\alpha}$, where the $v_{t}$ are random variables satisfying certain conditions.
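    The quantity studied in this abstract can be made concrete with a small simulation (an illustrative sketch only, not code from the paper; the Gaussian random walk, the sample sizes, and the helper name `negative_power_sum` are assumptions for the example):

    ```python
    import numpy as np

    def negative_power_sum(n, alpha, rng):
        """Compute sum_{t=1}^n |x_t|^(-alpha) for a Gaussian random walk x_t."""
        x = np.cumsum(rng.standard_normal(n))
        return float(np.sum(np.abs(x) ** (-alpha)))

    # alpha >= 1 is the case treated in the abstract above, where |x|^(-alpha)
    # is not integrable in any neighborhood of zero; alpha < 1 is covered by
    # earlier weak-convergence results.
    rng = np.random.default_rng(0)
    for alpha in (0.5, 1.0, 1.5):
        sums = [negative_power_sum(10_000, alpha, rng) for _ in range(20)]
        print(f"alpha={alpha}: median over 20 replications = {np.median(sums):.2f}")
    ```

    The sum is finite almost surely (a continuous-distributed $x_{t}$ never hits zero exactly), but for $\alpha \geq 1$ its size is driven by the visits of the random walk near zero, which is why bounds on its order of magnitude require separate treatment.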

    The distribution of model averaging estimators and an impossibility result regarding its estimation

    The finite-sample as well as the asymptotic distribution of Leung and Barron's (2006) model averaging estimator are derived in the context of a linear regression model. An impossibility result regarding the estimation of the finite-sample distribution of the model averaging estimator is obtained. Comment: Published at http://dx.doi.org/10.1214/074921706000000987 in the IMS Lecture Notes Monograph Series (http://www.imstat.org/publications/lecnotes.htm) by the Institute of Mathematical Statistics (http://www.imstat.org)

    Nonlinear Functions and Convergence to Brownian Motion: Beyond the Continuous Mapping Theorem

    Weak convergence results for sample averages of nonlinear functions of (discrete-time) stochastic processes satisfying a functional central limit theorem (e.g., integrated processes) are given. These results substantially extend recent work by Park and Phillips (1999) and de Jong (2001), in that a much wider class of functions is covered. For example, some of the results hold for the class of all locally integrable functions, thus avoiding any of the various regularity conditions imposed on the functions in Park and Phillips (1999) or de Jong (2001).

    How Reliable are Bootstrap-based Heteroskedasticity Robust Tests?

    We develop theoretical finite-sample results concerning the size of wild bootstrap-based heteroskedasticity robust tests in linear regression models. In particular, these results provide an efficient diagnostic check, which can be used to weed out tests that are unreliable for a given testing problem in the sense that they overreject substantially. This allows us to assess the reliability of a large variety of wild bootstrap-based tests in an extensive numerical study. Comment: 59 pages, 1 figure

    Can one estimate the conditional distribution of post-model-selection estimators?

    We consider the problem of estimating the conditional distribution of a post-model-selection estimator where the conditioning is on the selected model. The notion of a post-model-selection estimator here refers to the combined procedure resulting from first selecting a model (e.g., by a model selection criterion such as AIC or by a hypothesis testing procedure) and then estimating the parameters in the selected model (e.g., by least-squares or maximum likelihood), all based on the same data set. We show that it is impossible to estimate this distribution with reasonable accuracy even asymptotically. In particular, we show that no estimator for this distribution can be uniformly consistent (not even locally). This follows as a corollary to (local) minimax lower bounds on the performance of estimators for this distribution. Similar impossibility results are also obtained for the conditional distribution of linear functions (e.g., predictors) of the post-model-selection estimator. Comment: Published at http://dx.doi.org/10.1214/009053606000000821 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
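    A toy simulation may help make the object of study concrete (a minimal sketch under assumed specifics: the single-regressor model, the t-test used as model selector, and the helper name `post_selection_estimate` are illustrative choices, not taken from the paper):

    ```python
    import numpy as np

    def post_selection_estimate(n, beta, threshold=1.96, rng=None):
        """Toy post-model-selection estimator in y = beta*x + noise:
        keep the OLS estimate only if it passes a t-test at the given
        threshold, otherwise 'select' the restricted model and return 0."""
        rng = rng or np.random.default_rng()
        x = rng.standard_normal(n)
        y = beta * x + rng.standard_normal(n)
        b_hat = x @ y / (x @ x)
        se = 1.0 / np.sqrt(x @ x)  # error variance taken as known and equal to 1
        return b_hat if abs(b_hat / se) > threshold else 0.0

    # Conditional on the full model being selected, the estimator follows a
    # truncated-normal-type law rather than the usual normal limit.
    rng = np.random.default_rng(0)
    draws = np.array([post_selection_estimate(100, beta=0.1, rng=rng) for _ in range(2000)])
    print("fraction selecting the full model:", np.mean(draws != 0.0))
    ```

    The impossibility result in the abstract concerns estimating this conditional distribution from a single data set, which a simulation with known $\beta$ of course sidesteps.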

    Confidence Sets Based on Penalized Maximum Likelihood Estimators in Gaussian Regression

    Confidence intervals based on penalized maximum likelihood estimators such as the LASSO, adaptive LASSO, and hard-thresholding are analyzed. In the known-variance case, the finite-sample coverage properties of such intervals are determined and it is shown that symmetric intervals are the shortest. The length of the shortest intervals based on the hard-thresholding estimator is larger than the length of the shortest interval based on the adaptive LASSO, which is larger than the length of the shortest interval based on the LASSO, which in turn is larger than the standard interval based on the maximum likelihood estimator. In the case where the penalized estimators are tuned to possess the 'sparsity property', the intervals based on these estimators are larger than the standard interval by an order of magnitude. Furthermore, a simple asymptotic confidence interval construction in the 'sparse' case, that also applies to the smoothly clipped absolute deviation estimator, is discussed. The results for the known-variance case are shown to carry over to the unknown-variance case in an appropriate asymptotic sense. Comment: second revision: new title, some comments added, proofs moved to appendix
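    For concreteness, the two thresholding estimators named in the abstract have standard closed forms in the Gaussian location model y ~ N(theta, 1); a minimal sketch (the function names and example values are my own, not from the paper):

    ```python
    import numpy as np

    def soft_threshold(y, lam):
        """LASSO estimator in the Gaussian location model: shrink y toward 0 by lam,
        setting it exactly to 0 when |y| <= lam."""
        return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

    def hard_threshold(y, lam):
        """Hard-thresholding estimator: keep y unchanged if |y| > lam, else set to 0."""
        return np.where(np.abs(y) > lam, y, 0.0)

    y = np.array([-2.5, -0.3, 0.0, 0.8, 3.1])
    print(soft_threshold(y, 1.0))  # values: -1.5, -0.0, 0.0, 0.0, 2.1
    print(hard_threshold(y, 1.0))  # values: -2.5, 0.0, 0.0, 0.0, 3.1
    ```

    Both estimators set small observations exactly to zero, which is the 'sparsity property' the abstract refers to when the tuning parameter lam is chosen appropriately.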

    Non-Parametric Maximum Likelihood Density Estimation and Simulation-Based Minimum Distance Estimators

    Indirect inference estimators (i.e., simulation-based minimum distance estimators) in a parametric model that are based on auxiliary non-parametric maximum likelihood density estimators are shown to be asymptotically normal. If the parametric model is correctly specified, it is furthermore shown that the asymptotic variance-covariance matrix equals the inverse of the Fisher-information matrix. These results are based on uniform-in-parameters convergence rates and a uniform-in-parameters Donsker-type theorem for non-parametric maximum likelihood density estimators. Comment: minor corrections, some discussion added, some material removed

    Testing in the Presence of Nuisance Parameters: Some Comments on Tests Post-Model-Selection and Random Critical Values

    We point out that the ideas underlying some test procedures recently proposed in the econometrics literature for testing post-model-selection (and for some other test problems) have been around for quite some time in the statistics literature. We also sharpen some of these results in the statistics literature. Furthermore, we show that some intuitively appealing testing procedures that have found their way into the econometrics literature lead to tests that do not have desirable size properties, not even asymptotically. Comment: Minor revision. Some typos and errors corrected, some references added

    Efficient Simulation-Based Minimum Distance Estimation and Indirect Inference

    Given a random sample from a parametric model, we show how indirect inference estimators based on appropriate nonparametric density estimators (i.e., simulation-based minimum distance estimators) can be constructed that, under mild assumptions, are asymptotically normal with variance-covariance matrix equal to the Cramér-Rao bound. Comment: Minor revision, some references and remarks added