
    The limiting power of autocorrelation tests in regression models with linear restrictions

    It is well known that the Durbin-Watson and several other tests for first-order autocorrelation have limiting power of either zero or one in a linear regression model without an intercept, and tend to a constant lying strictly between these values when an intercept term is present. This paper considers the limiting power of these tests in models with restricted coefficients. Surprisingly, it is found that with linear restrictions on the coefficients, the limiting power can still drop to zero even with the inclusion of an intercept in the regression. It is also shown that for regressions with valid restrictions, these test statistics have algebraic forms equivalent to the corresponding statistics in the unrestricted model.
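
    For reference (the abstract does not restate it), the Durbin-Watson statistic is computed from the regression residuals $e_1, \dots, e_T$ as

    $$ d \;=\; \frac{\sum_{t=2}^{T} (e_t - e_{t-1})^2}{\sum_{t=1}^{T} e_t^2}, $$

    with values near 2 indicating no first-order autocorrelation. The limiting-power results above concern how tests based on such statistics behave as the autocorrelation of the errors approaches the boundary of its range.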

    Optimal model averaging for single-index models with divergent dimensions

    This paper offers a new approach to addressing model uncertainty in (potentially) divergent-dimensional single-index models (SIMs). We propose a model-averaging estimator based on cross-validation, which allows the dimension of the covariates and the number of candidate models to increase with the sample size. We show that when all candidate models are misspecified, our model-averaging estimator is asymptotically optimal in the sense that its squared loss is asymptotically identical to that of the infeasible best possible averaging estimator. In the different situation where correct models are available in the model set, the proposed weighting scheme asymptotically assigns all weight to the correct models. We also extend our method to average regularized estimators and propose pre-screening methods to handle high-dimensional covariates. We illustrate the merits of our method via simulations and two empirical applications.
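
    A minimal sketch of the core idea (not the authors' code; the function and variable names are illustrative): given cross-validated out-of-sample predictions from each candidate model, the averaging weights are chosen on the simplex to minimize a squared-error cross-validation criterion.

```python
# Illustrative sketch: cross-validation weight choice for model averaging.
# cv_preds[:, m] holds cross-validated predictions of y from candidate model m.
import numpy as np
from scipy.optimize import minimize

def cv_weights(cv_preds, y):
    n, M = cv_preds.shape

    def criterion(w):
        resid = y - cv_preds @ w          # residual of the weighted average
        return resid @ resid / n          # cross-validation squared-error loss

    cons = {"type": "eq", "fun": lambda w: w.sum() - 1.0}   # weights sum to one
    bounds = [(0.0, 1.0)] * M                               # and are non-negative
    w0 = np.full(M, 1.0 / M)
    return minimize(criterion, w0, method="SLSQP",
                    bounds=bounds, constraints=cons).x
```

    The averaged prediction is the weight-combined prediction of the candidate single-index fits; when a correct model is in the set, this type of criterion concentrates the weight on it asymptotically.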

    Reducing Simulation Input-Model Risk via Input Model Averaging

    Input uncertainty is an aspect of simulation model risk that arises when the driving input distributions are derived or “fit” to real-world, historical data. Although there has been significant progress on quantifying and hedging against input uncertainty, there has been no direct attempt to reduce it via better input modeling. The meaning of “better” depends on the context and the objective: our context is when (a) there are one or more families of parametric distributions that are plausible choices; (b) the real-world historical data are not expected to conform perfectly to any of them; and (c) the primary goal is to obtain higher-fidelity simulation output rather than to discover the “true” distribution. In this paper, we show that frequentist model averaging can be an effective way to create input models that better represent the true, unknown input distribution, thereby reducing model risk. Input model averaging builds on standard input modeling practice, is not computationally burdensome, requires no change in how the simulation is executed nor any follow-up experiments, and is available on the Comprehensive R Archive Network (CRAN). We provide theoretical and empirical support for our approach.
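
    A simplified illustration of the idea (an assumed stand-in, not the CRAN package; the plain BIC weighting below substitutes for the paper's frequentist weights): fit several plausible parametric families to the historical data and drive the simulation from the weighted mixture rather than from the single best-fitting family.

```python
# Simplified illustration of input model averaging (assumed stand-in, not the CRAN package).
import numpy as np
from scipy import stats

def averaged_input_model(data, families=(stats.gamma, stats.lognorm, stats.weibull_min)):
    fits, bic = [], []
    for fam in families:
        params = fam.fit(data, floc=0)                 # MLE with location pinned at 0
        ll = np.sum(fam.logpdf(data, *params))
        k = len(params) - 1                            # free parameters (loc is fixed)
        fits.append((fam, params))
        bic.append(k * np.log(len(data)) - 2.0 * ll)
    bic = np.array(bic)
    w = np.exp(-0.5 * (bic - bic.min()))
    w /= w.sum()                                       # mixture weights over the families
    return fits, w

def sample_input(fits, w, size, rng=np.random.default_rng()):
    picks = rng.choice(len(fits), size=size, p=w)      # choose a family per variate
    return np.array([fits[i][0].rvs(*fits[i][1], random_state=rng) for i in picks])
```

    Because the simulation simply draws its inputs from the averaged (mixture) distribution, no change to the simulation logic or to the experiment design is needed, which matches the practical appeal described above.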

    Frequentist model averaging for threshold models

    This paper develops a frequentist model averaging approach for threshold model specifications. The resulting estimator is proved to be asymptotically optimal in the sense of achieving the lowest possible squared errors. In particular, when combining estimators from threshold autoregressive models, this approach is also proved to be asymptotically optimal. Simulation results show that in situations where existing model averaging approaches are not applicable, our proposed approach performs well; in other situations, it performs marginally better than commonly used model selection and model averaging methods. An empirical application of our approach to US unemployment data is given.
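
    For orientation (the abstract does not restate the model), a two-regime threshold autoregression of order one can be written as

    $$ y_t = (\phi_{1,0} + \phi_{1,1} y_{t-1})\,\mathbf{1}\{q_t \le \gamma\} + (\phi_{2,0} + \phi_{2,1} y_{t-1})\,\mathbf{1}\{q_t > \gamma\} + \varepsilon_t, $$

    where $q_t$ is the threshold variable (for example a lagged value of $y_t$) and $\gamma$ the threshold. The averaging estimator combines fits from candidate threshold specifications with weights $w_m \ge 0$, $\sum_m w_m = 1$, chosen to minimize an estimate of the squared prediction error.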