
    Selecting the forgetting factor in subset autoregressive modelling

    Conventional methods for determining the forgetting factor in autoregressive (AR) models rest mostly on arbitrary or subjective choices. In this paper, we present two procedures for selecting the forgetting factor in subset AR modelling. The first procedure uses the bootstrap to determine the value of a fixed forgetting factor. The second procedure starts from this base and applies time-recursive maximum likelihood estimation to a variable forgetting factor. In one illustration using real exchange rates, we demonstrate the effect of the forgetting factor in subset AR modelling on ex ante forecasting of non-stationary time series. In a second illustration, the two procedures are applied to time-updated forecasts for a stock market index. Subset AR models without a forgetting factor serve as benchmarks for assessing ex ante forecasting performance, and consistently improved forecasting performance is demonstrated for the proposed procedures.
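The role of a forgetting factor in this kind of adaptive AR estimation can be sketched with recursive least squares on a toy AR(1) series whose coefficient shifts mid-sample. This is a minimal illustration of the general idea, not the paper's bootstrap or maximum-likelihood procedures; the data-generating process and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy AR(1) series whose coefficient shifts mid-sample, so discounting old
# observations should help (hypothetical DGP, not the paper's data).
T = 400
phi = np.concatenate([np.full(200, 0.9), np.full(200, -0.5)])
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi[t] * y[t - 1] + 0.5 * rng.standard_normal()

def rls_ar1(y, lam):
    """Recursive least squares for an AR(1) coefficient with forgetting factor lam.

    lam = 1 weights all observations equally; lam < 1 discounts past data
    geometrically (effective memory roughly 1 / (1 - lam) observations).
    """
    theta, P = 0.0, 1000.0            # initial estimate and (scalar) covariance
    preds = np.zeros(len(y))
    for t in range(1, len(y)):
        x = y[t - 1]
        preds[t] = theta * x          # ex ante forecast, made before updating
        k = P * x / (lam + x * P * x)
        theta += k * (y[t] - theta * x)
        P = (P - k * x * P) / lam
    return theta, preds

_, pred_fixed = rls_ar1(y, 1.00)      # no forgetting (ordinary least squares limit)
_, pred_ff = rls_ar1(y, 0.95)         # fixed forgetting factor

# Compare ex ante forecast accuracy after the break, once the filter has adapted
mse_fixed = np.mean((y[220:] - pred_fixed[220:]) ** 2)
mse_ff = np.mean((y[220:] - pred_ff[220:]) ** 2)
```

With a structural break in the coefficient, the filter with lam = 0.95 tracks the new regime while the lam = 1 filter keeps averaging over the whole history, which is precisely why the choice of forgetting factor matters for non-stationary series.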

    Linear-representation Based Estimation of Stochastic Volatility Models

    A new way of estimating stochastic volatility models is developed. The method is based on the existence of autoregressive moving average (ARMA) representations for powers of the log-squared observations. These representations allow one to build a criterion obtained by weighting the sums of squared innovations corresponding to the different ARMA models. The estimator obtained by minimizing the criterion with respect to the parameters of interest is shown to be consistent and asymptotically normal. Monte Carlo experiments illustrate the finite-sample properties of the estimator. The method has potential applications to other non-linear time-series models. Copyright 2006 Board of the Foundation of the Scandinavian Journal of Statistics.
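The key linear representation can be demonstrated in a few lines: for the canonical stochastic volatility model, the log-squared observations are the sum of an AR(1) process and iid noise, hence an ARMA(1,1), whose autocovariances identify the persistence parameter. The sketch below uses a simple moment-based estimator on simulated data; it illustrates the representation only, not the paper's weighted multi-ARMA criterion, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Canonical stochastic volatility model (hypothetical toy parameterization):
#   y_t = exp(h_t / 2) * eps_t,   h_t = phi * h_{t-1} + eta_t
phi, sigma_eta, n = 0.95, 0.3, 200_000
h = np.zeros(n)
eta = sigma_eta * rng.standard_normal(n)
for t in range(1, n):
    h[t] = phi * h[t - 1] + eta[t]
y = np.exp(h / 2) * rng.standard_normal(n)

# Linear representation: x_t = log(y_t^2) = h_t + log(eps_t^2) is an AR(1)
# plus iid noise, i.e. an ARMA(1,1). Its autocovariances satisfy
# gamma(k) = phi * gamma(k-1) for k >= 2, which yields a moment estimator.
x = np.log(y ** 2)
x = x - x.mean()

def acov(x, k):
    # Sample autocovariance at lag k of a demeaned series
    return np.mean(x[:-k] * x[k:])

phi_hat = acov(x, 2) / acov(x, 1)
```

The ratio of the lag-2 to lag-1 autocovariance recovers phi without ever filtering the latent volatility, which is the appeal of estimating through the linear representation.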

    Least Squares Model Averaging

    This paper considers the problem of selecting weights for averaging across least squares estimates obtained from a set of models. Existing model averaging methods are based on exponential Akaike information criterion (AIC) and Bayesian information criterion (BIC) weights. In contrast, this paper proposes selecting the weights by minimizing a Mallows criterion, an estimate of the average squared error of the model average fit. We show that our new Mallows model average (MMA) estimator is asymptotically optimal in the sense of achieving the lowest possible squared error in a class of discrete model average estimators. In a simulation experiment we show that the MMA estimator compares favorably with those based on AIC and BIC weights. The proof of the main result is an application of the work of Li (1987). Copyright The Econometric Society 2007.
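The Mallows criterion for a weight vector is the residual sum of squares of the averaged fit plus a penalty of twice the error variance times the weighted number of parameters. A minimal sketch with two nested least squares models, a grid search over the weight, and a hypothetical data-generating process:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: y depends strongly on x1 and weakly on x2 (hypothetical DGP)
n = 200
X = rng.standard_normal((n, 2))
y = X[:, 0] + 0.3 * X[:, 1] + rng.standard_normal(n)

def hat_matrix(Xm):
    # Projection matrix of a least squares fit on the columns of Xm
    return Xm @ np.linalg.solve(Xm.T @ Xm, Xm.T)

# Two nested least squares fits: small model (x1 only) and large model (x1, x2)
fit1 = hat_matrix(X[:, :1]) @ y
fit2 = hat_matrix(X) @ y
k1, k2 = 1, 2

# Error variance estimated from the largest model, as is standard for
# Mallows-type criteria
sigma2 = np.sum((y - fit2) ** 2) / (n - k2)

def mallows(w):
    """Mallows criterion: ||y - fit(w)||^2 + 2 sigma^2 (w k1 + (1 - w) k2),
    where w is the weight placed on the small model."""
    fit = w * fit1 + (1 - w) * fit2
    return np.sum((y - fit) ** 2) + 2 * sigma2 * (w * k1 + (1 - w) * k2)

# Discrete grid search over the weight on the small model
grid = np.linspace(0.0, 1.0, 101)
w_star = grid[np.argmin([mallows(w) for w in grid])]
```

The endpoints w = 0 and w = 1 correspond to selecting a single model, so by construction the averaged fit can never do worse than model selection under this criterion; with more models the grid search becomes a quadratic program over the weight simplex.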

    Lag length estimation in large dimensional systems

    We study the impact of the system dimension on commonly used model selection criteria (AIC, BIC, HQ) and LR-based general-to-specific testing strategies for lag length estimation in VARs. We show that AIC's well-known overparameterization feature quickly becomes irrelevant as we move away from univariate models, with the criterion leading to consistent estimates under sufficiently large system dimensions. Unless the sample size is unrealistically small, all model selection criteria will tend to point towards low orders as the system dimension increases, with the AIC remaining by far the best performing criterion. This latter point is also illustrated via an analytical power function for model selection criteria. The comparison between the model selection and general-to-specific testing strategies is discussed within the context of a new penalty term leading to the same choice of lag length under both approaches.
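The selection problem the abstract describes can be sketched concretely: fit a VAR(p) by multivariate least squares for each candidate p, and minimize log det of the residual covariance plus a penalty proportional to the p·K² estimated lag coefficients. This is a minimal illustration with a hypothetical VAR(1) data-generating process, not the paper's analysis; note how the per-lag parameter count K² makes the penalty grow quickly with the system dimension.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a K-dimensional VAR(1), y_t = A y_{t-1} + e_t (hypothetical DGP)
K, T = 3, 500
A = 0.5 * np.eye(K)
y = np.zeros((T, K))
for t in range(1, T):
    y[t] = y[t - 1] @ A.T + rng.standard_normal(K)

def var_ic(y, p, crit):
    """Fit a VAR(p) by multivariate least squares and return the information
    criterion  log det(Sigma_hat) + penalty * (p * K^2) / T_eff."""
    T, K = y.shape
    Teff = T - p
    # Stack the p lags column-wise into the regressor matrix
    X = np.hstack([y[p - j - 1 : T - j - 1] for j in range(p)])
    Y = y[p:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ B
    Sigma = resid.T @ resid / Teff
    _, logdet = np.linalg.slogdet(Sigma)
    penalty = 2.0 if crit == "aic" else np.log(Teff)  # "bic"
    return logdet + penalty * p * K * K / Teff

lags = range(1, 5)
p_aic = min(lags, key=lambda p: var_ic(y, p, "aic"))
p_bic = min(lags, key=lambda p: var_ic(y, p, "bic"))
```

Each extra lag adds K² parameters, so the penalty per lag is 2·K²/T_eff for AIC and log(T_eff)·K²/T_eff for BIC; as K grows, even AIC's light penalty dominates the in-sample fit improvement from spurious lags, which is the mechanism behind AIC's consistency in large-dimensional systems.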