
    Adaptive Minnesota Prior for High-Dimensional Vector Autoregressions

    We develop a novel, highly scalable estimation method for large Bayesian Vector Autoregressive models (BVARs) and employ it to introduce an "adaptive" version of the Minnesota prior. This flexible prior structure allows each coefficient of the VAR to have its own shrinkage intensity, which is treated as an additional parameter and estimated from the data. Most importantly, our estimation procedure does not rely on computationally intensive Markov Chain Monte Carlo (MCMC) methods, making it suitable for high-dimensional VARs with more predictors than observations. We use a Monte Carlo study to demonstrate the accuracy and computational gains of our approach. We further illustrate the forecasting performance of our new approach by applying it to a quarterly macroeconomic dataset, and find that it forecasts better than both factor models and other existing BVAR methods.
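The asymmetric, per-coefficient shrinkage at the heart of a Minnesota-type prior can be sketched as closed-form ridge regression estimated equation by equation, with the own-lag coefficient penalized lightly and cross-lag coefficients heavily. This is a minimal illustration of the idea only, not the paper's adaptive estimator: here the shrinkage intensities are fixed by hand, whereas the paper estimates them from the data, and all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a small VAR(1), y_t = A y_{t-1} + e_t, as illustrative data.
n, T = 4, 200
A_true = np.diag([0.5, 0.4, 0.3, 0.2])
Y = np.zeros((T, n))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A_true.T + rng.normal(scale=0.1, size=n)

X, y = Y[:-1], Y[1:]  # lagged regressors and one-step-ahead targets

def minnesota_ridge(X, y_i, own_idx, lam_own=0.1, lam_cross=10.0):
    """Per-coefficient ridge: the own lag is shrunk lightly, cross lags
    heavily, mimicking the Minnesota prior's asymmetric shrinkage.
    (Intensities are hand-picked here; the paper estimates them.)"""
    penalties = np.full(X.shape[1], lam_cross)
    penalties[own_idx] = lam_own
    return np.linalg.solve(X.T @ X + np.diag(penalties), X.T @ y_i)

# Equation-by-equation estimation of the VAR coefficient matrix.
A_hat = np.vstack([minnesota_ridge(X, y[:, i], own_idx=i) for i in range(n)])
```

Because each equation is solved in closed form, the cost grows only linearly in the number of equations, which is what makes this style of estimator attractive when MCMC is infeasible.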

    Boosted p-Values for High-Dimensional Vector Autoregression

    Assessing the statistical significance of parameter estimates is an important step in high-dimensional vector autoregression modeling. Using the least-squares boosting method, we compute the p-value for each selected parameter at every boosting step in a linear model. The p-values are asymptotically valid and also adapt to the iterative nature of the boosting procedure. Our simulation experiment shows that the p-values can keep the false positive rate under control in high-dimensional vector autoregressions. In an application with more than 100 macroeconomic time series, we further show that the p-values not only select a sparser model with good prediction performance but also help control model stability. A companion R package, boostvar, is developed.
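Componentwise least-squares boosting, the engine behind this approach, can be sketched as follows: at each step, the single predictor most correlated with the current residual is fitted, and a small step of size nu is taken toward that fit. The per-step p-value below is a naive normal approximation for illustration only; the paper's contribution is an asymptotically valid p-value adapted to the boosting path, which this sketch does not implement.

```python
import numpy as np
from math import erfc, sqrt

def ls_boost(X, y, steps=50, nu=0.1):
    """Componentwise least-squares boosting. Each step fits the one
    predictor best correlated with the current residual and moves a
    fraction nu toward that univariate fit. The reported p-value is a
    naive two-sided normal approximation, purely for illustration."""
    n, p = X.shape
    beta = np.zeros(p)
    resid = y.astype(float).copy()
    path = []  # (selected index, naive p-value) at each step
    for _ in range(steps):
        scores = X.T @ resid                 # correlation with residual
        j = int(np.argmax(np.abs(scores)))
        xjxj = X[:, j] @ X[:, j]
        b_j = scores[j] / xjxj               # univariate LS coefficient
        s2 = np.sum((resid - b_j * X[:, j]) ** 2) / (n - 1)
        se = sqrt(s2 / xjxj)
        pval = erfc(abs(b_j / se) / sqrt(2)) # two-sided normal p-value
        beta[j] += nu * b_j
        resid -= nu * b_j * X[:, j]
        path.append((j, pval))
    return beta, path
```

The small step size nu is what lets the p-values "adapt to the iterative nature" of the procedure: a coefficient is built up gradually over many selections rather than set in one shot.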

    Dimension Reduction for High Dimensional Vector Autoregressive Models

    This paper aims to decompose a large dimensional vector autoregressive (VAR) model into two components, the first one being generated by a small-scale VAR and the second one being a white noise sequence. Hence, a reduced number of common factors generates the entire dynamics of the large system through a VAR structure. This modelling extends the common feature approach to high dimensional systems, and it differs from the dynamic factor models in which the idiosyncratic components can also embed a dynamic pattern. We show the conditions under which this decomposition exists, and we provide statistical tools to detect its presence in the data and to estimate the parameters of the underlying small-scale VAR model. We evaluate the practical value of the proposed methodology by simulations as well as by empirical applications on both economic and financial time series.
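A simple two-step illustration of this structure, under the assumption that the data really are driven by a small-scale VAR plus white noise: principal components recover the factor space, and OLS fits the small VAR on the estimated factors. This is only a sketch of the decomposition, not the paper's estimator or its detection test; all dimensions and parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate: two common factors follow a small VAR(1); the idiosyncratic
# component of each of the N observed series is white noise.
T, N, r = 300, 10, 2
A = np.array([[0.6, 0.1],
              [0.0, 0.5]])
F = np.zeros((T, r))
for t in range(1, T):
    F[t] = F[t - 1] @ A.T + rng.normal(size=r)
Lam = rng.normal(size=(N, r))                    # factor loadings
Y = F @ Lam.T + rng.normal(scale=0.3, size=(T, N))

# Step 1: principal components recover the factor space (up to rotation).
U, s, Vt = np.linalg.svd(Y - Y.mean(0), full_matrices=False)
F_hat = U[:, :r] * s[:r]

# Step 2: OLS fits the small-scale VAR(1) that drives the factors.
A_hat = np.linalg.lstsq(F_hat[:-1], F_hat[1:], rcond=None)[0].T
```

Because the factors are identified only up to rotation, A_hat is similar (in the matrix sense) to the true transition matrix rather than equal to it; its eigenvalues, which govern the system's dynamics, are what the two-step procedure recovers.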

    Nonparametric empirical Bayes and compound decision approaches to estimation of a high-dimensional vector of normal means

    We consider the classical problem of estimating a vector μ = (μ_1, ..., μ_n) based on independent observations Y_i ~ N(μ_i, 1), i = 1, ..., n. Suppose μ_i, i = 1, ..., n, are independent realizations from a completely unknown distribution G. We suggest an easily computed estimator μ̂ such that the ratio of its risk E(μ̂ - μ)² to that of the Bayes procedure approaches 1. A related compound decision result is also obtained. Our asymptotics is of a triangular array; that is, we allow the distribution G to depend on n. Thus, our theoretical asymptotic results are also meaningful in situations where the vector μ is sparse and the proportion of zero coordinates approaches 1. We demonstrate the performance of our estimator in simulations, emphasizing sparse setups. In "moderately sparse" situations, our procedure performs very well compared to known procedures tailored for sparse setups. It also adapts well to nonsparse situations. Published in the Annals of Statistics (doi:10.1214/08-AOS630) by the Institute of Mathematical Statistics.
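A standard nonparametric empirical Bayes estimator in this setting applies Tweedie's formula, μ̂_i = Y_i + f'(Y_i)/f(Y_i), where f is the marginal density of the observations, estimated here by a Gaussian kernel. This is a minimal sketch in the spirit of such estimators, not the paper's exact procedure; the bandwidth and the simulated sparse setup are illustrative, hand-picked choices.

```python
import numpy as np

def npeb_tweedie(y, h=0.3):
    """Tweedie-formula estimate mu_i = y_i + f'(y_i) / f(y_i), with the
    marginal density f estimated by a Gaussian kernel of bandwidth h
    (an illustrative, hand-picked choice)."""
    d = y[:, None] - y[None, :]             # pairwise differences y_i - y_j
    K = np.exp(-d ** 2 / (2 * h ** 2))      # unnormalized Gaussian kernel
    f = K.sum(axis=1)                       # proportional to density at y_i
    fprime = (-d / h ** 2 * K).sum(axis=1)  # proportional to its derivative
    return y + fprime / f

# Sparse example: most means are zero, a few are 5.
rng = np.random.default_rng(2)
mu = np.zeros(500)
mu[:25] = 5.0
y = mu + rng.normal(size=500)
mu_hat = npeb_tweedie(y)
```

The shared normalizing constant cancels in the ratio f'/f, so unnormalized kernel sums suffice. In the sparse regime the score f'/f is roughly -y near the null cluster, so the many zero means are shrunk strongly toward zero while the large signals are left near their observed values.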