
    Interpretable Vector AutoRegressions with Exogenous Time Series

    The Vector AutoRegressive (VAR) model is fundamental to the study of multivariate time series. Although VAR models have been intensively investigated, practitioners often show more interest in VARX models, which incorporate the impact of unmodeled exogenous variables (X) into the VAR. However, since the parameter space grows quadratically with the number of time series, estimation quickly becomes challenging. While several proposals have been made to sparsely estimate large VAR models, the estimation of large VARX models remains under-explored. Moreover, these sparse proposals typically involve a lasso-type penalty and do not incorporate lag selection into the estimation procedure; as a consequence, the resulting models may be difficult to interpret. In this paper, we propose a lag-based hierarchically sparse estimator, called "HVARX", for large VARX models. We illustrate the usefulness of HVARX on a cross-category management marketing application. Our results show that it provides a highly interpretable model and improves out-of-sample forecast accuracy compared to a lasso-type approach.
    Comment: Presented at the NIPS 2017 Symposium on Interpretable Machine Learning
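    The abstract does not reproduce the model, so the following is only a sketch in our own notation: a VARX model with p endogenous lags and s exogenous lags, together with the kind of nested lag penalty that makes a lag-based hierarchical estimator possible.

```latex
% Sketch of a VARX(p, s) model: k endogenous series y_t, m exogenous series x_t,
% coefficient matrices B_l (k x k) and Theta_j (k x m), innovation e_t.
y_t \;=\; \sum_{\ell=1}^{p} B_{\ell}\, y_{t-\ell} \;+\; \sum_{j=1}^{s} \Theta_{j}\, x_{t-j} \;+\; e_t .
% A lag-based hierarchical penalty uses nested groups over lags, for instance
%   \sum_{\ell=1}^{p} \lVert (B_{\ell}, \dots, B_{p}) \rVert_2 ,
% so that excluding lag \ell forces all deeper lags to zero as well; the exact group
% structure used by HVARX (and its treatment of the \Theta_j) is defined in the paper.
```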

    Regularized estimation of linear functionals of precision matrices for high-dimensional time series

    This paper studies a Dantzig-selector type regularized estimator for linear functionals of high-dimensional linear processes. Explicit rates of convergence of the proposed estimator are obtained, covering the broad regime from i.i.d. samples to long-range dependent time series and from sub-Gaussian innovations to those with mild polynomial moments. The convergence rates are shown to depend on the degree of temporal dependence and the moment conditions of the underlying linear processes. The Dantzig-selector estimator is applied to sparse Markowitz portfolio allocation and to optimal linear prediction for time series, for which ratio consistency relative to an oracle estimator is established. The effect of dependence and innovation moment conditions is further illustrated in a simulation study. Finally, the regularized estimator is applied to classifying cognitive states on a real fMRI dataset and to portfolio optimization on a financial dataset.
    Comment: 44 pages, 4 figures
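    As a minimal sketch of the general idea (not the paper's estimator or notation): a Dantzig-selector-type estimate of a linear functional w = Sigma^{-1} b of a precision matrix minimizes the l1 norm of w subject to an l-infinity constraint on Sigma_hat w - b, which can be solved as a linear program. The function name, the toy data, and the choice of the tuning parameter below are illustrative assumptions.

```python
# Dantzig-selector-type sketch:  min ||w||_1  s.t.  ||Sigma_hat @ w - b||_inf <= lam,
# solved as a linear program. Names and the tuning choice are illustrative only.
import numpy as np
from scipy.optimize import linprog

def dantzig_linear_functional(sigma_hat, b, lam):
    """Estimate w = Sigma^{-1} b via min ||w||_1 s.t. ||sigma_hat @ w - b||_inf <= lam."""
    p = sigma_hat.shape[0]
    eye = np.eye(p)
    # Variables: [w (free), t (>= |w|)]; objective is sum(t) = ||w||_1 at the optimum.
    c = np.concatenate([np.zeros(p), np.ones(p)])
    A_ub = np.block([
        [ eye,       -eye],               #  w - t <= 0
        [-eye,       -eye],               # -w - t <= 0
        [ sigma_hat,  np.zeros((p, p))],  #  sigma_hat w <= b + lam
        [-sigma_hat,  np.zeros((p, p))],  # -sigma_hat w <= -b + lam
    ])
    b_ub = np.concatenate([np.zeros(2 * p), b + lam, -b + lam])
    bounds = [(None, None)] * p + [(0, None)] * p
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

# Toy usage: estimate the first column of the precision matrix (b = e_1).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))
sigma_hat = np.cov(X, rowvar=False)
w_hat = dantzig_linear_functional(sigma_hat, b=np.eye(20)[:, 0], lam=0.1)
```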

    Transposable regularized covariance models with an application to missing data imputation

    Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, the columns, or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and nonsingular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer greater flexibility.
    Comment: Published at http://dx.doi.org/10.1214/09-AOAS314 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
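    As a rough sketch of the model class (our notation; the additive mean parametrization below is one reading of "separate mean vectors for rows and columns", not necessarily the paper's exact definition):

```latex
% Mean-restricted matrix-variate normal, one possible parametrization (notation ours):
% data X in R^{n x p}, row mean nu in R^n, column mean mu in R^p,
% row covariance Sigma (n x n), column covariance Delta (p x p), M_{ij} = nu_i + mu_j.
f(X) \;\propto\; |\Sigma|^{-p/2}\,|\Delta|^{-n/2}\,
  \exp\!\Big( -\tfrac{1}{2}\,\operatorname{tr}\!\big[ \Sigma^{-1} (X - M)\, \Delta^{-1} (X - M)^{\top} \big] \Big).
% The transposable regularized covariance models then maximize this likelihood with
% additive penalties \lambda_r P(\Sigma^{-1}) + \lambda_c P(\Delta^{-1}) on the two
% inverse covariance matrices, as described in the abstract.
```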

    Estimating Time-Varying Effective Connectivity in High-Dimensional fMRI Data Using Regime-Switching Factor Models

    Recent studies of dynamic brain connectivity rely on sliding-window analysis or time-varying coefficient models, which are unable to capture both smooth and abrupt changes simultaneously. Emerging evidence suggests state-related changes in brain connectivity, in which the dependence structure alternates between a finite number of latent states or regimes. Another challenge is inference of full-brain networks with a large number of nodes. We employ a Markov-switching dynamic factor model in which the state-driven time-varying connectivity regimes of high-dimensional fMRI data are characterized by lower-dimensional common latent factors following a regime-switching process. This enables reliable, data-adaptive estimation of change-points between connectivity regimes and of the massive dependencies associated with each regime. We consider a switching VAR to quantify the dynamic effective connectivity. We propose a three-step estimation procedure: (1) extracting the factors using principal component analysis (PCA); (2) identifying dynamic connectivity states by fitting factor-based switching vector autoregressive (VAR) models in a state-space formulation via the Kalman filter and the expectation-maximization (EM) algorithm; and (3) constructing the high-dimensional connectivity metrics for each state based on subspace estimates. Simulation results show that the proposed estimator outperforms K-means clustering of time-windowed coefficients, providing more accurate estimation of regime dynamics and connectivity metrics in high-dimensional settings. Applications to resting-state fMRI data identify dynamic changes in brain states during rest and reveal distinct directed connectivity patterns and modular organization in resting-state networks across states.
    Comment: 21 pages
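    A minimal sketch of step (1) of the three-step procedure above, extracting low-dimensional factors from a (time x regions) fMRI matrix via PCA; steps (2)-(3), fitting the switching VAR on the factors with a Kalman filter / EM and mapping state-specific estimates back to full-brain connectivity, are only outlined in comments. The function name, toy data, and number of factors are illustrative assumptions.

```python
# Step (1): PCA factor extraction from a T x N data matrix (T time points, N regions).
# Steps (2)-(3) of the procedure (switching VAR on the factors via Kalman filter / EM,
# then state-specific connectivity reconstruction) are not implemented in this sketch.
import numpy as np

def pca_factors(Y, n_factors):
    """Return (T x r) factor scores and (N x r) loadings from a T x N data matrix Y."""
    Yc = Y - Y.mean(axis=0, keepdims=True)           # center each region's time series
    U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
    loadings = Vt[:n_factors].T                      # principal directions (N x r)
    factors = Yc @ loadings                          # factor scores (T x r)
    return factors, loadings

# Toy usage: 300 time points, 90 brain regions, 5 common factors.
rng = np.random.default_rng(1)
Y = rng.standard_normal((300, 90))
factors, loadings = pca_factors(Y, n_factors=5)
```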