Link Prediction in Graphs with Autoregressive Features
In this paper, we consider the problem of link prediction in time-evolving
graphs. We assume that certain graph features, such as the node degree, follow
a vector autoregressive (VAR) model and we propose to use this information to
improve the accuracy of prediction. Our strategy involves a joint optimization
procedure over the space of adjacency matrices and VAR matrices which takes
into account both sparsity and low rank properties of the matrices. Oracle
inequalities are derived and illustrate the trade-offs in the choice of
smoothing parameters when modeling the joint effect of sparsity and low rank
property. The estimate is computed efficiently using proximal methods through a
generalized forward-backward algorithm.
Comment: NIPS 201
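The sparse-plus-low-rank penalty handled by the generalized forward-backward algorithm can be illustrated with its two proximal building blocks: soft-thresholding for the l1 (sparsity) term and singular-value thresholding for the nuclear-norm (low-rank) term. The sketch below is not the authors' implementation; the simple quadratic data-fit term, step size, and penalty weights are illustrative assumptions.

```python
import numpy as np

def prox_l1(Z, t):
    """Soft-thresholding: prox of t * ||.||_1, encouraging sparsity."""
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def prox_nuclear(Z, t):
    """Singular-value thresholding: prox of t * nuclear norm, low rank."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

def generalized_forward_backward(M, lam1, lam2, step=1.0, n_iter=200):
    """Minimize 0.5*||X - M||_F^2 + lam1*||X||_1 + lam2*||X||_*
    (the data-fit term here is an illustrative stand-in for the paper's
    objective). One auxiliary variable per non-smooth term; the iterate
    is their average, following the generalized forward-backward scheme."""
    z1, z2 = np.zeros_like(M), np.zeros_like(M)
    x = np.zeros_like(M)
    for _ in range(n_iter):
        grad = x - M  # gradient of the smooth data-fit term
        z1 += prox_l1(2 * x - z1 - step * grad, 2 * step * lam1) - x
        z2 += prox_nuclear(2 * x - z2 - step * grad, 2 * step * lam2) - x
        x = 0.5 * (z1 + z2)
    return x
```

On a matrix with a single large entry both penalties act as scalar shrinkage, so the minimizer is easy to check by hand: an entry of 5 with lam1 = 1 and lam2 = 0.5 shrinks to 3.5.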
mgm: Estimating Time-Varying Mixed Graphical Models in High-Dimensional Data
We present the R-package mgm for the estimation of k-order Mixed Graphical
Models (MGMs) and mixed Vector Autoregressive (mVAR) models in high-dimensional
data. These are a useful extensions of graphical models for only one variable
type, since data sets consisting of mixed types of variables (continuous,
count, categorical) are ubiquitous. In addition, we allow to relax the
stationarity assumption of both models by introducing time-varying versions
MGMs and mVAR models based on a kernel weighting approach. Time-varying models
offer a rich description of temporally evolving systems and allow to identify
external influences on the model structure such as the impact of interventions.
We describe the background of all implemented methods and provide fully
reproducible examples that illustrate how to use the package.
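The kernel weighting idea behind the time-varying models can be illustrated outside the package: observations are downweighted by their temporal distance from an estimation point, and a weighted model is fit at each point; sweeping the estimation point over time traces out time-varying parameters. A minimal sketch in Python (mgm itself is an R package; the function names, Gaussian kernel, and bandwidth here are illustrative assumptions, shown for a plain linear model rather than an MGM):

```python
import numpy as np

def kernel_weights(n, t0, bandwidth):
    """Gaussian kernel weights for n equally spaced time points, centered
    at the normalized estimation point t0 in [0, 1]."""
    t = np.linspace(0.0, 1.0, n)
    w = np.exp(-0.5 * ((t - t0) / bandwidth) ** 2)
    return w / w.sum()

def local_coefficients(X, y, t0, bandwidth=0.1):
    """Kernel-weighted least squares: linear-model coefficients 'at' time
    t0, solving the weighted normal equations (X'WX) b = X'Wy."""
    w = kernel_weights(len(y), t0, bandwidth)
    Xw = X * w[:, None]  # reweight each observation row
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)
```

Fitting at t0 near 0 and t0 near 1 on data whose true coefficient drifts over time recovers different local estimates, which is the sense in which the model structure is allowed to vary.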
A Direct Estimation of High Dimensional Stationary Vector Autoregressions
The vector autoregressive (VAR) model is a powerful tool in modeling complex
time series and has been exploited in many fields. However, fitting a
high-dimensional VAR model poses some unique challenges: On one hand, the
dimensionality, caused by modeling a large number of time series and higher
order autoregressive processes, is usually much higher than the time series
length; On the other hand, the temporal dependence structure in the VAR model
gives rise to extra theoretical challenges. In high dimensions, one popular
approach is to assume the transition matrix is sparse and fit the VAR model
using the "least squares" method with a lasso-type penalty. In this manuscript,
we propose an alternative way of estimating the VAR model. The main idea is,
by exploiting the temporal dependence structure, to formulate the estimation
problem as a linear program. There is an immediate advantage of the proposed
approach over the lasso-type estimators: The estimation equation can be
decomposed into multiple sub-equations and accordingly can be efficiently
solved in a parallel fashion. In addition, our method brings new theoretical
insights into the VAR model analysis. So far the theoretical results developed
in high dimensions (e.g., Song and Bickel (2011) and Kock and Callot (2012))
mainly pose assumptions on the design matrix of the formulated regression
problems. Such conditions concern the transition matrices only indirectly and
are not transparent. In contrast, our results show that the operator norm of the
transition matrices plays an important role in estimation accuracy. We provide
explicit rates of convergence for both estimation and prediction. In addition,
we provide thorough experiments on both synthetic and real-world equity data to
show that there are empirical advantages of our method over the lasso-type
estimators in both parameter estimation and forecasting.
Comment: 36 pages, 3 figur
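The decomposition into parallel sub-equations can be sketched concretely for a VAR(1) process: the Yule-Walker relation gives Sigma_1 = A Sigma_0, so each row a_i of the transition matrix A satisfies Sigma_0 a_i = (Sigma_1)_{i,:}' independently, and an l1-minimizing estimate under a sup-norm constraint on the residual is a small linear program per row. The sketch below is not the paper's code; the constraint level lam and the use of scipy's LP solver are illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

def estimate_row(S0, s1, lam):
    """Solve  min ||a||_1  s.t.  ||S0 @ a - s1||_inf <= lam
    as an LP via the split a = u - v with u >= 0, v >= 0."""
    d = S0.shape[1]
    c = np.ones(2 * d)                        # ||a||_1 = sum(u) + sum(v)
    A_ub = np.vstack([np.hstack([S0, -S0]),   #  S0 a - s1 <= lam
                      np.hstack([-S0, S0])])  # -S0 a + s1 <= lam
    b_ub = np.concatenate([lam + s1, lam - s1])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0.0, None),
                  method="highs")
    u, v = res.x[:d], res.x[d:]
    return u - v

def estimate_transition(S0, S1, lam):
    """Estimate the VAR(1) transition matrix row by row from the lag-0 and
    lag-1 covariances; the rows are independent sub-problems, so this loop
    parallelizes trivially."""
    return np.vstack([estimate_row(S0, S1[i, :], lam)
                      for i in range(S1.shape[0])])
```

When Sigma_0 is the identity, the LP reduces to componentwise soft-thresholding of Sigma_1 at level lam, which gives an easy sanity check of the solver.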