Compressive Nonparametric Graphical Model Selection For Time Series
We propose a method for inferring the conditional independence graph (CIG) of a high-dimensional discrete-time Gaussian vector random process from finite-length observations. Our approach does not rely on a parametric model (such as, e.g., an autoregressive model) for the vector random process; rather, it only assumes certain spectral smoothness properties. The proposed inference scheme is compressive in that it works for sample sizes that are (much) smaller than the number of scalar process components. We provide analytical conditions for our method to correctly identify the CIG with high probability.
Comment: to appear in Proc. IEEE ICASSP 201
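The abstract's nonparametric route can be pictured with a toy sketch: estimate the spectral density matrix by averaging periodogram blocks (exploiting the assumed spectral smoothness), invert it, and threshold the resulting partial coherences to read off candidate edges of the CIG. The smoothing window and threshold below are illustrative choices, not the paper's, and the compressive guarantees are not reproduced here.

```python
import numpy as np

def estimate_cig(x, threshold=0.2):
    """Sketch: infer a conditional independence graph of a stationary
    Gaussian vector process by thresholding partial coherence, computed
    from a block-averaged periodogram estimate of the spectral density.
    `threshold` and the block width are illustrative tuning choices.
    x: array of shape (n_samples, p)."""
    n, p = x.shape
    # DFT of each (centered) component process
    X = np.fft.rfft(x - x.mean(axis=0), axis=0)
    n_freq = X.shape[0]
    # crude spectral smoothing: average periodogram matrices over
    # neighbouring frequencies (uses the assumed spectral smoothness)
    half = max(1, n_freq // 8)
    adj = np.zeros((p, p), dtype=bool)
    for k in range(0, n_freq, half):
        block = X[k:k + half]                        # (m, p)
        S = block.conj().T @ block / block.shape[0]  # spectral estimate
        S += 1e-6 * np.eye(p)                        # regularize
        K = np.linalg.inv(S)                         # inverse spectral density
        d = np.sqrt(np.real(np.diag(K)))
        partial = np.abs(K) / np.outer(d, d)         # partial coherence
        adj |= partial > threshold
    np.fill_diagonal(adj, False)
    return adj
```

An edge (i, j) is declared whenever the partial coherence exceeds the threshold at some frequency, mirroring the frequency-domain characterization of the CIG.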
Graphical LASSO Based Model Selection for Time Series
We propose a novel graphical model selection (GMS) scheme for high-dimensional stationary time series or discrete-time processes. The method is based on a natural generalization of the graphical LASSO (gLASSO), introduced originally for GMS based on i.i.d. samples, and estimates the conditional independence graph (CIG) of a time series from a finite-length observation. The gLASSO for time series is defined as the solution of an l1-regularized maximum (approximate) likelihood problem. We solve this optimization problem using the alternating direction method of multipliers (ADMM). Our approach is nonparametric as we do not assume a finite-dimensional (e.g., autoregressive) parametric model for the observed process. Instead, we require the process to be sufficiently smooth in the spectral domain. For Gaussian processes, we characterize the performance of our method theoretically by deriving an upper bound on the probability that our algorithm fails to correctly identify the CIG. Numerical experiments demonstrate the ability of our method to recover the correct CIG from a limited number of samples.
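For context, the i.i.d. gLASSO that the time-series version generalizes can itself be solved with the ADMM template the abstract mentions: alternate a closed-form eigendecomposition step, elementwise soft-thresholding of the off-diagonal entries, and a dual update. The sketch below uses illustrative values for the penalty, step size rho, and iteration count, none of which are taken from the paper.

```python
import numpy as np

def graphical_lasso_admm(S, lam, rho=1.0, n_iter=200):
    """Sketch of ADMM for the i.i.d. graphical LASSO:
    minimize -logdet(Theta) + tr(S @ Theta) + lam * ||Theta||_1 (off-diag),
    where S is the sample covariance. Step size and stopping rule are
    illustrative, not tuned."""
    p = S.shape[0]
    Z = np.eye(p)
    U = np.zeros((p, p))
    for _ in range(n_iter):
        # Theta-update: closed form via eigendecomposition of rho*(Z-U) - S
        w, V = np.linalg.eigh(rho * (Z - U) - S)
        d = (w + np.sqrt(w**2 + 4 * rho)) / (2 * rho)
        Theta = V @ np.diag(d) @ V.T
        # Z-update: soft-threshold off-diagonal entries of Theta + U
        A = Theta + U
        Z = np.sign(A) * np.maximum(np.abs(A) - lam / rho, 0.0)
        np.fill_diagonal(Z, np.diagonal(A))   # diagonal is not penalized
        # dual-variable update
        U = U + Theta - Z
    return Z
```

The time-series version in the paper replaces the single sample covariance with spectral estimates, but the same splitting structure applies.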
Regularization and Bayesian Learning in Dynamical Systems: Past, Present and Future
Regularization and Bayesian methods for system identification have been repopularized in recent years, and have proved to be competitive w.r.t. classical parametric approaches. In this paper we shall make an attempt to illustrate how the use of regularization in system identification has evolved over the years, starting from the early contributions in the Automatic Control as well as the Econometrics and Statistics literature. In particular, we shall discuss some fundamental issues, such as compound estimation problems and exchangeability, which play an important role in regularization and Bayesian approaches, as also illustrated in early publications in Statistics. The historical and foundational issues will be given more emphasis (and space), at the expense of the more recent developments, which are only briefly discussed. The main reason for this choice is that, while the recent literature is readily available and surveys have already been published on the subject, in the author's opinion a clear link with past work had not been completely clarified.
Comment: Plenary Presentation at the IFAC SYSID 2015. Submitted to Annual Reviews in Control
Covariate assisted screening and estimation
Consider a linear model Y = Xβ + z, where z ∼ N(0, I_n). The
vector β is unknown but is sparse in the sense that most of its
coordinates are 0. The main interest is to separate its nonzero coordinates
from the zero ones (i.e., variable selection). Motivated by examples in
long-memory time series (Fan and Yao [Nonlinear Time Series: Nonparametric and
Parametric Methods (2003) Springer]) and the change-point problem (Bhattacharya
[In Change-Point Problems (South Hadley, MA, 1992) (1994) 28-56 IMS]), we are
primarily interested in the case where the Gram matrix is nonsparse but
sparsifiable by a finite order linear filter. We focus on the regime where
signals are both rare and weak so that successful variable selection is very
challenging but is still possible. We approach this problem by a new procedure
called the covariate assisted screening and estimation (CASE). CASE first uses
a linear filtering to reduce the original setting to a new regression model
where the corresponding Gram (covariance) matrix is sparse. The new covariance
matrix induces a sparse graph, which guides us to conduct multivariate
screening without visiting all the submodels. By interacting with the signal
sparsity, the graph enables us to decompose the original problem into many
separated small-size subproblems (if only we know where they are!). Linear
filtering also induces a so-called problem of information leakage, which can be
overcome by the newly introduced patching technique. Together, these give rise
to CASE, which is a two-stage screen and clean [Fan and Song Ann. Statist. 38
(2010) 3567-3604; Wasserman and Roeder Ann. Statist. 37 (2009) 2178-2201]
procedure, where we first identify candidates of these submodels by patching and screening, and then re-examine each candidate to remove false positives.
Comment: Published at http://dx.doi.org/10.1214/14-AOS1243 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
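A toy illustration of the filtering idea (not the paper's actual filter or its patching step): under a change-point design, whose Gram matrix is completely dense, applying a first-order difference filter to the regression makes the Gram matrix of the filtered design sparse, which is the setting CASE's screening stage exploits.

```python
import numpy as np

def sparsify_gram(X):
    """Illustrates the first stage of CASE with a hypothetical choice of
    filter: apply a finite-order linear filter (here a first difference)
    to the rows of the design, so the filtered design's Gram matrix is
    sparse. X: design matrix of shape (n, p)."""
    n = X.shape[0]
    # first-difference filter as an (n-1) x n matrix
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
    return D @ X

# change-point design: column j jumps from 0 to 1 at time j
n = 50
X = (np.arange(n)[:, None] >= np.arange(n)[None, :]).astype(float)
G = X.T @ X        # Gram matrix of the raw design: completely dense
Xf = sparsify_gram(X)
Gf = Xf.T @ Xf     # Gram matrix of the filtered design: sparse (diagonal)
```

Here every entry of G is nonzero, while Gf has only n - 1 nonzero entries; in the paper, the sparse Gram matrix induces the sparse graph that guides the multivariate screening.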