165 research outputs found
Exponential Family Matrix Completion under Structural Constraints
We consider the matrix completion problem of recovering a structured matrix
from noisy and partial measurements. Recent works have proposed tractable
estimators with strong statistical guarantees for the case where the underlying
matrix is low-rank, and the measurements consist of a subset, either of the
exact individual entries, or of the entries perturbed by additive Gaussian
noise, which is thus implicitly suited for thin-tailed continuous data.
Arguably, common applications of matrix completion require estimators for (a)
heterogeneous data types, such as skewed-continuous, count, binary, etc., (b)
heterogeneous noise models (beyond Gaussian), which capture varied
uncertainty in the measurements, and (c) heterogeneous structural constraints
beyond low-rank, such as block-sparsity, or a superposition structure of
low-rank plus elementwise sparsity, among others. In this paper, we provide
a unified framework for generalized matrix completion by considering a
matrix completion setting wherein the matrix entries are sampled from any
member of the rich family of exponential family distributions, and by imposing
general structural constraints on the underlying matrix, as captured by a
general regularizer $\mathcal{R}(\cdot)$. We propose a simple convex regularized
$M$-estimator for the generalized framework, and provide a unified and novel
statistical analysis for this general class of estimators. We finally
corroborate our theoretical results on simulated datasets.
Comment: 20 pages, 9 figures
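As an illustration, the Gaussian-noise, nuclear-norm special case of such a regularized $M$-estimator can be solved by proximal gradient descent with singular value thresholding. The sketch below is a minimal instance of that special case, not the paper's general framework; all function names and parameter values are my own.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete(Y, mask, lam=0.05, step=1.0, iters=200):
    """Proximal gradient for min_Theta 0.5*||P_Omega(Theta - Y)||_F^2 + lam*||Theta||_*,
    i.e. the Gaussian-noise / low-rank instance of the generalized framework."""
    Theta = np.zeros_like(Y)
    for _ in range(iters):
        grad = mask * (Theta - Y)              # gradient only involves observed entries
        Theta = svt(Theta - step * grad, step * lam)
    return Theta

# Toy example: rank-1 ground truth with roughly half the entries observed.
rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(8), rng.standard_normal(8))
mask = (rng.random(M.shape) < 0.5).astype(float)
Theta_hat = complete(mask * M, mask)
```

The same proximal template handles other exponential family losses by swapping the gradient, and other structures by swapping the proximal operator.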
High-dimensional Sparse Inverse Covariance Estimation using Greedy Methods
In this paper we consider the task of estimating the non-zero pattern of the
sparse inverse covariance matrix of a zero-mean Gaussian random vector from a
set of iid samples. Note that this is also equivalent to recovering the
underlying graph structure of a sparse Gaussian Markov Random Field (GMRF). We
present two novel greedy approaches to solving this problem. The first
estimates the non-zero covariates of the overall inverse covariance matrix
using a series of global forward and backward greedy steps. The second
estimates the neighborhood of each node in the graph separately, again using
greedy forward and backward steps, and combines the intermediate neighborhoods
to form an overall estimate. The principal contribution of this paper is a
rigorous analysis of the sparsistency, or consistency in recovering the
sparsity pattern of the inverse covariance matrix. Surprisingly, we show that
both the local and global greedy methods learn the full structure of the model
with high probability given just $O(d \log(p))$ samples, which is a
\emph{significant} improvement over the state-of-the-art $\ell_1$-regularized
Gaussian MLE (Graphical Lasso), which requires $O(d^2 \log(p))$ samples. Moreover,
the restricted eigenvalue and smoothness conditions imposed by our greedy
methods are much weaker than the strong irrepresentable conditions required by
the $\ell_1$-regularization based methods. We corroborate our results with
extensive simulations and examples, comparing our local and global greedy
methods to the $\ell_1$-regularized Gaussian MLE, as well as the Neighborhood
Greedy method to nodewise $\ell_1$-regularized linear regression
(Neighborhood Lasso).
Comment: Accepted to AISTATS 2012 for Oral Presentation
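As a rough illustration of the nodewise variant, a forward greedy neighborhood estimate regresses each node on the rest, adding one covariate at a time while the fit improves. The sketch below shows forward steps only (the paper's method also interleaves backward steps); names, thresholds, and the toy data are my own.

```python
import numpy as np

def forward_greedy_neighborhood(X, j, eps=1e-2, max_steps=5):
    """Forward greedy steps of nodewise selection: regress column j of the data
    matrix X on the remaining columns, greedily adding whichever covariate most
    reduces the mean squared residual, until the improvement drops below eps."""
    n, p = X.shape
    y = X[:, j]
    active, rest = [], [k for k in range(p) if k != j]
    err = np.mean(y ** 2)
    for _ in range(max_steps):
        best_k, best_err = None, err
        for k in rest:
            A = X[:, active + [k]]
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            e = np.mean((y - A @ beta) ** 2)
            if e < best_err:
                best_k, best_err = k, e
        if best_k is None or err - best_err < eps:
            break
        active.append(best_k); rest.remove(best_k); err = best_err
    return set(active)

# Gaussian chain graph x0 -> x1 -> x2 plus an independent x3:
# the neighborhood of node 1 should be {0, 2}.
rng = np.random.default_rng(1)
n = 2000
x0 = rng.standard_normal(n)
x1 = 0.8 * x0 + 0.3 * rng.standard_normal(n)
x2 = 0.8 * x1 + 0.3 * rng.standard_normal(n)
X = np.column_stack([x0, x1, x2, rng.standard_normal(n)])
print(forward_greedy_neighborhood(X, 1))
```

The global variant works the same way but adds and removes entries of the full inverse covariance matrix rather than per-node regression coefficients.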
High-dimensional Ising model selection using $\ell_1$-regularized logistic regression
We consider the problem of estimating the graph associated with a binary
Ising Markov random field. We describe a method based on $\ell_1$-regularized
logistic regression, in which the neighborhood of any given node is estimated
by performing logistic regression subject to an $\ell_1$-constraint. The method
is analyzed under high-dimensional scaling in which both the number of nodes $p$
and maximum neighborhood size $d$ are allowed to grow as a function of the
number of observations $n$. Our main results provide sufficient conditions on
the triple $(n, p, d)$ and the model parameters for the method to succeed in
consistently estimating the neighborhood of every node in the graph
simultaneously. With coherence conditions imposed on the population Fisher
information matrix, we prove that consistent neighborhood selection can be
obtained for sample sizes $n = \Omega(d^3 \log p)$ with exponentially decaying
error. When these same conditions are imposed directly on the sample matrices,
we show that a reduced sample size of $n = \Omega(d^2 \log p)$ suffices for the
method to estimate neighborhoods consistently. Although this paper focuses on
binary graphical models, we indicate how a generalization of the method of
the paper would apply to general discrete Markov random fields.
Comment: Published at http://dx.doi.org/10.1214/09-AOS691 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
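A minimal sketch of this neighborhood approach (my own proximal-gradient implementation, not the paper's code): regress each spin on the remaining spins with an $\ell_1$-penalized logistic loss, and read the estimated neighborhood off the nonzero coefficients.

```python
import numpy as np

def logistic_lasso(A, y, lam=0.02, step=0.25, iters=500):
    """ISTA for l1-regularized logistic regression with labels y in {-1, +1}:
    min_w (1/n) * sum_i log(1 + exp(-y_i * a_i.w)) + lam * ||w||_1."""
    n, p = A.shape
    w = np.zeros(p)
    for _ in range(iters):
        margins = y * (A @ w)
        grad = -(A.T @ (y / (1.0 + np.exp(margins)))) / n
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft threshold
    return w

# Ising chain on three spins with coupling theta, sampled sequentially:
# P(s_{k+1} = +1 | s_k) = 1 / (1 + exp(-2 * theta * s_k)).
rng = np.random.default_rng(2)
n, theta = 4000, 1.0
s0 = rng.choice([-1.0, 1.0], n)
s1 = np.where(rng.random(n) < 1 / (1 + np.exp(-2 * theta * s0)), 1.0, -1.0)
s2 = np.where(rng.random(n) < 1 / (1 + np.exp(-2 * theta * s1)), 1.0, -1.0)

# Neighborhood of spin 1: regress s1 on [s0, s2]; both coefficients come out
# clearly nonzero, identifying spins 0 and 2 as its neighbors.
w = logistic_lasso(np.column_stack([s0, s2]), s1)
print(np.nonzero(np.abs(w) > 0.1)[0])  # indices (within [s0, s2]) of selected neighbors
```

Repeating this regression for every node and combining the estimated neighborhoods (e.g. by union or intersection) yields the full graph estimate.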
Sparse Additive Models
We present a new class of methods for high-dimensional nonparametric
regression and classification called sparse additive models (SpAM). Our methods
combine ideas from sparse linear modeling and additive nonparametric
regression. We derive an algorithm for fitting the models that is practical and
effective even when the number of covariates is larger than the sample size.
SpAM is closely related to the COSSO model of Lin and Zhang (2006), but
decouples smoothing and sparsity, enabling the use of arbitrary nonparametric
smoothers. An analysis of the theoretical properties of SpAM is given. We also
study a greedy estimator that is a nonparametric version of forward stepwise
regression. Empirical results on synthetic and real data are presented, showing
that SpAM can be effective in fitting sparse nonparametric models in
high-dimensional data.
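A toy illustration of this decoupling of smoothing and sparsity (my own simplified backfitting loop, here with a Nadaraya-Watson smoother, though SpAM allows any smoother): each component fit is smoothed, then its empirical norm is soft-thresholded, which zeroes out irrelevant covariates.

```python
import numpy as np

def smooth(x, r, h=0.3):
    """Nadaraya-Watson kernel smoother of responses r against covariate x."""
    W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return (W @ r) / W.sum(axis=1)

def spam_backfit(X, y, lam=0.15, iters=20):
    """Sketch of SpAM backfitting: smooth each coordinate's partial residual,
    then soft-threshold the component's empirical norm (sparsity step)."""
    n, p = X.shape
    F = np.zeros((n, p))                        # componentwise fits f_j(x_ij)
    for _ in range(iters):
        for j in range(p):
            R = y - F.sum(axis=1) + F[:, j]     # partial residual for coordinate j
            P = smooth(X[:, j], R)              # smoothing step
            s = np.sqrt(np.mean(P ** 2))
            F[:, j] = max(0.0, 1.0 - lam / s) * P if s > 0 else 0.0
            F[:, j] -= F[:, j].mean()           # center each component
    return F

# y depends (nonlinearly) on x0 only; the x1 component is thresholded away.
rng = np.random.default_rng(3)
n = 300
X = rng.uniform(-2, 2, size=(n, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)
F = spam_backfit(X, y)
```

Because the threshold acts on each component's norm rather than on individual coefficients, any nonparametric smoother can be plugged into the smoothing step unchanged.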
- …