Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions
We analyze a class of estimators based on convex relaxation for solving
high-dimensional matrix decomposition problems. The observations are noisy
realizations of a linear transformation of the sum of an
(approximately) low-rank matrix and a second matrix
endowed with a complementary form of low-dimensional structure;
this set-up includes many statistical models of interest, including factor
analysis, multi-task regression, and robust covariance estimation. We derive a
general theorem that bounds the Frobenius-norm error for an estimate of the
matrix pair obtained by solving a convex optimization
problem that combines the nuclear norm with a general decomposable regularizer.
Our results utilize a "spikiness" condition that is related to but milder than
singular vector incoherence. We specialize our general result to two cases that
have been studied in past work: low rank plus an entrywise sparse matrix, and
low rank plus a columnwise sparse matrix. For both models, our theory yields
non-asymptotic Frobenius error bounds for both deterministic and stochastic
noise matrices, and applies to matrices that can be exactly or
approximately low rank, and matrices that can be exactly or
approximately sparse. Moreover, for the case of stochastic noise matrices and
the identity observation operator, we establish matching lower bounds on the
minimax error. The sharpness of our predictions is confirmed by numerical
simulations.
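For the low-rank-plus-entrywise-sparse model with the identity observation operator, a minimal sketch of this class of estimators (not the authors' exact implementation; penalty weights, step size, and iteration count below are illustrative assumptions) is a proximal gradient method that combines singular-value soft-thresholding for the nuclear norm with entrywise soft-thresholding for the l1 penalty:

```python
import numpy as np

def svd_soft_threshold(A, tau):
    """Prox of tau * nuclear norm: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(A, tau):
    """Prox of tau * entrywise l1 norm."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def decompose(Y, lam=1.0, mu=0.1, step=0.5, iters=500):
    """Minimize 0.5*||Y - L - S||_F^2 + lam*||L||_* + mu*||S||_1."""
    L, S = np.zeros_like(Y), np.zeros_like(Y)
    for _ in range(iters):
        R = L + S - Y  # gradient of the smooth term w.r.t. both L and S
        L = svd_soft_threshold(L - step * R, step * lam)
        S = soft_threshold(S - step * R, step * mu)
    return L, S
```

The step size 0.5 is safe here because the joint gradient of the quadratic term over (L, S) has Lipschitz constant 2.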
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now-standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. Popular examples of
such priors include sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low rank (as a natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop toward the understanding of the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of ℓ2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
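As a concrete illustration, here is a minimal sketch of the forward-backward proximal splitting scheme mentioned above, specialized to the ℓ1 (Lasso) prior; the operator Phi, step-size rule, and iteration count are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

def forward_backward_l1(Phi, y, lam, iters=1000):
    """Minimize 0.5*||y - Phi @ x||^2 + lam*||x||_1 by forward-backward splitting."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2  # 1/L, L = Lipschitz const of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        grad = Phi.T @ (Phi @ x - y)          # forward (gradient) step on the smooth part
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward (prox) step
    return x
```

Swapping the soft-thresholding line for the proximal operator of another low-complexity regularizer (group soft-thresholding, singular-value thresholding, and so on) yields the corresponding scheme for that prior.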
A Geometric View on Constrained M-Estimators
We study the estimation error of constrained M-estimators, and derive
explicit upper bounds on the expected estimation error determined by the
Gaussian width of the constraint set. We consider both the case where the true
parameter lies on the boundary of the constraint set (matched constraint) and
the case where it lies strictly inside the constraint set (mismatched
constraint). For both cases, we derive novel universal
estimation error bounds for regression in a generalized linear model with the
canonical link function. Our error bound for the mismatched constraint case is
minimax optimal in terms of its dependence on the sample size for Gaussian
linear regression with the Lasso.
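The Gaussian width w(T) = E[sup_{t in T} <g, t>], with g a standard Gaussian vector, is the geometric quantity driving these bounds. A minimal Monte Carlo sketch for an illustrative choice of constraint set, the unit ℓ1 ball (where the supremum is ||g||_inf in closed form), might look like:

```python
import numpy as np

def gaussian_width_l1_ball(d, n_samples=10000, seed=0):
    """Monte Carlo estimate of w(B_1) = E sup_{||t||_1 <= 1} <g, t> = E ||g||_inf."""
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((n_samples, d))
    return np.abs(g).max(axis=1).mean()

print(gaussian_width_l1_ball(d=1000))  # grows like sqrt(2 * log d) in the dimension
```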
Combinatorial Penalties: Which structures are preserved by convex relaxations?
We consider the homogeneous and the non-homogeneous convex relaxations for
combinatorial penalty functions defined on support sets. Our study identifies
key differences in the tightness of the resulting relaxations through the
notion of the lower combinatorial envelope of a set-function along with new
necessary conditions for support identification. We then propose a general
adaptive estimator for convex monotone regularizers, and derive new sufficient
conditions for support recovery in the asymptotic setting.
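As one concrete instance of such a relaxation (an illustrative example, not the paper's general construction): for a nondecreasing submodular set-function F, the convex envelope of w -> F(supp(w)) on the ℓ∞ unit ball is the Lovász extension of F evaluated at |w|. A minimal sketch:

```python
import numpy as np

def lovasz_extension(F, w):
    """Evaluate the Lovasz extension of set-function F at |w|."""
    a = np.abs(w)
    order = np.argsort(-a)        # indices of |w| in decreasing order
    val, prev, support = 0.0, 0.0, []
    for idx in order:
        support.append(idx)
        cur = F(support)          # F evaluated on the growing support set
        val += a[idx] * (cur - prev)
        prev = cur
    return val

# Sanity check: the cardinality function F(S) = |S| recovers the l1 norm.
F_card = lambda S: float(len(S))
print(lovasz_extension(F_card, np.array([0.5, -2.0, 1.0])))  # 3.5 = ||w||_1
```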
Exponential Family Matrix Completion under Structural Constraints
We consider the matrix completion problem of recovering a structured matrix
from noisy and partial measurements. Recent works have proposed tractable
estimators with strong statistical guarantees for the case where the underlying
matrix is low-rank, and the measurements consist of a subset either of the
exact individual entries or of the entries perturbed by additive Gaussian
noise, a setting thus implicitly suited to thin-tailed continuous data.
Arguably, common applications of matrix completion require estimators for (a)
heterogeneous data types, such as skewed continuous, count, and binary data,
(b) heterogeneous noise models (beyond Gaussian), which capture varied
uncertainty in the measurements, and (c) heterogeneous structural constraints
beyond low rank, such as block sparsity or a superposition structure of
low rank plus elementwise sparseness, among others. In this paper, we provide
a broadly unified framework for generalized matrix completion by considering a
matrix completion setting wherein the matrix entries are sampled from any
member of the rich family of exponential family distributions, and we impose
general structural constraints on the underlying matrix, as captured by a
general regularizer. We propose a simple convex regularized M-estimator for
the generalized framework, and provide a unified and novel statistical
analysis for this general class of estimators. We finally corroborate our
theoretical results on simulated datasets.
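A minimal sketch in the spirit of this framework, for one exponential-family member (Poisson-distributed entries observed on a subset of positions given by a mask) with the nuclear norm as the structural regularizer; all names and parameter values are illustrative assumptions, not the paper's exact estimator:

```python
import numpy as np

def svd_soft_threshold(A, tau):
    """Prox of tau * nuclear norm: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def poisson_completion(X, mask, lam=1.0, step=0.1, iters=300):
    """Minimize the Poisson negative log-likelihood over observed entries,
    sum_{(i,j) observed} [exp(Theta_ij) - X_ij * Theta_ij],
    plus lam * ||Theta||_*, by proximal gradient descent."""
    Theta = np.zeros_like(X, dtype=float)
    for _ in range(iters):
        grad = mask * (np.exp(Theta) - X)  # gradient of the Poisson loss, zero off-mask
        Theta = svd_soft_threshold(Theta - step * grad, step * lam)
    return Theta
```

Replacing the Poisson loss with another exponential-family negative log-likelihood (logistic for binary data, Gaussian for continuous data) and the nuclear norm with a different regularizer gives other instances of the same template.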