Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions
We analyze a class of estimators based on convex relaxation for solving
high-dimensional matrix decomposition problems. The observations are noisy
realizations of a linear transformation of the sum of an
(approximately) low rank matrix and a second matrix
endowed with a complementary form of low-dimensional structure;
this set-up includes many statistical models of interest, including factor
analysis, multi-task regression, and robust covariance estimation. We derive a
general theorem that bounds the Frobenius norm error for an estimate of the
matrix pair obtained by solving a convex optimization
problem that combines the nuclear norm with a general decomposable regularizer.
Our results utilize a "spikiness" condition that is related to but milder than
singular vector incoherence. We specialize our general result to two cases that
have been studied in past work: low rank plus an entrywise sparse matrix, and
low rank plus a columnwise sparse matrix. For both models, our theory yields
non-asymptotic Frobenius error bounds for both deterministic and stochastic
noise matrices, and applies to matrices that can be exactly or
approximately low rank, and matrices that can be exactly or
approximately sparse. Moreover, for the case of stochastic noise matrices and
the identity observation operator, we establish matching lower bounds on the
minimax error. The sharpness of our predictions is confirmed by numerical
simulations.
Comment: 41 pages, 2 figures
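For illustration only, the display below sketches the general shape of the convex program the abstract describes: a squared Frobenius-norm data-fit term, the nuclear norm on the (approximately) low rank component, and a decomposable regularizer R on the second component (for example the elementwise l1 norm in the low rank plus entrywise sparse case). The symbols Y, \mathfrak{X}, \lambda, \mu, \alpha, d_1, d_2 and the exact form of the spikiness constraint are notational assumptions made here, not quoted from the paper.

% Schematic convex program (notation assumed for illustration, not taken from the paper):
% Theta : (approximately) low rank component, penalized by the nuclear norm
% Gamma : complementary structured component (e.g. entrywise sparse), penalized by R
% \mathfrak{X}(.) : linear observation operator; Y : noisy observations
% The elementwise sup-norm bound plays the role of a "spikiness" constraint.
\[
  (\widehat{\Theta}, \widehat{\Gamma}) \;\in\;
  \arg\min_{\Theta,\,\Gamma}\;
    \tfrac{1}{2}\bigl\| Y - \mathfrak{X}(\Theta + \Gamma) \bigr\|_F^2
    \;+\; \lambda\,\|\Theta\|_{*}
    \;+\; \mu\, R(\Gamma)
  \quad \text{subject to}\quad
  \|\Theta\|_{\infty} \le \frac{\alpha}{\sqrt{d_1 d_2}}.
\]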