Sparse Matrix Inversion with Scaled Lasso
We propose a new method of learning a sparse nonnegative-definite target
matrix. Our primary example of the target matrix is the inverse of a population
covariance or correlation matrix. The algorithm first estimates each column of
the target matrix by the scaled Lasso and then adjusts the matrix estimator to
be symmetric. The penalty level of the scaled Lasso for each column is
completely determined by data via convex minimization, without using
cross-validation.
We prove that this scaled Lasso method guarantees the fastest proven rate of
convergence in the spectrum norm under conditions of weaker form than those in
the existing analyses of other regularized algorithms, and has a faster
guaranteed rate of convergence when the ratio of the ℓ1 and spectrum
norms of the target inverse matrix diverges to infinity. A simulation study
demonstrates the computational feasibility and superb performance of the
proposed method.
Our analysis also provides new performance bounds for the Lasso and scaled
Lasso to guarantee higher concentration of the error at a smaller threshold
level than previous analyses, and to allow the use of the union bound in
column-by-column applications of the scaled Lasso without an adjustment of the
penalty level. In addition, the least squares estimation after the scaled Lasso
selection is considered and proven to guarantee performance bounds similar to
those of the scaled Lasso.
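The column-by-column procedure described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it uses a plain coordinate-descent Lasso solver, alternates the Lasso fit with a noise-level update to mimic the scaled Lasso, and symmetrizes by simple averaging (the paper's exact penalty level and symmetrization step may differ; the default `lam0` below is an assumed universal penalty level of the form sqrt(2 log p / n)).

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate descent for (1/(2n))||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # assumes no all-zero columns
    r = y.copy()                       # current residual y - X beta
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * beta[j]     # remove coordinate j's contribution
            rho = X[:, j] @ r / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * beta[j]
    return beta

def scaled_lasso(X, y, lam0, n_iter=20):
    """Alternate Lasso at penalty lam0 * sigma with a noise-level update."""
    n = len(y)
    sigma = np.std(y)
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        beta = lasso_cd(X, y, lam0 * sigma)
        sigma_new = np.linalg.norm(y - X @ beta) / np.sqrt(n)
        if abs(sigma_new - sigma) < 1e-8:
            break
        sigma = max(sigma_new, 1e-12)
    return beta, sigma

def precision_estimate(X, lam0=None):
    """Estimate each column of the precision matrix, then symmetrize."""
    n, p = X.shape
    if lam0 is None:
        lam0 = np.sqrt(2.0 * np.log(p) / n)  # assumed penalty level
    Omega = np.zeros((p, p))
    for j in range(p):
        idx = [k for k in range(p) if k != j]
        beta, sigma = scaled_lasso(X[:, idx], X[:, j], lam0)
        Omega[j, j] = 1.0 / sigma ** 2
        Omega[idx, j] = -beta / sigma ** 2
    return (Omega + Omega.T) / 2.0  # simple averaging symmetrization
```

Note that the penalty level here is a fixed function of (n, p), consistent with the abstract's point that no cross-validation is needed.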
Calibrated Elastic Regularization in Matrix Completion
This paper concerns the problem of matrix completion, which is to estimate a
matrix from observations in a small subset of indices. We propose a calibrated
spectrum elastic net method with a sum of the nuclear and Frobenius penalties
and develop an iterative algorithm to solve the convex minimization problem.
The iterative algorithm alternates between imputing the missing entries in the
incomplete matrix by the current guess and estimating the matrix by a scaled
soft-thresholding singular value decomposition of the imputed matrix until the
resulting matrix converges. A calibration step follows to correct the bias
caused by the Frobenius penalty. Under proper coherence conditions and for
suitable penalty levels, we prove that the proposed estimator achieves an
error bound of nearly optimal order and in proportion to the noise level. This
provides a unified analysis of the noisy and noiseless matrix completion
problems. Simulation results are presented to compare our proposal with
previous ones.
Comment: 9 pages; Advances in Neural Information Processing Systems, NIPS 201
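The impute/shrink loop described above can be sketched as follows. This is a simplified illustration rather than the paper's implementation: it alternates imputing the missing entries with the current guess and applying a scaled soft-thresholding SVD, where `lam1` is the nuclear penalty and `lam2` the Frobenius penalty (the `1/(1 + lam2)` scaling is one natural form of the prox for the combined penalty; the paper's subsequent calibration step to correct the Frobenius bias is not implemented here).

```python
import numpy as np

def elastic_complete(M_obs, mask, lam1, lam2=0.0, n_iter=200, tol=1e-7):
    """Iterative imputation with scaled soft-thresholded SVD.

    M_obs : array with valid values on observed entries (mask == True).
    mask  : boolean array marking observed entries.
    """
    Z = np.where(mask, M_obs, 0.0)  # initial guess: zero-fill
    for _ in range(n_iter):
        imputed = np.where(mask, M_obs, Z)           # impute missing entries
        U, s, Vt = np.linalg.svd(imputed, full_matrices=False)
        s_new = np.maximum(s - lam1, 0.0) / (1.0 + lam2)  # scaled soft-threshold
        Z_new = (U * s_new) @ Vt
        if np.linalg.norm(Z_new - Z) <= tol * (1.0 + np.linalg.norm(Z)):
            return Z_new
        Z = Z_new
    # A calibration step correcting the Frobenius-penalty bias would follow.
    return Z
```

With all entries observed the loop reduces to a single soft-thresholded SVD, which makes the shrinkage behavior easy to check in isolation.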