Riemannian thresholding methods for row-sparse and low-rank matrix recovery
In this paper, we present modifications of the iterative hard thresholding (IHT) method for the recovery of jointly row-sparse and low-rank matrices. In particular, a Riemannian version of IHT is considered, which significantly reduces the computational cost of the gradient projection in the case of rank-one measurement operators; such operators have concrete applications in blind deconvolution. Experimental results are reported that show near-optimal recovery for Gaussian and rank-one measurements, and that adaptive stepsizes give a crucial improvement. A Riemannian proximal gradient method is derived for the special case of unknown sparsity.
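The basic IHT template alternates a gradient step on the least-squares data-fit term with a hard thresholding step onto the constraint set. A minimal NumPy sketch, assuming a generic measurement matrix `A` acting on the vectorized matrix, and using the common heuristic of thresholding rows first and then truncating the rank (this two-step procedure is not, in general, the exact projection onto the intersection of the two sets):

```python
import numpy as np

def threshold_rows_then_rank(X, s, r):
    """Keep the s rows of largest l2 norm, then truncate to rank r.

    Applying the two thresholdings sequentially is a standard heuristic,
    not the exact projection onto the jointly row-sparse and low-rank set.
    """
    Y = np.zeros_like(X)
    keep = np.argsort(np.linalg.norm(X, axis=1))[-s:]
    Y[keep] = X[keep]
    U, sv, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :r] * sv[:r]) @ Vt[:r]

def iht(A, b, shape, s, r, iters=200):
    """Iterative hard thresholding for measurements b ~ A @ vec(X)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative constant stepsize
    X = np.zeros(shape)
    for _ in range(iters):
        grad = (A.T @ (A @ X.ravel() - b)).reshape(shape)
        X = threshold_rows_then_rank(X - step * grad, s, r)
    return X
```

The Riemannian variant discussed in the abstract replaces the expensive gradient projection with cheaper operations on the fixed-rank manifold; the adaptive stepsizes the experiments find crucial would replace the conservative constant `step` above.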
Bridging Convex and Nonconvex Optimization in Robust PCA: Noise, Outliers, and Missing Data
This paper delivers improved theoretical guarantees for the convex
programming approach in low-rank matrix estimation, in the presence of (1)
random noise, (2) gross sparse outliers, and (3) missing data. This problem,
often dubbed robust principal component analysis (robust PCA), finds
applications in various domains. Despite the wide applicability of convex
relaxation, the available statistical support (particularly the stability
analysis vis-a-vis random noise) remains highly suboptimal, which we strengthen
in this paper. When the unknown matrix is well-conditioned, incoherent, and of
constant rank, we demonstrate that a principled convex program achieves
near-optimal statistical accuracy, in terms of both the Euclidean loss and the
$\ell_{\infty}$ loss. All of this happens even when nearly a constant fraction
of observations are corrupted by outliers with arbitrary magnitudes. The key
analysis idea lies in bridging the convex program in use and an auxiliary
nonconvex optimization algorithm, and hence the title of this paper.
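Concretely, the convex program in question combines a nuclear-norm penalty on the low-rank part with an $\ell_1$ penalty on the sparse outliers. The sketch below shows the two proximal operators this program is built from, wired into a simple alternating loop; this is an illustrative heuristic, not the paper's analysis or a production solver, and the default parameter choices are common conventions rather than prescriptions from the paper:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Entrywise soft thresholding: prox of tau * l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def robust_pca(M, lam=None, mu=None, iters=100):
    """Alternate the two prox steps on the penalized objective
    min_{L,S} ||L||_* + lam * ||S||_1 + 1/(2*mu) * ||M - L - S||_F^2.
    """
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))  # common default
    mu = mu if mu is not None else 1e-2 * np.linalg.norm(M, 2)  # heuristic
    S = np.zeros_like(M)
    L = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S, mu)
        S = soft(M - L, lam * mu)
    return L, S
```

The choice $\lambda \asymp 1/\sqrt{\max(m, n)}$ mirrors the standard scaling in the robust PCA literature for balancing the two penalties.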
Inference and Uncertainty Quantification for Noisy Matrix Completion
Noisy matrix completion aims at estimating a low-rank matrix given only
partial and corrupted entries. Despite substantial progress in designing
efficient estimation algorithms, it remains largely unclear how to assess the
uncertainty of the obtained estimates and how to perform statistical inference
on the unknown matrix (e.g., constructing a valid and short confidence interval
for an unseen entry).
This paper takes a step towards inference and uncertainty quantification for
noisy matrix completion. We develop a simple procedure to compensate for the
bias of the widely used convex and nonconvex estimators. The resulting
de-biased estimators admit nearly precise non-asymptotic distributional
characterizations, which in turn enable optimal construction of confidence
intervals/regions for, say, the missing entries and the low-rank factors.
Our inferential procedures do not rely on sample splitting, thus avoiding
unnecessary loss of data efficiency. As a byproduct, we obtain a sharp
characterization of the estimation accuracy of our de-biased estimators, which,
to the best of our knowledge, are the first tractable algorithms that provably
achieve full statistical efficiency (including the preconstant). The analysis
herein is built upon the intimate link between convex and nonconvex
optimization --- an appealing feature recently discovered by
\cite{chen2019noisy}.
(Published in Proceedings of the National Academy of Sciences, Nov 2019, 116 (46), 22931-2293.)
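One source of the bias being corrected is easy to see in isolation: nuclear-norm regularization shrinks every retained singular value by the regularization level, so a natural compensation adds that amount back on the retained subspace. The toy sketch below illustrates only this shrinkage-compensation idea in the fully observed, noiseless case; the paper's actual de-biased estimators additionally account for missing entries and noise:

```python
import numpy as np

def svt(X, tau):
    """Prox of tau * nuclear norm: shrinks each singular value by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def unshrink(Z, tau, tol=1e-10):
    """Add tau back to the singular values that survived thresholding."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * (s + tau * (s > tol))) @ Vt
```

For an exactly rank-r matrix M and tau below its r-th singular value, `unshrink(svt(M, tau), tau)` recovers M exactly, because `svt` shrinks precisely the r surviving singular values by tau and leaves their subspaces untouched.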
On the convex geometry of blind deconvolution and matrix completion
Low-rank matrix recovery from structured measurements has been a topic of intense study over the last decade, and many important problems, such as matrix completion and blind deconvolution, have been formulated in this framework. An important benchmark method for solving these problems is to minimize the nuclear norm, a convex proxy for the rank. A common approach to establishing recovery guarantees for this convex program relies on the construction of a so-called approximate dual certificate. However, this approach provides only limited insight in various respects. Most prominently, the noise bounds exhibit seemingly suboptimal dimension factors. In this paper, we take a novel, more geometric viewpoint to analyze both the matrix completion and the blind deconvolution scenario. We find that for both applications the dimension factors in the noise bounds are not an artifact of the proof; rather, the problems are intrinsically badly conditioned. We show, however, that bad conditioning arises only for very small noise levels: under mild assumptions that include many realistic noise levels, we derive near-optimal error estimates for blind deconvolution under adversarial noise.
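The "structured measurements" in the blind deconvolution setting come from lifting: bilinear measurements in the unknown pair (h, x) become linear, rank-one measurements of the lifted matrix X = h x^T, which is what makes nuclear-norm minimization applicable. A small sketch of the lifting identity, with generic Gaussian vectors a_i, c_i standing in for whatever subspace model a particular formulation uses:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, m = 6, 5, 40
h = rng.standard_normal(K)            # unknown filter
x = rng.standard_normal(N)            # unknown signal
A = rng.standard_normal((m, K))       # measurement vectors a_i (rows)
C = rng.standard_normal((m, N))       # measurement vectors c_i (rows)

# bilinear measurements: b_i = (a_i . h) * (c_i . x)
b = (A @ h) * (C @ x)

# lifting: with X = h x^T, the same data are linear in X,
# b_i = a_i^T X c_i = <a_i c_i^T, X>, a rank-one measurement of X
X = np.outer(h, x)
b_lifted = np.einsum('ik,kn,in->i', A, X, C)
assert np.allclose(b, b_lifted)
```

Minimizing the nuclear norm of X subject to these linear measurements is the convex benchmark the abstract refers to; a recovered rank-one X then factors back into (h, x) up to the inherent scaling ambiguity.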