Robust Subspace Learning: Robust PCA, Robust Subspace Tracking, and Robust Subspace Recovery
PCA is one of the most widely used dimension reduction techniques. A related,
easier problem is "subspace learning" or "subspace estimation". Given
relatively clean data, both are easily solved via singular value decomposition
(SVD). The problem of subspace learning or PCA in the presence of outliers is
called robust subspace learning or robust PCA (RPCA). For long data sequences,
if one tries to use a single lower dimensional subspace to represent the data,
the required subspace dimension may end up being quite large. For such data, a
better model is to assume that it lies in a low-dimensional subspace that can
change over time, albeit gradually. The problem of tracking such data (and the
subspaces) while being robust to outliers is called robust subspace tracking
(RST). This article provides a magazine-style overview of the entire field of
robust subspace learning and tracking. In particular, solutions for three
problems are discussed in detail: RPCA via sparse+low-rank matrix decomposition
(S+LR), RST via S+LR, and "robust subspace recovery (RSR)". RSR assumes that an
entire data vector is either an outlier or an inlier. The S+LR formulation
instead assumes that outliers occur on only a few data vector indices and hence
are well modeled as sparse corruptions.
Comment: To appear, IEEE Signal Processing Magazine, July 201
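The S+LR formulation described in this abstract is commonly solved as a convex program, Principal Component Pursuit: minimize ||L||_* + lam·||S||_1 subject to L + S = M. Below is a minimal NumPy sketch of that approach via ADMM; the function names, default parameters, and iteration count are illustrative choices, not taken from the article itself.

```python
import numpy as np

def shrink(x, tau):
    # Elementwise soft-thresholding: proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svd_shrink(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def rpca_pcp(M, lam=None, mu=None, n_iter=500):
    """Principal Component Pursuit via ADMM:
    minimize ||L||_* + lam * ||S||_1  subject to  L + S = M.
    Defaults follow the common choices lam = 1/sqrt(max(n1, n2))
    and mu = n1 * n2 / (4 * ||M||_1)."""
    n1, n2 = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(n1, n2))
    if mu is None:
        mu = n1 * n2 / (4.0 * np.abs(M).sum())
    S = np.zeros_like(M)
    Y = np.zeros_like(M)  # dual variable for the constraint L + S = M
    for _ in range(n_iter):
        L = svd_shrink(M - S + Y / mu, 1.0 / mu)  # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)      # sparse update
        Y = Y + mu * (M - L - S)                  # dual ascent step
    return L, S
```

On synthetic data satisfying the usual incoherence and sparsity assumptions (e.g. a random rank-2 matrix plus a few percent of large sparse corruptions), this iteration typically recovers both components to high accuracy.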
Block Stability for MAP Inference
To understand the empirical success of approximate MAP inference, recent work
(Lang et al., 2018) has shown that some popular approximation algorithms
perform very well when the input instance is stable. The simplest stability
condition assumes that the MAP solution does not change at all when some of the
pairwise potentials are (adversarially) perturbed. Unfortunately, this strong
condition does not seem to be satisfied in practice. In this paper, we
introduce a significantly more relaxed condition that only requires blocks
(portions) of an input instance to be stable. Under this block stability
condition, we prove that the pairwise LP relaxation is persistent on the stable
blocks. We complement our theoretical results with an empirical evaluation of
real-world MAP inference instances from computer vision. We design an algorithm
to find stable blocks, and find that these real instances have large stable
regions. Our work gives a theoretical explanation for the widespread empirical
phenomenon of persistency for this LP relaxation.
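The pairwise LP relaxation referred to here optimizes an energy over the local marginal polytope instead of over discrete labelings. A minimal sketch for a binary MRF follows; the tiny three-node chain, its potentials, and the function name are illustrative assumptions, not an example from the paper. (On a tree the relaxation is tight, so the LP solution is integral and every entry is persistent.)

```python
import numpy as np
from scipy.optimize import linprog

def pairwise_lp_map(theta_u, theta_p, edges):
    """Pairwise (local-polytope) LP relaxation of MAP inference for a
    binary MRF: minimize the energy over pseudo-marginals mu_i, mu_ij
    subject to normalization and marginalization constraints."""
    n, m = len(theta_u), len(edges)
    nvar = 2 * n + 4 * m            # 2 unary vars per node, 4 per edge
    c = np.zeros(nvar)
    for i in range(n):
        c[2 * i:2 * i + 2] = theta_u[i]
    for e in range(m):
        c[2 * n + 4 * e:2 * n + 4 * e + 4] = np.asarray(theta_p[e]).ravel()
    A, b = [], []
    for i in range(n):              # normalization: sum_x mu_i(x) = 1
        row = np.zeros(nvar)
        row[2 * i:2 * i + 2] = 1.0
        A.append(row); b.append(1.0)
    for e, (i, j) in enumerate(edges):
        off = 2 * n + 4 * e
        for xi in range(2):         # sum_xj mu_ij(xi, xj) = mu_i(xi)
            row = np.zeros(nvar)
            row[off + 2 * xi] = row[off + 2 * xi + 1] = 1.0
            row[2 * i + xi] = -1.0
            A.append(row); b.append(0.0)
        for xj in range(2):         # sum_xi mu_ij(xi, xj) = mu_j(xj)
            row = np.zeros(nvar)
            row[off + xj] = row[off + 2 + xj] = 1.0
            row[2 * j + xj] = -1.0
            A.append(row); b.append(0.0)
    res = linprog(c, A_eq=np.array(A), b_eq=np.array(b), bounds=(0, 1))
    mu = res.x[:2 * n].reshape(n, 2)  # unary pseudo-marginals
    return mu, res.fun
```

Persistency of the relaxation can then be checked directly: any pseudo-marginal entry that the LP sets to exactly 0 or 1 is a candidate persistent assignment, and block stability in the paper's sense guarantees this on the stable blocks.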