Stable Principal Component Pursuit
In this paper, we study the problem of recovering a low-rank matrix (the
principal components) from a high-dimensional data matrix despite both small
entry-wise noise and gross sparse errors. Recently, it has been shown that a
convex program, named Principal Component Pursuit (PCP), can recover the
low-rank matrix when the data matrix is corrupted by gross sparse errors. We
further prove that the solution to a related convex program (a relaxed PCP)
gives an estimate of the low-rank matrix that is simultaneously stable to small
entrywise noise and robust to gross sparse errors. More precisely, our result
shows that the proposed convex program recovers the low-rank matrix even though
a positive fraction of its entries are arbitrarily corrupted, with an error
bound proportional to the noise level. We present simulation results to support
our result and demonstrate that the new convex program accurately recovers the
principal components (the low-rank matrix) under quite broad conditions. To our
knowledge, this is the first result that shows the classical Principal
Component Analysis (PCA), optimal for small i.i.d. noise, can be made robust to
gross sparse errors; or the first that shows the newly proposed PCP can be made
stable to small entry-wise perturbations.
Comment: 5-page paper submitted to ISIT 201
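To make the relaxed PCP program above concrete, here is a minimal numerical sketch using CVXPY. The dimensions, the weight lam = 1/sqrt(n) (the choice suggested in the PCP literature), and the noise radius delta are illustrative assumptions, not values fixed by the abstract.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n, r = 40, 2
    L0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # low-rank ground truth
    S0 = np.zeros((n, n))
    mask = rng.random((n, n)) < 0.05                                # 5% gross sparse errors
    S0[mask] = 10 * rng.standard_normal(mask.sum())
    M = L0 + S0 + 0.01 * rng.standard_normal((n, n))                # plus small entrywise noise

    L = cp.Variable((n, n))
    S = cp.Variable((n, n))
    lam = 1.0 / np.sqrt(n)   # standard PCP weight; an assumption here
    delta = 0.01 * n         # noise radius matched to the noise level; a tuning choice
    prob = cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S))),
                      [cp.norm(M - L - S, 'fro') <= delta])
    prob.solve()
    print("relative error:",
          np.linalg.norm(L.value - L0, 'fro') / np.linalg.norm(L0, 'fro'))

The constraint relaxes the exact decomposition M = L + S of plain PCP to a Frobenius-norm ball of radius delta, which is what makes the estimate stable to small entrywise noise.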
Consistent Basis Pursuit for Signal and Matrix Estimates in Quantized Compressed Sensing
This paper focuses on the estimation of low-complexity signals when they are
observed through uniformly quantized compressive observations. Among such
signals, we consider 1-D sparse vectors, low-rank matrices, or compressible
signals that are well approximated by one of these two models. In this context,
we prove the estimation efficiency of a variant of Basis Pursuit Denoise,
called Consistent Basis Pursuit (CoBP), enforcing consistency between the
observations and the re-observed estimate, while promoting its low-complexity
nature. We show that the reconstruction error of CoBP decays like
when all parameters but are fixed. Our proof is connected to recent bounds
on the proximity of vectors or matrices when (i) those belong to a set of small
intrinsic "dimension", as measured by the Gaussian mean width, and (ii) they
share the same quantized (dithered) random projections. By solving CoBP with a
proximal algorithm, we provide extensive numerical observations that
confirm the theoretical bound as m is increased, displaying even faster error
decay than predicted. The same phenomenon is observed in the special, yet
important case of 1-bit CS.
Comment: Keywords: Quantized compressed sensing, quantization, consistency, error decay, low-rank, sparsity. 10 pages, 3 figures. Note about this version: title change, typo corrections, clarification of the context, adding a comparison with BPD
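As an illustration of the consistency idea described above, the following sketch decodes a 1-D sparse vector from uniformly quantized, dithered measurements by l1 minimization subject to a quantization-consistency constraint. It uses a generic convex solver rather than the proximal algorithm used in the paper, and the bin width, dither model, and dimensions are our assumptions.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(1)
    n, m, s, delta = 128, 80, 5, 0.5
    x0 = np.zeros(n)
    x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    xi = rng.uniform(0, delta, m)                             # uniform dither
    q = delta * np.floor((A @ x0 + xi) / delta) + delta / 2   # observed bin centres

    x = cp.Variable(n)
    consistency = [A @ x + xi >= q - delta / 2,   # re-observed estimate must fall
                   A @ x + xi <= q + delta / 2]   # in the same quantization bins
    cp.Problem(cp.Minimize(cp.norm1(x)), consistency).solve()
    print("error:", np.linalg.norm(x.value - x0))

The constraints encode exactly the consistency requirement: re-quantizing the measurements of the estimate must reproduce the observed quantized values.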
Unicity conditions for low-rank matrix recovery
Low-rank matrix recovery addresses the problem of recovering an unknown
low-rank matrix from few linear measurements. Nuclear-norm minimization is a
tractable approach with a recent surge of strong theoretical backing. Analogous
to the theory of compressed sensing, these results have required random
measurements. For example, m >= Cnr Gaussian measurements are sufficient to
recover any rank-r n x n matrix with high probability. In this paper we address
the theoretical question of how many measurements are needed via any method
whatsoever --- tractable or not. We show that for a family of random
measurement ensembles, m >= 4nr - 4r^2 measurements are sufficient to guarantee
that no rank-2r matrix lies in the null space of the measurement operator with
probability one. This is a necessary and sufficient condition to ensure uniform
recovery of all rank-r matrices by rank minimization. Furthermore, this value
of m precisely matches the dimension of the manifold of all rank-2r matrices.
We also prove that for a fixed rank-r matrix, m >= 2nr - r^2 + 1 random
measurements are enough to guarantee recovery using rank minimization. These
results give a benchmark to which we may compare the efficacy of nuclear-norm
minimization.
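For intuition behind these counts, both bounds track the dimension of the set of bounded-rank matrices; the following dimension count is a standard fact of matrix geometry, added here for context rather than taken from the paper:

    \dim\,\{X \in \mathbb{R}^{n \times n} : \operatorname{rank}(X) \le k\} = k(2n - k),
    \qquad k = 2r:\; 2r(2n - 2r) = 4nr - 4r^2,
    \qquad k = r:\; r(2n - r) = 2nr - r^2 .

So the uniform-recovery bound m >= 4nr - 4r^2 equals the rank-2r dimension exactly, while the fixed-matrix bound m >= 2nr - r^2 + 1 exceeds the rank-r dimension by one.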
RIPless compressed sensing from anisotropic measurements
Compressed sensing is the art of reconstructing a sparse vector from its
inner products with respect to a small set of randomly chosen measurement
vectors. It is usually assumed that the ensemble of measurement vectors is in
isotropic position in the sense that the associated covariance matrix is
proportional to the identity matrix. In this paper, we establish bounds on the
number of required measurements in the anisotropic case, where the ensemble of
measurement vectors possesses a non-trivial covariance matrix. Essentially, we
find that the required sampling rate grows proportionally to the condition
number of the covariance matrix. In contrast to other recent contributions to
this problem, our arguments do not rely on any restricted isometry properties
(RIP's), but rather on ideas from convex geometry which have been
systematically studied in the theory of low-rank matrix recovery. This allows
for a simple argument and slightly improved bounds, but may lead to a worse
dependency on noise (which we do not consider in the present paper).
Comment: 19 pages. To appear in Linear Algebra and its Applications, Special Issue on Sparse Approximate Solution of Linear System
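The role of the covariance can be probed numerically. The following small experiment is our own construction, not from the paper: it runs basis pursuit with measurement vectors drawn from N(0, Sigma) for a diagonal Sigma whose condition number kappa is a free parameter, so one can watch recovery degrade as kappa grows.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(2)
    n, m, s, kappa = 100, 60, 4, 10.0
    Sigma_sqrt = np.diag(np.linspace(1.0, np.sqrt(kappa), n))  # cond(Sigma) = kappa
    A = rng.standard_normal((m, n)) @ Sigma_sqrt               # rows distributed N(0, Sigma)
    x0 = np.zeros(n)
    x0[rng.choice(n, s, replace=False)] = 1.0
    y = A @ x0

    x = cp.Variable(n)
    cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y]).solve()
    print("error:", np.linalg.norm(x.value - x0))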
A probabilistic and RIPless theory of compressed sensing
This paper introduces a simple and very general theory of compressive
sensing. In this theory, the sensing mechanism simply selects sensing vectors
independently at random from a probability distribution F; it includes all
models - e.g. Gaussian, frequency measurements - discussed in the literature,
but also provides a framework for new measurement strategies as well. We prove
that if the probability distribution F obeys a simple incoherence property and
an isotropy property, one can faithfully recover approximately sparse signals
from a minimal number of noisy measurements. The novelty is that our recovery
results do not require the restricted isometry property (RIP) - they make use
of a much weaker notion - or a random model for the signal. As an example, the
paper shows that a signal with s nonzero entries can be faithfully recovered
from about s log n Fourier coefficients that are contaminated with noise.
Comment: 36 page
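The Fourier example in the abstract is easy to reproduce in a toy form. In this sketch the dimensions and the constant in m ~ s log n are our choices, and we take noiseless measurements for simplicity: sample m random rows of the unitary DFT and recover an s-sparse real signal by l1 minimization.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(3)
    n, s = 256, 5
    m = int(3 * s * np.log(n))                        # "about s log n" measurements
    x0 = np.zeros(n)
    x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)            # unitary DFT matrix
    A = F[rng.choice(n, m, replace=False)]            # m random Fourier rows
    y = A @ x0

    # stack real and imaginary parts so the solver works over the reals
    A_ri = np.vstack([A.real, A.imag])
    y_ri = np.concatenate([y.real, y.imag])
    x = cp.Variable(n)
    cp.Problem(cp.Minimize(cp.norm1(x)), [A_ri @ x == y_ri]).solve()
    print("error:", np.linalg.norm(x.value - x0))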