51 research outputs found
RIPless compressed sensing from anisotropic measurements
Compressed sensing is the art of reconstructing a sparse vector from its
inner products with respect to a small set of randomly chosen measurement
vectors. It is usually assumed that the ensemble of measurement vectors is in
isotropic position in the sense that the associated covariance matrix is
proportional to the identity matrix. In this paper, we establish bounds on the
number of required measurements in the anisotropic case, where the ensemble of
measurement vectors possesses a non-trivial covariance matrix. Essentially, we
find that the required sampling rate grows proportionally to the condition
number of the covariance matrix. In contrast to other recent contributions to
this problem, our arguments do not rely on any restricted isometry properties
(RIP's), but rather on ideas from convex geometry which have been
systematically studied in the theory of low-rank matrix recovery. This allows
for a simple argument and slightly improved bounds, but may lead to a worse
dependency on noise (which we do not consider in the present paper).
Comment: 19 pages. To appear in Linear Algebra and its Applications, Special Issue on Sparse Approximate Solution of Linear Systems
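The setting above can be made concrete with a small numerical sketch. The snippet below draws measurement vectors from an anisotropic Gaussian ensemble (covariance with an assumed condition number of 4) and recovers a sparse vector; orthogonal matching pursuit stands in here for the convex recovery program the paper analyzes, so this illustrates the measurement model rather than the paper's proof technique.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, s = 64, 32, 3           # ambient dimension, measurements, sparsity
# Anisotropic ensemble: rows a_i ~ N(0, Sigma) with non-trivial covariance.
cond = 4.0                    # assumed condition number of Sigma
evals = np.linspace(1.0, cond, n)
A = rng.standard_normal((m, n)) @ np.diag(np.sqrt(evals))

x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)
y = A @ x                     # noiseless inner products with the a_i

def omp(A, y, s):
    # Orthogonal matching pursuit: greedily pick the column most
    # correlated with the residual, then least-squares refit on the
    # selected support.
    r, S = y.copy(), []
    for _ in range(s):
        S.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[S] = coef
    return x_hat

x_hat = omp(A, y, s)
print(np.linalg.norm(x_hat - x))
```

With 32 measurements of a 3-sparse vector in dimension 64, recovery succeeds despite the anisotropy; as the abstract indicates, a worse-conditioned covariance would demand proportionally more samples.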
Sparse projections onto the simplex
Most learning methods with rank or sparsity constraints use convex
relaxations, which lead to optimization with the nuclear norm or the
ℓ1-norm. However, several important learning applications cannot benefit
from this approach as they feature these convex norms as constraints in
addition to the non-convex rank and sparsity constraints. In this setting, we
derive efficient sparse projections onto the simplex and its extension, and
illustrate how to use them to solve high-dimensional learning problems in
quantum tomography, sparse density estimation and portfolio selection with
non-convex constraints.
Comment: 9 pages
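A natural sketch of such a projection, assuming a greedy select-then-project scheme (keep the s largest entries, then Euclidean-project that subvector onto the simplex), is short enough to state in full; the function names here are illustrative, not the paper's:

```python
import numpy as np

def project_simplex(v, z=1.0):
    # Euclidean projection of v onto {w : w >= 0, sum(w) = z},
    # via the standard sort-and-threshold algorithm.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - z) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (css[rho] - z) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def sparse_simplex_projection(v, s, z=1.0):
    # Greedy sketch: select the s largest entries, then project that
    # subvector onto the simplex; all other entries are set to zero.
    idx = np.argsort(v)[::-1][:s]
    w = np.zeros_like(v)
    w[idx] = project_simplex(v[idx], z)
    return w

v = np.array([0.5, 2.0, -1.0, 1.5, 0.1])
w = sparse_simplex_projection(v, s=2)
print(w)  # at most two nonzeros, nonnegative, summing to 1
```

The output is both 2-sparse and feasible for the simplex, which is exactly the combination of constraints that plain convex relaxation cannot deliver.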
Randomized Low-Memory Singular Value Projection
Affine rank minimization algorithms typically rely on calculating the
gradient of a data error followed by a singular value decomposition at every
iteration. Because these two steps are expensive, heuristic approximations are
often used to reduce computational burden. To this end, we propose a recovery
scheme that merges the two steps with randomized approximations, and as a
result, operates on space proportional to the degrees of freedom in the
problem. We theoretically establish the estimation guarantees of the algorithm
as a function of approximation tolerance. While the theoretical approximation
requirements are overly pessimistic, we demonstrate that in practice the
algorithm performs well on the quantum tomography recovery problem.
Comment: 13 pages. This version has a revised theorem and new numerical experiments
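The core low-memory ingredient, a randomized truncated SVD replacing the exact decomposition at each iteration, can be sketched as follows (a minimal range-finder in the Halko–Martinsson–Tropp style, not the paper's full recovery scheme):

```python
import numpy as np

rng = np.random.default_rng(1)

def randomized_svd(M, r, oversample=10):
    # Sample the column space with a random test matrix, orthonormalize,
    # then take an exact SVD of the small projected matrix Q^T M.
    G = rng.standard_normal((M.shape[1], r + oversample))
    Q, _ = np.linalg.qr(M @ G)
    U_small, s, Vt = np.linalg.svd(Q.T @ M, full_matrices=False)
    return (Q @ U_small)[:, :r], s[:r], Vt[:r]

# On an exactly rank-5 matrix the randomized factorization is exact
# up to machine precision, while only (m + n) * (r + oversample)
# numbers are ever formed beyond M itself.
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
U, s, Vt = randomized_svd(A, r=5)
print(np.linalg.norm(U * s @ Vt - A) / np.linalg.norm(A))
```

The tolerance of this approximation is precisely the quantity the paper's estimation guarantees are parameterized by.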
Recovering Quantum Gates from Few Average Gate Fidelities
Characterizing quantum processes is a key task in the development of quantum technologies, especially at the noisy intermediate scale of today's devices. One method for characterizing processes is randomized benchmarking, which is robust against state preparation and measurement errors and can be used to benchmark Clifford gates. Compressed sensing techniques achieve full tomography of quantum channels essentially at optimal resource efficiency. In this Letter, we show that the favorable features of both approaches can be combined. For characterizing multiqubit unitary gates, we provide a rigorously guaranteed and practical reconstruction method that works with an essentially optimal number of average gate fidelities measured with respect to random Clifford unitaries. Moreover, for general unital quantum channels, we provide an explicit expansion into a unitary 2-design, allowing for a practical and guaranteed reconstruction also in that case. As a side result, we obtain a new statistical interpretation of the unitarity, a figure of merit characterizing the coherence of a process.
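For orientation, the basic quantity being measured, the average gate fidelity, has a closed form for comparing two unitaries via the entanglement fidelity (the standard Nielsen formula; the Letter's protocol instead measures fidelities of the unknown channel against random Clifford unitaries):

```python
import numpy as np

def avg_gate_fidelity(U, V):
    # Average gate fidelity between unitaries U and V, using the
    # entanglement fidelity F_e = |Tr(U^dag V)|^2 / d^2 and the
    # relation F_avg = (d * F_e + 1) / (d + 1).
    d = U.shape[0]
    Fe = abs(np.trace(U.conj().T @ V)) ** 2 / d ** 2
    return (d * Fe + 1) / (d + 1)

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli X
print(avg_gate_fidelity(I, I))  # 1.0
print(avg_gate_fidelity(I, X))  # 1/3 for a single qubit (d = 2)
```

Collections of such scalar fidelities, measured against sufficiently many random Cliffords, are the data from which the full gate is reconstructed.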
A Counterexample for the Validity of Using Nuclear Norm as a Convex Surrogate of Rank
Rank minimization has attracted a lot of attention due to its robustness in
data recovery. To overcome the computational difficulty, rank is often replaced
with nuclear norm. For several rank minimization problems, such a replacement
has been theoretically proven to be valid, i.e., the solution to nuclear norm
minimization problem is also the solution to rank minimization problem.
Although it is easy to believe that such a replacement may not always be valid,
no concrete example has ever been found. We argue that such a validity checking
cannot be done by numerical computation and show, by analyzing the noiseless
latent low rank representation (LatLRR) model, that even for very simple rank
minimization problems the validity may still break down. As a by-product, we
find that the solution to the nuclear norm minimization formulation of LatLRR
is non-unique. Hence the results of LatLRR reported in the literature may be
questionable.
Comment: accepted by ECML PKDD201
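The gap the abstract exploits is easy to see in miniature: matrices with identical nuclear norm can have different ranks, so minimizing the surrogate need not minimize rank. A two-line numpy check (illustrative only; it is not the paper's LatLRR counterexample):

```python
import numpy as np

# Equal nuclear norm (sum of singular values), different rank.
A = np.diag([1.0, 0.0])   # rank 1, nuclear norm 1
B = np.diag([0.5, 0.5])   # rank 2, nuclear norm 1

def nuclear_norm(M):
    return np.linalg.svd(M, compute_uv=False).sum()

print(np.linalg.matrix_rank(A), nuclear_norm(A))  # 1 1.0
print(np.linalg.matrix_rank(B), nuclear_norm(B))  # 2 1.0
```

Whether a given constraint set separates such pairs is exactly what the validity of the surrogate hinges on, and, as the abstract argues, this cannot be settled by numerical computation alone.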
Structured random measurements in signal processing
Compressed sensing and its extensions have recently triggered interest in
randomized signal acquisition. A key finding is that random measurements
provide sparse signal reconstruction guarantees for efficient and stable
algorithms with a minimal number of samples. While this was first shown for
(unstructured) Gaussian random measurement matrices, applications require
certain structure of the measurements leading to structured random measurement
matrices. Near optimal recovery guarantees for such structured measurements
have been developed over the past years in a variety of contexts. This article
surveys the theory in three scenarios: compressed sensing (sparse recovery),
low rank matrix recovery, and phaseless estimation. The random measurement
matrices to be considered include random partial Fourier matrices, partial
random circulant matrices (subsampled convolutions), matrix completion, and
phase estimation from magnitudes of Fourier type measurements. The article
concludes with a brief discussion of the mathematical techniques for the
analysis of such structured random measurements.
Comment: 22 pages, 2 figures
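One of the structured families surveyed, partial random circulant matrices, is just a subsampled convolution, which the following sketch makes explicit (dimensions and the random generator here are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 32, 12

g = rng.standard_normal(n)          # random generating vector
x = rng.standard_normal(n)          # signal
idx = rng.choice(n, size=m, replace=False)  # random subsampling pattern

# Circular convolution via the FFT, then keep m randomly chosen entries:
# these m numbers are the structured measurements.
full_conv = np.real(np.fft.ifft(np.fft.fft(g) * np.fft.fft(x)))
y = full_conv[idx]

# The same measurements written as rows of an explicit circulant matrix
# C with C[k, j] = g[(k - j) mod n].
C = np.array([np.roll(g[::-1], k + 1) for k in range(n)])
print(np.allclose(C[idx] @ x, y))
```

The FFT form is what makes such measurements fast to apply in practice, while the circulant-matrix form is the object the recovery guarantees are stated for.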