Reconstruction of a low-rank matrix in the presence of Gaussian noise
This paper addresses the problem of reconstructing a low-rank signal matrix observed with additive Gaussian noise. We first establish that, under mild assumptions, one can restrict attention to orthogonally equivariant reconstruction methods, which act only on the singular values of the observed matrix and do not affect its singular vectors. Using recent results in random matrix theory, we then propose a new reconstruction method that aims to reverse the effect of the noise on the singular value decomposition of the signal matrix. In conjunction with the proposed reconstruction method we also introduce a Kolmogorov–Smirnov-based estimator of the noise variance.
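The orthogonally equivariant idea above can be sketched as follows: denoise by modifying only the singular values of the observed matrix while keeping its singular vectors. The hard-threshold rule below is a generic stand-in for the paper's shrinkage function, not its actual estimator, and all dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rank, sigma = 200, 5, 1.0

# low-rank signal plus additive Gaussian noise
signal = rng.standard_normal((n, rank)) @ rng.standard_normal((rank, n))
observed = signal + sigma * rng.standard_normal((n, n))

# act only on the singular values; the singular vectors are untouched
U, s, Vt = np.linalg.svd(observed, full_matrices=False)
# keep singular values above the noise bulk edge ~ 2*sigma*sqrt(n)
threshold = 2.0 * sigma * np.sqrt(n)
s_shrunk = np.where(s > threshold, s, 0.0)
estimate = (U * s_shrunk) @ Vt

err_raw = np.linalg.norm(observed - signal)
err_est = np.linalg.norm(estimate - signal)
print(err_est < err_raw)  # shrinkage reduces the Frobenius error here
```

Any orthogonally equivariant method has this shape; only the shrinkage function applied to `s` changes.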
Sparsity Order Estimation from a Single Compressed Observation Vector
We investigate the problem of estimating the unknown degree of sparsity from
compressive measurements without the need to carry out a sparse recovery step.
While the sparsity order can be directly inferred from the effective rank of
the observation matrix in the multiple snapshot case, this appears to be
impossible in the more challenging single snapshot case. We show that specially
designed measurement matrices allow one to rearrange the measurement vector into a
matrix such that its effective rank coincides with the effective sparsity
order. In fact, we prove that matrices composed of a Khatri-Rao product of
smaller matrices generate measurements that allow one to infer the sparsity
order. Moreover, if some samples are used more than once, one of the
matrices needs to be Vandermonde. These structural constraints reduce the
degrees of freedom in choosing the measurement matrix, which may incur a
degradation in the achievable coherence. We thus also address suitable choices
of the measurement matrices. In particular, we analyze Khatri-Rao and
Vandermonde matrices in terms of their coherence and provide a new design for
Vandermonde matrices that achieves low coherence.
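The rearrangement idea can be sketched numerically: if the measurement matrix is a Khatri-Rao (column-wise Kronecker) product A of two matrices B and C, the single measurement vector y = Ax reshapes into C diag(x) Bᵀ, whose rank generically equals the sparsity order of x. The random Gaussian B and C and all dimensions below are illustrative, not the paper's coherence-optimized designs.

```python
import numpy as np

rng = np.random.default_rng(1)
mB, mC, n, k = 8, 9, 20, 3

# Khatri-Rao product: column j of A is kron(B[:, j], C[:, j])
B = rng.standard_normal((mB, n))
C = rng.standard_normal((mC, n))
A = np.stack([np.kron(B[:, j], C[:, j]) for j in range(n)], axis=1)

# k-sparse signal
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

y = A @ x                          # single compressed snapshot
Y = y.reshape(mC, mB, order="F")   # equals C @ diag(x) @ B.T
rank = np.linalg.matrix_rank(Y)
print(rank)                        # generically equals the sparsity order k
```

The rank can thus be read off without running a sparse recovery algorithm, which is the point of the single-snapshot construction.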
Algorithms for Approximate Subtropical Matrix Factorization
Matrix factorization methods are important tools in data mining and analysis.
They can be used for many tasks, ranging from dimensionality reduction to
visualization. In this paper we concentrate on the use of matrix factorizations
for finding patterns from the data. Rather than using the standard algebra --
and the summation of the rank-1 components to build the approximation of the
original matrix -- we use the subtropical algebra, which is an algebra over the
nonnegative real values with the summation replaced by the maximum operator.
Subtropical matrix factorizations allow "winner-takes-all" interpretations
of the rank-1 components, revealing different structure than the normal
(nonnegative) factorizations. We study the complexity and sparsity of the
factorizations, and present a framework for finding low-rank subtropical
factorizations. We present two specific algorithms, called Capricorn and
Cancer, that are part of our framework. They can be used with data that has
been corrupted with different types of noise, and with different error metrics,
including the sum of absolute differences, Frobenius norm, and Jensen--Shannon
divergence. Our experiments show that the algorithms perform well on data that
has subtropical structure, and that they can find factorizations that are both
sparse and easy to interpret.
Comment: 40 pages, 9 figures. For the associated source code, see
http://people.mpi-inf.mpg.de/~pmiettin/tropical
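The max-times algebra described above can be sketched in a few lines: the matrix product's summation is replaced by the maximum operator, so each entry of the product comes from a single "winning" rank-1 component. The random nonnegative factors are illustrative only; the Capricorn and Cancer algorithms search for such factors, which this sketch does not attempt.

```python
import numpy as np

def max_times(L, R):
    # subtropical product: (L x R)[i, j] = max_k L[i, k] * R[k, j]
    return np.max(L[:, :, None] * R[None, :, :], axis=1)

rng = np.random.default_rng(2)
L = rng.random((6, 2))   # nonnegative left factor, subtropical rank 2
R = rng.random((2, 5))   # nonnegative right factor
X = max_times(L, R)      # data with exact subtropical structure

# winner-takes-all: X is the elementwise max over the rank-1 components,
# not their sum as in standard (nonnegative) factorization
components = np.stack([np.outer(L[:, k], R[k, :]) for k in range(2)])
print(np.allclose(X, components.max(axis=0)))  # prints True
```

Because each entry is explained by exactly one component, the components can be read as alternative, possibly overlapping patterns rather than additive parts.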
Phase Retrieval From Binary Measurements
We consider the problem of signal reconstruction from quadratic measurements
that are encoded as +1 or -1 depending on whether they exceed a predetermined
positive threshold or not. Binary measurements are fast to acquire and
inexpensive in terms of hardware. We formulate the problem of signal
reconstruction using a consistency criterion, wherein one seeks to find a
signal that is in agreement with the measurements. To enforce consistency, we
construct a convex cost using a one-sided quadratic penalty and minimize it
using an iterative accelerated projected gradient-descent (APGD) technique. The
PGD scheme reduces the cost function at each iteration; although incorporating
momentum into PGD forfeits this descent guarantee, it empirically converges
faster than plain PGD. We refer to the resulting
algorithm as binary phase retrieval (BPR). Considering additive white noise
contamination prior to quantization, we also derive the Cramér-Rao bound (CRB)
for the binary encoding model. Experimental results demonstrate that the BPR
algorithm yields a signal-to-reconstruction error ratio (SRER) of
approximately 25 dB in the absence of noise. In the presence of noise prior to
quantization, the SRER is within 2 to 3 dB of the CRB.
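The consistency formulation above can be sketched as follows: quadratic measurements (aᵢᵀx)² are encoded as ±1 against a threshold τ, and a one-sided quadratic penalty on the violated encodings is minimized by gradient descent. The dimensions, threshold, backtracking step rule, and the plain (non-accelerated) descent are illustrative choices, not the paper's exact BPR configuration.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, tau = 400, 20, 1.0

A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
# binary measurements: +1 if the quadratic measurement exceeds tau, else -1
b = np.where((A @ x_true) ** 2 > tau, 1.0, -1.0)

def cost_grad(x):
    q = (A @ x) ** 2
    r = np.maximum(0.0, -b * (q - tau))   # one-sided consistency violations
    return np.sum(r ** 2), A.T @ (-4.0 * b * r * (A @ x))

x = 0.1 * rng.standard_normal(n)
c0, _ = cost_grad(x)
for _ in range(200):
    c, g = cost_grad(x)
    t = 1.0
    while t > 1e-12:                      # backtracking guarantees descent
        c_new, _ = cost_grad(x - t * g)
        if c_new < c:
            x = x - t * g
            break
        t *= 0.5
c_final, _ = cost_grad(x)
print(c0, c_final)   # the final consistency cost is below the initial one
```

A signal with zero cost agrees with every binary encoding; momentum, as in the APGD variant the abstract describes, would be layered on top of this basic iteration.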