22,758 research outputs found
A Simplified Approach to Recovery Conditions for Low Rank Matrices
Recovering sparse vectors and low-rank matrices from noisy linear
measurements has been the focus of much recent research. Various reconstruction
algorithms have been studied, including ℓ1 and nuclear norm minimization as
well as ℓp minimization with p < 1. These algorithms are known to
succeed if certain conditions on the measurement map are satisfied. Proofs of
robust recovery for matrices have so far been much more involved than in the
vector case.
In this paper, we show how several robust classes of recovery conditions can
be extended from vectors to matrices in a simple and transparent way, leading
to the best known restricted isometry and nullspace conditions for matrix
recovery. Our results rely on the ability to "vectorize" matrices through the
use of a key singular value inequality.
Comment: 6 pages. This is a modified version of a paper submitted to ISIT
2011; Proc. Intl. Symp. Info. Theory (ISIT), Aug 2011
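For readers who want to experiment with the reconstruction algorithms this abstract refers to, here is a minimal sketch of nuclear norm minimization under affine measurement constraints. It is not the paper's code; the problem sizes, the Gaussian measurement map, and the cvxpy modeling are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): recover a low-rank matrix from
# linear measurements by nuclear norm minimization.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r, m = 20, 2, 240                      # matrix size, true rank, number of measurements

X0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r ground truth
A = rng.standard_normal((m, n * n))       # measurement map acting on vec(X), column-major
y = A @ X0.ravel(order="F")               # noiseless measurements y = A vec(X0)

X = cp.Variable((n, n))
prob = cp.Problem(cp.Minimize(cp.norm(X, "nuc")),        # min ||X||_* ...
                  [A @ cp.vec(X, order="F") == y])       # ... subject to the affine constraints
prob.solve()
print("relative error:", np.linalg.norm(X.value - X0) / np.linalg.norm(X0))
```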
Recovery of Low-Rank Matrices under Affine Constraints via a Smoothed Rank Function
In this paper, the problem of matrix rank minimization under affine
constraints is addressed. State-of-the-art algorithms can only recover matrices
whose rank is well below the rank that still guarantees uniqueness of the
solution of this optimization problem. We propose an algorithm based on a
smooth approximation of the rank function, which practically improves recovery
limits on the rank of the solution. This approximation leads to a non-convex
program; thus, to avoid getting trapped in local solutions, we use the
following scheme. Initially, a rough approximation of the rank function subject
to the affine constraints is optimized. As the algorithm proceeds, finer
approximations of the rank are optimized and the solver is initialized with the
solution of the previous approximation until reaching the desired accuracy.
On the theoretical side, using the spherical section property, we
show that the sequence of solutions of the successive approximations
converges to the minimum rank solution. On the experimental side, it will be
shown that the proposed algorithm, termed SRF standing for Smoothed Rank
Function, can recover matrices which are unique solutions of the rank
minimization problem and yet not recoverable by nuclear norm minimization.
Furthermore, it will be demonstrated that, in completing partially observed
matrices, the accuracy of SRF is considerably and consistently better than
several well-known algorithms when the number of revealed entries is close to the minimum
number of parameters that uniquely represent a low-rank matrix.
Comment: Accepted in IEEE TSP on December 4th, 201
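To make the coarse-to-fine scheme concrete, here is a minimal sketch in the spirit of SRF, specialized to matrix completion so that projecting onto the affine constraints reduces to resetting the observed entries. The Gaussian surrogate follows the smoothed-rank idea described above; the step size, the schedule, and all constants are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def srf_complete(M, mask, decay=0.5, inner=50, mu=1.0, levels=6):
    """SRF-style sketch: maximize sum_i exp(-sigma_i^2 / (2 delta^2)),
    a smooth proxy for low rank, by projected gradient ascent while
    gradually shrinking delta.  All constants are illustrative."""
    X = np.where(mask, M, 0.0)              # feasible start: keep the observed entries
    delta = np.linalg.norm(X, 2)            # coarse smoothing width to begin with
    for _ in range(levels):
        for _ in range(inner):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            g = s * np.exp(-s**2 / (2 * delta**2))  # gradient of the surrogate, scaled by delta^2
            X = X - mu * (U * g) @ Vt       # ascent step on the smoothed rank function
            X[mask] = M[mask]               # project back onto the affine constraints
        delta *= decay                      # finer approximation of the rank function
    return X
```

Each smoothing level is initialized with the previous level's solution, which is what lets this non-convex surrogate sidestep poor local solutions in practice.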
High Dimensional Low Rank plus Sparse Matrix Decomposition
This paper is concerned with the problem of low rank plus sparse matrix
decomposition for big data. Conventional algorithms for matrix decomposition
use the entire data matrix to extract the low-rank and sparse components, and are
based on optimization problems with complexity that scales with the dimension
of the data, which limits their scalability. Furthermore, existing randomized
approaches mostly rely on uniform random sampling, which is quite inefficient
for many real-world data matrices that exhibit additional structure (e.g.,
clustering). In this paper, a scalable subspace-pursuit approach that
transforms the decomposition problem into a subspace learning problem is
proposed. The decomposition is carried out using a small data sketch formed
from sampled columns/rows. Even when the data is sampled uniformly at random,
it is shown that roughly O(rμ) sampled columns/rows suffice, where μ is the
coherence parameter and r is the rank of the low-rank
component. In addition, adaptive sampling algorithms are proposed to address
the problem of column/row sampling from structured data. We provide an analysis
of the proposed method with adaptive sampling and show that adaptive sampling
makes the required number of sampled columns/rows invariant to the distribution
of the data. The proposed approach is amenable to online implementation, and an
online scheme is also presented.
Comment: IEEE Transactions on Signal Processing
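Setting the sparse component aside for brevity, the sketch below illustrates the sampling side of this approach: the column space of the low-rank component is estimated from a small set of columns, each new column drawn with probability proportional to its residual against the subspace found so far. The residual-based rule and the constants are illustrative assumptions, not the paper's exact adaptive sampling algorithms.

```python
import numpy as np

def adaptive_column_sample(D, k, seed=0):
    """Pick k columns of D, each drawn with probability proportional to its
    squared residual w.r.t. the span of the columns chosen so far; an
    illustrative stand-in for the paper's adaptive sampling step."""
    rng = np.random.default_rng(seed)
    cols, Q = [], np.zeros((D.shape[0], 0))
    for _ in range(k):
        R = D - Q @ (Q.T @ D)                       # residual of every column
        p = np.sum(R**2, axis=0)
        j = rng.choice(D.shape[1], p=p / p.sum())   # structured (e.g. clustered) columns get covered
        cols.append(j)
        Q, _ = np.linalg.qr(D[:, cols])             # orthonormal basis of the growing sketch
    return cols, Q                   # D[:, cols] approximately spans the low-rank component
```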
Accurate and Efficient Private Release of Datacubes and Contingency Tables
A central problem in releasing aggregate information about sensitive data is
to do so accurately while providing a privacy guarantee on the output. Recent
work focuses on the class of linear queries, which include basic counting
queries, data cubes, and contingency tables. The goal is to maximize the
utility of the released answers while giving a rigorous privacy guarantee. Most
results follow a common template: pick a "strategy" set of linear queries to
apply to the data, then use the noisy answers to these queries to reconstruct
the queries of interest. This entails either picking a strategy set that is
hoped to be good for the queries, or performing a costly search over the space
of all possible strategies.
In this paper, we propose a new approach that balances accuracy and
efficiency: we show how to improve the accuracy of a given query set by
answering some strategy queries more accurately than others. This leads to an
efficient optimal noise allocation for many popular strategies, including
wavelets, hierarchies, Fourier coefficients and more. For the important case of
marginal queries we show that this strictly improves on previous methods, both
analytically and empirically. Our results also extend to ensuring that the
returned query answers are consistent with an (unknown) data set at minimal
extra cost in terms of time and noise.
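The "common template" above is compact enough to state in code. The sketch below answers a strategy matrix A under Laplace noise calibrated to its L1 sensitivity and reconstructs the workload W by least squares, which also makes the returned answers mutually consistent; it is a generic illustration under epsilon-differential privacy for counting queries, not the optimized noise allocation proposed in the paper.

```python
import numpy as np

def matrix_mechanism(x, W, A, eps, seed=0):
    """Common template for private linear queries: answer the strategy
    queries A x with Laplace noise scaled to A's L1 sensitivity, then
    reconstruct the workload W x by least squares (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    sens = np.abs(A).sum(axis=0).max()      # L1 sensitivity: max column norm of A
    noisy = A @ x + rng.laplace(scale=sens / eps, size=A.shape[0])
    x_hat, *_ = np.linalg.lstsq(A, noisy, rcond=None)  # consistent estimate of the data
    return W @ x_hat                        # answers to the queries of interest
```

In these terms, the paper's improvement is to give different rows of A different noise scales instead of the single scale used here.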