Minimax rank estimation for subspace tracking
Rank estimation is a classical model order selection problem that arises in a
variety of important statistical signal and array processing systems, yet is
addressed relatively infrequently in the extant literature. Here we present
sample covariance asymptotics stemming from random matrix theory, and bring
them to bear on the problem of optimal rank estimation in the context of the
standard array observation model with additive white Gaussian noise. The most
significant of these results demonstrates the existence of a phase transition
threshold, below which eigenvalues and associated eigenvectors of the sample
covariance fail to provide any information on population eigenvalues. We then
develop a decision-theoretic rank estimation framework that leads to a simple
ordered selection rule based on thresholding; in contrast to competing
approaches, however, it admits asymptotic minimax optimality and is free of
tuning parameters. We analyze the asymptotic performance of our rank selection
procedure and conclude with a brief simulation study demonstrating its
practical efficacy in the context of subspace tracking.
Comment: 10 pages, 4 figures; final version
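A minimal sketch of the eigenvalue-thresholding idea in this abstract, assuming a spiked-covariance model with known unit noise variance; the specific threshold (the random-matrix phase-transition edge with a small safety margin) and the synthetic setup are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def estimate_rank(X, sigma2=1.0):
    """Estimate signal rank by counting sample-covariance eigenvalues above
    the phase-transition threshold sigma^2 * (1 + sqrt(p/n))^2.
    X is a p x n data matrix (p sensors, n snapshots)."""
    p, n = X.shape
    C = X @ X.T / n                            # sample covariance
    eigvals = np.linalg.eigvalsh(C)            # ascending order
    # Small multiplicative margin guards against finite-sample
    # fluctuations of the noise eigenvalue edge.
    tau = sigma2 * (1.0 + np.sqrt(p / n)) ** 2 * 1.05
    return int(np.sum(eigvals > tau))

# Synthetic check: rank-3 signal subspace in additive white Gaussian noise.
rng = np.random.default_rng(0)
p, n, r = 50, 500, 3
A = rng.standard_normal((p, r))                # signal subspace basis
X = A @ rng.standard_normal((r, n)) + rng.standard_normal((p, n))
print(estimate_rank(X))                        # expect 3 for this strong signal
```

Below the phase-transition threshold, as the abstract notes, sample eigenvalues carry no information about population spikes, so no rule of this form can do better in that regime.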
On the Effective Measure of Dimension in the Analysis Cosparse Model
Many applications have benefited remarkably from low-dimensional models in
the past decade. The fact that many signals, though high dimensional, are intrinsically low dimensional has made it possible to recover them stably from a relatively small number of measurements. For example, in
compressed sensing with the standard (synthesis) sparsity prior and in matrix
completion, the number of measurements needed is proportional (up to a
logarithmic factor) to the signal's manifold dimension.
Recently, a new natural low-dimensional signal model has been proposed: the
cosparse analysis prior. In the noiseless case, it is possible to recover
signals from this model, using a combinatorial search, from a number of
measurements proportional to the signal's manifold dimension. However, if we
ask for stability to noise or an efficient (polynomial complexity) solver, all
the existing results demand a number of measurements which is far removed from
the manifold dimension, sometimes far greater. Thus, it is natural to ask
whether this gap is a deficiency of the theory and the solvers, or if there
exists a real barrier in recovering the cosparse signals by relying only on
their manifold dimension. Is there an algorithm which, in the presence of
noise, can accurately recover a cosparse signal from a number of measurements
proportional to the manifold dimension? In this work, we prove that there is no
such algorithm. Further, we show through numerical simulations that even in the
noiseless case convex relaxations fail when the number of measurements is
comparable to the manifold dimension. This gives a practical counter-example to
the growing literature on compressed acquisition of signals based on manifold
dimension.
Comment: 19 pages, 6 figures
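The cosparse analysis model discussed above can be illustrated with the standard 1-D finite-difference operator (my choice of operator for the example, not the paper's): a signal is ell-cosparse when its analysis coefficients contain ell zeros, and piecewise-constant signals are highly cosparse under differences.

```python
import numpy as np

# Analysis cosparse model: x is ell-cosparse under Omega if Omega @ x has
# ell zero entries. Here Omega computes first differences, so a
# piecewise-constant signal has one nonzero coefficient per jump.
d = 100
Omega = np.eye(d - 1, d, k=1) - np.eye(d - 1, d)   # (d-1) x d difference operator

# Piecewise-constant signal with 3 pieces (2 jumps).
x = np.concatenate([np.full(40, 2.0), np.full(35, -1.0), np.full(25, 3.0)])

coeffs = Omega @ x
cosparsity = int(np.sum(np.abs(coeffs) < 1e-12))   # zero rows of Omega @ x
print(cosparsity)        # 97: only the 2 jump locations are nonzero
print(d - cosparsity)    # 3: matches the signal's degrees of freedom (pieces)
```

The quantity d minus the cosparsity is the "manifold dimension" the abstract refers to; the paper's negative result says stable recovery from that few measurements is impossible in noise.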
On the Power of Adaptivity in Matrix Completion and Approximation
We consider the related tasks of matrix completion and matrix approximation
from missing data and propose adaptive sampling procedures for both problems.
We show that adaptive sampling allows one to eliminate standard incoherence
assumptions on the matrix row space that are necessary for passive sampling
procedures. For exact recovery of a low-rank matrix, our algorithm judiciously
selects a few columns to observe in full and, with a few additional measurements, projects the remaining columns onto their span. This algorithm exactly recovers a rank-r matrix using a number of observations governed by a coherence parameter on the column space of the matrix. In
addition to completely eliminating any row space assumptions that have pervaded
the literature, this algorithm enjoys a better sample complexity than any
existing matrix completion algorithm. To certify that this improvement is due
to adaptive sampling, we establish that row space coherence is necessary for
passive sampling algorithms to achieve non-trivial sample complexity bounds.
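A noiseless caricature of the column-sampling scheme described above; for brevity this sketch assumes the first r columns already span the column space (the actual algorithm selects informative columns adaptively), and the per-column sample size is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 50, 40, 3                        # n x m matrix of rank r
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))

# Observe the first r columns in full; here they span the column space
# almost surely (assumption made to keep the sketch short).
U = M[:, :r]

# For every other column, observe only a few random entries and solve a
# small least-squares problem for its coordinates in span(U).
M_hat = np.empty_like(M)
M_hat[:, :r] = U
k = 10                                     # entries observed per column
for j in range(r, m):
    omega = rng.choice(n, size=k, replace=False)
    c, *_ = np.linalg.lstsq(U[omega], M[omega, j], rcond=None)
    M_hat[:, j] = U @ c                    # reconstruct the full column

print(np.allclose(M_hat, M))               # exact recovery in the noiseless case
```

Total observations here are n*r full-column entries plus k per remaining column, far fewer than the n*m entries a dense observation would need, which is the sample-complexity gain the abstract claims for adaptivity.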
For constructing a low-rank approximation to a high-rank input matrix, we
propose a simple algorithm that thresholds the singular values of a zero-filled
version of the input matrix. The algorithm computes an approximation that is
nearly as good as the best rank-r approximation using a number of samples governed by a slightly different coherence parameter on the matrix columns. Again we eliminate assumptions on the row space.
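A hedged sketch of the zero-filling-and-thresholding idea for low-rank approximation; the inverse-probability rescaling (which makes the zero-filled matrix an unbiased estimate of the input) and the truncation rank are my assumptions, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r, p_obs = 100, 2, 0.5
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# Zero-fill the unobserved entries and rescale by 1/p_obs so that the
# result is an unbiased estimate of M, then threshold singular values.
mask = rng.random((n, n)) < p_obs
Z = np.where(mask, M, 0.0) / p_obs

U, s, Vt = np.linalg.svd(Z, full_matrices=False)
s[r:] = 0.0                                # keep only the top-r singular values
M_hat = (U * s) @ Vt

rel_err = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
print(rel_err)                             # modest error with half the entries missing
```

The sampling here is passive and uniform; the abstract's point is that, for approximation as for completion, the method's guarantees need coherence assumptions only on the columns, not on the row space.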