A Deterministic Theory for Exact Non-Convex Phase Retrieval
In this paper, we analyze the non-convex framework of Wirtinger Flow (WF) for
phase retrieval and identify a novel sufficient condition for universal exact
recovery through the lens of low-rank matrix recovery theory. Adopting a
perspective in the lifted domain, we show that the WF iterates converge
geometrically to a true solution under a single condition on the lifted
forward model. As a result, a deterministic relationship between the accuracy
of spectral initialization and the validity of the regularity condition is
derived. In particular, we determine that a certain concentration property on
the spectral matrix must hold uniformly with a sufficiently tight constant.
This culminates in a sufficient condition that is equivalent to a restricted
isometry-type property over rank-1, positive semi-definite matrices, and
amounts to a less stringent requirement on the lifted forward model than those
of prominent low-rank-matrix-recovery methods in the literature. We
characterize the performance limits of our framework in terms of the tightness
of the concentration property via novel bounds on the convergence rate and on
the signal-to-noise ratio under which the theoretical guarantees remain valid
with spectral initialization at the proper sample complexity. Comment: In revision for IEEE Transactions on Signal Processing.
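The pipeline the abstract analyzes, spectral initialization followed by Wirtinger Flow gradient iterations on phaseless measurements, can be sketched as follows. The dimensions, real Gaussian sensing model, step size, and iteration count are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 400                       # signal dimension, number of measurements (illustrative)
x = rng.standard_normal(n)           # ground-truth signal (real case for simplicity)
A = rng.standard_normal((m, n))      # assumed Gaussian forward model
y = (A @ x) ** 2                     # phaseless, magnitude-squared measurements

# Spectral initialization: leading eigenvector of the data-weighted matrix,
# scaled to match the measured signal energy.
Y = (A.T * y) @ A / m
_, V = np.linalg.eigh(Y)
z = V[:, -1] * np.sqrt(y.mean())

# Wirtinger Flow: gradient descent on the quartic least squares loss
# f(z) = (1/4m) * sum_i (|a_i' z|^2 - y_i)^2.
mu = 0.2 / y.mean()                  # heuristic step size
for _ in range(1000):
    Az = A @ z
    z = z - (mu / m) * (A.T @ ((Az ** 2 - y) * Az))

# In the real case the signal is only recoverable up to a global sign.
err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
```

With this many measurements the spectral estimate lands inside the basin of attraction and the iterates contract geometrically, mirroring the convergence behavior the abstract describes.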
Collaborative Spectrum Sensing from Sparse Observations in Cognitive Radio Networks
Spectrum sensing, which aims at detecting spectrum holes, is the precondition
for the implementation of cognitive radio (CR). Collaborative spectrum sensing
among the cognitive radio nodes is expected to improve the ability of checking
complete spectrum usage. Due to hardware limitations, each cognitive radio node
can only sense a relatively narrow band of radio spectrum. Consequently, the
available channel sensing information is far from being sufficient for
precisely recognizing the wide range of unoccupied channels. Aiming at breaking
this bottleneck, we propose to apply matrix completion and joint sparsity
recovery to reduce sensing and transmitting requirements and improve sensing
results. Specifically, equipped with a frequency selective filter, each
cognitive radio node senses linear combinations of multiple channel information
and reports them to the fusion center, where occupied channels are then decoded
from the reports by using novel matrix completion and joint sparsity recovery
algorithms. As a result, the number of reports sent from the CRs to the fusion
center is significantly reduced. We propose two decoding approaches, one based
on matrix completion and the other based on joint sparsity recovery, both of
which allow exact recovery from incomplete reports. The numerical results
validate the effectiveness and robustness of our approaches. In particular, in
small-scale networks, the matrix completion approach achieves exact channel
detection with a number of samples no more than 50% of the number of channels
in the network, while joint sparsity recovery achieves similar performance in
large-scale networks. Comment: 12 pages, 11 figures.
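The fusion-center decoding idea, completing a low-rank matrix of channel reports from a subset of received entries, can be illustrated with a generic iterative rank-projection (hard-impute) scheme. This is a simple stand-in, not the paper's algorithm, and the network sizes, rank, and 50% report ratio are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cr, n_ch, r = 30, 40, 2            # CR nodes, channels, assumed low rank (illustrative)
M = rng.standard_normal((n_cr, r)) @ rng.standard_normal((r, n_ch))  # true report matrix
mask = rng.random((n_cr, n_ch)) < 0.5    # only half of the reports reach the fusion center

# Hard-impute: alternate a best rank-r approximation with re-imposing the
# observed reports, filling the missing entries from the low-rank fit.
X = np.where(mask, M, 0.0)
for _ in range(500):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]     # best rank-r approximation
    X = np.where(mask, M, Xr)            # keep observed entries, fill the rest

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

Because the degrees of freedom of a rank-2 matrix are far fewer than the observed entries, the completion recovers the unreported channel information accurately.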
Fast and Provable Algorithms for Spectrally Sparse Signal Reconstruction via Low-Rank Hankel Matrix Completion
A spectrally sparse signal of order r is a mixture of r damped or
undamped complex sinusoids. This paper investigates the problem of
reconstructing spectrally sparse signals from a random subset of regular
time domain samples, which can be reformulated as a low rank Hankel matrix
completion problem. We introduce an iterative hard thresholding (IHT) algorithm
and a fast iterative hard thresholding (FIHT) algorithm for efficient
reconstruction of spectrally sparse signals via low rank Hankel matrix
completion. Theoretical recovery guarantees have been established for FIHT,
showing that exact recovery is achieved with high probability once the number
of samples grows polynomially with the model order, up to logarithmic factors
in the signal length. Empirical performance comparisons establish significant
computational advantages for IHT and FIHT. In particular, numerical simulations
on multidimensional arrays demonstrate the capability of FIHT in handling large
and high-dimensional real data.
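An iterative hard thresholding loop of the kind described, rank-r truncation of the Hankel matrix alternated with re-imposing the observed samples, can be sketched as follows. The signal length, frequencies, 60% sampling rate, and iteration count are illustrative assumptions, and this plain IHT sketch omits the acceleration that distinguishes FIHT.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 64, 3                          # signal length, model order (illustrative)
freqs = np.array([0.10, 0.33, 0.72])  # assumed well-separated frequencies
t = np.arange(n)
x = np.exp(2j * np.pi * np.outer(t, freqs)).sum(axis=1)   # undamped spectrally sparse signal
obs = rng.random(n) < 0.6             # observe 60% of the time samples at random

p = n // 2 + 1                        # Hankel matrix of size p x (n - p + 1)

def hankelize(v):
    return np.array([v[i:i + n - p + 1] for i in range(p)])

def dehankelize(H):
    # anti-diagonal averaging: map a matrix back to the nearest Hankel signal
    v = np.zeros(n, dtype=complex)
    cnt = np.zeros(n)
    for i in range(p):
        v[i:i + n - p + 1] += H[i]
        cnt[i:i + n - p + 1] += 1
    return v / cnt

z = np.where(obs, x, 0)
for _ in range(500):
    U, s, Vt = np.linalg.svd(hankelize(z), full_matrices=False)
    z = dehankelize((U[:, :r] * s[:r]) @ Vt[:r])   # rank-r hard thresholding
    z = np.where(obs, x, z)                        # keep the observed samples

rel_err = np.linalg.norm(z - x) / np.linalg.norm(x)
```

Each iteration is dominated by one SVD of the Hankel matrix; the fast variant in the paper avoids forming full SVDs, which is where its computational advantage comes from.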
Optimal selection of reduced rank estimators of high-dimensional matrices
We introduce a new criterion, the Rank Selection Criterion (RSC), for
selecting the optimal reduced rank estimator of the coefficient matrix in
multivariate response regression models. The corresponding RSC estimator
minimizes the Frobenius norm of the fit plus a regularization term proportional
to the number of parameters in the reduced rank model. The rank of the RSC
estimator provides a consistent estimator of the rank of the coefficient
matrix; in general, the rank of our estimator is a consistent estimate of the
effective rank, which we define to be the number of singular values of the
target matrix that are appropriately large. The consistency results are valid
not only in the classic asymptotic regime, in which the number of responses
and the number of predictors stay bounded while the number of observations
grows, but also when either or both of these dimensions grow, possibly much
faster than the number of observations. We establish minimax optimal bounds on the mean squared
errors of our estimators. Our finite sample performance bounds for the RSC
estimator show that it achieves the optimal balance between the approximation
error and the penalty term. Furthermore, our procedure has very low
computational complexity, linear in the number of candidate models, making it
particularly appealing for large scale problems. We contrast our estimator with
the nuclear norm penalized least squares (NNP) estimator, which has an
inherently higher computational complexity than RSC, for multivariate
regression models. We show that NNP has estimation properties similar to those
of RSC, albeit under stronger conditions. However, it is not as parsimonious as
RSC. We offer a simple correction of the NNP estimator which leads to
consistent rank estimation. Comment: Published in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/11-AOS876 (some typos corrected).
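The criterion itself, a residual Frobenius norm plus a penalty proportional to the number of parameters in the rank-k model, can be sketched on synthetic data. The dimensions, noise level, and penalty constant below are illustrative assumptions, not the calibration the paper prescribes.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, m, r_true = 200, 10, 8, 3      # observations, predictors, responses, true rank (illustrative)
A = rng.standard_normal((p, r_true)) @ rng.standard_normal((r_true, m))
X = rng.standard_normal((n, p))
sigma2 = 0.01
Y = X @ A + np.sqrt(sigma2) * rng.standard_normal((n, m))

# Rank-k reduced rank fits are SVD truncations of the full least squares fit.
B_ols = np.linalg.lstsq(X, Y, rcond=None)[0]
F = X @ B_ols
U, s, Vt = np.linalg.svd(F, full_matrices=False)

# RSC-style criterion: residual sum of squares plus a term proportional to
# the k * (m + p) parameters of the rank-k model (constant chosen ad hoc here).
pen = 4.0 * sigma2 * (m + p)
crits = []
for k in range(min(m, p) + 1):
    Fk = (U[:, :k] * s[:k]) @ Vt[:k]
    crits.append(np.linalg.norm(Y - Fk) ** 2 + pen * k)
rank_hat = int(np.argmin(crits))
```

All candidate ranks reuse one SVD of the fitted values, which reflects the low computational cost, linear in the number of candidate models, emphasized in the abstract.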
Recursive Importance Sketching for Rank Constrained Least Squares: Algorithms and High-order Convergence
In this paper, we propose a new Recursive Importance Sketching algorithm for
Rank-constrained least squares Optimization (RISRO). As its name suggests, the
algorithm is based on a new sketching
framework, recursive importance sketching. Several existing algorithms in the
literature can be reinterpreted under the new sketching framework and RISRO
offers clear advantages over them. RISRO is easy to implement and
computationally efficient, where the core procedure in each iteration is only
solving a dimension-reduced least squares problem. Unlike numerous existing
algorithms that exhibit only a locally geometric convergence rate, we establish
local quadratic-linear and quadratic rates of convergence for RISRO under
mild conditions. In addition, we discover a deep connection between RISRO and
Riemannian manifold optimization on fixed rank matrices. The effectiveness of
RISRO is demonstrated in two applications in machine learning and statistics:
low-rank matrix trace regression and phase retrieval. Simulation studies
demonstrate the superior numerical performance of RISRO.
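The flavor of the core step, where each iteration only solves a small dimension-reduced least squares problem, can be conveyed with a plain alternating least squares sketch for low-rank matrix trace regression. Note this is a simpler stand-in, not RISRO's recursive importance sketching, and the problem sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
p1, p2, r, m = 8, 8, 2, 200          # matrix sizes, rank, measurements (illustrative)
M_star = rng.standard_normal((p1, r)) @ rng.standard_normal((r, p2))
As = rng.standard_normal((m, p1, p2))            # Gaussian measurement matrices
y = np.einsum('kij,ij->k', As, M_star)           # trace regression: y_k = <A_k, M*>

# Alternating least squares: with V fixed, <A_k, U V'> is linear in U, so each
# half-step is a least squares problem in only p1 * r (or p2 * r) unknowns.
V = np.linalg.qr(rng.standard_normal((p2, r)))[0]
for _ in range(100):
    Cu = (As @ V).reshape(m, -1)                 # covariates for vec(U)
    U = np.linalg.lstsq(Cu, y, rcond=None)[0].reshape(p1, r)
    Cv = np.einsum('kij,il->kjl', As, U).reshape(m, -1)   # covariates for vec(V)
    V = np.linalg.lstsq(Cv, y, rcond=None)[0].reshape(p2, r)

rel_err = np.linalg.norm(U @ V.T - M_star) / np.linalg.norm(M_star)
```

The reduced problems here have 16 unknowns instead of 64, and RISRO's importance sketching sharpens this idea further to obtain its high-order convergence guarantees.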
High Dimensional Statistical Estimation under Uniformly Dithered One-bit Quantization
In this paper, we propose a uniformly dithered 1-bit quantization scheme for
high-dimensional statistical estimation. The scheme contains truncation,
dithering, and quantization as typical steps. As canonical examples, the
quantization scheme is applied to the estimation problems of sparse covariance
matrix estimation, sparse linear regression (i.e., compressed sensing), and
matrix completion. We study both sub-Gaussian and heavy-tailed regimes, where
the underlying distribution of heavy-tailed data is assumed to have bounded
moments of some order. We propose new estimators based on 1-bit quantized data.
In the sub-Gaussian regime, our estimators achieve near-minimax rates,
indicating that our quantization scheme costs very little. In the heavy-tailed
regime, while the rates of our estimators are essentially slower, these results
are either the first in a 1-bit quantized, heavy-tailed setting or improve on
existing comparable results in some respects. Under the observations in our
setting, the rates are almost tight in compressed sensing and matrix
completion. Our 1-bit compressed sensing results feature general sensing
vectors that are sub-Gaussian or even heavy-tailed. We are also the first to
investigate a novel setting where both the covariate and the response are
quantized. In addition, our approach to 1-bit matrix completion does not rely
on a likelihood and represents the first method robust to pre-quantization
noise with unknown distribution. Experimental results on synthetic data are
presented to support our theoretical analysis. Comment: We add lower bounds for 1-bit quantization of heavy-tailed data (Theorems 11 and 14).
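The role of uniform dithering, making the sign of a dithered sample an unbiased carrier of the sample's value, can be illustrated on the simplest case, mean estimation from one bit per sample. The dither range, sample size, and Gaussian data are assumptions for this sketch.

```python
import numpy as np

rng = np.random.default_rng(5)
B = 4.0                                   # dither range; must dominate the truncated data
mu = 1.3                                  # mean to be estimated (illustrative)
x = np.clip(mu + rng.standard_normal(200_000), -B, B)   # truncation step
tau = rng.uniform(-B, B, x.shape)         # uniform dither
q = np.sign(x + tau)                      # 1-bit quantized observations

# For |x| <= B, E[sign(x + tau)] = x / B, so rescaling the sign average gives
# a (nearly) unbiased estimate of the mean despite the extreme quantization.
mu_hat = B * q.mean()
```

The same truncate-dither-quantize structure underlies the paper's estimators for covariance estimation, compressed sensing, and matrix completion, where the averaging happens inside the respective estimators rather than over a scalar mean.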