17 research outputs found
Projected Wirtinger Gradient Descent for Low-Rank Hankel Matrix Completion in Spectral Compressed Sensing
This paper considers reconstructing a spectrally sparse signal from a small
number of randomly observed time-domain samples. The signal of interest is a
linear combination of complex sinusoids at distinct frequencies. The
frequencies can assume any continuous values in the normalized frequency
domain. After converting the spectrally sparse signal recovery into a low rank
structured matrix completion problem, we propose an efficient feasible-point
approach, the projected Wirtinger gradient descent (PWGD) algorithm, to solve
it. We further
accelerate our proposed algorithm by a scheme inspired by FISTA. We give the
convergence analysis of our proposed algorithms. Extensive numerical
experiments illustrate their efficiency. Unlike earlier approaches, our
algorithm can solve problems of very large dimensions efficiently.
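As a concrete illustration of the approach described above, here is a minimal numpy sketch of a PWGD-style iteration, assuming only the ingredients named in the abstract (Hankel lifting, rank-r truncation, data consistency); the unit step size, the iteration count, and the omission of the FISTA acceleration are simplifications, not the paper's exact algorithm.

```python
import numpy as np

def hankel(x, p):
    """Build the p x (len(x)-p+1) Hankel matrix of a 1-D signal."""
    n = len(x)
    return np.array([x[i:i + n - p + 1] for i in range(p)])

def dehankel(H):
    """Invert `hankel` by averaging each anti-diagonal."""
    p, q = H.shape
    n = p + q - 1
    x = np.zeros(n, dtype=H.dtype)
    counts = np.zeros(n)
    for i in range(p):
        x[i:i + q] += H[i]
        counts[i:i + q] += 1
    return x / counts

def pwgd(y, mask, r, p, iters=200, step=1.0):
    """Hypothetical sketch of a PWGD-style iteration: alternate a rank-r
    truncation of the Hankel lift, a return to the signal domain, and a
    gradient step on the sampling-consistency term."""
    x = y * mask  # zero-filled initial guess
    for _ in range(iters):
        H = hankel(x, p)
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        Hr = (U[:, :r] * s[:r]) @ Vt[:r]       # project onto rank-r matrices
        z = dehankel(Hr)                        # project back to Hankel signals
        x = z - step * mask * (z - y)           # data-consistency gradient step
    return x
```

With enough observed samples, this iteration keeps the observed entries fixed while driving the unobserved entries toward values consistent with a rank-r Hankel structure.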
Exploiting the structure effectively and efficiently in low rank matrix recovery
Low rank model arises from a wide range of applications, including machine
learning, signal processing, computer algebra, computer vision, and imaging
science. Low rank matrix recovery is about reconstructing a low rank matrix
from incomplete measurements. In this survey we review recent developments on
low rank matrix recovery, focusing on three typical scenarios: matrix sensing,
matrix completion and phase retrieval. An overview of effective and efficient
approaches for the problem is given, including nuclear norm minimization,
projected gradient descent based on matrix factorization, and Riemannian
optimization based on the embedded manifold of low rank matrices. Numerical
recipes of the different approaches are emphasized, accompanied by the
corresponding theoretical recovery guarantees.
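Of the approaches surveyed, nuclear norm minimization is the easiest to sketch. Below is a minimal proximal-gradient implementation for matrix completion, with singular value thresholding as the proximal operator; the parameter choices are illustrative, not prescriptive.

```python
import numpy as np

def svt(Z, tau):
    """Singular value soft-thresholding: the proximal operator of tau*||.||_*."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def complete_nuclear(Y, mask, tau=1.0, iters=500):
    """Proximal gradient iteration (unit step) for
    min_X 0.5*||mask*(X - Y)||_F^2 + tau*||X||_*."""
    X = np.zeros_like(Y)
    for _ in range(iters):
        X = svt(X - mask * (X - Y), tau)
    return X
```

Factorization-based projected gradient descent and Riemannian methods trade this per-iteration SVD of a full matrix for cheaper updates on low-dimensional factors.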
Spectral Compressed Sensing via Projected Gradient Descent
Consider a spectrally sparse signal consisting of a few complex sinusoids,
with or without damping. We study the spectral compressed sensing problem of
reconstructing such a signal from its partially revealed entries. By
utilizing the low rank structure of the Hankel matrix corresponding to the
signal, we develop a computationally efficient algorithm for this problem. The
algorithm starts from an initial guess computed via one-step hard thresholding
followed by projection, and then proceeds by applying projected gradient
descent iterations to a non-convex functional. Based on the sampling with
replacement model, we prove a bound on the number of observed entries that
suffices for our algorithm to successfully recover a spectrally sparse
signal. Moreover, extensive empirical performance comparisons show that
our algorithm is competitive with other state-of-the-art spectral compressed
sensing algorithms in terms of phase transitions and overall computational
time.
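The initialization step described above can be sketched as follows, under a natural reading of the abstract: rescale the zero-filled observations by the empirical sampling rate, lift to a Hankel matrix, and hard-threshold to the r leading singular components. Names and parameters here are illustrative.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def spectral_init(y, mask, r, q):
    """Sketch of one-step hard thresholding followed by projection:
    rescale the zero-filled samples, lift to a Hankel matrix, keep the r
    leading singular components, and map back to a signal by averaging
    anti-diagonals."""
    rate = mask.mean()                                # empirical sampling rate
    H = sliding_window_view(y * mask / rate, q)       # (n-q+1) x q Hankel lift
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :r] * s[:r]) @ Vt[:r]                  # rank-r hard thresholding
    n = len(y)
    x0 = np.zeros(n, dtype=complex)
    cnt = np.zeros(n)
    P, Q = Hr.shape
    for i in range(P):                                # average anti-diagonals
        x0[i:i + Q] += Hr[i]
        cnt[i:i + Q] += 1
    return x0 / cnt
```

The projected gradient descent iterations then refine this initial guess, much as in the PWGD sketch earlier in this listing.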
Harnessing Structures in Big Data via Guaranteed Low-Rank Matrix Estimation
Low-rank modeling plays a pivotal role in signal processing and machine
learning, with applications ranging from collaborative filtering, video
surveillance, medical imaging, to dimensionality reduction and adaptive
filtering. Many modern high-dimensional data and interactions thereof can be
modeled as lying approximately in a low-dimensional subspace or manifold,
possibly with additional structures, and their proper exploitation leads to
significant reductions in the costs of sensing, computation and storage. In recent
years, there has been a plethora of progress in understanding how to exploit low-rank
structures using computationally efficient procedures in a provable manner,
including both convex and nonconvex approaches. On one side, convex relaxations
such as nuclear norm minimization often lead to statistically optimal
procedures for estimating low-rank matrices, where first-order methods are
developed to address the computational challenges; on the other side, there is
emerging evidence that properly designed nonconvex procedures, such as
projected gradient descent, often provide globally optimal solutions with a
much lower computational cost in many problems. This survey article will
provide a unified overview of these recent advances on low-rank matrix
estimation from incomplete measurements. Attention is paid to rigorous
characterization of the performance of these algorithms, and to problems where
the low-rank matrix has additional structural properties that require new
algorithmic designs and theoretical analysis. (To appear in IEEE Signal
Processing Magazine.)
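As a minimal sketch of the nonconvex side of this story, the following factored (Burer-Monteiro-style) gradient descent for matrix completion descends the squared loss on the observed entries; the step size and the small random initialization are illustrative simplifications (the theory surveyed typically assumes a spectral initialization).

```python
import numpy as np

def factored_gd(Y, mask, r, step=0.01, iters=3000, seed=0):
    """Minimal sketch of the nonconvex route: factor X = L @ R.T and run
    gradient descent on the squared loss over observed entries."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    L = 0.1 * rng.standard_normal((m, r))
    R = 0.1 * rng.standard_normal((n, r))
    for _ in range(iters):
        Res = mask * (L @ R.T - Y)          # residual on observed entries only
        L, R = L - step * Res @ R, R - step * Res.T @ L
    return L @ R.T
```

Unlike the convex route, each iteration here costs only a few thin matrix products, which is the computational appeal the survey highlights.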
Spectral Compressed Sensing via CANDECOMP/PARAFAC Decomposition of Incomplete Tensors
We consider the line spectral estimation problem which aims to recover a
mixture of complex sinusoids from a small number of randomly observed time
domain samples. Compressed sensing methods formulate line spectral estimation
as a sparse signal recovery problem by discretizing the continuous frequency
parameter space into a finite set of grid points. Discretization, however,
inevitably incurs errors and leads to deteriorated estimation performance. In
this paper, we propose a new method which leverages recent advances in tensor
decomposition. Specifically, we organize the observed data into a structured
tensor and cast line spectral estimation as a CANDECOMP/PARAFAC (CP)
decomposition problem with missing entries. The uniqueness of the CP
decomposition allows the frequency components to be super-resolved with
infinite precision. Simulation results show that the proposed method provides a
competitive estimation accuracy compared with existing state-of-the-art
algorithms.
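A hedged sketch of CP decomposition with missing entries, using EM-style imputation wrapped around standard alternating least squares; the paper's actual estimator may differ, and for a line-spectrum application the recovered factors would additionally carry Vandermonde structure, which this toy version does not enforce.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product: row j*C.shape[0] + k equals B[j] * C[k]."""
    r = B.shape[1]
    return (B[:, None, :] * C[None, :, :]).reshape(-1, r)

def cp_als_missing(T, mask, r, iters=100, seed=0):
    """EM-style sketch: impute unobserved entries from the current CP model,
    then perform one standard alternating-least-squares sweep per iteration."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, r))
    B = rng.standard_normal((J, r))
    C = rng.standard_normal((K, r))
    for _ in range(iters):
        X = np.einsum('ir,jr,kr->ijk', A, B, C)
        Tf = np.where(mask, T, X)                     # impute missing entries
        A = np.linalg.lstsq(khatri_rao(B, C), Tf.reshape(I, -1).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), Tf.transpose(1, 0, 2).reshape(J, -1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), Tf.transpose(2, 0, 1).reshape(K, -1).T, rcond=None)[0].T
    return np.einsum('ir,jr,kr->ijk', A, B, C)
```

The uniqueness of the CP decomposition is what lets frequency estimates obtained from the recovered factors avoid any grid discretization.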
Accelerating Ill-Conditioned Low-Rank Matrix Estimation via Scaled Gradient Descent
Low-rank matrix estimation is a canonical problem that finds numerous
applications in signal processing, machine learning and imaging science. A
popular approach in practice is to factorize the matrix into two compact
low-rank factors, and then optimize these factors directly via simple iterative
methods such as gradient descent and alternating minimization. Despite
nonconvexity, recent literature has shown that these simple heuristics in
fact achieve linear convergence when initialized properly for a growing number
of problems of interest. However, upon closer examination, existing approaches
can still be computationally expensive especially for ill-conditioned matrices:
the convergence rate of gradient descent depends linearly on the condition
number of the low-rank matrix, while the per-iteration cost of alternating
minimization is often prohibitive for large matrices. The goal of this paper is
to set forth a competitive algorithmic approach dubbed Scaled Gradient Descent
(ScaledGD) which can be viewed as pre-conditioned or diagonally-scaled gradient
descent, where the pre-conditioners are adaptive and iteration-varying with a
minimal computational overhead. With tailored variants for low-rank matrix
sensing, robust principal component analysis and matrix completion, we
theoretically show that ScaledGD achieves the best of both worlds: it converges
linearly at a rate independent of the condition number of the low-rank
matrix, as alternating minimization does, while maintaining the low per-iteration
cost of gradient descent. Our analysis is also applicable to general loss
functions that are restricted strongly convex and smooth over low-rank
matrices. To the best of our knowledge, ScaledGD is the first algorithm that
provably has such properties over a wide range of low-rank matrix estimation
tasks.
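The core ScaledGD update is simple to sketch: plain factored gradient steps, right-multiplied by the adaptive pre-conditioners (R^T R)^{-1} and (L^T L)^{-1}. The completion variant below is a hedged simplification that omits the projection the paper uses to enforce incoherence.

```python
import numpy as np

def scaled_gd(Y, mask, r, step=0.5, iters=300):
    """Sketch of ScaledGD for matrix completion: factored gradient steps
    with iteration-varying diagonal-style pre-conditioners, making the
    convergence rate insensitive to the condition number."""
    rate = mask.mean()
    # spectral initialization from the rescaled zero-filled matrix
    U, s, Vt = np.linalg.svd(mask * Y / rate, full_matrices=False)
    L = U[:, :r] * np.sqrt(s[:r])
    R = Vt[:r].T * np.sqrt(s[:r])
    for _ in range(iters):
        Res = mask * (L @ R.T - Y) / rate
        Ln = L - step * (Res @ R) @ np.linalg.inv(R.T @ R)
        Rn = R - step * (Res.T @ L) @ np.linalg.inv(L.T @ L)
        L, R = Ln, Rn
    return L @ R.T
```

The pre-conditioners are r x r, so the overhead per iteration is negligible next to the factored gradient computation itself.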
Data Driven Tight Frame for Compressed Sensing MRI Reconstruction via Off-the-Grid Regularization
Recently, the finite-rate-of-innovation (FRI) based continuous domain
regularization is emerging as an alternative to the conventional on-the-grid
sparse regularization for the compressed sensing (CS) due to its ability to
alleviate the basis mismatch between the true support of the shape in the
continuous domain and the discrete grid. In this paper, we propose a new
off-the-grid regularization for the CS-MRI reconstruction. Following the recent
works on two dimensional FRI, we assume that the discontinuities/edges of the
image are localized in the zero level set of a band-limited periodic function.
This assumption induces the linear dependencies among the Fourier samples of
the gradient of the image, which leads to a low rank two-fold Hankel matrix. We
further observe that the singular value decomposition of a low rank Hankel
matrix corresponds to an adaptive tight frame system which can represent the
image with sparse canonical coefficients. Based on this observation, we propose
a data driven tight frame based off-the-grid regularization model for the
CS-MRI reconstruction. To solve the nonconvex and nonsmooth model, a proximal
alternating minimization algorithm with a guaranteed global convergence is
adopted. Finally, the numerical experiments show that our proposed data driven
tight frame based approach outperforms the existing approaches.
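The observation that the SVD of a low-rank Hankel matrix yields an adaptive system with sparse canonical coefficients can be illustrated on a one-dimensional toy signal (an illustration only, not the paper's MRI model):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Toy 1-D illustration: the SVD of the low-rank Hankel matrix of a spectrally
# sparse signal gives an adaptive system whose canonical coefficients are sparse.
t = np.arange(128)
x = np.exp(2j * np.pi * 0.07 * t) + np.exp(2j * np.pi * 0.23 * t)
H = sliding_window_view(x, 32)              # 97 x 32 Hankel matrix, rank 2
U, s, Vt = np.linalg.svd(H, full_matrices=False)
coef = U.conj().T @ H                       # coefficients in the adaptive system
energy = np.linalg.norm(coef, axis=1)       # row energies equal the singular values
# only the first two rows carry energy: a 2-sparse canonical representation
```

In the paper this adaptivity is exploited in two dimensions, on the Fourier samples of the image gradient rather than on the image itself.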
Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization
This paper studies noisy low-rank matrix completion: given partial and noisy
entries of a large low-rank matrix, the goal is to estimate the underlying
matrix faithfully and efficiently. Arguably one of the most popular paradigms
to tackle this problem is convex relaxation, which achieves remarkable efficacy
in practice. However, the theoretical support of this approach is still far
from optimal in the noisy setting, falling short of explaining its empirical
success.
We make progress towards demystifying the practical efficacy of convex
relaxation vis-à-vis random noise. When the rank and the condition number of
the unknown matrix are bounded by a constant, we demonstrate that the convex
programming approach achieves near-optimal estimation errors (in terms of
the Euclidean loss, the entrywise loss, and the spectral norm loss) for a
wide range of noise levels. All of this is enabled by bridging convex
relaxation with the nonconvex Burer-Monteiro approach, a seemingly distinct
algorithmic paradigm that is provably robust against noise. More specifically,
we show that an approximate critical point of the nonconvex formulation serves
as an extremely tight approximation of the convex solution, thus allowing us to
transfer the desired statistical guarantees of the nonconvex approach to its
convex counterpart.
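A toy numerical illustration of the bridge described above, under the assumption (ours, not the abstract's) that the nonconvex side is the ridge-regularized Burer-Monteiro objective, whose global minimum coincides with the convex program when the factor rank is large enough: solve both and compare the two solutions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 20, 0.5
M = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))   # rank-2 truth
mask = (rng.random((n, n)) < 0.7).astype(float)
Y = mask * (M + 0.01 * rng.standard_normal((n, n)))             # noisy partial entries

# Convex relaxation: proximal gradient for 0.5||P(X - Y)||_F^2 + lam*||X||_*
X = np.zeros((n, n))
for _ in range(2000):
    U, s, Vt = np.linalg.svd(X - mask * (X - Y), full_matrices=False)
    X = (U * np.maximum(s - lam, 0.0)) @ Vt

# Nonconvex: gradient descent on
# 0.5||P(L R^T - Y)||_F^2 + (lam/2)(||L||_F^2 + ||R||_F^2)
r, step = 3, 0.01
L = 0.5 * rng.standard_normal((n, r))
R = 0.5 * rng.standard_normal((n, r))
for _ in range(8000):
    Res = mask * (L @ R.T - Y)
    L, R = L - step * (Res @ R + lam * L), R - step * (Res.T @ L + lam * R)

# the nonconvex solution tracks the convex one closely
gap = np.linalg.norm(L @ R.T - X) / np.linalg.norm(X)
```

The paper's argument runs in this direction: statistical guarantees established for the nonconvex iterate transfer to the convex solution because the two are provably close.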
Quantized Spectral Compressed Sensing: Cramer-Rao Bounds and Recovery Algorithms
Efficient estimation of wideband spectrum is of great importance for
applications such as cognitive radio. Recently, sub-Nyquist sampling schemes
based on compressed sensing have been proposed to greatly reduce the sampling
rate. However, the important issue of quantization has not been fully
addressed, particularly for high-resolution spectrum and parameter estimation.
In this paper, we aim to recover spectrally-sparse signals and the
corresponding parameters, such as frequencies and amplitudes, from heavy
quantizations of their noisy complex-valued random linear measurements, e.g.,
only the quadrant information. We first characterize the Cramer-Rao bound under
Gaussian noise, which highlights the trade-off between sample complexity and
bit depth under different signal-to-noise ratios for a fixed budget of bits.
Next, we propose a new algorithm based on atomic norm soft thresholding for
signal recovery, which is equivalent to proximal mapping of properly designed
surrogate signals with respect to the atomic norm that promotes spectral
sparsity. The proposed algorithm can be applied to both the single measurement
vector case, as well as the multiple measurement vector case. It is shown that
under the Gaussian measurement model, the spectral signals can be reconstructed
accurately with high probability, as soon as the number of quantized
measurements exceeds the order of K log n, where K is the level of spectral
sparsity and n is the signal dimension. Finally, numerical simulations are
provided to validate the proposed approaches.
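The quadrant measurement model is easy to make concrete: only the signs of the real and imaginary parts of each noisy linear measurement are retained. The toy check below (an illustration, not the paper's atomic-norm recovery algorithm) shows that the quantized data remain correlated with the unquantized measurements.

```python
import numpy as np

def quadrant_quantize(y):
    """Heavy quantization keeping only the quadrant of each complex sample:
    the signs of the real and imaginary parts."""
    return np.sign(y.real) + 1j * np.sign(y.imag)

# Toy check: quadrant data from Gaussian measurements of a spectrally sparse
# signal stay correlated with the unquantized measurements.
rng = np.random.default_rng(0)
n, m = 64, 512
t = np.arange(n)
x = np.exp(2j * np.pi * 0.15 * t) + 0.5 * np.exp(2j * np.pi * 0.34 * t)
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * n)
y = A @ x                       # complex Gaussian measurements
q = quadrant_quantize(y)        # two bits per measurement
corr = abs(np.vdot(q, y)) / (np.linalg.norm(q) * np.linalg.norm(y))
# corr concentrates near sqrt(2/pi) ~ 0.80 for Gaussian measurements
```

This retained correlation is what makes recovery from the order of K log n quantized measurements possible at all.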
Fast low-rank estimation by projected gradient descent: General statistical and algorithmic guarantees
Optimization problems with rank constraints arise in many applications,
including matrix regression, structured PCA, matrix completion and matrix
decomposition problems. An attractive heuristic for solving such problems is to
factorize the low-rank matrix, and to run projected gradient descent on the
nonconvex factorized optimization problem. The goal of this paper is to
provide a general theoretical framework for understanding when such methods
work well, and to characterize the nature of the resulting fixed point. We
provide a simple set of conditions under which projected gradient descent, when
given a suitable initialization, converges geometrically to a statistically
useful solution. Our results are applicable even when the initial solution is
outside any region of local convexity, and even when the problem is globally
concave. Working in a non-asymptotic framework, we show that our conditions are
satisfied for a wide range of concrete models, including matrix regression,
structured PCA, matrix completion with real and quantized observations, matrix
decomposition, and graph clustering problems. Simulation results show excellent
agreement with the theoretical predictions.
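For the matrix completion instance of this framework, the projected step can be sketched as follows for a symmetric matrix X = F F^T, with projection of the rows of F onto a Euclidean ball standing in for the incoherence-type constraint set; all parameter choices are illustrative.

```python
import numpy as np

def factored_pgd(Y, mask, r, radius, step=0.005, iters=4000, seed=0):
    """Hedged sketch for symmetric matrix completion in this framework:
    gradient descent on the factor F (where X = F F^T), each step followed
    by projection of the rows of F onto a ball of the given radius."""
    rng = np.random.default_rng(seed)
    n = Y.shape[0]
    F = 0.1 * rng.standard_normal((n, r))
    for _ in range(iters):
        G = 2.0 * (mask * (F @ F.T - Y)) @ F        # gradient of the masked loss
        F = F - step * G
        norms = np.linalg.norm(F, axis=1, keepdims=True)
        F = F * np.minimum(1.0, radius / np.maximum(norms, 1e-12))
    return F @ F.T
```

The framework's guarantees concern exactly such projected iterates: under suitable conditions they converge geometrically to a statistically useful fixed point even without local convexity.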