Covariance Estimation from Compressive Data Partitions using a Projected Gradient-based Algorithm
Covariance matrix estimation techniques incur high acquisition costs that
challenge the storage and transmission capabilities of sampling systems. For this
reason, various acquisition approaches have been developed to simultaneously
sense and compress the relevant information of the signal using random
projections. However, estimating the covariance matrix from the random
projections is an ill-posed problem that requires further information about the
data, such as sparsity, low rank, or stationary behavior. Furthermore, this
approach fails at high compression ratios. Therefore, this paper proposes an
algorithm based on the projected gradient method to recover a low-rank or
Toeplitz approximation of the covariance matrix. The proposed algorithm divides
the data into subsets projected onto different subspaces, assuming that each
subset contains an approximation of the signal statistics, which improves the
conditioning of the inverse problem. The error induced by this assumption is
analytically derived along with the convergence guarantees of the proposed
method. Extensive simulations show that the proposed algorithm can effectively
recover the covariance matrix of hyperspectral images with high compression
ratios (approximately 8-15%) in noisy scenarios. Additionally, simulations and
theoretical results show that filtering the gradient reduces the estimator's
error, recovering up to twice the number of eigenvectors.

Comment: submitted to IEEE Transactions on Image Processing
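As a concrete illustration of the approach described above, the following sketch fits a rank-r PSD covariance to compressed second moments by projected gradient descent. It is a minimal sketch, not the authors' exact algorithm: the function names, fixed step size, and data layout are assumptions, each subset x_i is assumed to be sensed as y_i = Phi_i x_i, and the Toeplitz variant would swap the low-rank projection for diagonal averaging.

```python
import numpy as np

def project_low_rank_psd(S, r):
    """Project a symmetric matrix onto rank-r PSD matrices:
    symmetrize, then keep the r largest nonnegative eigenvalues."""
    S = (S + S.T) / 2
    w, V = np.linalg.eigh(S)
    w = np.clip(w, 0.0, None)
    top = np.argsort(w)[::-1][:r]
    return (V[:, top] * w[top]) @ V[:, top].T

def covariance_from_partitions(Phis, ys, n, r, step=1e-2, n_iter=500):
    """Hypothetical projected-gradient recovery: fit Sigma so that
    Phi_i @ Sigma @ Phi_i.T matches the compressed outer products
    y_i y_i^T, projecting onto the rank-r PSD cone at each step."""
    Sigma = np.zeros((n, n))
    for _ in range(n_iter):
        G = np.zeros((n, n))
        for Phi, y in zip(Phis, ys):
            residual = Phi @ Sigma @ Phi.T - np.outer(y, y)
            G += Phi.T @ residual @ Phi   # gradient of the Frobenius data fit
        Sigma = project_low_rank_psd(Sigma - step * G, r)
    return Sigma
```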
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as a natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop for understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of $\ell^2$-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
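As a concrete instance of the forward-backward proximal splitting scheme mentioned at point (iii), here is a minimal sketch for the sparsity (l1) prior, where the backward step is soft-thresholding; the function names and the fixed step-size choice are illustrative, not taken from the chapter.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (the sparsity-promoting prior)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward_l1(A, y, lam, n_iter=500):
    """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by alternating a
    gradient (forward) step on the smooth data-fit term with the prox
    (backward) step of the nonsmooth regularizer."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                          # forward step
        x = soft_threshold(x - step * grad, step * lam)   # backward step
    return x
```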
Subspace Methods for Joint Sparse Recovery
We propose robust and efficient algorithms for the joint sparse recovery
problem in compressed sensing, which simultaneously recover the supports of
jointly sparse signals from their multiple measurement vectors obtained through
a common sensing matrix. In a favorable situation, the unknown matrix, which
consists of the jointly sparse signals, has linearly independent nonzero rows.
In this case, the MUSIC (MUltiple SIgnal Classification) algorithm, originally
proposed by Schmidt for the direction of arrival problem in sensor array
processing and later proposed and analyzed for joint sparse recovery by Feng
and Bresler, provides a guarantee with the minimum number of measurements. We
focus instead on the unfavorable but practically significant case of rank
defect or ill-conditioning. This situation arises with a limited number of
measurement vectors, or with highly correlated signal components. In this case,
MUSIC fails, and in practice none of the existing methods can consistently
approach the fundamental limit. We propose subspace-augmented MUSIC (SA-MUSIC),
which improves on MUSIC so that the support is reliably recovered under such
unfavorable conditions. Combined with subspace-based greedy algorithms also
proposed and analyzed in this paper, SA-MUSIC provides a computationally
efficient algorithm with a performance guarantee. The performance guarantees
are given in terms of a version of the restricted isometry property. In particular,
we also present a non-asymptotic perturbation analysis of the signal subspace
estimation that has been missing in previous studies of MUSIC.

Comment: submitted to IEEE Transactions on Information Theory, revised version
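For orientation, the following sketch shows plain MUSIC for the multiple measurement vector problem Y = A X in the favorable case where the k nonzero rows of X are linearly independent; SA-MUSIC would first augment the estimated signal subspace with directions chosen greedily. Names are illustrative.

```python
import numpy as np

def music_support(A, Y, k):
    """Estimate the support of the jointly k-sparse X in Y = A @ X:
    take the k-dimensional signal subspace of the measurements and
    pick the k columns of A best aligned with it."""
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    Us = U[:, :k]                                  # signal subspace basis
    scores = np.linalg.norm(Us.T @ A, axis=0) / np.linalg.norm(A, axis=0)
    return np.sort(np.argsort(scores)[::-1][:k])   # k best-aligned columns
```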
Revisiting the Nystrom Method for Improved Large-Scale Machine Learning
We reconsider randomized algorithms for the low-rank approximation of
symmetric positive semi-definite (SPSD) matrices such as Laplacian and kernel
matrices that arise in data analysis and machine learning applications. Our
main results consist of an empirical evaluation of the performance quality and
running time of sampling and projection methods on a diverse suite of SPSD
matrices. Our results highlight complementary aspects of sampling versus
projection methods; they characterize the effects of common data preprocessing
steps on the performance of these algorithms; and they point to important
differences between uniform sampling and nonuniform sampling methods based on
leverage scores. In addition, our empirical results illustrate that existing
theory is so weak that it does not provide even a qualitative guide to
practice. Thus, we complement our empirical results with a suite of worst-case
theoretical bounds for both random sampling and random projection methods.
These bounds are qualitatively superior to existing bounds (e.g., improved
additive-error bounds for spectral and Frobenius norm error, and relative-error
bounds for trace norm error), and they point to future directions to make these
algorithms useful in even larger-scale machine learning applications.

Comment: 60 pages, 15 color figures; updated proof of Frobenius norm bounds,
added comparison to projection-based low-rank approximations, and an analysis
of the power method applied to SPSD sketches
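For reference, the basic Nystrom approximation studied in the paper takes the form K ≈ C W^+ C^T, where C collects sampled columns of the SPSD matrix K and W is the corresponding submatrix. The sketch below uses uniform column sampling (the leverage-score variant would sample columns nonuniformly); the toy data are illustrative.

```python
import numpy as np

def nystrom_spsd(K, idx):
    """Nystrom approximation from sampled columns: K ~ C @ pinv(W) @ C.T,
    with C = K[:, idx] and W the submatrix K[idx, idx]."""
    C = K[:, idx]
    W = K[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T

# Toy usage: a low-rank Gram (kernel-like) matrix, uniform column sample.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
K = X @ X.T                                   # rank-5 SPSD matrix
idx = rng.choice(K.shape[0], size=20, replace=False)
err = np.linalg.norm(K - nystrom_spsd(K, idx), "fro") / np.linalg.norm(K, "fro")
```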
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks hold substantial potential for supporting a broad
range of complex, compelling applications in both military and civilian fields,
where users can enjoy high-rate, low-latency, low-cost and reliable information
services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting big data
analytics, efficient parameter estimation and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M)
networks, and so on. This article aims to assist readers in clarifying the
motivation and methodology of the various ML algorithms, so as to invoke them
for hitherto unexplored services and scenarios of future wireless networks.

Comment: 46 pages, 22 figures
Inexact Gradient Projection and Fast Data Driven Compressed Sensing
We study convergence of the iterative projected gradient (IPG) algorithm for
arbitrary (possibly nonconvex) sets and when both the gradient and projection
oracles are computed approximately. We consider different notions of
approximation and show that the Progressive Fixed Precision (PFP) and the
$(1+\epsilon)$-optimal oracles can achieve the same accuracy as the
exact IPG algorithm. We show that the former scheme is also able to maintain
the (linear) rate of convergence of the exact algorithm, under the same
embedding assumption. In contrast, the $\epsilon$-approximate oracle
requires a stronger embedding condition and moderate compression ratios, and it
typically slows down the convergence. We apply our results to accelerate
solving a class of data driven compressed sensing problems, where we replace
iterative exhaustive searches over large datasets by fast approximate nearest
neighbour search strategies based on the cover tree data structure. For
datasets with low intrinsic dimension, our proposed algorithm achieves a
complexity logarithmic in the dataset population, as opposed to the linear
complexity of a brute-force search. Several numerical experiments corroborate
the observations predicted by our theoretical analysis.
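A minimal sketch of this data-driven setting, with illustrative names: the model set is a finite dataset of candidate signals, the exact projection oracle is an exhaustive nearest-neighbour search, and an approximate search routine (for instance one backed by a cover tree) can be passed in as the inexact oracle.

```python
import numpy as np

def inexact_ipg(A, y, dataset, n_iter=50, approx_nn=None):
    """Iterative projected gradient for data-driven CS: project each
    gradient step onto the dataset (the model set). `approx_nn`, if
    given, replaces the exhaustive search with an approximate one."""
    def exact_nn(z):
        # Exhaustive projection: closest dataset row, cost linear in
        # the dataset population.
        return dataset[np.argmin(np.linalg.norm(dataset - z, axis=1))]

    project = approx_nn if approx_nn is not None else exact_nn
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = project(np.zeros(A.shape[1]))
    for _ in range(n_iter):
        x = project(x - step * A.T @ (A @ x - y))  # gradient step, then projection
    return x
```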