Nonlinear Basis Pursuit
In compressive sensing, the basis pursuit algorithm aims to find the sparsest
solution to an underdetermined linear equation system. In this paper, we
generalize basis pursuit to finding the sparsest solution to higher order
nonlinear systems of equations, called nonlinear basis pursuit. In contrast to
the existing nonlinear compressive sensing methods, the new algorithm that
solves the nonlinear basis pursuit problem is convex and not greedy. The novel
algorithm enables the compressive sensing approach to be used for a broader
range of applications where there are nonlinear relationships between the
measurements and the unknowns.
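The classical (linear) basis pursuit problem that this paper generalizes, min ||x||_1 subject to Ax = b, can be solved as a linear program. A minimal SciPy sketch of that linear baseline (the split x = u - v is a standard LP reduction, not code from the paper; the matrix sizes are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """min ||x||_1 s.t. Ax = b, via the LP split x = u - v with u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                      # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])               # encodes A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 16))           # underdetermined: 10 equations, 16 unknowns
x0 = np.zeros(16)
x0[3], x0[12] = 1.5, -2.0                   # 2-sparse ground truth
x_hat = basis_pursuit(A, A @ x0)
print(np.linalg.norm(x_hat - x0))           # typically near zero for Gaussian A
```

The returned x_hat always satisfies the measurements and has ℓ1 norm no larger than the ground truth's; for Gaussian matrices with enough measurements it coincides with the sparse solution.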
Recovery of binary sparse signals from compressed linear measurements via polynomial optimization
The recovery of signals with finite-valued components from few linear
measurements is a problem with widespread applications and interesting
mathematical characteristics. In the compressed sensing framework, tailored
methods have been recently proposed to deal with the case of finite-valued
sparse signals. In this work, we focus on binary sparse signals and we propose
a novel formulation, based on polynomial optimization. This approach is
analyzed and compared to the state-of-the-art binary compressed sensing
methods.
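The binary constraint x_i ∈ {0, 1} is what the paper encodes through polynomial equations such as x_i(x_i - 1) = 0. A common baseline (a sketch of the simple box relaxation, not the paper's polynomial-optimization method) relaxes the constraint to the interval [0, 1]; since x is then nonnegative, minimizing the ℓ1 norm is just minimizing sum(x):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 12, 24
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
x0[[2, 7, 19]] = 1.0                          # binary, 3-sparse ground truth
b = A @ x0

# box relaxation: min sum(x)  s.t.  Ax = b,  0 <= x <= 1
res = linprog(np.ones(n), A_eq=A, b_eq=b, bounds=[(0, 1)] * n)
x_hat = res.x
print(np.round(x_hat[[2, 7, 19]], 3))         # entries on the true support
```

Whenever the relaxation returns a {0, 1}-valued vector, it is the sparsest binary solution; the paper's polynomial formulation targets the cases where this simple relaxation is fractional.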
Computational Methods for Sparse Solution of Linear Inverse Problems
The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications.
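Greedy pursuit is one of the algorithm families such surveys cover. A compact sketch of orthogonal matching pursuit (OMP), demonstrated on a dictionary with orthonormal columns, where its s-step recovery is exact (the dictionary and signal here are illustrative):

```python
import numpy as np

def omp(A, b, s):
    """Orthogonal matching pursuit: greedily pick s columns, refit by least squares."""
    residual, support = b.astype(float), []
    for _ in range(s):
        support.append(int(np.argmax(np.abs(A.T @ residual))))  # most correlated atom
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef                     # refit, update residual
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((20, 10)))  # 10 orthonormal columns
x0 = np.zeros(10)
x0[1], x0[6] = 3.0, -1.0                            # 2-sparse target
x_hat = omp(Q, Q @ x0, s=2)
print(np.allclose(x_hat, x0))  # → True
```

With an orthonormal dictionary each greedy step picks a true atom exactly; for general overcomplete dictionaries, the survey's guarantees hinge on coherence or restricted-isometry conditions.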
Stable image reconstruction using total variation minimization
This article presents near-optimal guarantees for accurate and robust image
recovery from under-sampled noisy measurements using total variation
minimization. In particular, we show that from O(s log(N)) nonadaptive linear
measurements, an image can be reconstructed to within the best s-term
approximation of its gradient up to a logarithmic factor, and this factor can
be removed by taking slightly more measurements. Along the way, we prove a
strengthened Sobolev inequality for functions lying in the null space of
suitably incoherent matrices.
Comment: 25 pages.
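The benchmark in the guarantee, the best s-term approximation of the gradient, is easy to compute directly. A small sketch for a 1-D piecewise-constant signal, whose discrete gradient is exactly s-sparse when the signal has s jumps (the signal values are illustrative):

```python
import numpy as np

def best_s_term_error(v, s):
    """l2 error after keeping only the s largest-magnitude entries of v."""
    keep = np.argsort(np.abs(v))[-s:]       # indices of the s largest entries
    approx = np.zeros_like(v)
    approx[keep] = v[keep]
    return np.linalg.norm(v - approx)

# piecewise-constant signal with exactly 2 jumps
x = np.concatenate([np.full(30, 1.0), np.full(40, -2.0), np.full(30, 0.5)])
grad = np.diff(x)                           # discrete gradient: only 2 nonzeros
print(best_s_term_error(grad, 2))  # → 0.0
print(best_s_term_error(grad, 1))  # → 2.5 (the smaller jump is discarded)
```

For such signals the error floor in the recovery bound is zero, which is why TV minimization recovers piecewise-constant images essentially exactly.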
From Sparse Signals to Sparse Residuals for Robust Sensing
One of the key challenges in sensor networks is the extraction of information
by fusing data from a multitude of distinct, but possibly unreliable sensors.
Recovering information from the maximum number of dependable sensors while
specifying the unreliable ones is critical for robust sensing. This sensing
task is formulated here as that of finding the maximum number of feasible
subsystems of linear equations, and proved to be NP-hard. Useful links are
established with compressive sampling, which aims at recovering vectors that
are sparse. In contrast, the signals here are not sparse, but give rise to
sparse residuals. Capitalizing on this form of sparsity, four sensing schemes
with complementary strengths are developed. The first scheme is a convex
relaxation of the original problem expressed as a second-order cone program
(SOCP). It is shown that when the involved sensing matrices are Gaussian and
the reliable measurements are sufficiently many, the SOCP can recover the
optimal solution with overwhelming probability. The second scheme is obtained
by replacing the initial objective function with a concave one. The third and
fourth schemes are tailored for noisy sensor data. The noisy case is cast as a
combinatorial problem that is subsequently surrogated by a (weighted) SOCP.
Interestingly, the derived cost functions fall into the framework of robust
multivariate linear regression, while an efficient block-coordinate descent
algorithm is developed for their minimization. The robust sensing capabilities
of all schemes are verified by simulated tests.
Comment: Under review for publication in the IEEE Transactions on Signal Processing (revised version).
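For scalar measurements, the sparse-residual idea reduces to least absolute deviations (LAD) regression: a few grossly corrupted equations produce a sparse residual that the ℓ1 objective tolerates. A sketch via an LP with slack variables (an illustration of the principle, not the paper's SOCP schemes; the corruption pattern is made up for the demo):

```python
import numpy as np
from scipy.optimize import linprog

def lad(A, y):
    """min ||y - Ax||_1 via LP: introduce t >= 0 with -t <= y - Ax <= t."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])   # minimize sum(t)
    A_ub = np.block([[ A, -np.eye(m)],              #  Ax - t <=  y
                     [-A, -np.eye(m)]])             # -Ax - t <= -y
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n]

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 5))
x0 = rng.standard_normal(5)
y = A @ x0
y[[4, 11, 22]] += 10.0                              # three unreliable sensors
x_hat = lad(A, y)
print(np.linalg.norm(x_hat - x0))                   # typically near zero
```

The ℓ1 residual of the LAD estimate never exceeds that of the true x (which equals the total outlier magnitude), and with enough reliable rows the estimate matches x exactly, mirroring the paper's SOCP recovery guarantee.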
Structure-Based Bayesian Sparse Reconstruction
Sparse signal reconstruction algorithms have attracted research attention due
to their wide applications in various fields. In this paper, we present a
simple Bayesian approach that utilizes the sparsity constraint and a priori
statistical information (Gaussian or otherwise) to obtain near optimal
estimates. In addition, we make use of the rich structure of the sensing matrix
encountered in many signal processing applications to develop a fast sparse
recovery algorithm. The computational complexity of the proposed algorithm is
relatively low compared with the widely used convex relaxation methods as well
as greedy matching pursuit techniques, especially at a low sparsity rate.
Comment: 29 pages, 15 figures; accepted in IEEE Transactions on Signal Processing (July 2012).
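A toy sketch of the Bayesian ingredient (a Gaussian prior on the active coefficients and exhaustive evidence scoring over small supports; this is an illustration of the idea, not the paper's fast structure-exploiting algorithm, and all sizes and variances are made up):

```python
import itertools
import numpy as np

def log_evidence(A_S, b, sx2=1.0, sn2=0.01):
    """log N(b; 0, sx2 * A_S A_S^T + sn2 * I), up to an additive constant."""
    C = sx2 * (A_S @ A_S.T) + sn2 * np.eye(len(b))
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + b @ np.linalg.solve(C, b))

rng = np.random.default_rng(4)
m, n, s = 10, 16, 2
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
x0[[5, 9]] = [1.2, -0.8]                      # true support {5, 9}
b = A @ x0 + 0.01 * rng.standard_normal(m)

# MAP support: score every size-s support (flat prior over supports)
best = max(itertools.combinations(range(n), s),
           key=lambda S: log_evidence(A[:, list(S)], b))
print(best)
```

Exhaustive scoring is exponential in s, which is exactly why the paper exploits the structure of the sensing matrix to prune the search.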
Simultaneously Structured Models with Application to Sparse and Low-rank Matrices
The topic of recovery of a structured model given a small number of linear
observations has been well-studied in recent years. Examples include recovering
sparse or group-sparse vectors, low-rank matrices, and the sum of sparse and
low-rank matrices, among others. In various applications in signal processing
and machine learning, the model of interest is known to be structured in
several ways at the same time, for example, a matrix that is simultaneously
sparse and low-rank.
Often norms that promote each individual structure are known, and allow for
recovery using an order-wise optimal number of measurements (e.g., the ℓ1
norm for sparsity, the nuclear norm for matrix rank). Hence, it is reasonable to
minimize a combination of such norms. We show that, surprisingly, if we use
multi-objective optimization with these norms, then we can do no better,
order-wise, than an algorithm that exploits only one of the present structures.
This result suggests that to fully exploit the multiple structures, we need an
entirely new convex relaxation, i.e. not one that is a function of the convex
relaxations used for each structure. We then specialize our results to the case
of sparse and low-rank matrices. We show that a nonconvex formulation of the
problem can recover the model from very few measurements, which is on the order
of the degrees of freedom of the matrix, whereas the convex problem obtained
from a combination of the ℓ1 and nuclear norms requires many more
measurements. This proves an order-wise gap between the performance of the
convex and nonconvex recovery problems in this case. Our framework applies to
arbitrary structure-inducing norms as well as to a wide range of measurement
ensembles. This allows us to give performance bounds for problems such as
sparse phase retrieval and low-rank tensor completion.
Comment: 38 pages, 9 figures.
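A simultaneously structured object is easy to construct: the outer product of two sparse vectors is both rank-1 and sparse, with only on the order of k intrinsic degrees of freedom. A quick illustration (sizes and values chosen for the example):

```python
import numpy as np

n, k = 20, 3
u = np.zeros(n); u[[1, 4, 7]]  = [1.0, -2.0, 0.5]    # k-sparse left factor
v = np.zeros(n); v[[0, 9, 13]] = [2.0, 1.0, -1.0]    # k-sparse right factor
X = np.outer(u, v)                                    # simultaneously sparse and low-rank

print(np.linalg.matrix_rank(X), np.count_nonzero(X))  # → 1 9
# intrinsic degrees of freedom ~ 2k, far fewer than what the paper shows
# any combination of the individual convex norms can exploit
```

The gap between these few degrees of freedom and the measurement counts achievable by combining the ℓ1 and nuclear norms is precisely the order-wise gap the abstract describes.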