Power-Constrained Sparse Gaussian Linear Dimensionality Reduction over Noisy Channels
In this paper, we investigate power-constrained sensing matrix design in a
sparse Gaussian linear dimensionality reduction framework. Our study is carried
out in a single-terminal setup as well as in a multi-terminal setup
consisting of orthogonal or coherent multiple access channels (MAC). We adopt
the mean square error (MSE) performance criterion for sparse source
reconstruction in a system where source-to-sensor channel(s) and
sensor-to-decoder communication channel(s) are noisy. Our proposed sensing
matrix design procedure relies upon minimizing a lower-bound on the MSE in
single- and multiple-terminal setups. We propose a three-stage sensing matrix
optimization scheme that combines semi-definite relaxation (SDR) programming, a
low-rank approximation problem, and power rescaling. Under certain conditions,
we derive closed-form solutions to the proposed optimization procedure. Through
numerical experiments, by applying practical sparse reconstruction algorithms,
we show the superiority of the proposed scheme by comparing it with other
relevant methods. This performance improvement is achieved at the price of
higher computational complexity. Hence, in order to address the complexity
burden, we present an equivalent stochastic optimization method to the problem
of interest that can be solved approximately, while still providing a superior
performance over the popular methods.
Comment: Accepted for publication in IEEE Transactions on Signal Processing
(16 pages)
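The low-rank approximation and power-rescaling stages can be illustrated in isolation. The sketch below is a hypothetical toy instance, not the authors' full SDR pipeline: it takes a full-rank PSD matrix standing in for an SDR output, projects it onto rank m via its eigendecomposition (Eckart-Young), and rescales the resulting sensing matrix to meet a power budget P. The dimensions n, m and the budget P are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 8, 3   # signal dimension and number of measurements (assumed)
P = 1.0       # total power budget (assumed)

# Suppose an SDR stage returned a full-rank PSD matrix Z playing the role of A^T A.
B = rng.standard_normal((n, n))
Z = B @ B.T

# Stage 2: best rank-m approximation of Z via its eigendecomposition.
w, V = np.linalg.eigh(Z)                       # eigenvalues in ascending order
idx = np.argsort(w)[::-1][:m]                  # keep the m largest
A = np.sqrt(w[idx])[:, None] * V[:, idx].T     # m x n factor with A^T A = rank-m approx of Z

# Stage 3: rescale so the sensing matrix meets the power constraint tr(A A^T) <= P.
scale = np.sqrt(P / np.trace(A @ A.T))
A = scale * A

power = np.trace(A @ A.T)   # equals the budget P after rescaling
```

The rescaling in stage 3 is exact because trace is quadratic in A, so a single scalar factor suffices to hit the power budget with equality.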
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one stop shop toward the understanding of the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of ℓ2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
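As a concrete instance of point (iii), forward-backward splitting applied to the sparsity prior reduces to iterative soft thresholding (ISTA): a gradient step on the smooth data-fit term followed by the proximal operator of the ℓ1 norm. The sketch below is a minimal illustration on synthetic noiseless data; the problem sizes, the regularization weight, and the fixed step size 1/L are illustrative choices, not prescriptions from the chapter.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(Phi, y, lam, n_iter=500):
    """ISTA for min_x 0.5*||y - Phi x||^2 + lam*||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)           # forward (gradient) step
        x = soft_threshold(x - grad / L, lam / L)  # backward (proximal) step
    return x

rng = np.random.default_rng(1)
Phi = rng.standard_normal((30, 60)) / np.sqrt(30)
x_true = np.zeros(60)
x_true[[5, 20, 41]] = [2.0, -1.5, 1.0]         # 3-sparse ground truth
y = Phi @ x_true                               # noiseless measurements
x_hat = forward_backward(Phi, y, lam=0.01)
```

Swapping the soft-thresholding prox for the prox of another partly smooth regularizer (group soft thresholding, singular-value thresholding, and so on) changes the model without changing the splitting scheme.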
Blind Multilinear Identification
We discuss a technique that allows blind recovery of signals or blind
identification of mixtures in instances where such recovery or identification
were previously thought to be impossible: (i) closely located or highly
correlated sources in antenna array processing, (ii) highly correlated
spreading codes in CDMA radio communication, (iii) nearly dependent spectra in
fluorescent spectroscopy. This has important implications: in the case of
antenna array processing, it allows for joint localization and extraction of
multiple sources from the measurement of a noisy mixture recorded on multiple
sensors in an entirely deterministic manner. In the case of CDMA, it allows the
possibility of having a number of users larger than the spreading gain. In the
case of fluorescent spectroscopy, it allows for detection of nearly identical
chemical constituents. The proposed technique involves the solution of a
bounded coherence low-rank multilinear approximation problem. We show that
bounded coherence allows us to establish existence and uniqueness of the
recovered solution. We will provide some statistical motivation for the
approximation problem and discuss greedy approximation bounds. To provide the
theoretical underpinnings for this technique, we develop a corresponding theory
of sparse separable decompositions of functions, including notions of rank and
nuclear norm that specialize to the usual ones for matrices and operators but
apply also to hypermatrices and tensors.
Comment: 20 pages, to appear in IEEE Transactions on Information Theory
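A minimal rank-1 instance of the low-rank multilinear approximation can be sketched with alternating least squares on a 3-way array. This toy example is an assumption-laden illustration of the approximation problem only, not the authors' bounded-coherence formulation: on exactly separable data it should recover the rank-1 factors.

```python
import numpy as np

def rank1_als(T, n_iter=100):
    """Best rank-1 approximation T ~ a (x) b (x) c via alternating least squares."""
    I, J, K = T.shape
    rng = np.random.default_rng(0)
    b = rng.standard_normal(J)
    c = rng.standard_normal(K)
    for _ in range(n_iter):
        # Each update is the exact least-squares solution with the other factors fixed.
        a = np.einsum('ijk,j,k->i', T, b, c) / ((b @ b) * (c @ c))
        b = np.einsum('ijk,i,k->j', T, a, c) / ((a @ a) * (c @ c))
        c = np.einsum('ijk,i,j->k', T, a, b) / ((a @ a) * (b @ b))
    return a, b, c

# Toy separable data: T is exactly rank 1, so ALS should fit it to machine precision.
a0 = np.array([1.0, 2.0])
b0 = np.array([3.0, -1.0, 0.5])
c0 = np.array([0.5, 4.0])
T = np.einsum('i,j,k->ijk', a0, b0, c0)

a, b, c = rank1_als(T)
T_hat = np.einsum('i,j,k->ijk', a, b, c)
err = np.linalg.norm(T - T_hat)
```

For higher ranks and near-degenerate factors, plain ALS can behave badly; the coherence bound discussed in the abstract is precisely what restores existence and uniqueness of the approximation.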
Greedy-Like Algorithms for the Cosparse Analysis Model
The cosparse analysis model has been introduced recently as an interesting
alternative to the standard sparse synthesis approach. A prominent question
brought up by this new construction is the analysis pursuit problem -- the need
to find a signal belonging to this model, given a set of corrupted measurements
of it. Several pursuit methods have already been proposed based on ℓ1
relaxation and a greedy approach. In this work we pursue this question further,
and propose a new family of pursuit algorithms for the cosparse analysis model,
mimicking the greedy-like methods -- compressive sampling matching pursuit
(CoSaMP), subspace pursuit (SP), iterative hard thresholding (IHT) and hard
thresholding pursuit (HTP). Assuming the availability of a near optimal
projection scheme that finds the nearest cosparse subspace to any vector, we
provide performance guarantees for these algorithms. Our theoretical study
relies on a restricted isometry property adapted to the context of the cosparse
analysis model. We explore empirically the performance of these algorithms by
adopting a plain thresholding projection, demonstrating their good performance.
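The plain thresholding projection mentioned above can be sketched for the IHT member of the family: keep the l rows of the analysis operator Omega where |Omega x| is smallest as the candidate cosupport, and orthogonally project onto their null space. The sketch below is a simplified toy version with assumed sizes and a fixed step size, not the paper's tuned algorithm.

```python
import numpy as np

def cosparse_project(x, Omega, l):
    """Plain thresholding projection: pick the l smallest analysis coefficients
    as the cosupport and project x onto the null space of those rows of Omega."""
    Lam = np.argsort(np.abs(Omega @ x))[:l]    # candidate cosupport
    O_lam = Omega[Lam]
    return x - np.linalg.pinv(O_lam) @ (O_lam @ x)

def analysis_iht(M, y, Omega, l, n_iter=50):
    """Analysis IHT: gradient step on ||y - M x||^2, then cosparse projection."""
    mu = 1.0 / np.linalg.norm(M, 2) ** 2       # conservative fixed step size
    x = np.zeros(M.shape[1])
    for _ in range(n_iter):
        x = cosparse_project(x + mu * M.T @ (y - M @ x), Omega, l)
    return x

rng = np.random.default_rng(2)
n, p, m, l = 20, 30, 12, 18                    # illustrative dimensions
Omega = rng.standard_normal((p, n))

# Build an l-cosparse ground truth: a vector in the null space of l rows of Omega.
null_basis = np.linalg.svd(Omega[:l])[2][l:].T  # n x (n - l) null-space basis
x_true = null_basis @ rng.standard_normal(n - l)

M = rng.standard_normal((m, n)) / np.sqrt(m)
y = M @ x_true
x_hat = analysis_iht(M, y, Omega, l)
```

By construction the output is l-cosparse: the final projection zeroes the analysis coefficients on the selected cosupport, which is the property the near-optimal projection oracle in the paper is assumed to provide.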
Tolerant Compressed Sensing With Partially Coherent Sensing Matrices
We consider compressed sensing (CS) using partially coherent sensing matrices.
Most CS theory to date focuses on incoherent sensing, that is, settings where
the columns of the sensing matrix are nearly uncorrelated. However, sensing
systems with naturally occurring correlations arise in many applications, such
as signal detection, motion detection and radar. Moreover, in these
applications it is often not necessary to know the support of the signal
exactly; instead, small errors in the support and signal are tolerable. In this
paper, we focus on d-tolerant recovery, in which support set reconstructions
are considered accurate when their locations match the true locations within d
indices. Despite the abundance of work utilizing incoherent sensing matrices,
for d-tolerant recovery we suggest that coherence is actually beneficial. This
is especially true for situations with only a few, very noisy measurements, as
we demonstrate via numerical simulations. As a first step towards a theory of
tolerant coherent sensing, we introduce the notions of d-coherence and
d-tolerant recovery. We then provide theoretical arguments for a greedy
algorithm applicable to d-tolerant recovery of signals with sufficiently spread
support.
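The notion of d-tolerant support recovery can be made concrete with a small checker: every true support index must be matched by an estimated index within d positions, and vice versa. This symmetric formalization is a hypothetical illustration; the paper's exact definition may differ in detail.

```python
import numpy as np

def d_tolerant_match(true_support, est_support, d):
    """True iff each true support location has an estimated location within d
    indices, and each estimated location has a true location within d indices."""
    true_support = np.asarray(true_support)
    est_support = np.asarray(est_support)
    fwd = all(np.min(np.abs(est_support - t)) <= d for t in true_support)
    bwd = all(np.min(np.abs(true_support - e)) <= d for e in est_support)
    return fwd and bwd

# Exact recovery is 0-tolerant; off-by-one locations pass only once d >= 1.
exact = d_tolerant_match([10, 40, 75], [10, 40, 75], 0)   # True
strict = d_tolerant_match([10, 40, 75], [11, 39, 75], 0)  # False
loose = d_tolerant_match([10, 40, 75], [11, 39, 75], 1)   # True
```

Under this metric, a coherent sensing matrix that smears a spike across a few neighboring columns can still yield a successful recovery, which is the intuition behind the paper's claim that coherence helps.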