Decentralized and Collaborative Subspace Pursuit: A Communication-Efficient Algorithm for Joint Sparsity Pattern Recovery with Sensor Networks
In this paper, we consider the problem of joint sparsity pattern recovery in
a distributed sensor network. The sparse multiple measurement vector signals
(MMVs) observed by all the nodes are assumed to have a common (but unknown)
sparsity pattern. To accurately recover the common sparsity pattern in a
decentralized manner with a low communication overhead of the network, we
develop an algorithm named decentralized and collaborative subspace pursuit
(DCSP). In DCSP, each node is required to perform three kinds of operations per
iteration: 1) estimate the local sparsity pattern by finding the subspace that
its measurement vector most probably lies in; 2) share its local sparsity
pattern estimate with one-hop neighboring nodes; and 3) update the final
sparsity pattern estimate by majority vote based fusion of all the local
sparsity pattern estimates obtained in its neighborhood. The convergence of
DCSP is proved and its communication overhead is quantitatively analyzed. We
also propose another decentralized algorithm named generalized DCSP (GDCSP) by
allowing more information exchange among neighboring nodes to further improve
the accuracy of sparsity pattern recovery at the cost of increased
communication overhead. Experimental results show that, 1) compared with
existing decentralized algorithms, DCSP provides much better accuracy of
sparsity pattern recovery at a comparable communication cost; and 2) the
accuracy of GDCSP is very close to that of centralized processing.
Comment: 30 pages, 9 figures
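The per-iteration fusion step of DCSP (operation 3) can be sketched as a simple majority vote over the support estimates a node gathers from its one-hop neighborhood. The function name and the simple-majority rule below are illustrative assumptions, not the paper's exact formulation:

```python
# Hypothetical sketch of DCSP's majority-vote fusion step: each node
# collects local sparsity-pattern estimates from its neighborhood and
# keeps the indices selected by a majority of them.
from collections import Counter

def majority_vote_fusion(neighborhood_estimates):
    """Fuse local support estimates (sets of atom indices) by majority vote."""
    votes = Counter()
    for support in neighborhood_estimates:
        votes.update(support)
    threshold = len(neighborhood_estimates) / 2
    return {idx for idx, count in votes.items() if count > threshold}

# Three neighbors agree on indices 2 and 7; index 5 gets only one vote.
fused = majority_vote_fusion([{2, 7}, {2, 7, 5}, {2, 7}])
print(sorted(fused))  # [2, 7]
```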
Application of Compressive Sensing Techniques in Distributed Sensor Networks: A Survey
In this survey paper, our goal is to discuss recent advances of compressive
sensing (CS) based solutions in wireless sensor networks (WSNs) including the
main ongoing/recent research efforts, challenges and research trends in this
area. In WSNs, CS based techniques are well motivated by not only the sparsity
prior observed in different forms but also by the requirement of efficient
in-network processing in terms of transmit power and communication bandwidth
even with nonsparse signals. In order to apply CS in a variety of WSN
applications efficiently, there are several factors to be considered beyond the
standard CS framework. We start the discussion with a brief introduction to the
theory of CS and then describe the motivational factors behind the potential
use of CS in WSN applications. Then, we identify three main areas along which
the standard CS framework is extended so that CS can be efficiently applied to
solve a variety of problems specific to WSNs. In particular, we emphasize
the significance of extending the CS framework to (i) take communication
constraints into account while designing projection matrices and reconstruction
algorithms for signal reconstruction in centralized as well as decentralized
settings, (ii) solve a variety of inference problems such as detection,
classification and parameter estimation, with compressed data without signal
reconstruction and (iii) take practical communication aspects such as
measurement quantization, physical layer secrecy constraints, and imperfect
channel conditions into account. Finally, open research issues and challenges
are discussed in order to provide perspectives for future research directions.
Joint Sparse Recovery With Semisupervised MUSIC
With its low computational cost and mild condition requirements, discrete
multiple signal classification (MUSIC) has become a significant noniterative
algorithm for joint sparse recovery (JSR). However, it fails in the
rank-defective problem caused by coherent or a limited number of multiple
measurement vectors (MMVs). In this letter, we provide a novel insight into
this problem by interpreting JSR as a binary classification problem with
respect to atoms.
Meanwhile, MUSIC essentially constructs a supervised classifier based on the
labeled MMVs so that its performance will heavily depend on the quality and
quantity of these training samples. From this viewpoint, we develop a
semisupervised MUSIC (SS-MUSIC) in the spirit of machine learning, which
declares that the insufficient supervised information in the training samples
can be compensated from those unlabeled atoms. Instead of constructing a
classifier in a fully supervised manner, we iteratively refine a semisupervised
classifier by exploiting the labeled MMVs and some reliable unlabeled atoms
simultaneously. In this way, the required conditions can be greatly relaxed
and the number of iterations reduced. Numerical experiments demonstrate that
SS-MUSIC can achieve much better recovery performance than other extended
MUSIC algorithms as well as some typical greedy algorithms for JSR in terms
of iterations and recovery probability.
Comment: Code is available
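As context for the fully supervised baseline that SS-MUSIC refines, here is a minimal sketch of a classical MUSIC-style criterion for support identification from MMVs, assuming enough linearly independent MMVs that the signal subspace has full rank k; all names are illustrative:

```python
import numpy as np

def music_support(Y, A, k):
    """Rank dictionary atoms by how well they align with the signal
    subspace spanned by the MMVs Y, and keep the k best-aligned ones."""
    # Signal subspace: top-k left singular vectors of the measurements.
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    Us = U[:, :k]
    # Score each normalized atom by the energy of its projection onto
    # the signal subspace (equals 1 for atoms lying in that subspace).
    A_n = A / np.linalg.norm(A, axis=0)
    scores = np.linalg.norm(Us.T @ A_n, axis=0)
    return set(int(i) for i in np.argsort(scores)[-k:])
```

When the MMVs are rank-deficient (the failure mode the letter addresses), the signal subspace is underestimated and this supervised criterion breaks down.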
Local sparsity and recovery of fusion frames structured signals
The problem of recovering signals of high complexity from low quality sensing
devices is analyzed via a combination of tools from signal processing and
harmonic analysis. By using the rich structure offered by the recent
development in fusion frames, we introduce a compressed sensing framework in
which we split the dense information into sub-channel or local pieces and then
fuse the local estimations. Each piece of information is measured by
potentially low quality sensors, modeled by linear matrices and recovered via
compressed sensing -- when necessary. Finally, by a fusion process within the
fusion frames, we are able to recover accurately the original signal.
Using our new method, we show, and illustrate on simple numerical examples,
that it is possible, and sometimes necessary, to split a signal via local
projections and / or filtering for accurate, stable, and robust estimation. In
particular, we show that by increasing the size of the fusion frame, a certain
robustness to noise can also be achieved. While the computational complexity
remains relatively low, we achieve stronger recovery performance compared to
usual single-device compressed sensing systems.
Comment: 17 figures, 42 pages
Robust Recovery of Signals From a Structured Union of Subspaces
Traditional sampling theories consider the problem of reconstructing an
unknown signal x from a series of samples. A prevalent assumption which often
guarantees recovery from the given measurements is that x lies in a known
subspace. Recently, there has been growing interest in nonlinear but structured
signal models, in which x lies in a union of subspaces. In this paper we
develop a general framework for robust and efficient recovery of such signals
from a given set of samples. More specifically, we treat the case in which x
lies in a sum of k subspaces, chosen from a larger set of m possibilities.
The samples are modelled as inner products with an arbitrary set of sampling
functions. To derive an efficient and robust recovery algorithm, we show that
our problem can be formulated as that of recovering a block-sparse vector whose
non-zero elements appear in fixed blocks. We then propose a mixed
l2/l1 program for block sparse recovery. Our main result is an
equivalence condition under which the proposed convex algorithm is guaranteed
to recover the original signal. This result relies on the notion of block
restricted isometry property (RIP), which is a generalization of the standard
RIP used extensively in the context of compressed sensing. Based on RIP we also
prove stability of our approach in the presence of noise and modelling errors.
A special case of our framework is that of recovering multiple measurement
vectors (MMV) that share a joint sparsity pattern. Adapting our results to this
context leads to new MMV recovery methods as well as equivalence conditions
under which the entire set can be determined efficiently.
Comment: 5 figures, 30 pages. This work has been submitted to the IEEE for
possible publication
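The mixed l2/l1 objective underlying the proposed convex program can be illustrated by its norm and the associated proximal (block soft-thresholding) step. This is a generic sketch under an assumed fixed block partition, not the paper's full recovery algorithm:

```python
import numpy as np

def mixed_l2_l1(x, blocks):
    """Mixed l2/l1 norm: sum over blocks of the l2 norm of each block."""
    return sum(np.linalg.norm(x[b]) for b in blocks)

def block_soft_threshold(x, blocks, tau):
    """Proximal operator of tau * mixed l2/l1: shrink each block toward
    zero, setting blocks with l2 norm below tau exactly to zero."""
    out = np.zeros_like(x, dtype=float)
    for b in blocks:
        nrm = np.linalg.norm(x[b])
        if nrm > tau:
            out[b] = (1 - tau / nrm) * x[b]
    return out
```

Penalizing the l2 norms of blocks (rather than individual entries) is what drives entire blocks to zero, matching the block-sparse structure the equivalence condition is stated for.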
Greedy Subspace Pursuit for Joint Sparse Recovery
In this paper, we address the sparse multiple measurement vector (MMV)
problem where the objective is to recover a set of sparse nonzero row vectors
or indices of a signal matrix from incomplete measurements. Ideally,
regardless of the number of columns in the signal matrix, k+1 measurements,
where k is the sparsity, are sufficient for the uniform recovery of signal
vectors for almost all signals, i.e., excluding a set of Lebesgue measure
zero. To approach
the "k+1" lower bound with computational efficiency even when the rank of
signal matrix is smaller than k, we propose a greedy algorithm called Two-stage
orthogonal Subspace Matching Pursuit (TSMP) whose theoretical results approach
the lower bound with less restriction than the Orthogonal Subspace Matching
Pursuit (OSMP) and Subspace-Augmented MUltiple SIgnal Classification (SA-MUSIC)
algorithms. We provide non-asymptotic performance guarantees for OSMP and TSMP
by covering both noiseless and noisy cases. Variants of restricted isometry
property and mutual coherence are used to improve the performance guarantees.
Numerical simulations demonstrate that the proposed scheme has low complexity
and outperforms most existing greedy methods. This shows that the minimum
number of measurements for the success of TSMP converges more rapidly to the
lower bound than the existing methods as the number of columns of the signal
matrix increases.
Comment: 55 pages, 8 figures, to be submitted to IEEE Transactions on
Information Theory; a shorter version was submitted to Proc. IEEE ISIT 201
Generalized Residual Ratio Thresholding
Simultaneous orthogonal matching pursuit (SOMP) and block OMP (BOMP) are two
widely used techniques for sparse support recovery in multiple measurement
vector (MMV) and block sparse (BS) models respectively. For optimal
performance, both SOMP and BOMP require \textit{a priori} knowledge of signal
sparsity or noise variance. However, sparsity and noise variance are
unavailable in most practical applications. This letter presents a novel
technique called generalized residual ratio thresholding (GRRT) for operating
SOMP and BOMP without \textit{a priori} knowledge of signal sparsity and
noise variance, and derives finite sample and finite signal to noise ratio
(SNR) guarantees for exact support recovery. Numerical simulations indicate that GRRT
performs similarly to BOMP and SOMP with \textit{a priori} knowledge of signal
and noise statistics.
Comment: 13 pages, 8 figures
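As a point of reference for the algorithm GRRT operates, here is a compact sketch of plain SOMP with known sparsity k; variable names are illustrative, and GRRT's residual-ratio stopping rule (which removes the need to know k) is not shown:

```python
import numpy as np

def somp(Y, A, k):
    """Simultaneous OMP: greedily pick the atom most correlated with the
    current residual across all MMV columns, then least-squares re-fit."""
    support, R = [], Y.copy()
    for _ in range(k):
        # Aggregate each atom's correlation over the measurement vectors.
        corr = np.linalg.norm(A.T @ R, axis=1)
        if support:
            corr[support] = -np.inf      # never re-select an atom
        support.append(int(np.argmax(corr)))
        # Re-fit on the current support and update the residual.
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        R = Y - A[:, support] @ X_s
    return sorted(support)
```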
Improving M-SBL for Joint Sparse Recovery using a Subspace Penalty
The multiple measurement vector problem (MMV) is a generalization of the
compressed sensing problem that addresses the recovery of a set of jointly
sparse signal vectors. One of the important contributions of this paper is to
reveal that the seemingly least related state-of-art MMV joint sparse recovery
algorithms - M-SBL (multiple sparse Bayesian learning) and subspace-based
hybrid greedy algorithms - have a very important link. More specifically, we
show that replacing the log det(.) term in M-SBL by a rank proxy that
exploits the spark reduction property discovered in subspace-based joint sparse
recovery algorithms provides significant improvements. In particular, if we
use the Schatten-p quasi-norm as the corresponding rank proxy, the global
minimiser of the proposed algorithm becomes identical to the true solution as
p goes to zero. Furthermore, under the same regularity conditions, we show
that the convergence to a local minimiser is guaranteed using an alternating
minimization algorithm that has closed form expressions for each of the
minimization steps, which are convex. Numerical simulations under a variety of
scenarios in terms of SNR, and condition number of the signal amplitude matrix
demonstrate that the proposed algorithm consistently outperforms M-SBL and
other state-of-the-art algorithms.
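The Schatten-p rank proxy mentioned above can be illustrated directly: for small p, the sum of the p-th powers of the singular values approaches the rank. A minimal sketch, with illustrative names:

```python
import numpy as np

def schatten_p_rank_proxy(X, p):
    """Rank proxy sum_i sigma_i(X)^p; approaches rank(X) as p -> 0."""
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum(s ** p)
```

For example, for X = diag(3, 2, 0) the proxy equals the nuclear norm 5 at p = 1 and approaches the rank 2 as p shrinks toward zero.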
Efficient iterative thresholding algorithms with functional feedbacks and convergence analysis
An accelerated class of adaptive iterative thresholding algorithms is studied
analytically and empirically. They are based on the feedback mechanism of the
null space tuning techniques (NST+HT+FB). The main contribution of this
article is the accelerated convergence analysis and proofs with a
variable/adaptive index selection and different feedback principles at each
iteration. The convergence analysis no longer requires a priori sparsity
information of a signal. It is shown that uniform recovery of all k-sparse
signals from given linear measurements can be achieved under reasonable
(preconditioned) restricted isometry conditions. Accelerated convergence rate
and improved convergence conditions are obtained by selecting an appropriate
size of the index support per iteration. The theoretical findings are
sufficiently demonstrated and confirmed by extensive numerical experiments. It
is also observed that the proposed algorithms have a clearly advantageous
balance of efficiency, adaptivity and accuracy compared with all other
state-of-the-art greedy iterative algorithms.
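For orientation, here is a plain (non-accelerated, fixed-index) iterative hard thresholding sketch; the paper's NST+HT+FB feedback mechanism and adaptive index selection are not reproduced, and all names are illustrative:

```python
import numpy as np

def iht(y, A, k, iters=50, step=1.0):
    """Plain iterative hard thresholding: gradient step on ||y - Ax||^2,
    then keep only the k largest-magnitude entries of the iterate."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + step * (A.T @ (y - A @ x))
        keep = np.argpartition(np.abs(g), -k)[-k:]
        x = np.zeros_like(g)
        x[keep] = g[keep]
    return x
```

The accelerated variants studied in the article improve on this baseline by adaptively choosing how many indices to keep per iteration and feeding back part of the residual.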
Compressed Sensing for Wireless Communications : Useful Tips and Tricks
As a paradigm to recover the sparse signal from a small set of linear
measurements, compressed sensing (CS) has stimulated a great deal of interest
in recent years. In order to apply the CS techniques to wireless communication
systems, there are a number of things to know and also several issues to be
considered. However, it is not easy to come up with simple and easy answers to
the issues raised while carrying out research on CS. The main purpose of this
paper is to provide essential knowledge and useful tips that wireless
communication researchers need to know when designing CS-based wireless
systems. First, we present an overview of the CS technique, including basic
setup, sparse recovery algorithm, and performance guarantee. Then, we describe
three distinct subproblems of CS, viz., sparse estimation, support
identification, and sparse detection, with various wireless communication
applications. We also address main issues encountered in the design of CS-based
wireless communication systems. These include potentials and limitations of CS
techniques, useful tips that one should be aware of, subtle points that one
should pay attention to, and some prior knowledge to achieve better
performance. Our hope is that this article will be a useful guide for wireless
communication researchers and even non-experts to grasp the gist of CS
techniques.