Construction of a Large Class of Deterministic Sensing Matrices that Satisfy a Statistical Isometry Property
Compressed Sensing aims to capture attributes of k-sparse signals using
very few measurements. In the standard Compressed Sensing paradigm, the
m × n measurement matrix A is required to act as a near isometry on
the set of all k-sparse signals (Restricted Isometry Property or RIP).
Although it is known that certain probabilistic processes generate m × n
matrices that satisfy RIP with high probability, there is no practical
algorithm for verifying whether a given sensing matrix A has this property,
which is crucial for the feasibility of the standard recovery algorithms. In contrast,
this paper provides simple criteria that guarantee that a deterministic sensing
matrix satisfying these criteria acts as a near isometry on an overwhelming
majority of k-sparse signals; in particular, most such signals have a unique
representation in the measurement domain. Probability still plays a critical
role, but it enters the signal model rather than the construction of the
sensing matrix. We require the columns of the sensing matrix to form a group
under pointwise multiplication. The construction allows recovery methods for
which the expected performance is sub-linear in n and only quadratic in
m; the focus on expected performance is more typical of mainstream signal
processing than the worst-case analysis that prevails in standard Compressed
Sensing. Our framework encompasses many families of deterministic sensing
matrices, including those formed from discrete chirps, Delsarte-Goethals codes,
and extended BCH codes.

Comment: 16 pages, 2 figures, to appear in IEEE Journal of Selected Topics in
Signal Processing, special issue on Compressed Sensing
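As a concrete illustration of the statistical (rather than worst-case) isometry idea in this abstract, the sketch below draws a random Gaussian sensing matrix and a random k-sparse signal, then checks that the measurement energy is close to the signal energy. The dimensions and the Gaussian ensemble are illustrative choices, not the paper's deterministic construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 4  # ambient dimension, measurements, sparsity (illustrative)

# Random Gaussian sensing matrix; scaling by 1/sqrt(m) gives E||Ax||^2 = ||x||^2
A = rng.standard_normal((m, n)) / np.sqrt(m)

# A random k-sparse signal: k nonzero entries on a random support
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

# Near-isometry on this one signal: the energy ratio concentrates around 1
ratio = np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2
print(f"||Ax||^2 / ||x||^2 = {ratio:.3f}")
```

For a deterministic matrix satisfying only a statistical isometry property, this ratio stays near 1 for most sparse signals but not necessarily all of them, which is the distinction the abstract draws against worst-case RIP.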
Empirical recovery performance of Fourier-based deterministic compressed sensing
Compressed sensing is a novel technique by which one can recover sparse signals from undersampled measurements. Mathematically, measuring an N-dimensional signal…
Compressed Neighbour Discovery using Sparse Kerdock Matrices
We study the network-wide neighbour discovery problem in wireless networks in
which each node in a network must discover the network interface addresses
(NIAs) of its neighbours. We work within the rapid on-off division duplex
framework proposed by Guo and Zhang (2010) in which all nodes are assigned
different on-off signatures which allow them to listen to the transmissions of
neighbouring nodes during their off slots, leading to a compressed sensing
problem at each node with a collapsed codebook determined by a given node's
transmission signature. We propose sparse Kerdock matrices as codebooks for the
neighbour discovery problem. These matrices share the same row space as certain
Delsarte-Goethals frames based upon Reed-Muller codes, whilst at the same time
being extremely sparse. We present numerical experiments using two different
compressed sensing recovery algorithms, One Step Thresholding (OST) and
Normalised Iterative Hard Thresholding (NIHT). For both algorithms, a higher
proportion of neighbours are successfully identified using sparse Kerdock
matrices compared to codebooks based on Reed-Muller codes with random erasures
as proposed by Zhang and Guo (2011). We argue that the improvement is due to
the better interference cancellation properties of sparse Kerdock matrices when
collapsed according to a given node's transmission signature. We show by
explicit calculation that the coherence of the collapsed codebooks resulting
from sparse Kerdock matrices remains near-optimal.
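The coherence figure this abstract refers to, i.e. the largest absolute inner product between distinct normalized columns of a codebook, is straightforward to compute for any candidate matrix. The sketch below is illustrative only: it uses a random Gaussian matrix rather than an actual sparse Kerdock construction, and compares the result against the Welch lower bound.

```python
import numpy as np

def coherence(A):
    """Largest absolute inner product between distinct unit-normalized columns."""
    G = A / np.linalg.norm(A, axis=0)   # normalize each column
    gram = np.abs(G.T @ G)              # absolute cross-correlations
    np.fill_diagonal(gram, 0.0)         # exclude each column paired with itself
    return gram.max()

rng = np.random.default_rng(1)
m, n = 16, 64                           # illustrative codebook size
A = rng.standard_normal((m, n))

mu = coherence(A)
welch = np.sqrt((n - m) / (m * (n - 1)))  # lower bound on coherence when n > m
print(f"coherence = {mu:.3f}, Welch bound = {welch:.3f}")
```

A codebook is "near-optimal" in this sense when its coherence sits close to the Welch bound; structured constructions such as the Kerdock-based matrices discussed above are designed to approach it, whereas random Gaussian columns typically land well above it.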