An Introduction To Compressive Sampling [A sensing/sampling paradigm that goes against the common knowledge in data acquisition]
This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use. To make this possible, CS relies on two principles: sparsity, which pertains to the signals of interest, and incoherence, which pertains to the sensing modality.
Our intent in this article is to overview the basic CS theory that emerged in the works [1]–[3], present the key mathematical ideas underlying this theory, and survey a couple of important results in the field. Our goal is to explain CS as plainly as possible, so the article is mainly tutorial in nature. One of the charms of this theory is that it draws from various subdisciplines within the applied mathematical sciences, most notably probability theory. In this review, we have decided to highlight this aspect and especially the fact that randomness can, perhaps surprisingly, lead to very effective sensing mechanisms. We will also discuss significant implications, explain why CS is a concrete protocol for sensing and compressing data simultaneously (thus the name), and conclude our tour by reviewing important applications.
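As a concrete illustration of the two principles above, the following sketch (in Python; the problem sizes are arbitrary and the article itself contains no code) recovers a sparse vector from a small number of random Gaussian measurements by posing ℓ1 minimization (basis pursuit) as a linear program:

# Sketch: recover a sparse vector from m << n random Gaussian measurements
# by basis pursuit, min ||x||_1 subject to A x = y, written as a linear
# program in the variables [x, t] with -t <= x <= t.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, s = 200, 80, 8                           # ambient dimension, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # incoherent (Gaussian) sensing matrix
y = A @ x_true

c = np.concatenate([np.zeros(n), np.ones(n)])  # minimize sum of t
A_ub = np.block([[np.eye(n), -np.eye(n)],      #  x - t <= 0
                 [-np.eye(n), -np.eye(n)]])    # -x - t <= 0
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])        # equality constraint A x = y
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

With an incoherent measurement matrix such as the Gaussian one above, recovery is typically exact once the number of measurements is a small multiple of the sparsity times a logarithmic factor.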
On Grid Compressive Sampling for Spherical Field Measurements in Acoustics
We derive a compressive sampling method for acoustic field reconstruction using field measurements on a predefined spherical grid that has theoretically guaranteed relations between signal sparsity, measurement number, and reconstruction accuracy. This method can be used to reconstruct band-limited spherical harmonic or Wigner D-function series (spherical harmonic series are a special case) with sparse coefficients. In contrast to typical compressive sampling methods for Wigner D-function series that use arbitrary random measurements, the new method samples randomly on an equiangular grid, a practical and commonly used sampling pattern. Using its periodic extension, we transform the reconstruction of a Wigner D-function series into a multi-dimensional Fourier-domain reconstruction problem. We establish that this transformation has a bounded effect on the sparsity level and provide numerical studies of this effect. We also compare the reconstruction performance of the new approach to classical Nyquist sampling and existing compressive sampling methods. In our tests, the new compressive sampling approach performs comparably to other guaranteed compressive sampling approaches and needs a fraction of the measurements dictated by the Nyquist sampling theorem. Moreover, using one-third of the measurements or less, the new compressive sampling method can provide over 20 dB better denoising capability than oversampling with classical Fourier theory.
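The following one-dimensional sketch mimics the on-grid idea described above; the DCT basis, the sizes, and the use of orthogonal matching pursuit are illustrative assumptions, not the paper's spherical Wigner D-function setup. A signal sparse in a harmonic basis is observed at a random subset of points of a regular grid and then reconstructed in the transform domain:

# 1-D analogue: a signal sparse in a DCT (harmonic) basis, sampled at a
# random subset of points of a regular grid, recovered by orthogonal
# matching pursuit in the transform domain.
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n, m, s = 256, 64, 6                           # grid size, on-grid samples, sparsity

Phi = idct(np.eye(n), norm="ortho", axis=0)    # synthesis matrix: signal = Phi @ coeffs
coeffs = np.zeros(n)
coeffs[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
signal = Phi @ coeffs

grid_idx = rng.choice(n, m, replace=False)     # random samples ON the regular grid
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=s, fit_intercept=False)
omp.fit(Phi[grid_idx, :], signal[grid_idx])
reconstruction = Phi @ omp.coef_
print("relative error:", np.linalg.norm(reconstruction - signal) / np.linalg.norm(signal))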
Compressive Sensing with Wigner D-functions on Subsets of the Sphere
In this paper, we prove a compressive sensing guarantee for restricted measurement domains on the rotation group SO(3). We do so by first defining Slepian functions on a measurement sub-domain of the rotation group SO(3). Then, we transform the inverse problem from the measurement basis, the bounded orthonormal system of band-limited Wigner D-functions on SO(3), to the Slepian functions, in a way that limits increases to signal sparsity. In contrast to methods using Wigner D-functions that require measurements on all of SO(3), we show that the orthogonality structure of the Slepian functions only requires measurements on the sub-domain, which is selectable. Due to the particulars of this approach and the inherent presence of Slepian functions with low concentrations on the sub-domain, our approach gives the highest accuracy when the signal under study is well concentrated on that sub-domain. We provide numerical examples of our method in comparison with other classical and compressive sensing approaches. In terms of reconstruction quality, we find that our method outperforms the other compressive sensing approaches we test and is at least as good as classical approaches, but with a significant reduction in the number of measurements.
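A rough one-dimensional analogue of the Slepian-function idea, not the paper's SO(3) construction, is given by the discrete prolate spheroidal (Slepian) sequences available in SciPy; their concentration ratios drop sharply after roughly 2·N·W terms, which is why only the well-concentrated functions matter on the restricted domain. The length, time-bandwidth product, and number of sequences below are illustrative choices:

# 1-D analogue of the Slepian construction: discrete prolate spheroidal
# (Slepian) sequences and their concentration ratios. Ratios close to 1
# indicate functions well concentrated on the restricted domain; they
# fall off sharply after roughly 2*N*W terms.
from scipy.signal.windows import dpss

M, NW, K = 128, 4, 12                    # length, time-bandwidth product, number of tapers
tapers, ratios = dpss(M, NW, Kmax=K, return_ratios=True)
for k, lam in enumerate(ratios):
    print(f"Slepian sequence {k}: concentration ratio = {lam:.6f}")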
Enhancing Sparsity by Reweighted ℓ1 Minimization
It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) this can be done by constrained ℓ1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms ℓ1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted ℓ1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction, and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations, not by reweighting the ℓ1 norm of the coefficient sequence as is common, but by reweighting the ℓ1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as Compressive Sensing.
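The iteration described above can be sketched as follows, using the inverse-magnitude weight rule w_i = 1/(|x_i| + ε) computed from the current iterate; the problem sizes, the value of ε, the number of passes, and the use of cvxpy for the weighted ℓ1 subproblems are illustrative choices:

# Sketch of reweighted l1: solve a sequence of weighted l1 problems, with
# weights w_i = 1/(|x_i| + eps) computed from the current solution so that
# large coefficients are penalized less on the next pass.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, m, s, eps = 100, 40, 8, 0.1           # illustrative sizes and smoothing parameter
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

w = np.ones(n)                           # first pass is plain (unweighted) l1
for it in range(4):
    x = cp.Variable(n)
    cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))), [A @ x == y]).solve()
    w = 1.0 / (np.abs(x.value) + eps)    # reweight from the current solution
    err = np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true)
    print(f"iteration {it}: relative error = {err:.2e}")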
Sparsity and Incoherence in Compressive Sampling
We consider the problem of reconstructing a sparse signal x ∈ ℝⁿ from a limited number of linear measurements. Given m randomly selected samples of Ux, where U is an orthonormal matrix, we show that ℓ1 minimization recovers x exactly when the number of measurements exceeds m ≥ Const · μ²(U) · S · log n, where S is the number of nonzero components in x and μ(U) is the largest entry in U properly normalized: μ(U) = √n · max_{k,j} |U_{k,j}|. The smaller the coherence μ(U), the fewer samples are needed. The result holds for "most" sparse signals x supported on a fixed (but arbitrary) set T. Given T, if the signs of the nonzero entries of x on T and the observed values of Ux are drawn at random, the signal is recovered with overwhelming probability. Moreover, there is a sense in which this is nearly optimal, since any method succeeding with the same probability would require just about this many samples.
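A small worked example of the coherence quantity in this bound (the matrix choice and sizes are illustrative, and the unspecified constant is omitted): for the orthonormal DFT matrix the coherence μ(U) equals 1, its smallest possible value, so on the order of S·log n samples suffice.

# The coherence mu(U) = sqrt(n) * max_{k,j} |U_kj| for the orthonormal DFT
# matrix equals 1 (the most favorable case), so the bound above requires
# only on the order of S*log(n) samples, up to the unspecified constant.
import numpy as np

n, S = 1024, 20
U = np.fft.fft(np.eye(n)) / np.sqrt(n)   # orthonormal DFT matrix
mu = np.sqrt(n) * np.abs(U).max()
print(f"mu(U) = {mu:.3f}")               # prints 1.000 for the DFT
print(f"mu(U)^2 * S * log(n) = {mu**2 * S * np.log(n):.1f} samples (up to a constant)")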
A First Analysis of the Stability of Takens' Embedding
Takens' Embedding Theorem asserts that when the states of a hidden dynamical system are confined to a low-dimensional attractor, complete information about the states can be preserved in the observed time-series output through the delay-coordinate map. However, the conditions for the theorem to hold ignore the effects of noise, and time-series analysis in practice requires a careful empirical determination of the sampling time and the number of delays, resulting in a number of delay coordinates larger than the minimum prescribed by Takens' theorem. In this paper, we use tools and ideas from Compressed Sensing to provide a first theoretical justification for the choice of the number of delays in noisy conditions. In particular, we show that under certain conditions on the dynamical system, measurement function, number of delays, and sampling time, the delay-coordinate map can be a stable embedding of the dynamical system's attractor.
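For readers unfamiliar with the delay-coordinate map, the sketch below builds one from a scalar observation of the Lorenz system; the system, the choice of observation function, and the values of the delay count d and lag tau are illustrative, not those analyzed in the paper.

# Delay-coordinate map: from a scalar observation u(t) of the hidden system,
# form vectors [u(t), u(t - tau), ..., u(t - (d-1)*tau)]. The Lorenz system,
# the x-coordinate observation, and the choices of d and tau are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

dt = 0.01
sol = solve_ivp(lorenz, (0.0, 50.0), [1.0, 1.0, 1.0],
                t_eval=np.arange(0.0, 50.0, dt))
series = sol.y[0]                            # observe only the x-coordinate

def delay_embed(u, d, tau):
    """Stack d delayed copies of the scalar series u, lagged tau samples apart."""
    rows = len(u) - (d - 1) * tau
    return np.column_stack([u[k * tau: k * tau + rows] for k in range(d)])

embedded = delay_embed(series, d=7, tau=10)  # d and tau chosen empirically, as in practice
print(embedded.shape)                        # (number of delay vectors, d)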
The restricted isometry property for random block diagonal matrices
In Compressive Sensing, the Restricted Isometry Property (RIP) ensures that robust recovery of sparse vectors is possible from noisy, undersampled measurements via computationally tractable algorithms. It is by now well-known that Gaussian (or, more generally, sub-Gaussian) random matrices satisfy the RIP under certain conditions on the number of measurements. Their use can be limited in practice, however, due to storage limitations, computational considerations, or the mismatch of such matrices with certain measurement architectures. These issues have recently motivated considerable effort towards studying the RIP for structured random matrices. In this paper, we study the RIP for block diagonal measurement matrices where each block on the main diagonal is itself a sub-Gaussian random matrix. Our main result states that such matrices can indeed satisfy the RIP but that the requisite number of measurements depends on certain properties of the basis in which the signals are sparse. In the best case, these matrices perform nearly as well as dense Gaussian random matrices, despite having many fewer nonzero entries.
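The construction can be sketched as follows: an empirical near-isometry check on randomly supported sparse vectors, not a proof of the RIP, with block sizes, sparsity, and trial count chosen for illustration.

# Empirical near-isometry check (not a proof of the RIP): a block diagonal
# matrix with i.i.d. Gaussian blocks applied to randomly supported sparse
# vectors; the ratio ||Ax||^2 / ||x||^2 should concentrate around 1.
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(3)
blocks, m_b, n_b, s = 8, 16, 64, 10      # 8 diagonal blocks of size 16 x 64
A = block_diag(*[rng.standard_normal((m_b, n_b)) / np.sqrt(m_b)
                 for _ in range(blocks)])
n = blocks * n_b

ratios = []
for _ in range(2000):
    x = np.zeros(n)
    x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    ratios.append(np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2)
print(f"min ratio {min(ratios):.2f}, max ratio {max(ratios):.2f}")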
Joint Elastic Side-Scattering Lidar and Raman Lidar Measurements of Aerosol Optical Properties in South East Colorado
We describe an experiment, located in south-east Colorado, USA, that measured aerosol optical depth profiles using two Lidar techniques. Two independent detectors measured scattered light from a vertical UV laser beam. One detector, located at the laser site, measured light via the inelastic Raman backscattering process; this is a common method used in atmospheric science for measuring aerosol optical depth profiles. The other detector, located approximately 40 km away, viewed the laser beam from the side. This detector featured a 3.5 m² mirror and measured elastically scattered light in a bistatic Lidar configuration, following the method used at the Pierre Auger cosmic ray observatory. The goal of this experiment was to assess and improve methods to measure atmospheric clarity, specifically aerosol optical depth profiles, for cosmic ray UV fluorescence detectors that use the atmosphere as a giant calorimeter. The experiment collected data from September 2010 to July 2011 under varying conditions of aerosol loading. We describe the instruments and techniques and compare the aerosol optical depth profiles measured by the Raman and bistatic Lidar detectors.