Quantization and Compressive Sensing
Quantization is an essential step in digitizing signals, and, therefore, an
indispensable component of any modern acquisition system. This book chapter
explores the interaction of quantization and compressive sensing and examines
practical quantization strategies for compressive acquisition systems.
Specifically, we first provide a brief overview of quantization and examine
fundamental performance bounds applicable to any quantization approach. Next,
we consider several forms of scalar quantizers, namely uniform, non-uniform,
and 1-bit. We provide performance bounds and fundamental analysis, as well as
practical quantizer designs and reconstruction algorithms that account for
quantization. Furthermore, we provide an overview of Sigma-Delta
(ΣΔ) quantization in the compressed sensing context, and also
discuss implementation issues, recovery algorithms, and performance bounds. As
we demonstrate, proper accounting for quantization and careful quantizer design
have a significant impact on the performance of a compressive acquisition system.
Comment: 35 pages, 20 figures, to appear in Springer book "Compressed Sensing and Its Applications", 201
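As a concrete illustration of the simplest strategy the chapter surveys, the sketch below (not from the chapter; the dimensions, sensing matrix, and step size are arbitrary choices) applies a mid-rise uniform scalar quantizer to Gaussian compressive measurements. The per-measurement quantization error is bounded by half the step size.

```python
import numpy as np

def uniform_quantize(y, delta):
    """Mid-rise uniform scalar quantizer with step size delta."""
    return delta * (np.floor(y / delta) + 0.5)

rng = np.random.default_rng(0)
n, m, k = 128, 48, 4                          # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sensing matrix
y = A @ x                                     # compressive measurements
q = uniform_quantize(y, delta=0.1)
# The quantization error per measurement is at most delta / 2.
assert np.max(np.abs(y - q)) <= 0.05 + 1e-12
```

A reconstruction algorithm that "accounts for quantization," in the chapter's sense, would treat each `q[i]` as a constraint `|y[i] - q[i]| <= delta / 2` rather than as an exact measurement.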
Compressed sensing performance bounds under Poisson noise
This paper describes performance bounds for compressed sensing (CS) where the
underlying sparse or compressible (sparsely approximable) signal is a vector of
nonnegative intensities whose measurements are corrupted by Poisson noise. In
this setting, standard CS techniques cannot be applied directly for several
reasons. First, the usual signal-independent and/or bounded noise models do not
apply to Poisson noise, which is non-additive and signal-dependent. Second, the
CS matrices typically considered are not feasible in real optical systems
because they do not adhere to important constraints, such as nonnegativity and
photon flux preservation. Third, the typical ℓ2–ℓ1 minimization
leads to overfitting in the high-intensity regions and oversmoothing in the
low-intensity areas. In this paper, we describe how a feasible positivity- and
flux-preserving sensing matrix can be constructed, and then analyze the
performance of a CS reconstruction approach for Poisson data that minimizes an
objective function consisting of a negative Poisson log likelihood term and a
penalty term which measures signal sparsity. We show that, as the overall
intensity of the underlying signal increases, an upper bound on the
reconstruction error decays at an appropriate rate (depending on the
compressibility of the signal), but that for a fixed signal intensity, the
signal-dependent part of the error bound actually grows with the number of
measurements or sensors. This surprising fact is both proved theoretically and
justified based on physical intuition.
Comment: 12 pages, 3 pdf figures; accepted for publication in IEEE Transactions on Signal Processing
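The objective described above can be sketched as follows. The flux-preserving matrix here (nonnegative entries, columns summing to one) is a simplified stand-in for the paper's construction, and the sizes, intensities, and penalty weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 32, 64
# A feasible sensing matrix: nonnegative entries with each column summing to 1,
# so the total photon flux is preserved (a simplified stand-in, not the paper's design).
A = rng.random((m, n))
A /= A.sum(axis=0, keepdims=True)

def poisson_objective(f, y, A, tau):
    """Negative Poisson log-likelihood plus an l1 sparsity penalty
    (additive constants involving y! are dropped)."""
    mu = A @ f
    nll = np.sum(mu - y * np.log(mu + 1e-12))
    return nll + tau * np.sum(np.abs(f))

f_true = np.zeros(n)
f_true[:4] = 50.0                  # nonnegative sparse intensity vector
y = rng.poisson(A @ f_true)        # measurements corrupted by Poisson noise
val = poisson_objective(f_true, y, A, tau=0.1)
assert np.isfinite(val)
```

Note how the noise is signal-dependent: the variance of each `y[i]` equals its mean `(A @ f_true)[i]`, which is why the standard bounded-noise analyses do not apply.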
Modulated Unit-Norm Tight Frames for Compressed Sensing
In this paper, we propose a compressed sensing (CS) framework that consists
of three parts: a unit-norm tight frame (UTF), a random diagonal matrix and a
column-wise orthonormal matrix. We prove that this structure satisfies the
restricted isometry property (RIP) with high probability if, for k-sparse
signals of length n, the number of measurements scales linearly in k up to
logarithmic factors in n, and if the column-wise orthonormal matrix is bounded. Some existing structured
sensing models can be studied under this framework, which then gives tighter
bounds on the required number of measurements to satisfy the RIP. More
importantly, we propose several structured sensing models by appealing to this
unified framework, such as a general sensing model with arbitrary/deterministic
subsamplers, a fast and efficient block compressed sensing scheme, and
structured sensing matrices with deterministic phase modulations, all of which
can lead to improvements on practical applications. In particular, one of the
constructions is applied to simplify the transceiver design of CS-based channel
estimation for orthogonal frequency division multiplexing (OFDM) systems.
Comment: submitted to IEEE Transactions on Signal Processing
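A toy instance of the three-part structure can be sketched as below. Every choice here (a partial-DFT unit-norm tight frame, a random ±1 diagonal, a random column-wise orthonormal matrix, the composition order, and the dimensions) is an illustrative assumption, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(2)
N, n, m = 64, 32, 16  # ambient dimension, frame dimension, measurements (illustrative)

# Part 1: a unit-norm tight frame (UTF) from n rows of the unitary DFT,
# rescaled so every column has unit norm.
Fdft = np.fft.fft(np.eye(N)) / np.sqrt(N)
F = Fdft[:n, :] * np.sqrt(N / n)
assert np.allclose(np.linalg.norm(F, axis=0), 1.0)       # unit-norm columns
assert np.allclose(F @ F.conj().T, (N / n) * np.eye(n))  # tight frame

# Part 2: a random diagonal matrix of signs.
D = np.diag(rng.choice([-1.0, 1.0], size=N))

# Part 3: a column-wise orthonormal matrix (U.T @ U = I).
U = np.linalg.qr(rng.standard_normal((n, m)))[0]

Phi = U.T @ F @ D                                        # structured m x N sensing matrix
x = np.zeros(N)
x[rng.choice(N, 3, replace=False)] = 1.0                 # a sparse test signal
y = Phi @ x
assert y.shape == (m,)
```

Because each factor admits a fast transform (sign flips, FFT, small orthonormal multiply), the whole measurement map can be applied without storing `Phi` explicitly, which is the kind of practical benefit the paper targets.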
Compressive Sensing of Analog Signals Using Discrete Prolate Spheroidal Sequences
Compressive sensing (CS) has recently emerged as a framework for efficiently
capturing signals that are sparse or compressible in an appropriate basis.
While often motivated as an alternative to Nyquist-rate sampling, there remains
a gap between the discrete, finite-dimensional CS framework and the problem of
acquiring a continuous-time signal. In this paper, we attempt to bridge this
gap by exploiting the Discrete Prolate Spheroidal Sequences (DPSS's), a
collection of functions that trace back to the seminal work by Slepian, Landau,
and Pollak on the effects of time-limiting and bandlimiting operations. DPSS's
form a highly efficient basis for sampled bandlimited functions; by modulating
and merging DPSS bases, we obtain a dictionary that offers high-quality sparse
approximations for most sampled multiband signals. This multiband modulated
DPSS dictionary can be readily incorporated into the CS framework. We provide
theoretical guarantees and practical insight into the use of this dictionary
for recovery of sampled multiband signals from compressive measurements.
Sharp Time--Data Tradeoffs for Linear Inverse Problems
In this paper we characterize sharp time-data tradeoffs for optimization
problems used for solving linear inverse problems. We focus on the minimization
of a least-squares objective subject to a constraint defined as the sub-level
set of a penalty function. We present a unified convergence analysis of the
gradient projection algorithm applied to such problems. We sharply characterize
the convergence rate associated with a wide variety of random measurement
ensembles in terms of the number of measurements and structural complexity of
the signal with respect to the chosen penalty function. The results apply to
both convex and nonconvex constraints, demonstrating that a linear convergence
rate is attainable even though the least squares objective is not strongly
convex in these settings. When specialized to Gaussian measurements our results
show that such linear convergence occurs when the number of measurements is
merely 4 times the minimal number required to recover the desired signal at all
(a.k.a. the phase transition). We also achieve a slower but geometric rate of
convergence precisely above the phase transition point. Extensive numerical
results suggest that the derived rates exactly match the empirical performance.
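A minimal sketch of the algorithm analyzed above, taking the ℓ1 ball as the sub-level set of the penalty function: the projection routine follows the standard sorting-based construction, and the problem sizes, step size, and the "a few times the phase transition" measurement count are illustrative assumptions.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection onto {x : ||x||_1 <= radius} via the sorting method."""
    if np.sum(np.abs(v)) <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def gradient_projection(A, y, radius, step, iters):
    """Minimize ||Ax - y||^2 subject to ||x||_1 <= radius by projected gradient."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = project_l1_ball(x - step * A.T @ (A @ x - y), radius)
    return x

rng = np.random.default_rng(3)
n, k = 100, 3
m = 4 * (2 * k * int(np.log(n / k)) + k)   # a few times the phase-transition level (rough)
x0 = np.zeros(n)
x0[:k] = [2.0, -1.5, 1.0]
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = gradient_projection(A, A @ x0, radius=np.abs(x0).sum(),
                            step=0.2, iters=1000)
assert np.linalg.norm(x_hat - x0) < 1e-2
```

The objective is not strongly convex (m < n), yet the iterates still converge rapidly on the constraint set, which is the phenomenon the paper's analysis quantifies.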
Sampling of graph signals via randomized local aggregations
Sampling of signals defined over the nodes of a graph is one of the crucial
problems in graph signal processing. While in classical signal processing
sampling is a well defined operation, when we consider a graph signal many new
challenges arise and defining an efficient sampling strategy is not
straightforward. Recently, several works have addressed this problem. The most
common techniques select a subset of nodes to reconstruct the entire signal.
However, such methods often require the knowledge of the signal support and the
computation of the sparsity basis before sampling. Instead, in this paper we
propose a new approach to this issue. We introduce a novel technique that
combines localized sampling with compressed sensing. We first choose a subset
of nodes and then, for each node of the subset, we compute random linear
combinations of signal coefficients localized at the node itself and its
neighborhood. The proposed method provides theoretical guarantees in terms of
reconstruction and stability to noise for any graph and any orthonormal basis,
even when the support is not known.
Comment: IEEE Transactions on Signal and Information Processing over Networks, 201
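The two-step sampling strategy described above (choose a node subset, then take random linear combinations localized to each chosen node's neighborhood) can be sketched as follows. The graph model, subset size, and number of aggregations per node are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 30
# A random undirected graph (Erdos-Renyi, purely illustrative) via its adjacency matrix.
adj = (rng.random((n, n)) < 0.2).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T

chosen = rng.choice(n, size=8, replace=False)   # step 1: select a subset of nodes
per_node = 2                                    # random aggregations taken at each node

rows = []
for v in chosen:
    closed_nbhd = np.append(np.flatnonzero(adj[v]), v)  # node plus 1-hop neighbors
    for _ in range(per_node):
        r = np.zeros(n)
        # step 2: random linear combination of coefficients localized at v
        r[closed_nbhd] = rng.standard_normal(len(closed_nbhd))
        rows.append(r)
Phi = np.vstack(rows)        # localized compressive sampling operator

x = rng.standard_normal(n)   # a graph signal
y = Phi @ x                  # each measurement aggregates only one neighborhood
assert Phi.shape == (len(chosen) * per_node, n)
```

Each row of `Phi` is supported on a single closed neighborhood, so the measurements can be computed by purely local exchanges; recovery from `y` would then proceed with standard compressed-sensing solvers in the chosen orthonormal basis.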
Structured low complexity data mining
Due to the rapidly increasing dimensionality of modern datasets, many classical approximation algorithms have run into severe computational bottlenecks. This is often referred to as the “curse of dimensionality.” To combat it, low complexity priors have been used, as they enable us to design efficient approximation algorithms capable of scaling up to these modern datasets. Typically the reduction in computational complexity comes at the expense of accuracy; however, the tradeoffs have been relatively advantageous to the computational scientist. This is often referred to as the “blessings of dimensionality.”

Solving large underdetermined systems of linear equations has benefited greatly from the sparsity low complexity prior. A priori, solving a large underdetermined system of linear equations is severely ill-posed. However, for a relatively generic class of sampling matrices, assuming a sparsity prior can yield a well-posed linear system of equations. In particular, various greedy iterative approximation algorithms have been developed which can recover and accurately approximate the k most significant atoms in our signal. For many engineering applications, the distribution of the top k atoms is not arbitrary and itself has some further structure.

In the first half of the thesis we are concerned with incorporating a priori designed weights to allow for structured sparse approximation. We provide performance guarantees and numerically demonstrate how the appropriate use of weights can yield a simultaneous reduction in sample complexity and an improvement in approximation accuracy.

In the second half of the thesis we consider the collaborative filtering problem, specifically the task of matrix completion. The matrix completion problem is likewise severely ill-posed, but with a low rank prior it admits, with high probability, a unique and robust solution via a cadre of convex optimization solvers.
The drawback here is that these solvers enjoy strong theoretical guarantees only in the uniform sampling regime. Building upon recent work on non-uniform matrix completion, we propose a completely expert-free empirical procedure to design optimization parameters in the form of positive weights which allow for the recovery of arbitrarily sampled low rank matrices. We provide theoretical guarantees for these empirically learned weights and present numerical simulations which again show that encoding prior knowledge in the form of weights for optimization problems can yield a simultaneous reduction in sample complexity and an improvement in approximation accuracy.
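One way the weighted sparse-approximation idea can be illustrated is weighted iterative soft thresholding, sketched below. This specific algorithm, the weight values, and the problem sizes are assumptions for illustration; the thesis's own algorithms and guarantees may differ. A small weight on an atom encodes prior belief that it is active, so that atom is shrunk less.

```python
import numpy as np

def weighted_ista(A, y, w, lam, step, iters):
    """Iterative soft thresholding with a per-atom weight vector w:
    coordinate i is thresholded at level step * lam * w[i]."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * A.T @ (A @ x - y)
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)
    return x

rng = np.random.default_rng(5)
m, n, k = 30, 80, 4
x0 = np.zeros(n)
x0[:k] = [3.0, -2.0, 2.5, 1.5]                 # true top-k atoms
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x0

w_flat = np.ones(n)                            # unweighted l1: all atoms treated equally
w_prior = np.ones(n)
w_prior[:k] = 0.1                              # down-weight atoms believed to be active
xu = weighted_ista(A, y, w_flat, lam=0.1, step=0.1, iters=500)
xw = weighted_ista(A, y, w_prior, lam=0.1, step=0.1, iters=500)
assert np.linalg.norm(xw - x0) < np.linalg.norm(xu - x0)
```

The weighted run incurs less shrinkage bias on the true support while keeping the same sparsity pressure elsewhere, which mirrors the thesis's theme that well-chosen weights can improve accuracy at a given sample budget.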