Geometry of the Welch Bounds
A geometric perspective involving Grammian and frame operators is used to
derive the entire family of Welch bounds. This perspective unifies a number of
observations that have been made regarding tightness of the bounds and their
connections to symmetric k-tensors, tight frames, homogeneous polynomials, and
t-designs. In particular, a connection has been drawn between sampling of
homogeneous polynomials and frames of symmetric k-tensors. It is also shown
that tightness of the bounds requires tight frames. The lack of tight frames in
symmetric k-tensors in many cases, however, leads to consideration of sets that
come as close as possible to attaining the bounds. The geometric derivation is
then extended in the setting of generalized or continuous frames. The Welch
bounds for finite sets and countably infinite sets become special cases of this
general setting.
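The first-order Welch bound described above can be checked numerically. The sketch below (function names and the equiangular example are my own, not from the paper) computes the k-th Welch bound from the binomial-coefficient formula and verifies, via the Grammian, that the Mercedes-Benz frame of three unit vectors in R^2 attains the k = 1 bound:

```python
import numpy as np
from math import comb

def welch_bound(n, d, k=1):
    """k-th Welch bound on max_{i != j} |<x_i, x_j>|^(2k)
    for n unit-norm vectors in dimension d, with n >= d."""
    return (n / comb(d + k - 1, k) - 1) / (n - 1)

def max_sq_correlation(X):
    """Largest squared cross-correlation among the unit-norm columns of X."""
    G = X.conj().T @ X            # Grammian of the vector set
    np.fill_diagonal(G, 0.0)      # ignore the diagonal (self-correlations)
    return np.max(np.abs(G)) ** 2

# Mercedes-Benz frame: 3 equiangular unit vectors in R^2, 120 degrees apart.
# It is a tight frame, and it attains the first Welch bound
# (n - d) / (d (n - 1)) = 1/4.
angles = 2 * np.pi * np.arange(3) / 3
X = np.stack([np.cos(angles), np.sin(angles)])
```

As the abstract notes, tightness of the bound requires a tight frame; for a generic (non-tight) set of three unit vectors in R^2 the maximum squared correlation exceeds 1/4.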
Tight-frame-like Sparse Recovery Using Non-tight Sensing Matrices
The choice of the sensing matrix is crucial in compressed sensing (CS).
Gaussian sensing matrices possess the desirable restricted isometry property
(RIP), which is crucial for providing performance guarantees on sparse
recovery. Further, sensing matrices that constitute a Parseval tight frame
result in minimum mean-squared-error (MSE) reconstruction given oracle
knowledge of the support of the sparse vector. However, if the sensing matrix
is not tight, could one achieve the reconstruction performance assured by a
tight frame by suitably designing the reconstruction strategy? This is the key
question that we address in this paper. We develop a novel formulation that
relies on a generalized l2-norm-based data-fidelity loss that tightens the
sensing matrix, along with the standard l1 penalty for enforcing sparsity. The
optimization is performed using the proximal gradient method, resulting in the
tight-frame iterative shrinkage thresholding algorithm (TF-ISTA). We show that
the objective convergence of TF-ISTA is linear akin to that of ISTA.
Incorporating Nesterov's momentum into TF-ISTA results in a faster variant,
namely, TF-FISTA, whose objective convergence is quadratic, akin to that of
FISTA. We provide performance guarantees on the l2-error for the proposed
formulation. Experimental results show that the proposed algorithms offer
superior sparse recovery performance and faster convergence. Proceeding
further, we develop the network variants of TF-ISTA and TF-FISTA, wherein a
convolutional neural network is used as the sparsifying operator. On the
application front, we consider compressed sensing image recovery (CSIR).
Experimental results on Set11, BSD68, Urban100, and DIV2K datasets show that
the proposed models outperform state-of-the-art sparse recovery methods, with
performance measured in terms of peak signal-to-noise ratio (PSNR) and
structural similarity index metric (SSIM).
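For context, plain ISTA (the baseline that TF-ISTA modifies) can be sketched in a few lines. This is a minimal illustration of the standard algorithm, not the paper's method: TF-ISTA replaces the plain l2 data-fidelity term below with a generalized, matrix-weighted l2 loss that tightens the sensing matrix.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Plain ISTA for min_x 0.5 * ||A x - b||_2^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2     # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)      # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

On a noiseless compressed-sensing instance with a small penalty weight, this recovers the support of a sparse vector from far fewer measurements than unknowns.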
Introduction to frames
This survey gives an introduction to redundant signal representations called frames. These representations have recently emerged as yet another powerful tool in the signal processing toolbox and have become popular through use in numerous applications. Our aim is to familiarize a general audience with the area, while at the same time giving a snapshot of the current state of the art.
Adapted Compressed Sensing: A Game Worth Playing
Despite the universal nature of the compressed sensing mechanism, additional information on the class of sparse signals to acquire allows adjustments that yield substantial improvements. In fact, proper exploitation of these priors allows one to significantly increase compression for a given reconstruction quality. Since one of the most promising fields of application of compressed sensing is that of IoT devices subject to extremely tight resource constraints, adaptation is especially interesting when it can cope with hardware-related constraints, allowing low-complexity implementations. We here review and compare many algorithmic adaptation policies that focus either on the encoding part or on the recovery part of compressed sensing. We also review other, more hardware-oriented adaptation techniques that can make a real difference in practical implementations. In all cases, adaptation proves to be a tool that should be mastered in practical applications to unleash the full potential of compressed sensing.
Spectral Universality of Regularized Linear Regression with Nearly Deterministic Sensing Matrices
It has been observed that the performances of many high-dimensional
estimation problems are universal with respect to underlying sensing (or
design) matrices. Specifically, matrices with markedly different constructions
seem to achieve identical performance if they share the same spectral
distribution and have "generic" singular vectors. We prove this universality
phenomenon for the case of convex regularized least squares (RLS) estimators
under a linear regression model with additive Gaussian noise. Our main
contributions are two-fold: (1) We introduce a notion of universality classes
for sensing matrices, defined through a set of deterministic conditions that
fix the spectrum of the sensing matrix and precisely capture the previously
heuristic notion of generic singular vectors; (2) We show that for all sensing
matrices that lie in the same universality class, the dynamics of the proximal
gradient descent algorithm for solving the regression problem, as well as the
performance of RLS estimators themselves (under additional strong convexity
conditions) are asymptotically identical. In addition to including i.i.d.
Gaussian and rotational invariant matrices as special cases, our universality
class also contains highly structured, strongly correlated, or even (nearly)
deterministic matrices. Examples of the latter include randomly signed versions
of incoherent tight frames and randomly subsampled Hadamard transforms. As a
consequence of this universality principle, the asymptotic performance of
regularized linear regression on many structured matrices constructed with
limited randomness can be characterized by using the rotationally invariant
ensemble as an equivalent yet mathematically more tractable surrogate.
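One of the nearly deterministic constructions the abstract mentions, a randomly signed and randomly subsampled Hadamard transform, is easy to build explicitly. The sketch below (construction details are a plausible reading of the abstract, not code from the paper) shows that such a matrix has a perfectly flat singular-value spectrum, in contrast to a Gaussian matrix, even though both lie in spectrum-defined universality classes:

```python
import numpy as np

def sylvester_hadamard(n):
    # Sylvester construction; n must be a power of two
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(0)
n, m = 64, 24
D = np.diag(rng.choice([-1.0, 1.0], size=n))       # random signs (the only randomness)
rows = rng.choice(n, size=m, replace=False)        # random row subsampling
A = sylvester_hadamard(n)[rows] @ D / np.sqrt(n)   # m x n sensing matrix

# Rows of the scaled Hadamard matrix are orthonormal, so A A^T = I_m
# and every singular value of A equals 1: a flat, deterministic spectrum.
s = np.linalg.svd(A, compute_uv=False)
```

The randomness here is limited to n sign flips and a row subset, yet by the universality principle the asymptotic RLS performance on such matrices matches that of a rotationally invariant ensemble with the same (flat) spectrum.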