4,080 research outputs found
Greed is good: algorithmic results for sparse approximation
This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries. It provides a sufficient condition under which both OMP and Donoho's basis pursuit (BP) paradigm can recover the optimal representation of an exactly sparse signal. It leverages this theory to show that both OMP and BP succeed for every sparse input signal from a wide class of dictionaries. These quasi-incoherent dictionaries offer a natural generalization of incoherent dictionaries, and the cumulative coherence function is introduced to quantify the level of incoherence. This analysis unifies all the recent results on BP and extends them to OMP. Furthermore, the paper develops a sufficient condition under which OMP can identify atoms from an optimal approximation of a nonsparse signal. From there, it argues that OMP is an approximation algorithm for the sparse problem over a quasi-incoherent dictionary. That is, for every input signal, OMP calculates a sparse approximant whose error is only a small factor worse than the minimal error that can be attained with the same number of terms.
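As an illustration of the greedy procedure the abstract describes, here is a minimal Python sketch of OMP (not code from the paper); the names Phi, y, and k are placeholders, and the dictionary is assumed to have unit-norm columns.

```python
import numpy as np

def omp(Phi, y, k):
    """Minimal orthogonal matching pursuit sketch.

    Phi : (d, N) dictionary with unit-norm columns
    y   : (d,) input signal
    k   : number of atoms to select
    """
    residual = y.copy()
    support = []
    for _ in range(k):
        # Greedy selection: pick the atom most correlated with the residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # Orthogonal projection: least-squares fit over the chosen atoms.
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x = np.zeros(Phi.shape[1])
    x[support] = coeffs
    return x
```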
Improved analysis of the subsampled randomized Hadamard transform
This paper presents an improved analysis of a structured dimension-reduction
map called the subsampled randomized Hadamard transform. This argument
demonstrates that the map preserves the Euclidean geometry of an entire
subspace of vectors. The new proof is much simpler than previous approaches,
and it offers---for the first time---optimal constants in the estimate on the
number of dimensions required for the embedding.
Freedman’s Inequality for Matrix Martingales
Freedman's inequality is a martingale counterpart to Bernstein's inequality. This result shows that the large-deviation behavior of a martingale is controlled by the predictable quadratic variation and a uniform upper bound for the martingale difference sequence. Oliveira has recently established a natural extension of Freedman's inequality that provides tail bounds for the maximum singular value of a matrix-valued martingale. This note describes a different proof of the matrix Freedman inequality that depends on a deep theorem of Lieb from matrix analysis. This argument delivers sharp constants in the matrix Freedman inequality, and it also yields tail bounds for other types of matrix martingales. The new techniques are adapted from recent work by the present author.
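For reference, a representative form of the bound (stated in the standard shape; constants should be checked against the note itself): if Y_k is a self-adjoint matrix martingale in dimension d whose difference sequence X_k satisfies \lambda_{\max}(X_k) \le R almost surely, and W_k = \sum_{j \le k} \mathbb{E}_{j-1}[X_j^2] is the predictable quadratic variation, then

\[
\mathbb{P}\bigl\{\exists\, k \ge 0 : \lambda_{\max}(Y_k) \ge t \ \text{and}\ \|W_k\| \le \sigma^2\bigr\}
\;\le\; d \cdot \exp\!\left(\frac{-t^2/2}{\sigma^2 + Rt/3}\right).
\]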
Random Filters for Compressive Sampling
This paper discusses random filtering, a recently proposed method for directly acquiring a compressed version of a digital signal. The technique is based on convolution of the signal with a fixed FIR filter having random taps, followed by downsampling. Experiments show that random filtering is effective at acquiring sparse and compressible signals. This process has the potential for implementation in analog hardware, and so it may have a role to play in new types of analog/digital converters.
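A minimal sketch of that acquisition pipeline (illustrative names and parameters, not values from the paper): draw fixed random taps, convolve, and keep every step-th sample.

```python
import numpy as np

def random_filter_measurements(x, num_taps, step, rng):
    """Compress x by FIR filtering with random taps, then downsampling."""
    taps = rng.standard_normal(num_taps)  # fixed random FIR filter
    filtered = np.convolve(x, taps)       # convolve the filter with the signal
    return filtered[::step]               # downsample to get the measurements
```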
The Sparsity Gap: Uncertainty Principles Proportional to Dimension
In an incoherent dictionary, most signals that admit a sparse representation
admit a unique sparse representation. In other words, there is no way to
express the signal without using strictly more atoms. This work demonstrates
that sparse signals typically enjoy a higher privilege: each nonoptimal
representation of the signal requires far more atoms than the sparsest
representation, unless it contains many of the same atoms as the sparsest
representation. One impact of this finding is to confer a certain degree of
legitimacy on the particular atoms that appear in a sparse representation. This
result can also be viewed as an uncertainty principle for random sparse signals
over an incoherent dictionary.
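The classical baseline here is the uncertainty principle for incoherent dictionaries (quoted as standard background, not from the paper itself): if the dictionary has coherence \mu and a signal has two distinct representations supported on index sets S_1 and S_2, then

\[
|S_1| + |S_2| \;\ge\; 1 + \frac{1}{\mu},
\]

so any representation using fewer than (1 + 1/\mu)/2 atoms is automatically the unique sparsest one.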
User-friendly Tail Bounds for Matrix Martingales
This report presents probability inequalities for sums of adapted sequences of random,
self-adjoint matrices. The results frame simple, easily verifiable hypotheses on the summands, and
they yield strong conclusions about the large-deviation behavior of the maximum eigenvalue of the
sum. The methods also specialize to sums of independent random matrices.
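In the independent case, the conclusions take a familiar Bernstein shape; a representative statement (standard form, to be checked against the report): for independent, zero-mean, self-adjoint d-dimensional random matrices X_k with \lambda_{\max}(X_k) \le R and matrix variance \sigma^2 = \bigl\| \sum_k \mathbb{E}[X_k^2] \bigr\|,

\[
\mathbb{P}\Bigl\{\lambda_{\max}\Bigl(\sum_k X_k\Bigr) \ge t\Bigr\}
\;\le\; d \cdot \exp\!\left(\frac{-t^2/2}{\sigma^2 + Rt/3}\right).
\]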
Second-Order Matrix Concentration Inequalities
Matrix concentration inequalities give bounds for the spectral-norm deviation
of a random matrix from its expected value. These results have a weak
dimensional dependence that is sometimes, but not always, necessary. This paper
identifies one of the sources of the dimensional term and exploits this insight
to develop sharper matrix concentration inequalities. In particular, this
analysis delivers two refinements of the matrix Khintchine inequality that use
information beyond the matrix variance to reduce or eliminate the dimensional
dependence.
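The baseline being refined is the matrix Khintchine inequality: for fixed self-adjoint d-dimensional matrices A_k and independent standard normal variables \gamma_k, with an absolute constant C,

\[
\mathbb{E}\,\Bigl\|\sum_k \gamma_k A_k\Bigr\|
\;\le\; C \sqrt{\log d}\;\Bigl\|\sum_k A_k^2\Bigr\|^{1/2},
\]

and it is this \sqrt{\log d} dimensional factor that the second-order arguments aim to reduce or remove.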
The random paving property for uniformly bounded matrices
This note presents a new proof of an important result due to Bourgain and
Tzafriri that provides a partial solution to the Kadison--Singer problem. The
result shows that every unit-norm matrix whose entries are relatively small in
comparison with its dimension can be paved by a partition of constant size.
That is, the coordinates can be partitioned into a constant number of blocks so
that the restriction of the matrix to each block of coordinates has norm less
than one half. The original proof of Bourgain and Tzafriri involves a long,
delicate calculation. The new proof relies on the systematic use of
symmetrization and (noncommutative) Khintchine inequalities to estimate the
norms of some random matrices.
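In symbols (notation chosen here for illustration), the paving conclusion reads: given an n x n matrix A with \|A\| = 1 and suitably small entries, there is a partition of \{1, \dots, n\} into blocks \sigma_1, \dots, \sigma_r, where r is a constant independent of n, such that

\[
\max_j \,\|P_{\sigma_j} A P_{\sigma_j}\| \;\le\; \tfrac{1}{2},
\]

where P_\sigma denotes the coordinate projection onto the block \sigma.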
- …