Modulated Unit-Norm Tight Frames for Compressed Sensing
In this paper, we propose a compressed sensing (CS) framework that consists
of three parts: a unit-norm tight frame (UTF), a random diagonal matrix and a
column-wise orthonormal matrix. We prove that this structure satisfies the
restricted isometry property (RIP) with high probability, provided the number of
measurements is sufficiently large relative to the sparsity and length of the
signals and the column-wise orthonormal matrix is bounded. Some existing structured
sensing models can be studied under this framework, which then gives tighter
bounds on the required number of measurements to satisfy the RIP. More
importantly, we propose several structured sensing models by appealing to this
unified framework, such as a general sensing model with arbitrary/deterministic
subsamplers, a fast and efficient block compressed sensing scheme, and
structured sensing matrices with deterministic phase modulations, all of which
can lead to improvements on practical applications. In particular, one of the
constructions is applied to simplify the transceiver design of CS-based channel
estimation for orthogonal frequency division multiplexing (OFDM) systems.
Comment: submitted to IEEE Transactions on Signal Processing
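A minimal sketch of the three-part structure the abstract describes: a unit-norm tight frame (here a harmonic frame), a random ±1 diagonal modulation, and a bounded orthonormal-row subsampler. The sizes, the composition order, and the use of rows of a unitary as the third factor are my own illustrative choices, not necessarily the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, m = 64, 128, 32  # frame dimensions and measurement count (illustrative)

# 1) Unit-norm tight frame: a harmonic frame, i.e. n rows of the N-point DFT,
#    scaled so every column has unit norm and F @ F^H = (N/n) * I.
F = np.exp(2j * np.pi * np.outer(np.arange(n), np.arange(N)) / N) / np.sqrt(n)

# 2) Random diagonal modulation with iid +-1 entries.
D = np.diag(rng.choice([-1.0, 1.0], size=N))

# 3) Bounded subsampler: m rows of a random n x n orthogonal matrix,
#    standing in for the paper's column-wise orthonormal factor (an assumption).
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
U = Q[rng.choice(n, size=m, replace=False), :]

# One plausible way to compose the three ingredients into a sensing matrix.
Phi = U @ F @ D
```

The tight-frame identity `F @ F.conj().T == (N/n) * I` and the unit column norms can be checked numerically, which is a quick sanity test when swapping in other frames.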
The generalized Lasso with non-linear observations
We study the problem of signal estimation from non-linear observations when
the signal belongs to a low-dimensional set buried in a high-dimensional space.
A rough heuristic often used in practice postulates that non-linear
observations may be treated as noisy linear observations, and thus the signal
may be estimated using the generalized Lasso. This is appealing because of the
abundance of efficient, specialized solvers for this program. Just as noise may
be diminished by projecting onto the lower dimensional space, the error from
modeling non-linear observations with linear observations will be greatly
reduced when using the signal structure in the reconstruction. We allow general
signal structure, only assuming that the signal belongs to some set K in R^n.
We consider the single-index model of non-linearity. Our theory allows the
non-linearity to be discontinuous, not one-to-one and even unknown. We assume a
random Gaussian model for the measurement matrix, but allow the rows to have an
unknown covariance matrix. As special cases of our results, we recover
near-optimal theory for noisy linear observations, and also give the first
theoretical accuracy guarantee for 1-bit compressed sensing with unknown
covariance matrix of the measurement vectors.
Comment: 21 pages
Sketching for Large-Scale Learning of Mixture Models
Learning parameters from voluminous data can be prohibitive in terms of
memory and computational requirements. We propose a "compressive learning"
framework where we estimate model parameters from a sketch of the training
data. This sketch is a collection of generalized moments of the underlying
probability distribution of the data. It can be computed in a single pass on
the training set, and is easily computable on streams or distributed datasets.
The proposed framework shares similarities with compressive sensing, which aims
at drastically reducing the dimension of high-dimensional signals while
preserving the ability to reconstruct them. To perform the estimation task, we
derive an iterative algorithm analogous to sparse reconstruction algorithms in
the context of linear inverse problems. We exemplify our framework with the
compressive estimation of a Gaussian Mixture Model (GMM), providing heuristics
on the choice of the sketching procedure and theoretical guarantees of
reconstruction. We experimentally show on synthetic data that the proposed
algorithm yields results comparable to the classical Expectation-Maximization
(EM) technique while requiring significantly less memory and fewer computations
when the number of database elements is large. We further demonstrate the
potential of the approach on real large-scale data (over 10^8 training samples)
for the task of model-based speaker verification. Finally, we draw some
connections between the proposed framework and approximate Hilbert space
embedding of probability distributions using random features. We show that the
proposed sketching operator can be seen as an innovative method to design
translation-invariant kernels adapted to the analysis of GMMs. We also use this
theoretical framework to derive information preservation guarantees, in the
spirit of infinite-dimensional compressive sensing.
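The single-pass, mergeable nature of the sketch is easy to illustrate with averaged random Fourier features, one concrete choice of generalized moments. The dimensions and the Gaussian frequency draw below are my own illustrative assumptions, not the paper's tuned sketching procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 2, 50                            # data dimension, sketch size (illustrative)
Omega = rng.standard_normal((m, d))     # random frequencies (one heuristic choice)

def sketch(X):
    # Generalized empirical moments: averaged random Fourier features,
    #   z_j = (1/N) * sum_i exp(1j * <omega_j, x_i>)
    return np.exp(1j * X @ Omega.T).mean(axis=0)

X1 = rng.standard_normal((1000, d))         # one chunk of a stream
X2 = rng.standard_normal((500, d)) + 3.0    # another chunk
# Sketches merge by a weighted average, hence single-pass / distributed friendly:
merged = (1000 * sketch(X1) + 500 * sketch(X2)) / 1500
full = sketch(np.vstack([X1, X2]))
```

The merged sketch equals the sketch of the concatenated data up to floating-point error, which is what makes streaming and distributed computation possible.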
Compressed Sensing with Coherent and Redundant Dictionaries
This article presents novel results concerning the recovery of signals from
undersampled data in the common situation where such signals are not sparse in
an orthonormal basis or incoherent dictionary, but in a truly redundant
dictionary. This work thus bridges a gap in the literature and shows not only
that compressed sensing is viable in this context, but also that accurate
recovery is possible via an L1-analysis optimization problem. We introduce a
condition on the measurement/sensing matrix, which is a natural generalization
of the now well-known restricted isometry property, and which guarantees
accurate recovery of signals that are nearly sparse in (possibly) highly
overcomplete and coherent dictionaries. This condition imposes no incoherence
restriction on the dictionary and our results may be the first of this kind. We
discuss practical examples and the implications of our results on those
applications, and complement our study by demonstrating the potential of
L1-analysis for such problems.
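The L1-analysis problem the abstract refers to, min ||D^T x||_1 subject to Ax = y, can be sketched with a projected subgradient method on a small synthetic instance. The dictionary, sizes, step schedule, and solver are my own illustrative choices; the paper's theory concerns the optimization problem itself, not this particular algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, m = 40, 80, 30    # signal dimension, dictionary size, measurements

D = rng.standard_normal((n, p))
D /= np.linalg.norm(D, axis=0)        # redundant dictionary, unit-norm columns
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = D @ np.where(rng.random(p) < 0.05, rng.standard_normal(p), 0.0)
y = A @ x_true                        # noiseless measurements

# L1-analysis:  min ||D^T x||_1  subject to  A x = y,
# solved here by projected subgradient descent with best-iterate tracking.
A_pinv = np.linalg.pinv(A)
def proj(v):                          # projection onto the affine set {x : Ax = y}
    return v - A_pinv @ (A @ v - y)

x = proj(np.zeros(n))                 # least-norm feasible starting point
best_x, best_obj = x, np.abs(D.T @ x).sum()
for t in range(1, 2000):
    g = D @ np.sign(D.T @ x)          # subgradient of the analysis L1 objective
    x = proj(x - 0.1 / np.sqrt(t) * g)
    obj = np.abs(D.T @ x).sum()
    if obj < best_obj:
        best_x, best_obj = x, obj
```

Every iterate is exactly feasible because it is re-projected onto {x : Ax = y}, so the method trades off only the analysis objective, never the data fit.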
Compressed Sensing Using Binary Matrices of Nearly Optimal Dimensions
In this paper, we study the problem of compressed sensing using binary
measurement matrices and L1-norm minimization (basis pursuit) as the
recovery algorithm. We derive new upper and lower bounds on the number of
measurements to achieve robust sparse recovery with binary matrices. We
establish sufficient conditions for a column-regular binary matrix to satisfy
the robust null space property (RNSP) and show that the sufficient conditions
for robust sparse recovery obtained using the RNSP are better than the
corresponding conditions obtained using the restricted isometry property (RIP).
Next we derive universal lower bounds on the number of measurements
that any binary matrix needs to have in order to satisfy the weaker sufficient
condition based on the RNSP and show that bipartite graphs of girth six are
optimal. Then we display two classes of binary matrices, namely parity check
matrices of array codes and Euler squares, which have girth six and are nearly
optimal in the sense of almost satisfying the lower bound. In principle,
randomly generated Gaussian measurement matrices are "order-optimal". So we
compare the phase transition behavior of the basis pursuit formulation using
binary array codes and Gaussian matrices and show that (i) there is essentially
no difference between the phase transition boundaries in the two cases and (ii)
the CPU time of basis pursuit with binary matrices is hundreds of times faster
than with Gaussian matrices and the storage requirements are less. Therefore it
is suggested that binary matrices are a viable alternative to Gaussian matrices
for compressed sensing using basis pursuit.
Comment: 28 pages, 3 figures, 5 tables
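One of the two matrix classes the abstract names, parity-check matrices of array codes, has a compact construction: for a prime q and r <= q, tile powers of the q x q cyclic-shift permutation. A sketch with small illustrative parameters; the absence of 4-cycles (any two columns sharing at most one row) is what gives girth six.

```python
import numpy as np

def array_code_matrix(q, r):
    """Parity-check matrix of an array code: an (r*q) x (q*q) binary matrix
    whose (i, j) block is the (i*j mod q)-th power of the q x q cyclic shift."""
    P = np.roll(np.eye(q, dtype=np.uint8), 1, axis=1)   # cyclic-shift permutation
    return np.block([[np.linalg.matrix_power(P, (i * j) % q) for j in range(q)]
                     for i in range(r)])

# q = 7 (prime), r = 3: a 21 x 49 column-regular matrix with column weight 3.
H = array_code_matrix(7, 3)
```

Each column carries only r ones, so storage and matrix-vector products are far cheaper than with a dense Gaussian matrix of the same size, which is the practical point the abstract's CPU-time comparison makes.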